Modeling the Google Deschutes CDU in GT-SUITE: A Blueprint for Liquid Cooling Success
A Difficult Balance for Data Centers
As data centers push toward higher rack power densities and rapidly scaling AI workloads, liquid cooling has become essential for managing extreme thermal loads efficiently. Designing these next-generation cooling systems is challenging – engineers must balance reliability, energy use, water consumption, and safety, all while navigating tight deployment timelines. Simulation plays a critical role in this environment. It enables teams to explore large design spaces, predict performance under dynamic conditions, and evaluate components and controls long before hardware exists.
Against this backdrop, Google led an Open Compute Project (OCP) initiative to define a standardized 2 MW next-generation Coolant Distribution Unit (CDU) known as Project Deschutes. The project published a detailed specification and CAD reference to encourage broad industry adoption and interoperability. As a contributing OCP member, Gamma Technologies developed a complete GT-SUITE model of this CDU, faithfully reproducing the geometry and performance described in the specification. The result is a ready-to-use simulation asset that helps data center engineers evaluate system behavior, integrate the CDU into larger facility-level models, and adapt the design for new products or scaling strategies.
From CAD to a High‑Fidelity 1D/3D Model
The released Deschutes model begins with a full 3D CAD assembly. Using GEM3D, the geometry is automatically converted into a GT‑SUITE simulation model, enabling accurate pressure‑drop prediction and fluid‑thermal behavior without manually extracting parameters. Engineers can visualize temperature distribution and flow variables directly on the original geometry, making results intuitive and accelerating design iteration.

Figure 1: Geometry conversion of original CAD file into a network of 1D components and visualization of results on top of original geometry
This preprocessing pipeline reduces both time and error, allowing users to move quickly from CAD to validated simulation while maintaining fidelity to the OCP reference design.
Matching OCP Performance and Enabling System Integration
The released Deschutes model reproduces the performance targets published in the OCP specification, offering a robust baseline for design studies. Because the system interfaces with chillers, buffer tanks, and IT coolant loops, the model can be directly integrated into broader facility architectures. Users can adapt component sizing, reconfigure piping, or explore alternative materials and fluids while still benefiting from the validated reference structure. This makes the model valuable not just for studying the OCP design but for developing future CDU generations.

Figure 2: Results of simulated performance compared to data from specification to validate model behavior
A Modular Simulation Environment for Technology Exploration
GT‑SUITE enables rapid comparison of components, materials, and cooling technologies. The modular environment supports evaluating single‑phase and two‑phase coolants, exploring different heat‑exchanger families, testing pumps and filters from multiple vendors, and studying the impact of dry coolers versus cooling towers. By adjusting geometry and operating conditions, teams can investigate trade‑offs related to PUE, WUE, heat‑recovery potential, and refrigerant selection. This flexibility allows engineers to tailor the CDU to site‑specific environmental conditions, sustainability targets, and operational constraints and ensure a future-proof design.

Figure 3: Illustration of modularity – heat exchangers from various suppliers can be directly compared to one another within a single simulation to optimize component selection. Other factors, such as fluid composition, can also be varied for quick and easy comparison.
Fast Optimization and Automated Design Exploration
Because data center cooling systems involve many interacting variables, optimization is essential. GT‑SUITE’s fast solvers make it possible to run extensive design‑of‑experiments studies, distributed computing sweeps, and optimization routines. In the Deschutes model, engineers can search for operating points that minimize electrical consumption while maintaining safe coolant temperatures at the rack outlets. This type of automated exploration helps identify optimal flow rates, component sizes, and control strategies, which can be performed even before physical hardware is available, thus supporting rapid product development and robust decision‑making.

Figure 4: Multi-factor, multi-objective optimization results using the built-in Design Optimizer. Primary and secondary flow rates are varied to minimize total electric power consumption and coolant return temperature from the racks.
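To make the optimization idea concrete, the sketch below shows a simplified, single-objective version of the kind of flow-rate study the built-in Design Optimizer automates: sweep primary and secondary flow rates, discard operating points that violate a rack supply-temperature limit, and keep the cheapest feasible point. The pump law, heat-exchanger approach model, and all coefficients are illustrative assumptions, not values from the OCP specification.

```python
# Illustrative two-variable sweep: vary primary and secondary flow rates,
# keep only operating points that respect a rack supply-temperature limit,
# and pick the one with the lowest total pump power.
# All coefficients below are made up for demonstration.

def pump_power_kw(flow_lpm):
    # Toy cubic pump law: electrical power grows with the cube of flow.
    return 2e-8 * flow_lpm ** 3

def supply_temp_c(primary_lpm, secondary_lpm, load_kw=2000.0):
    # Toy approach-temperature model: more flow on either side of the
    # heat exchanger lowers the secondary supply temperature.
    return 18.0 + load_kw / (0.08 * primary_lpm + 0.06 * secondary_lpm)

best = None
for primary in range(400, 1201, 100):        # L/min candidates
    for secondary in range(400, 1201, 100):
        t_supply = supply_temp_c(primary, secondary)
        if t_supply > 32.0:                  # constraint: supply <= 32 C
            continue
        power = pump_power_kw(primary) + pump_power_kw(secondary)
        if best is None or power < best[0]:
            best = (power, primary, secondary, t_supply)

power, primary, secondary, t_supply = best
print(f"best: {primary} / {secondary} L/min, "
      f"{power:.1f} kW pumps, supply {t_supply:.1f} C")
```

A real study would treat supply temperature as a second objective (as in Figure 4) and use the solver's design-of-experiments and optimization machinery rather than a brute-force grid.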
Virtual Test Benches for Extreme Scenarios
Full‑scale physical testing of a 2 MW CDU is expensive and often impractical, especially for failure analysis. The Deschutes simulation model, applied as a virtual test bench, enables engineers to study transient events such as rapid IT load spikes, extreme ambient temperatures, or chiller power failures, which are difficult to obtain experimentally but essential for designing resilient cooling systems.

Figure 5: A thermal ride-through analysis demonstrates how buffer tank volume affects supply temperatures when the chiller loses power but pumps and IT load remain online.
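The physics behind a ride-through study can be sketched with a first-order energy balance: once cooling is lost, the coolant inventory heats up at a rate set by the IT load and the system's thermal mass, so a larger buffer tank directly buys time. The loop volume, limits, and load below are illustrative numbers, not figures from the specification.

```python
# First-order energy balance for a chiller outage: with heat rejection
# lost but IT load and pumps still running, the loop plus buffer tank
# warms at dT/dt = Q / (m * cp).  More tank volume means more thermal
# mass and a slower temperature rise.  All values are illustrative.

CP_WATER = 4186.0          # J/(kg*K), specific heat of water
RHO_WATER = 997.0          # kg/m^3, density of water

def ride_through_seconds(tank_volume_m3, load_w=2_000_000.0,
                         t_start_c=20.0, t_limit_c=32.0,
                         loop_volume_m3=1.0):
    """Seconds until supply temperature hits t_limit after chiller loss."""
    mass = RHO_WATER * (tank_volume_m3 + loop_volume_m3)
    rate = load_w / (mass * CP_WATER)          # K per second
    return (t_limit_c - t_start_c) / rate

for vol in (0.5, 2.0, 5.0):
    print(f"{vol:4.1f} m^3 tank -> {ride_through_seconds(vol):5.1f} s")
```

Even this crude lumped model shows why ride-through time scales roughly linearly with stored coolant mass; the full transient simulation adds mixing, piping delays, and pump heat that shift the numbers.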
Digital Twin and Fault Detection Capabilities
GT‑SUITE models can power machine‑learning metamodels to augment real‑time system monitoring. By comparing expected versus measured behavior, the digital twin can detect and classify anomalies such as valve failures (e.g. stuck or leaky valves), heat exchanger fouling, or pump cavitation to name a few. See this GT webinar for a deeper dive on fault detection: GT-SUITE For Increased Robustness of Fault Detection.

Figure 6: Demonstration of a valve fault properly identified by an anomaly detection ML model. When the valve cannot fully open, the flow rate is reduced, a deficit that may not be obvious to the naked eye during dynamic operation. The anomaly detection model, however, identifies the deviation from expected conditions, a fault quietly costing several degrees on the coolant supply temperature.
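The core of this kind of anomaly detection is residual monitoring: compare each measured signal against the model's expected value and raise a flag when the deviation persists beyond a tolerance band. The sketch below uses a hand-written linear "expected flow" function as a stand-in for a trained metamodel; the thresholds and persistence count are illustrative assumptions.

```python
# Residual-based anomaly flagging: compare a measured signal against the
# model-predicted value and flag samples where the deviation persists
# for several consecutive steps (to reject transient noise).

def expected_flow_lpm(valve_cmd):
    # Stand-in for the digital twin's prediction: flow scales with the
    # commanded valve opening (0..1).
    return 900.0 * valve_cmd

def detect_anomalies(samples, tol_lpm=50.0, persist=3):
    """Return indices where |measured - expected| > tol for `persist` steps."""
    flags, streak = [], 0
    for i, (cmd, measured) in enumerate(samples):
        residual = abs(measured - expected_flow_lpm(cmd))
        streak = streak + 1 if residual > tol_lpm else 0
        if streak >= persist:
            flags.append(i)
    return flags

# Valve commanded fully open but mechanically stuck near 70 %: the
# measured flow reads persistently low.
healthy = [(1.0, 895.0), (1.0, 905.0), (1.0, 898.0)]
stuck = [(1.0, 640.0)] * 4
print(detect_anomalies(healthy + stuck))
```

A production implementation would replace the expected-flow function with the trained metamodel and tune the tolerance band against the synthetic fault datasets described below.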
The ability to generate large synthetic datasets makes GT-SUITE an efficient platform for training AI-based condition-monitoring tools. Because the model can also interface with live operational data (for example, via SCADA systems), GT-SUITE lends itself well to operator training, such as rehearsing the correct action to take when a given fault occurs. This gives operators confidence in their corrective actions, supporting predictive maintenance and minimizing downtime.

Figure 7: Illustration of real-time communication between GT model (either physics-based or from machine learning) and the system controls platform (e.g. SCADA) for live use during operation.
From Reference Design to Scalable Cooling Innovation
The GT‑SUITE model of the OCP Google Project Deschutes CDU provides a powerful foundation for engineers working on high‑density data center cooling. By combining a modular environment, rapid design exploration, and advanced digital‑twin capabilities, it accelerates innovation while reducing physical testing needs. The model helps teams design safer, more efficient, and more resilient cooling systems as data center demands continue to grow.
Ready to advance your data center thermal management strategy?
Visit our Data Center Solutions page to explore more data center simulation topics, learn more about our Digital Twin Solutions and Machine Learning capabilities, and browse our blog page for additional insights on digital twins and data center innovation, or reach out to our team for a demo of the Deschutes CDU model and associated toolchain. Follow our LinkedIn channel to stay updated on the latest advancements in our solutions.
Transforming Data Center Cooling: From Physics-Based Simulation to AI-Powered Control
Data centers face an unprecedented thermal challenge that demands revolutionary approaches to cooling system design and control. As computational demands surge with AI workloads and high-performance computing applications, heat generation has increased dramatically, with some next-generation systems producing heat fluxes many times higher than those of traditional data centers. This exponential growth in thermal loads is pushing conventional cooling approaches beyond their operational limits, making advanced thermal management strategies not just beneficial, but essential for maintaining system reliability and energy efficiency.

Model Predictive Control (MPC) framework using a Gamma Technologies’ GT-NARX model to predict system behavior and optimize control actions under constraints.
The complexity of modern data center cooling systems—encompassing liquid cooling loops, vapor compression cycles, and sophisticated air handling units—creates a perfect storm of engineering challenges. Traditional design approaches rely heavily on physical prototyping and experimental validation, leading to lengthy development cycles and limited optimization opportunities. Meanwhile, when it comes to controlling such complex cooling scenarios, conventional PID controllers struggle with the nonlinear dynamics and multiple interacting variables inherent in these thermal systems. Model-based predictive controllers typically offer far greater capability.
Digital twin technology emerges as the transformative solution, enabling engineers to create virtual replicas of cooling systems that continuously update with real-time operational data. These digital twins serve as comprehensive platforms for system design, optimization, and intelligent control development. Gamma Technologies’ GT-SUITE provides the virtual environment required to create and deploy digital twins, dramatically reducing development time and costs while improving system performance.
The Challenge: Beyond Traditional Cooling Limits
Modern data centers house thousands of server racks generating extreme heat loads that conventional computer room air-conditioning (CRAC) systems struggle to manage efficiently. The challenge extends beyond simple heat removal—engineers must optimize energy consumption, maintain precise temperature control across varying loads, and ensure system reliability under dynamic operating conditions.
Many innovative cooling solutions are emerging to capture heat directly at the board, server, or rack level. Here we examine a passive, refrigerant-based system with gravity-driven circulation and phase-change technology: a Rear Door Heat Exchanger system. Phase-change technology removes heat using a fluid that absorbs heat when it evaporates and releases it when it condenses, enabling efficient heat transfer. Realizing its full potential requires sophisticated control strategies that can predict system behavior, optimize multiple inputs simultaneously, and adapt to dynamic thermal loads. These capabilities demand a comprehensive digital twin implementation.

The top image shows a traditional data center air-cooling setup with CRAC units, a raised floor and ceiling, and a cold/hot-aisle arrangement. The bottom image shows a setup with rear door heat exchangers.
Model Predictive Control (MPC) offers several advantages over traditional PID control for data center cooling. Because MPC can predict future thermal loads and equipment behavior, it optimizes cooling resources proactively rather than reacting to temperature deviations after they occur. This leads to more stable temperature regulation, reduced energy consumption, and smoother operation of chillers and airflow systems. MPC can also handle multivariable interactions, such as temperature, humidity, and airflow, more effectively than independent PID loops, making it well suited for the complex, tightly coupled environments found in modern data centers.
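The proactive behavior described above can be illustrated with a deliberately tiny receding-horizon controller. This is a toy stand-in, not the GT-NARX/MPC framework itself: the plant is a made-up first-order rack-temperature model, and the controller simply enumerates a few candidate cooling-effort sequences over a short horizon and applies the first move of the cheapest one.

```python
# Minimal receding-horizon (MPC-style) illustration on a toy first-order
# rack-temperature model.  The controller simulates every candidate
# cooling-effort sequence over a short horizon, scores each by tracking
# error plus an energy penalty, and applies the first move of the best.
# Plant coefficients and weights are illustrative, not from any product.
from itertools import product

ALPHA, BETA, GAMMA = 0.9, 0.5, 2.0   # toy plant: T' = a*T + b*load - g*u

def step(temp, load, u):
    return ALPHA * temp + BETA * load - GAMMA * u

def mpc_move(temp, load_forecast, setpoint=25.0,
             candidates=(0.0, 0.5, 1.0), energy_weight=0.1):
    best_cost, best_u0 = None, 0.0
    for seq in product(candidates, repeat=len(load_forecast)):
        t, cost = temp, 0.0
        for u, load in zip(seq, load_forecast):
            t = step(t, load, u)
            cost += (t - setpoint) ** 2 + energy_weight * u
        if best_cost is None or cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Because the controller sees the load forecast, it ramps cooling before
# the spike arrives rather than reacting after temperature deviates.
print(mpc_move(25.0, load_forecast=[5.0, 12.0, 12.0]))
```

With a steady load forecast the controller stays idle (zero error, zero effort), but when a load spike appears in the forecast it starts cooling early even though the current temperature error is still zero — precisely the anticipatory behavior a PID loop cannot provide.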
Comprehensive Digital Twin Approach to Data Center Thermal Management
GT-SUITE provides the complete digital twin ecosystem needed to model, optimize, and control advanced data center cooling systems by combining physics-based simulation with AI-powered metamodeling and real-time integration capabilities.
This digital twin approach transforms how engineers tackle thermal management challenges, enabling virtual testing of countless operating scenarios, predictive maintenance strategies, and intelligent control system development—all before physical deployment. The platform’s capabilities span all the areas that revolutionize data center cooling system development.
1. Physics-Based Modeling of Arbitrary Cooling Technologies
For the rear door heat exchanger system in question, refrigerant is gravity-fed to an evaporator positioned on the rear side of the rack, spanning most of the rack’s height and width. As heat is absorbed, the refrigerant evaporates and is driven back to the facility-water-connected condenser by buoyancy-driven flow.
GT-SUITE excels at modeling the advanced cooling systems and architectures within a single integrated digital twin platform. For data center applications, this means seamlessly combining liquid cooling loops, vapor compression systems, and air handling units with arbitrary configurations. The software’s robust solvers handle all fluid types—from coolants to refrigerants—while accurately capturing the complex interactions between phase-change heat transfer, gravity-driven circulation, and forced convection.
Advanced cooling systems featuring phase-change tubes with fins and DC fan arrays require precise modeling of refrigerant evaporation, condensation, and gravity-driven flow. GT-SUITE’s advanced heat exchanger models and two-phase flow capabilities provide the accuracy needed to predict system performance across varying heat loads and operating conditions, creating the foundation for comprehensive digital twin implementations.

GT-SUITE system model of a data center cooling architecture integrating IT racks, airflow management, and a rear-door heat exchanger connected to the facility chilled water loop.
2. Machine Learning for Model Predictive Control
GT-SUITE’s built-in Machine Learning Assistant transforms physics-based simulations into fast-running metamodels ideal for model predictive control (MPC) applications within digital twin environments. A metamodel is the mathematical representation of the underlying physics-based model, an approximation that captures the input-output relationships and dynamic behavior of the high-fidelity simulation while significantly reducing computational complexity. Rather than relying on experimental data collection—which would require extensive physical testing across thousands of operating conditions—the ML Assistant generates training datasets directly from GT-SUITE’s validated physics models.
This approach creates nonlinear autoregressive exogenous (NARX) metamodels that capture the inherent dynamics of cooling systems, where outputs depend on historical values of both inputs and outputs. For advanced cooling systems, this means developing plant models that understand how chiller valve positions and fan speeds interact over time to control temperatures across multiple measurement points. The resulting metamodels achieve accuracy within 5% of the full physics simulation while running orders of magnitude faster—essential for real-time MPC implementation in digital twin applications.
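The NARX idea — regress the next output on lagged outputs and lagged inputs — can be shown in miniature. In this sketch plain least squares stands in for the ML Assistant’s trained network, and the "plant" is a made-up first-order thermal response rather than a GT-SUITE model; because the toy plant is exactly linear and noise-free, the regression recovers its coefficients exactly.

```python
# Toy NARX-style metamodel: regress the next output on the lagged output
# and lagged input.  Least squares stands in for a trained network, and
# the "plant" is an illustrative first-order thermal response.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data from a toy plant: T[k] = 0.8*T[k-1] + 0.3*u[k-1]
u = rng.uniform(0.0, 1.0, 200)          # input (e.g. valve position)
T = np.zeros(201)
for k in range(1, 201):
    T[k] = 0.8 * T[k - 1] + 0.3 * u[k - 1]

# Lagged feature matrix [T[k-1], u[k-1]] -> target T[k]
X = np.column_stack([T[:-1], u])
y = T[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_next(t_prev, u_prev):
    """One-step-ahead prediction with the fitted metamodel."""
    return coef[0] * t_prev + coef[1] * u_prev

print(coef)    # recovers approximately [0.8, 0.3]
```

A real NARX metamodel uses a nonlinear regressor, deeper lag histories, and many coupled inputs and outputs, but the input-output structure is the same: feed past states back in, and the model can be stepped forward far faster than the physics simulation.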

Model Predictive Control (MPC) framework integrated with a GT-SUITE virtual system for predictive optimization and control.
3. Seamless Integration with External Control Platforms
Flexible export options in GT-SUITE’s Machine Learning Assistant enable seamless integration with external control development environments. NARX metamodels can be exported directly as C code for embedding in real-time control systems or as C MEX files specifically formatted for Simulink integration. This capability eliminates the need for manual model translation or complex interfacing protocols.
The C code export generates standalone files that can be compiled for deployment on embedded control hardware, while the MEX format enables direct integration with MATLAB/Simulink control development workflows. This flexibility allows control engineers to leverage GT-SUITE’s physics-based NARX models within their preferred development environments, accelerating the transition from simulation to deployed control systems.
Digital Twin Integration for Advanced Control Development
GT-SUITE digital twins serve as comprehensive virtual test beds for developing and validating sophisticated control algorithms. The platform’s integration capabilities with Simulink or other common XiL platforms enable engineers to implement multiple-input, multiple-output (MIMO) nonlinear model predictive control (NMPC) systems directly within the digital twin environment.
This integration allows for extensive algorithm testing without physical hardware, enabling engineers to evaluate control performance under diverse scenarios—from single-rack cooling to complex multi-rack systems with uneven heat distributions. The digital twin approach enables rapid iteration on control strategies, testing edge cases, and validating robustness to disturbances that would be costly or impossible to replicate in physical systems.
For data center applications, this means developing NMPC systems that can drive cooling systems to new temperature setpoints in minutes rather than hours, while maintaining individualized control of each rack for optimal energy efficiency. The digital twin continuously updates with real-time operational data, enabling predictive maintenance, fault detection, and performance optimization strategies that maximize system reliability and minimize energy consumption.
Transforming Data Center Operations Through Digital Twins
The combination of GT-SUITE’s physics-based modeling, AI-enhanced metamodeling, and digital twin integration creates a powerful development pathway for next-generation data center thermal management. This comprehensive digital twin approach enables:
Rapid System Design: Model arbitrary cooling configurations without physical prototypes, accelerating time-to-market for innovative cooling solutions while reducing development costs.
Intelligent Control Development: Generate plant models for advanced MPC systems using simulation data rather than expensive experimental campaigns, enabling sophisticated control strategies from day one.
Virtual Validation: Test control algorithms across thousands of operating scenarios in the digital twin environment, ensuring robust performance before deployment and minimizing commissioning time.
Predictive Operations: Leverage real-time data integration to predict equipment failures, optimize maintenance schedules, and continuously improve system performance based on operational insights.
Energy Optimization: Develop control strategies that minimize energy consumption while maintaining thermal stability across dynamic heat loads, directly impacting operational costs and environmental compliance.
The Future of Data Center Cooling
As data centers continue evolving toward higher power densities and more complex thermal challenges, digital twin technology becomes the cornerstone of intelligent thermal management. GT-SUITE’s comprehensive platform—from physics-based modeling through AI-powered metamodeling to real-time digital twin integration—provides the foundation for developing the next generation of predictive, adaptive cooling systems.
The digital twin approach transforms data center operations from reactive maintenance to predictive optimization, enabling unprecedented levels of efficiency, reliability, and performance. By creating virtual replicas that continuously learn from operational data, engineers can optimize system performance in real-time while developing next-generation cooling technologies in parallel.
Ready to advance your data center thermal management strategy?
Visit our Data Center Solutions page to explore more data center simulation topics, learn more about our Digital Twin Solutions and Machine Learning capabilities, and browse our blog page for additional insights on digital twins and data center innovation. Follow our LinkedIn channel to stay updated on the latest advancements in our solutions.
Relevant Material
- https://www.gtisoft.com/blog-post/data-center-solutions-a-strategic-imperative-for-competitive-advantage/
- https://www.gtisoft.com/blog-post/digital-twin-simulation-with-gt-suite/
- https://www.gtisoft.com/blog-post/dynamic-machine-learning-for-modeling-and-simulation/
- https://www.gtisoft.com/blog-post/virtual-calibration-of-xev-thermal-management-systems/
- GT University course on MLA / Optimization
BMS Architecture Explained: How a BMS Protects, Balances & Optimizes Batteries
In the previous blog, we discussed how the underlying architecture of a battery management system (BMS) is very similar to our central nervous system. The BMS relies on sensors (its sensory organs) to make real-time decisions that ensure safety and optimize the performance of the battery pack. From a simulation perspective, this architecture provides the foundation for modeling how sensing, decision-making, and control interact across electrical and thermal domains. This blog explains how the BMS does this successfully by giving a detailed description of its core functionalities, as shown in Figure 1.
The Role of Sensors in Battery Management Systems (BMS)
The BMS relies on voltage, temperature, and current sensors to observe the battery’s condition. These sensors allow us to gain an understanding of the state of the battery. A crucial state estimated using the current sensor is the State of Charge (SOC), a normalized measure of the battery pack’s remaining capacity: 0% SOC means the battery is depleted, while 100% SOC means it is fully charged. SOC cannot be measured directly and must be estimated using empirical techniques.
From a simulation perspective, sensor behavior such as measurement noise, signal delay, and sampling rates is explicitly modeled to evaluate their impact on state estimation accuracy and protection logic robustness.
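The simplest current-sensor-based SOC estimator is coulomb counting: integrate the measured pack current over time and normalize by capacity. The sketch below uses illustrative numbers; a real BMS fuses this with voltage-based corrections because current-sensor bias accumulates over time.

```python
# Coulomb counting: integrate measured pack current over time and
# normalize by rated capacity to track State of Charge.  Discharge
# current is taken as positive.  Values are illustrative.

def update_soc(soc, current_a, dt_s, capacity_ah):
    """Return the new SOC after one sample, clamped to [0, 1]."""
    delta = (current_a * dt_s / 3600.0) / capacity_ah
    return min(1.0, max(0.0, soc - delta))

soc = 0.80                       # start at 80 %
for _ in range(360):             # 1 hour of 10 A discharge, 10 s steps
    soc = update_soc(soc, current_a=10.0, dt_s=10.0, capacity_ah=50.0)
print(f"SOC after 1 h at 10 A: {soc:.2f}")   # 10 Ah drawn from 50 Ah
```

Injecting sensor noise, bias, and delay into this loop is exactly the kind of study the simulation-perspective note above describes: even a small constant current-sensor bias drifts the estimate steadily until a voltage-based correction resets it.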
If the sensors detect that cells or modules in the battery pack are imbalanced or outside their safe operational range, the BMS will be alerted. Once alerted, it will take remedial action, such as derating the current or adjusting the thermal management system.
How BMS Detects Faults and Protects Against Electromagnetic Interference (EMI) Risks
The BMS has built-in mechanisms that use the sensor inputs to detect faults and respond with protective actions. These mechanisms are designed to catch potential failures early and prevent them from escalating into safety-critical events.
One of the most important tools for this is insulation resistance monitoring. Insulation resistance monitoring is the process of measuring electrical resistance between a high-voltage system and its ground (typically the chassis for EVs). It helps detect whether current is unintentionally “leaking” from the battery system to the grounded structure. A single insulation fault is usually tolerable, but if a second fault occurs within the system, it could create a short circuit. The BMS constantly measures insulation resistance and looks for signs of potential failure, such as moisture ingress, mechanical damage, or degradation of insulation materials.
In simulation, insulation degradation, moisture ingress, and fault thresholds can be parameterized to test how quickly the BMS detects failures and whether protection actions are triggered within safe response times.
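Isolation requirements are commonly expressed as a minimum resistance per volt of pack voltage, so the monitoring logic reduces to a scaled threshold check. The per-volt figure below is an illustrative assumption, not a normative value; the applicable safety standard defines the real limit.

```python
# Illustrative insulation-resistance check: the minimum allowed
# isolation is expressed in ohms per volt of pack voltage.  The
# 500 ohm/V threshold here is an illustrative assumption, not a
# normative value from any standard.

MIN_OHMS_PER_VOLT = 500.0

def insulation_ok(r_iso_ohm, pack_voltage_v):
    """True if measured isolation resistance meets the per-volt minimum."""
    return r_iso_ohm >= MIN_OHMS_PER_VOLT * pack_voltage_v

# A 400 V pack needs at least 200 kOhm of isolation under this rule.
print(insulation_ok(5_000_000.0, 400.0))   # healthy insulation
print(insulation_ok(150_000.0, 400.0))     # degraded: trigger protection
```

In a simulation study, `r_iso_ohm` would be driven by a parameterized degradation or moisture-ingress model to verify that protection triggers within the required response time.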
To further mitigate fault propagation, battery packs also include contactors. Contactors are electrically controlled switches used to connect or disconnect high-voltage parts of a system. They are robust enough to handle large currents, and the BMS controls when they open (disconnect) and close (connect). In battery packs, these contactors are housed within a Battery Disconnect Unit (BDU). The BDU serves as the power gateway between the battery and the rest of the electrical system. It typically includes the main contactors, a pre-charge circuit, and a melting fuse for irreversible protection, as shown in Figure 2.
During operation, the BMS uses the Battery Disconnect Unit (BDU) to safely sequence power delivery. It does this by closing the pre-charge contactor to prevent an inrush of current and then engaging the main contactors. In fault conditions, the BMS commands the contactors to open, allowing the battery pack to be immediately isolated to prevent further damage or risk to the user.
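The power-up sequence described above can be sketched as a small state machine: close the pre-charge path, wait for the DC-link voltage to approach pack voltage, then close the main contactor and open the pre-charge path. The 95 % close threshold and the sample values are illustrative assumptions.

```python
# Sketch of the BDU power-up sequence the BMS commands.  The close
# threshold (95 % of pack voltage) and timing are illustrative.

def precharge_sequence(pack_v, link_v_samples, close_ratio=0.95):
    """Return (step, action) events; link_v_samples is the measured
    DC-link voltage over time while the pre-charge path is closed."""
    events = [(0, "close pre-charge contactor")]
    for step, link_v in enumerate(link_v_samples, start=1):
        if link_v >= close_ratio * pack_v:
            events.append((step, "close main contactor"))
            events.append((step, "open pre-charge contactor"))
            return events
    events.append((len(link_v_samples), "FAULT: pre-charge timeout"))
    return events

# DC-link voltage rising through the pre-charge resistor toward 400 V:
samples = [120.0, 230.0, 300.0, 350.0, 385.0]
for step, action in precharge_sequence(400.0, samples):
    print(step, action)
```

The timeout branch matters as much as the happy path: a link voltage that never rises usually indicates a short on the DC bus, and the correct response is to abort rather than close the main contactors into a fault.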
Together, these fault detection and isolation mechanisms are quietly and constantly scanning signs of trouble before they become dangerous. By monitoring insulation, enforcing physical design rules, and controlling high-voltage pathways through the BDU, the BMS plays a critical role in ensuring the safety, reliability, and resilience of the entire electrified system.
Communication Architecture in BMS: CAN, Ethernet & Master-Slave Control
Proper and frequent communication between the sensors, controllers, and other modules is crucial for the BMS to confirm, coordinate, and fine-tune its responses across the system. This communication network acts like the system’s neural messaging system, allowing it to coordinate between sensing and action.
At the pack level, the BMS typically uses a Controller Area Network (CAN) bus or Ethernet as the communication medium. The CAN bus is flexible, supports many nodes (connected components), and offers good noise immunity, making it ideal for connecting the BMS with inverters, chargers, vehicle control units, and thermal systems.
In advanced or distributed systems, battery packs are often divided into multiple segments, each monitored by their own local BMS module (these are referred to as slave BMS units). Each slave BMS is responsible for collecting measurements from its local set of cells or modules. These slave units then send their data upstream to a central coordinator called the master BMS.
The master BMS serves as the supervisory controller. It aggregates all data from the slave modules, performs high-level decision-making (such as charge/discharge limits, state estimations, and safety overrides), and interfaces with the rest of the vehicle or system. It may also issue commands back to the slave modules, such as initiating balancing or isolating a specific module.
For distributed BMS architectures, simulation enables evaluation of communication delays, data synchronization, and message loss to ensure stable control behavior and avoid false fault triggers.
A well-designed communication architecture ensures that every signal, every message, and every command flows through the system reliably, allowing the battery to operate safely, efficiently, and in harmony with its environment.
The Role of Simulation in Battery Management System (BMS) Development
Just as our senses feed information to our brain for interpretation and decision-making, all the voltage, temperature, current, fault detection, and communication inputs in a battery pack ultimately report back to the BMS—the brain of the operation. The BMS doesn’t just collect data; it continuously evaluates the health and performance of the battery and makes real-time decisions to ensure safety, efficiency, and longevity.
From estimating internal states like State of Charge (SOC), State of Health (SOH), and State of Power (SOP), to making protection decisions and issuing control commands, the BMS is responsible for orchestrating every aspect of battery behavior. It must process noisy signals, respond to rapid changes, and adapt to variations across cells, modules, and operating conditions.
By coupling electrical, thermal, aging, and control models, simulation allows engineers to observe how estimation algorithms, protection logic, and power limits interact dynamically under real-world operating scenarios.
This is where virtualization comes in. Before a single line of BMS firmware is deployed or hardware is built, engineers rely on simulation tools like GT-SUITE to develop, validate, and optimize BMS algorithms. A virtual battery system acts as a digital shadow, mimicking the real battery’s electrical, thermal, and aging behavior across a wide range of use cases and conditions. It enables fast iteration, early integration, and safer deployment by catching potential issues before they appear in hardware.
For example, simulation allows engineers to study how rising cell temperatures influence internal resistance, which in turn impacts State of Power (SOP) limits and real-time current derating decisions.
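That chain from temperature to resistance to power limit can be sketched numerically. Internal resistance typically rises sharply as cells get cold (and with aging), so the deliverable power shrinks; the resistance model and limits below are made-up example numbers, not cell data.

```python
# Illustrative State-of-Power calculation: the discharge current limit
# is the current that pulls terminal voltage down to its minimum, so a
# higher internal resistance directly shrinks available power.
# The resistance model and limits are made-up example numbers.

def internal_resistance_ohm(temp_c, r_ref=0.010, growth=0.03):
    # Toy model: resistance grows as the cell cools below 25 C.
    return r_ref * (1.0 + growth * max(0.0, 25.0 - temp_c))

def discharge_power_limit_w(v_oc, temp_c, v_min=3.0):
    r = internal_resistance_ohm(temp_c)
    i_max = (v_oc - v_min) / r          # current that hits v_min
    return v_min * i_max                # deliverable power at v_min

warm = discharge_power_limit_w(v_oc=3.7, temp_c=25.0)
cold = discharge_power_limit_w(v_oc=3.7, temp_c=-10.0)
print(f"25 C limit: {warm:.0f} W, -10 C limit: {cold:.0f} W")
```

In a coupled simulation, the thermal model feeds this resistance term continuously, letting engineers verify that the derating logic reacts before the terminal voltage actually sags below its floor.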
BMS Architecture Summary & Key Takeaways
This blog explored the internal architecture and core functions of a Battery Management System (BMS), and highlighted how it ensures the safety, performance, and longevity of lithium-ion batteries. We discussed the role of key components such as voltage, temperature, and current sensors, fault detection mechanisms including insulation resistance monitoring and contactors within the Battery Disconnect Unit (BDU), and the communication structure between master and slave BMS units via CAN or Ethernet networks. We showed how the BMS acts as the central intelligence that monitors battery health, balances performance, prevents failures, and coordinates real-time decisions. Finally, we introduced the role of simulation in supporting BMS development, enabling virtual testing, control algorithm validation, and early validation of system-level interactions through digital models before hardware is built.
Next in the BMS Series
In the upcoming blogs, we’ll explore how simulation accelerates BMS development, including virtual validation, digital twins, and XiL (Model-, Software-, and Hardware-in-the-Loop) testing, to help engineers design safer, more efficient, and more intelligent battery systems. We’ll also showcase real industrial case studies demonstrating the impact of simulation-driven BMS development.
🎬 Watch our BMS short videos playlist on YouTube for quick visual explainers and engineering insights.
🔔 Follow our LinkedIn page for updates on the next chapters of this series and the latest trends in electrification.
Combining Physics and Machine Learning to Predict Battery Aging with Confidence
Battery technology is evolving rapidly to meet the growing demands of electric vehicles, large-scale energy storage systems, and portable electronics. A major challenge lies in reliably predicting long-term battery performance within practical development timelines. Because batteries degrade gradually during both use and storage, conventional testing methods take a long time to produce accurate lifetime estimates. These lengthy evaluation cycles slow down the process of validating new designs, assessing durability, and defining warranty limits. For battery engineers, this delay translates directly into longer development cycles, higher validation costs, and conservative design margins. This creates a need for faster and more predictive approaches to battery aging assessment.
Accurate prediction of battery aging is essential for effective design, testing, and system integration. Traditional experimental approaches are often time-consuming, while high-fidelity simulations, though precise, require significant computational resources. By combining physics-based tools like GT-AutoLion with machine learning (ML), virtual aging datasets can be generated to train machine learning metamodels. A metamodel represents a complex physical system model as a reduced mathematical form. These models enable engineers to predict battery state of health (SOH) much more rapidly. This blog explains how this workflow accelerates both calendar aging and cycle aging predictions, reducing or even eliminating the need for long-duration experiments.
Why Machine Learning for Battery Aging?
Classical physics-based battery models, such as the Pseudo-2D model, provide high reliability and precision but are often computationally expensive, especially when many scenarios need to be evaluated. Engineers are constantly looking for ways to accelerate these processes without sacrificing accuracy. Machine learning offers a promising solution: by creating a surrogate model, outputs can be predicted rapidly after training. ML models are particularly well-suited for aging applications, where results are sampled at intervals, such as after a few storage days or each cycle.
In many cases, engineers may not have extensive measurement data, especially during early development stages. This is where GT-AutoLion, a physics-based model that includes detailed aging mechanisms, comes into play. Once the model is calibrated against a limited set of measurements, it can generate large-scale virtual testing data across a wide range of operating conditions. This dataset becomes the foundation for training a reliable ML metamodel.
With the help of the ML Assistant tool in GT-SUITE, engineers can easily import data, whether from GT-Post, CSV, Excel, TXT, or MAT files, and train metamodels for fast and accurate aging prediction. These models are useful for studying various types of battery aging, such as calendar and cycle aging.
To illustrate how this physics-based machine learning workflow works in practice, let’s look at two representative aging scenarios:
Case 1: Predicting Calendar Aging Using the ML Assistant Tool Within GT-SUITE
Calendar aging refers to the loss of battery capacity that occurs while the battery is resting, without any charge or discharge cycles. Even when not in active use, chemical reactions inside the cell continue, slowly reducing its capacity. The rate of this degradation is strongly influenced by factors such as state of charge (SOC) and temperature, making these key parameters for the calendar aging predictive model.
Generating virtual calendar aging data
To build an ML metamodel, we first used GT-AutoLion to generate virtual storage data based on a Design of Experiments (DoE). The DoE varied parameters such as SOC and initial temperature. GT-AutoLion then simulated long-term storage for up to five years, recording SOH values at regular intervals (e.g., every 30 days). SOC and temperature serve as the input factors, while the SOH values are the outputs for training the ML model.
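As a rough illustration of this data-generation step, the sketch below builds a full-factorial DoE over SOC and storage temperature and tabulates SOH every 30 days. The square-root fade law and all of its coefficients are invented stand-ins for GT-AutoLion's physics-based output, used here only so the table has plausible values:

```python
import itertools
import numpy as np

# Hypothetical square-root calendar-fade law standing in for the
# physics-based GT-AutoLion output; coefficients are illustrative only.
def virtual_soh(soc, temp_c, days):
    k = 1e-3 * (1 + soc) * np.exp(0.04 * (temp_c - 25.0))  # stress factor
    return 100.0 - 100.0 * k * np.sqrt(days)               # SOH in percent

# Full-factorial DoE over SOC and storage temperature
soc_levels  = [0.2, 0.5, 0.8, 1.0]
temp_levels = [10.0, 25.0, 40.0, 55.0]        # degC
days_grid   = np.arange(0, 5 * 365 + 1, 30)   # sample SOH every 30 days

rows = []
for soc, temp in itertools.product(soc_levels, temp_levels):
    for d in days_grid:
        rows.append((soc, temp, d, virtual_soh(soc, temp, d)))

# One row per (SOC, temperature, day) combination; columns: SOC, T, day, SOH
dataset = np.array(rows)
print(dataset.shape)
```

The resulting table maps directly onto the training inputs (SOC, temperature, day) and output (SOH) described above.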
Training and testing the metamodel
The key challenge in predicting calendar aging is extrapolation: estimating degradation beyond the period covered by the training data. To determine the minimum fraction of storage days needed for effective training, we studied training subsets ranging from 20% to 60% of the storage days. The best results were obtained using 60% of the total storage days for training, with the remaining 40% reserved to evaluate the model’s ability to predict future aging. The results, illustrated in Figure 1, highlight the metamodel’s ability to:
- achieve strong accuracy within the trained region
- demonstrate excellent extrapolation capability beyond the training range
- maintain close agreement with the physics-based GT-AutoLion reference results

Figure 1. A comparison of calendar aging prediction results using the ML Assistant tool versus the physics-based model using Gamma Technologies’ GT-AutoLion
Figure 1 shows that the machine learning model closely matches the physics-based results and successfully preserves the nonlinear degradation behavior captured by the physics-based model.
In practice, this means engineers can confidently estimate five years of degradation even if the metamodel is trained on just the first two to three years of data. This dramatically reduces development time, enabling faster iteration and earlier design decisions.
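The train-on-60%, extrapolate-into-40% idea can be illustrated with a minimal stand-in metamodel. Here a least-squares fit in √(days) plays the role of the ML Assistant's trained model, and the fade curve is synthetic; in the actual workflow the SOH traces come from GT-AutoLion:

```python
import numpy as np

# Synthetic stand-in for one GT-AutoLion storage case (fixed SOC and
# temperature); the coefficient 0.09 is illustrative, not measured.
days = np.arange(0, 5 * 365 + 1, 30, dtype=float)
soh  = 100.0 - 0.09 * np.sqrt(days)

# Train on the first 60% of storage days, hold out the final 40%
split = int(0.6 * len(days))
X_train, y_train = days[:split], soh[:split]

# Minimal metamodel: least-squares fit of SOH = a + b*sqrt(day)
A = np.column_stack([np.ones_like(X_train), np.sqrt(X_train)])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Extrapolate into the held-out 40% and check the error
A_test = np.column_stack([np.ones_like(days[split:]), np.sqrt(days[split:])])
pred = A_test @ coef
max_err = np.max(np.abs(pred - soh[split:]))
print(f"max extrapolation error: {max_err:.2e} % SOH")
```

Because the fitted basis happens to match the synthetic fade law, the extrapolation here is near-perfect; real metamodels show small but nonzero errors, as in Figure 1.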
Case 2: Predicting cycle aging using the ML assistant tool within GT-SUITE
While calendar aging measures degradation at rest, cycle aging captures degradation from repeated charge–discharge cycles. This form of aging depends on several operating variables, including temperature, depth of discharge, and charge/discharge C-rates.
Virtual cycle aging simulations
Using GT-AutoLion, we simulated battery aging across a wide range of conditions. Each simulation followed a consistent cycling protocol designed to estimate capacity. The protocol began with a 10-minute rest period to allow the cell to reach equilibrium, followed by a constant-current (CC) discharge to deplete stored energy and a constant-current, constant-voltage (CCCV) charge to restore capacity. This cycle was repeated under varying conditions until the battery reached end-of-life (defined as 80% SOH). Inputs for the ML model include initial temperature, cycle number, charge C-rate, discharge C-rate, and depth of discharge, while the output is the scalar SOH.
Training and extrapolation results
To evaluate the metamodel’s extrapolation capability for cycle aging, we wanted to determine the minimum number of cycles required for effective training. We examined different numbers of cycles and found that using the first 300 cycles of each simulation for training provided the best results. The remaining cycles were reserved for testing the model’s ability to predict future degradation.

Figure 2. A comparison of cycle aging prediction results using the ML Assistant tool versus the physics-based model (GT-AutoLion)
Just like in the calendar aging case, the ML model performed remarkably well, as shown in Figure 2:
- It reproduced the GT-AutoLion degradation trends with high accuracy
- It reliably predicted late-stage aging data not included in the training set
- It generalized across different operating conditions
This means accurate lifetime cycle-aging predictions can be made using only a fraction of the total cycles, significantly reducing simulation and validation effort.
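A toy version of this train-on-300-cycles split, assuming an invented fade law and a simple least-squares surrogate in place of the ML Assistant's metamodel, shows how early-cycle data can still recover the end-of-life cycle count:

```python
import numpy as np

# Illustrative cycle-fade curve standing in for one GT-AutoLion cycling
# case; the coefficients are made up for demonstration only.
cycles = np.arange(1, 1501)
soh = 100.0 - 0.012 * cycles - 0.08 * np.sqrt(cycles)

# Train on the first 300 cycles only, as in the study above
n_train = 300
A = np.column_stack([np.ones(n_train), cycles[:n_train],
                     np.sqrt(cycles[:n_train])])
coef, *_ = np.linalg.lstsq(A, soh[:n_train], rcond=None)

# Predict forward and locate end-of-life (80% SOH) from the surrogate
A_all = np.column_stack([np.ones_like(cycles, dtype=float), cycles,
                         np.sqrt(cycles)])
pred = A_all @ coef
eol_pred = int(cycles[np.argmax(pred <= 80.0)])
eol_true = int(cycles[np.argmax(soh <= 80.0)])
print(f"predicted EoL cycle: {eol_pred}, reference EoL cycle: {eol_true}")
```

The surrogate's basis matches the synthetic law, so agreement is exact here; the point is the workflow shape, with far fewer simulated cycles than the full life test.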
Conclusion: Accelerating Battery Design by Integrating Machine Learning and Physics
Both case studies show an exciting trend in battery engineering: combining physics-based modeling with ML. GT-AutoLion creates detailed virtual data, and ML models turn that data into fast, easy-to-use tools that can predict beyond what they were trained on. This approach dramatically shortens development cycles, reduces reliance on lengthy experiments, and delivers accurate insights with minimal computational resources.
Learn More About ML-Accelerated Simulation
Discover how integrating physics-based models with ML metamodels can significantly reduce simulation time while preserving fidelity. Read our blog on Dynamic Machine Learning for Modeling and Simulation to explore real-world applications, and visit our Machine Learning and Optimization Solutions page for more related content. Join our LinkedIn community to stay updated on simulation-driven workflows, battery modeling insights, and upcoming technical content. You can also contact us to learn how Gamma Technologies can support your machine learning goals.
A Year of Engineering Insights: Our Top 7 Blogs You Shouldn’t Miss
From smarter thermal systems and next-generation batteries to digital twins, fuel cells, and advanced air mobility, we explored how simulation is reshaping engineering decisions across industries. If you’re working at the intersection of innovation, performance, and efficiency, these seven blogs capture the most impactful ideas we shared in 2025, each addressing real-world engineering challenges with a simulation-first approach to help you kick-start an innovative year ahead.
Here’s a quick tour of our most-read and most-relevant blogs from the year.
Simulation for the HVACR Industry: How to Leverage Machine Learning
This blog explores how machine learning metamodels can be integrated into system-level HVACR simulations to accelerate decision-making. Using an EV thermal system as a case study, it compares physics-based models with ML-based approaches, showing how engineers can achieve faster simulations while maintaining temperature prediction accuracy, especially valuable during early design and control strategy development.
Digital Twin Simulation: Engineering Smarter, Faster, and More Reliable Products
A practical, step-by-step introduction to digital twin development, this blog explains why digital twins are no longer optional. It breaks down how virtual replicas help teams predict behavior, reduce risk, and improve product reliability, while shortening development cycles across industries.
Perfecting PEM Fuel Cell Water Management Strategies
Water management remains one of the most critical challenges in PEM fuel cell design. This blog dives into stack- and system-level strategies, highlighting how simulation helps engineers balance dry-out and flooding, reduce dependence on expensive physical testing, and optimize performance through integrated component and control modeling.
Solid-State Batteries: Why Virtual Modeling Is the Only Way Forward
As solid-state batteries gain momentum, this blog explains what’s driving the shift away from conventional lithium-ion technology and why simulation is essential. It highlights the complexities of interfacial behavior, ion transport, and mechanical degradation, showing how virtual modeling enables safer, faster, and more informed development of next-generation batteries.
Simulating Series Hybrid Tilt-Rotor Aircraft in GT-SUITE
Focused on the growing Urban Air Mobility (UAM) sector, this blog introduces series hybrid tilt-rotor architectures and explains how they combine VTOL capability with efficient forward flight. It walks through the modeling workflow for VTOL powertrains, helping engineers evaluate performance, range, and energy management early in development.
Data Center Solutions: A Strategic Imperative for Competitive Advantage
With rising energy demands and reliability expectations, this blog explains why data centers have become mission-critical infrastructure. It outlines how system-level simulation supports better cooling strategies, energy efficiency, and resilience, the key factors in maintaining uptime and long-term competitiveness.
Introduction to Virtual Calibration: A Smarter Approach to Powertrain Development
This blog challenges traditional powertrain calibration approaches that rely heavily on physical testing. It shows how virtual calibration enables engineers to reduce cost, time, and risk by moving repetitive testing into simulation environments while still integrating physical tests where they add the most value.
🔍 Ready to Explore Deeper?
Each of these blogs offers a focused perspective on how simulation is transforming engineering workflows—across energy, mobility, thermal systems, and digital product development.
👉 Read the full blogs, share them with your team, and stay connected as we continue exploring what’s next in system-level simulation.
To stay updated, follow our LinkedIn channel for the latest posts, videos, and insights on engineering simulations.
If you’re interested in discussing how simulation can support your engineering challenges or would like more details, contact us.
Happy New Year and best wishes for an innovative year ahead!
Introduction to Virtual Calibration: A Smarter Approach to Powertrain Development
The Limitations of Conventional Calibration and Testing
Testing and calibrating powertrains is crucial for product development, yet it traditionally requires years of work and millions in investment. Over time, requirements have expanded considerably to address regulatory compliance (OBD/emissions), customer expectations (efficiency), and manufacturer standards (reliability). Meeting these demands involves extensive physical testing through in-house facilities, on-road vehicle testing, travel to extreme climate locations, and specialized test equipment.
Physical testing frequently encounters unexpected challenges—prototype components fail, and testing trips face delays, especially when a winter trip slips into spring. These issues make relying entirely on physical testing both inefficient and risky.
Many repetitive engineering processes, particularly in calibration and controls development, can be effectively performed in virtual simulation environments. This approach significantly reduces both time and expenses compared to conventional physical testing methods, while still allowing physical tests to be integrated where necessary.
Virtual Calibration in Powertrain Development
Virtual calibration shifts traditional calibration and controls development to simulation environments before physical testing. This approach completes these tasks faster and cheaper than conventional methods.
Automotive and commercial vehicle manufacturers are increasingly adopting virtual powertrain calibration. The strategy leverages simulation for initial work like baseline calibration and controls development, reserving costly and time-intensive physical testing for validation and refinement. This approach strategically allocates resources—using affordable simulation for most tasks while minimizing the use of expensive testing.
Key advantages of virtual calibration include:
- Lower initial investment compared to physical test equipment
- Automation capabilities requiring minimal supervision
- Comprehensive exploration of design possibilities at reduced cost and time
- Easy creation of any environmental condition with minimal additional expense
- Prevention of costly prototype hardware damage during calibration
- Accessibility for many OEMs and suppliers who already possess the necessary infrastructure
Virtually all OEMs and manufacturers already utilize powertrain simulation capabilities that could be readily adapted for virtual calibration workflows. Implementing this approach delivers significant engineering process advantages while optimizing how engineering resources are utilized.
The automotive industry faces accelerating development timelines alongside increasingly stringent regulations and rising customer demands—virtual calibration provides an effective response to these challenges. Quickly adapting to market changes is key to success. Rather than replacing physical testing, virtual calibration serves as a complementary methodology that enhances the overall engineering process, making it more streamlined and efficient and leading to a better, more reliable product.
Virtual Calibration Summary
Traditional powertrain calibration depends heavily on physical testing, which is expensive, time-consuming, and exposed to delays from prototype failures, scheduling issues, and extreme-climate testing logistics. With increasing regulatory pressure, tighter timelines, and growing customer expectations, this approach alone is no longer efficient. Virtual calibration enables much of the calibration and controls development work to occur in simulation before hardware is available, dramatically reducing development time, cost, and risk. Manufacturers are now using simulation for baseline and early-stage calibration tasks and reserving physical testing primarily for validation. This complementary approach improves resource efficiency, reduces prototype damage, expands testing possibilities, and accelerates market response. Virtual calibration is becoming essential for modern automotive engineering, and upcoming posts in this series will explore different types and implementation methods.
Start Your Virtual Calibration Journey
This blog is just the first in a series of blogs on virtual calibration that will be published in the coming weeks. Check back for future blogs that explain the different types of virtual calibration (open-loop and closed-loop). To stay updated, follow our LinkedIn channel for the latest posts, videos, and insights on virtual calibration.
If you are interested in discussing virtual calibration or would like more details, please reach out and contact us.
What is a Battery Management System (BMS)? The Brain Behind Lithium-Ion Batteries
The Human Analogy: How BMS Brings Batteries to Life
Imagine walking into your favorite bakery. Your eyes scan the variety of treats in the glassed casing. Your ears pick up the humming and whirring of the ovens. Your skin senses the warmth in the room. Your nose picks up the smells of freshly baked bread coming out of the oven. Your tongue might tell you if the coffee you got with your treat is too hot. All these inputs you get from your senses are sent to your brain, which processes them and decides how to respond.
Now imagine a lithium-ion battery (LIB) as a living system. To keep itself safe, functional, and long-lasting, it depends on a combination of hardware, like sensors and wiring (its nerves), and software that processes information and makes decisions (its brain). This intelligent system is known as the Battery Management System (BMS).
LIBs are used in a wide range of applications—from consumer electronics to electric vehicles (EVs). They are favored over other rechargeable chemistries, such as Nickel Cadmium or Lead-acid, due to their high energy density, long cycle life, and low self-discharge rates. However, to ensure safety and longevity, LIBs must operate within strict voltage and temperature limits, as shown in Figure 1.
This is where the BMS comes in. Like a central nervous system, the BMS combines hardware and software to collect information from its “sensory organs” (sensors) and make real-time decisions to ensure safety and optimize performance. The complexity of a BMS depends on the application. Devices like e-readers or smartphones may use simpler architectures, while large-scale electrified systems – such as marine propulsion platforms, electric vertical take-off and landing (eVTOL) aircraft, or electric vehicles (EVs) – require more advanced systems with additional sensors, advanced algorithms, and robust safety considerations.
The BMS Itself
Just as our senses feed information to our brain for interpretation and decision-making, all measurable inputs from a battery pack ultimately report back to the BMS—the brain of the operation. The BMS doesn’t just collect data; it continuously evaluates the health and performance of the battery and makes real-time decisions to ensure safety, efficiency, and longevity.
From estimating internal states like State of Charge (SOC), State of Health (SOH), and State of Power (SOP), to making protection decisions and issuing control commands, the BMS is responsible for orchestrating every aspect of battery behavior. It must process noisy signals, respond to sudden changes, and adapt to variations across cells, modules, and operating conditions—all while meeting tight constraints on size, cost, power consumption, and timing.
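As a conceptual illustration of one such state estimate, the sketch below implements plain coulomb-counting SOC, the simplest possible approach; production BMS software typically fuses this integral with voltage-based corrections (e.g., Kalman filtering), which are omitted here:

```python
from dataclasses import dataclass

@dataclass
class CoulombCounter:
    """Minimal SOC estimator by current integration (coulomb counting).

    A deliberately simplified sketch: no sensor noise handling, no
    voltage-based correction, no temperature compensation.
    """
    capacity_ah: float        # rated cell capacity
    soc: float = 1.0          # start fully charged

    def step(self, current_a: float, dt_s: float) -> float:
        # Positive current = discharge; convert charge moved to an SOC fraction
        self.soc -= current_a * dt_s / (self.capacity_ah * 3600.0)
        self.soc = min(max(self.soc, 0.0), 1.0)  # clamp to physical range
        return self.soc

bms = CoulombCounter(capacity_ah=5.0)
# Discharge a 5 Ah cell at 5 A (1C) for 30 minutes: SOC should drop by 0.5
for _ in range(1800):
    soc = bms.step(current_a=5.0, dt_s=1.0)
print(f"SOC after 30 min at 1C: {soc:.3f}")
```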
In the following blog, we will take a deeper dive into the underlying BMS architecture and all the different components that are involved.
What’s Next in Our BMS Journey
This blog marks the beginning of our educational series on Battery Management Systems (BMS). Throughout the series, we’ll explore what a BMS is, its architecture and applications, and how simulation supports its design, testing, and validation including XiL (Model-, Software-, and Hardware-in-the-Loop) approaches. We’ll also share real-world case studies demonstrating how BMS solutions are shaping electrified mobility and beyond.
🎬 Check out our BMS playlist on YouTube for quick insights and visual explainers.
To stay updated, follow our LinkedIn channel for the latest posts, videos, and insights on electrification and energy systems.
What Powers a Data Center? Understanding the Power Systems
In January 2023, a major Microsoft outage disrupted millions of users worldwide. People could not access Teams, Outlook, or other cloud services for hours. This event showed how quickly a small power interruption can ripple across the globe. Imagine if a cloud provider faces even a few minutes of downtime. Banking systems, hospitals, and AI-driven applications could all be affected.
Studies show that even a one-hour outage can cost large data centers over $500,000 in lost revenue. To prevent this, modern data centers rely on a robust power system that keeps systems running without interruption. Every part of this power system, from power generation and conversion to energy storage and emissions control, must be designed for high reliability and efficiency. Simulation helps engineers test, validate, and optimize these systems virtually before deployment, ensuring performance under every possible condition.
With tools like GT-SUITE, engineers can create and analyze complete power system models that combine electrical, mechanical, and thermal systems in one environment. This allows them to predict real-world behavior and make data-driven design improvements.
Power Distribution and Microgrid Systems
At the heart of every data center is its power distribution system. It connects the grid, renewable energy sources, and local generators into a unified network that ensures stable power delivery. Many facilities now use microgrids to combine multiple sources and provide backup during grid instability or peak demand.
GT-SUITE allows users to model the complete power distribution network. It can simulate how electricity flows through transformers, inverters, and converters, analyze load balancing, and predict system response to sudden failures. This helps design microgrids that maintain uninterrupted operation while improving efficiency and reducing costs.
Energy Storage and Battery Systems (UPS)
Energy storage is the safety net of a data center power system. Uninterruptible Power Supply (UPS) systems use high-capacity batteries that deliver instant power during outages until generators or alternate sources take over. These systems must be sized correctly, respond quickly, and maintain temperature and charge stability.
With GT-SUITE, engineers can simulate the full behavior of battery systems, including charge and discharge cycles, heat generation, and degradation over time. They can compare different battery chemistries, cooling methods, and control strategies to extend life and ensure instant response during critical moments.
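A back-of-envelope version of the sizing question looks like this; the depth-of-discharge and inverter-efficiency figures are assumed values, and a real GT-SUITE study would resolve the thermal, rate-dependent, and degradation effects this formula ignores:

```python
def ups_runtime_minutes(pack_kwh, load_kw, depth_of_discharge=0.8,
                        inverter_eff=0.95):
    """Estimate UPS ride-through time for a given IT load.

    Illustrative back-of-envelope sizing only; real designs must also
    account for battery aging, temperature derating, and discharge-rate
    effects on usable capacity.
    """
    usable_kwh = pack_kwh * depth_of_discharge * inverter_eff
    return 60.0 * usable_kwh / load_kw

# How long can a 500 kWh pack carry a 1 MW load while generators start?
t = ups_runtime_minutes(pack_kwh=500.0, load_kw=1000.0)
print(f"ride-through: {t:.1f} min")
```

Generators typically need well under a minute to come online, so the margin here is large, but the simple formula says nothing about whether the pack can actually deliver 1 MW without overheating; that is what the full simulation checks.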
Electrolysis and Hydrogen for Fuel Cell Applications
As sustainability goals become more important, data centers are exploring hydrogen-based power as a clean alternative to diesel. Electrolysis systems produce hydrogen, which can then power fuel cells to generate electricity with minimal emissions. However, integrating these systems requires careful balancing of energy flow, storage, and safety measures.
GT-SUITE enables engineers to model both electrolysis and fuel cell systems within the overall power system. They can study hydrogen production rates, fuel cell efficiency, and interactions with existing electrical systems, helping design reliable hybrid setups for future-ready data centers.
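For orientation, hydrogen production can be bounded with Faraday's law; the stack current, cell count, and faradaic efficiency below are illustrative assumptions, not figures from any specific electrolyzer:

```python
# Hydrogen production rate of an electrolyzer stack from Faraday's law.
F = 96485.0          # C/mol, Faraday constant
M_H2 = 2.016e-3      # kg/mol, molar mass of H2

def h2_rate_kg_per_h(stack_current_a, n_cells, faradaic_eff=0.99):
    # Two electrons are transferred per H2 molecule produced
    mol_per_s = faradaic_eff * n_cells * stack_current_a / (2.0 * F)
    return mol_per_s * M_H2 * 3600.0

# Assumed 100-cell stack at 500 A
rate = h2_rate_kg_per_h(stack_current_a=500.0, n_cells=100)
print(f"{rate:.3f} kg H2 / h")
```

This gives the ideal electrochemical limit; system-level simulation adds the compression, storage, and thermal balance-of-plant losses around it.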
Engine and Emissions Modeling for Data Centers
Backup engines are the backbone of data center reliability, ensuring uninterrupted operation during power loss. Their performance directly affects fuel efficiency, emissions, energy consumption, and noise, which are key parameters for sustainability and compliance.
With GT-SUITE, engineers can virtually explore fuel options, optimize control strategies, and evaluate different system configurations before implementation, saving costs and minimizing risk. In addition, validated GT-SUITE models from suppliers can be directly integrated into data center simulations, enabling faster, more accurate system-level studies that reflect real-world engine performance. The platform also enables analysis of exhaust behavior and emission control performance, helping design cleaner, more efficient backup systems that meet environmental standards without compromising reliability.
Building the Digital Twin of the Power System
By integrating all these systems into one simulation model, engineers can create a digital twin of the data center power system. This virtual environment mirrors real operations, allowing teams to test different what-if scenarios, optimize control strategies, and predict performance over time.
With GT-Play, results can be visualized interactively, helping engineering and operations teams collaborate better and make faster decisions. This approach leads to higher uptime, improved energy efficiency, and lower operational risk.
Powering the Digital World with Confidence
Data centers are the engines of our connected world. Keeping them running efficiently and reliably requires more than good hardware. It needs intelligent design backed by simulation. With GT-SUITE, engineers can model, test, and optimize the complete data center power system before it is built, ensuring it is ready for any challenge the digital future brings.
Curious how GT-SUITE can elevate your data operations? Contact us and let’s discuss how to build a future-ready architecture that aligns with your long-term growth and investment objectives. You can also explore our latest blogs covering HVACR system simulation, advanced battery modeling, and fuel cell technology.
Reference: Microsoft cloud outage hits users around the world
Simulating Series Hybrid Tilt-Rotor Aircraft in GT-SUITE
The rise in demand for electric and hybrid-electric aircraft, particularly in the Urban Air Mobility (UAM) sector, has fueled innovation in aircraft powertrain design. One promising configuration is the series hybrid tilt-rotor system, which combines the power and efficiency of hybrid-electric propulsion with the versatile flight dynamics of tilt-rotor technology. In this blog, we’ll dive into the basics of series hybrid tilt-rotor systems, why they’re gaining popularity, and how GT-SUITE, a multi-physics simulation platform from Gamma Technologies, can simulate and optimize them early in the design and development process.
A tilt-rotor aircraft features rotors that can transition between vertical and horizontal orientations, enabling both vertical takeoff and landing (VTOL) capabilities and efficient forward flight. By integrating a series hybrid system (where an internal combustion engine (ICE) drives a generator to supply electrical power to motors that turn the rotors) we can achieve the best of both worlds: the extended range of traditional fuel-powered engines and the flexibility and control of electric propulsion.
Components of a Series Hybrid Tilt-Rotor System:
- Internal Combustion Engine (ICE): Drives a generator to produce electrical energy, offering high energy density.
- Generator: Converts mechanical energy from the ICE into electrical energy.
- Battery Pack: Stores energy to supplement the generator output.
- Electric Motors: Drive the rotors and facilitate the tilt-rotor’s unique ability to transition between vertical and horizontal flight.
- Flight Control System: Generates control outputs based on the deviation between the target trajectory and the actual flight state.
System Workflow of a Series Hybrid Tilt-Rotor in GT-SUITE
The development and simulation of a series hybrid tilt-rotor aircraft involves tightly integrated subsystems working under closed-loop control to ensure stable and efficient operation throughout the flight envelope. The system architecture and workflow, as shown in the figure, can be described in the following sequence:
- Target Mission Definition
The simulation begins with defining the flight mission profile, which includes:
- Target altitude
- Target velocity
- Desired rate of climb
- Flight states over time
These mission parameters serve as reference inputs for the flight controller to guide the aircraft through various phases like hover, transition, cruise, and descent.
- Flight Controller
The flight controller continuously compares the current aircraft state with the mission-defined targets. Based on the error between actual and target parameters, it generates control commands such as:
- Elevator deflection
- Nacelle tilt angle (specific to tilt-rotors)
- Throttle setting
- Propeller blade pitch
These outputs ensure the aircraft maintains its trajectory and stability across dynamic flight conditions.
- Aircraft Body and Motion Calculation
The aircraft body module receives the control inputs and evaluates the dynamic response. This involves:
- Force calculation: Derived from aerodynamics and thrust contributions (including vertical lift in VTOL modes and forward thrust in cruise)
- Equations of Motion (EoM): Motion quantities like velocity, angular rates, and position are updated based on net external forces and moments
This module represents the 3DOF rigid body physics of the aircraft.
- Electric Propulsion Subsystem
In a series hybrid configuration:
- The electric motor receives commands (e.g., torque or speed setpoints) from the flight controller.
- The motor drives the propeller, generating thrust needed for vertical lift or forward motion.
- The propeller model converts shaft power into aerodynamic thrust using blade element momentum or similar methods.
This closed-loop feedback ensures the thrust output aligns with what is needed for stable flight.
- System Integration Loop
All subsystems interact in a closed-loop fashion:
- Mission target → Controller → Actuator/motor response → Aircraft body dynamics → Updated flight state.
- The updated state feeds back to the controller, ensuring continuous correction using PID controllers and mission adherence.
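The continuous-correction step of this loop can be sketched as a textbook PID controller acting on a toy one-degree-of-freedom vertical plant. The gains, aircraft mass, and plant model below are illustrative and not tuned against any real airframe; in the GT-SUITE model the controller commands real actuators (nacelle tilt, throttle, blade pitch) rather than acceleration directly:

```python
class PID:
    """Textbook PID controller; the gains used below are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, target, actual, dt):
        err = target - actual
        self.integral += err * dt
        # No derivative kick on the first sample
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy 1-DOF vertical plant: PID commands excess acceleration over gravity,
# closed loop at 100 Hz, target altitude 50 m
pid = PID(kp=8.0, ki=2.0, kd=6.0)
alt, vel, mass, g, dt = 0.0, 0.0, 500.0, 9.81, 0.01
for _ in range(6000):                       # 60 s of simulated climb-and-hold
    thrust = mass * g + mass * pid.update(target=50.0, actual=alt, dt=dt)
    vel += (thrust / mass - g) * dt         # explicit-Euler integration
    alt += vel * dt
print(f"altitude after 60 s: {alt:.2f} m")
```

The same pattern, error in, actuator command out, state fed back, repeats for every controlled quantity in the mission.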
Simulating a Series Hybrid Tilt-Rotor Model in GT-SUITE
Let’s look at a system-level example model of a series hybrid tilt-rotor aircraft developed in GT-SUITE. This model integrates electric propulsion components, 3DOF flight dynamics, nacelle actuation, and energy management systems into a unified simulation environment.
The tilt-rotor is modeled to perform a complete VTOL mission, from vertical takeoff through cruise and back to landing, while enabling detailed analysis of powertrain behavior and flight control responses under varying operational modes.
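For a feel of the hover power such a mission demands, classical actuator-disk (momentum) theory gives an order-of-magnitude estimate. The aircraft mass, rotor radius, and figure of merit below are assumed values, and this is far simpler than the blade-element propeller models used in the actual simulation:

```python
import math

# Actuator-disk (momentum theory) hover estimate for one rotor.
def hover_power_w(thrust_n, rotor_radius_m, rho=1.225, figure_of_merit=0.7):
    disk_area = math.pi * rotor_radius_m ** 2
    # Ideal induced power: T * sqrt(T / (2 * rho * A))
    ideal_power = thrust_n * math.sqrt(thrust_n / (2.0 * rho * disk_area))
    return ideal_power / figure_of_merit   # account for real-rotor losses

# Assumed 2000 kg aircraft hovering on 4 rotors -> thrust per rotor
thrust = 2000.0 * 9.81 / 4
p = hover_power_w(thrust, rotor_radius_m=1.5)
print(f"hover power per rotor: {p/1000:.1f} kW")
```

Estimates like this explain why hover and transition dominate the energy budget and why hybrid assist is attractive during those phases.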
To explore the impact of different electrification strategies on flight performance and energy consumption, two distinct simulation cases were studied using a representative mission profile:
Flight Mission Profile
The simulated mission captures a complete VTOL flight cycle, including the following phases:
• Vertical Takeoff
• Transition to Climb
• Cruise Flight
• Descent
• Hover and Landing
A flight control system governs the nacelle angle throughout the mission. The nacelle starts at 90° (vertical) for takeoff and gradually transitions toward the horizontal (fixed-wing) position during the climb phase. This fixed-wing configuration is maintained during cruise. The nacelle then transitions back to vertical for the descent and hover/landing phases.
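The described schedule can be written down as a simple piecewise function. The linear ramps and phase names here are simplifying assumptions; in the model itself the flight control system commands the nacelle angle continuously:

```python
def nacelle_angle_deg(phase: str, phase_progress: float) -> float:
    """Nacelle tilt schedule per the mission description: 90 deg (vertical)
    for takeoff, ramping to 0 deg (fixed-wing) during climb, held at 0 deg
    in cruise, and ramping back to vertical for descent and hover/landing.
    Linear ramps and these phase names are simplifying assumptions."""
    p = min(max(phase_progress, 0.0), 1.0)   # fraction completed of the phase
    if phase == "takeoff":
        return 90.0
    if phase == "climb":                     # transition to fixed-wing
        return 90.0 * (1.0 - p)
    if phase == "cruise":
        return 0.0
    if phase == "descent":                   # transition back to vertical
        return 90.0 * p
    if phase in ("hover", "landing"):
        return 90.0
    raise ValueError(f"unknown phase: {phase}")

print(nacelle_angle_deg("climb", 0.5))   # mid-transition angle
```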
Case 1: Series Hybrid Mode
• Power Configuration: The electric motor is supported by a 64 kW hybrid assist during the climb/cruise and descent/hover phases; this assist level can be modified in the ECU Generator controller.
This configuration demonstrates how hybrid support can enhance performance and extend operational endurance during peak power demands.
Case 2: Pure Electric Mode
• Power Configuration: Fully powered by a battery-electric system, with no engine assistance.
This case showcases the aircraft’s behavior and energy consumption under pure electric propulsion for the full mission cycle.
Key Metrics and Results in Hybrid Tilt-Rotor Simulation
The simulation results include:
• Battery State of Charge (SOC): Tracks energy consumption and efficiency across flight phases.
• Battery Power Demand: Highlights real-time power draw during different maneuvers.
• Motor Power: Reflects electric propulsion load throughout the mission.
• Flight States: Includes velocity, altitude, and nacelle angle transitions to correlate system behavior with flight dynamics.
These two simulation cases provide insight into how powertrain configuration and flight phase control impact performance, range, and energy usage for a hybrid tilt-rotor aircraft. The results lay a foundation for further optimization of energy management strategies in hybrid-electric rotorcraft systems.
Learn More about Our Aerospace Simulation Capabilities
To learn more about how GT-SUITE can help you design, simulate, and optimize advanced aerospace systems, visit our Aerospace Industry Page for detailed insights, case studies, and technical resources. For any specific questions, project inquiries, or personalized guidance, don’t hesitate to contact our team of aerospace simulation experts.
🏢 Data Center Solutions: A Strategic Imperative for Competitive Advantage
In an era defined by rapid advances in AI, machine learning, and large-scale GPU-driven workloads, data centers are no longer passive utilities—they are strategic assets. Data center optimization is core to achieving sustainable competitive advantage, operational excellence, and environmental stewardship.
1. Market Forces Reshaping the Industry
The data center sector is currently undergoing a major transformation driven by surging demand for GPU compute power. AI workloads—from generative models to real-time analytics—are pushing boundaries on speed, density, and efficiency. To lead rather than follow, companies must anticipate these shifts now.
2. Operational Efficiency & Cost Leadership
“Power Usage Effectiveness” (PUE) and “Water Usage Effectiveness” (WUE) have become critical performance and cost metrics. Optimizing resource utilization yields a two-fold benefit: reducing operating expenses and aligning with environmental mandates. Along with cooling, backup power (uninterruptible power supply (UPS) systems such as battery energy storage, generators, fuel cells, or hybrid systems) and power conversion losses also contribute to overhead energy use, further impacting PUE. Strategic planning and de-risked capital expenditure are essential to keep data centers equipped for fast-paced transformation. A virtual systems simulation approach lets companies right-size components and fuel types to meet regulatory requirements for sustainability initiatives, and optimize modern power architectures and emissions control systems for fuel savings, low emissions, energy consumption, and noise.
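As a minimal illustration of these two metrics: PUE is total facility energy divided by IT equipment energy (ideal value 1.0), and WUE is site water use in liters per kWh of IT energy. The figures below are illustrative, not benchmarks:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters, it_equipment_kwh):
    """Water Usage Effectiveness: site water use (L) per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

# Example: 10 GWh of IT load with 3 GWh of cooling/UPS/conversion overhead,
# and 18 ML of water consumed over the same period (illustrative numbers).
print(pue(13_000_000, 10_000_000))   # 1.3
print(wue(18_000_000, 10_000_000))   # 1.8 L/kWh
```

Every kWh of overhead trimmed from cooling, UPS, or conversion losses moves PUE toward 1.0, which is why those subsystems dominate data center efficiency programs.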
Gamma Technologies’ GT-SUITE can holistically simulate cooling systems, power distribution, and backup infrastructure—contributing to reduced PUE, lower WUE, and cost savings through engineering precision.
3. Resilience through Microgrid & Power Management
Downtime in a data center can cost hundreds of thousands of dollars per minute. Integrating microgrids enhances reliability, but these systems carry complexity and cost. Simulating microgrid controllers and their diverse subsystems working together spans the thermal, fluid, electrical, and other domains. GT-SUITE helps determine optimal configurations, stress-test failure modes, prevent delays in time-to-market, and forecast ROI through techno-economic feasibility assessments and energy cost optimization. From a business perspective, you’re investing in risk mitigation and continuity of service, a hallmark of strategic advantage.
4. Thermal Strategy as a Competitive Advantage
High-density GPU systems are driving increased complexity. Rising power densities require new thermal management solutions both at the system level (HVAC) and within individual racks and servers (coolant distribution units, or CDUs). GT-SUITE’s industry-leading thermal management solution supports the cooling technologies of today and tomorrow, including liquid/refrigerant direct-to-chip, jet spray, immersion, and air-cooled technologies, ensuring peak hardware performance, reduced energy burn, and enhanced asset longevity. Its Multi-Scale, Real-Time Thermal Management capabilities enable analyses across all levels, from chip to board, rack, room, and the entire building, supporting holistic thermal management strategy development. By optimizing cooling efficiency, leading companies can convert cooling cost savings into competitive leverage.
5. Digital Twins & Predictive Operations
Digital transformation is no longer optional. Digital twins offer real-time system visualization, predictive maintenance, advanced risk management and data-driven decision-making. Using GT-SUITE, engineers can build real-time digital twins of data center thermal, energy and emissions management systems to simulate behavior under real-world conditions, enable virtual sensors, anticipate failures and enable predictive maintenance, and optimize performance. For first movers, this means moving from reactive to proactive operations—reducing unplanned outages, maximizing asset utilization, and optimizing CapEx and OpEx across the lifecycle.
6. Data-Driven Design, Experimentation & Scalability
Quantitative strategy rules the day. GT-SUITE’s integration of machine learning, parametric design of experiments (DOE), and hybrid physics-based and machine learning/data-driven optimization helps firms iterate rapidly, stress-test edge cases, and roll out scalable solutions. This represents nimble innovation backed by engineering confidence.
7. Strategic Implications & Competitive Takeaways
Future-Ready Data Center Simulation
For today’s data-intensive enterprises, data centers are not just racks of hardware—they are strategic nerve centers. Embracing comprehensive, holistic, system simulation-based solutions like Gamma Technologies’ GT-SUITE equips you to outperform competitors in performance, resilience, and sustainability. By modeling the future today, you secure both operational efficiency and strategic leadership—a dual advantage only savvy leaders fully appreciate.
Explore our latest blogs covering HVACR system simulation, advanced battery modeling, and fuel cell technology. Curious how GT-SUITE can elevate your data operations? Contact us and let’s discuss how to build a future-ready architecture that aligns with your long-term growth and investment objectives.
Solid-State Batteries: Why Virtual Modeling and Simulation are the Only Way Forward
Solid-State vs Lithium-Ion Batteries | What’s Driving the Shift
The push for safer, longer-lasting, and higher-energy batteries is accelerating change across the energy storage industry. Solid-state batteries (SSBs) are gaining attention as a promising solution to meet these growing demands. Unlike conventional lithium-ion batteries (LIBs), which use flammable liquid electrolytes, SSBs rely on solid materials, such as polymers, sulfides, or oxides, as both the electrolyte and separator. This shift offers the potential for improved safety, higher energy density, and better thermal stability, especially when paired with lithium metal anodes. However, these advantages come with challenges, such as complex interfacial behavior, slower ion transport, and mechanical degradation, which make development and scale-up more demanding.
To illustrate the architectural differences between the two technologies, Figure 1 shows a schematic comparison of conventional lithium-ion and solid-state batteries.
Alongside the schematic, Table 1 presents a side-by-side comparison of key characteristics of traditional lithium-ion and solid-state batteries, highlighting the material and performance differences.
This fundamental shift in battery architecture creates new opportunities but also adds complexity, especially in material selection, interface engineering, and performance optimization. Exploring every possible configuration through lab testing alone is time-consuming, expensive, and often limited by material availability and manufacturing constraints.
To overcome these challenges and speed up innovation, battery developers are increasingly using virtual modeling and simulation tools. These tools provide a more efficient and scalable way to design and evaluate solid-state batteries.
From Concept to Optimization: The Role of Virtual Modeling and Simulation in Solid-State Battery Development
Virtual modeling and simulation provide valuable insights throughout the SSB development cycle:
- Move Beyond Trial-and-Error: Simulations predict cell behavior early, reducing reliance on costly and slow physical experiments. This accelerates material selection and design decisions from the start.
- Accelerate Material Discovery: High-throughput screening evaluates thousands of potential solid electrolytes based on predicted properties, enabling focused material design instead of random trial and error.
- Evaluate Electrolyte Behavior: Analyze electrolytes under various temperatures and loads to identify the best-performing candidates.
- Optimize Electrode and Cell Architecture: Adjust electrode thickness, porosity, and layer structure to maximize energy density and mechanical stability. Simulate mechanical constraints like applied pressure during operation.
- Gain Insight into Internal States: Modeling reveals internal variables, such as lithium-ion concentration and potential distribution, that are difficult to measure experimentally.
By enabling rapid design exploration and early-stage screening, virtual modeling shortens development timelines and deepens understanding of complex battery behaviors that physical testing alone cannot easily capture.
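To make the electrode-architecture trade-off above concrete, a common first-order screening estimate relates areal capacity to thickness, porosity, and active material loading. The formula is a standard back-of-envelope check, and the numbers below are illustrative, not taken from any GT-AutoLion model:

```python
def areal_capacity_mah_cm2(thickness_um, porosity, active_frac,
                           density_g_cm3, specific_cap_mah_g):
    """First-order areal capacity estimate for an electrode coating.

    Areal capacity = solid active-material loading (g/cm^2) x specific
    capacity (mAh/g). Illustrative screening formula only.
    """
    thickness_cm = thickness_um * 1e-4
    loading = thickness_cm * (1.0 - porosity) * active_frac * density_g_cm3
    return loading * specific_cap_mah_g

# 70 um coating, 30% porosity, 95% active fraction, 5.0 g/cm^3, 150 mAh/g
print(areal_capacity_mah_cm2(70, 0.30, 0.95, 5.0, 150))
```

Thicker, denser electrodes raise energy density but stress ion transport and mechanics, exactly the tension solid-state designs must balance against stability.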
Solid-State Battery Modeling in GT-AutoLion
At Gamma Technologies, we’ve expanded GT-AutoLion to support solid-state batteries, enabling battery designers to simulate from material properties to full-cell performance.
Key Capabilities of GT-AutoLion for Solid-State Battery Modeling
- Electrolyte Flexibility: Supports polymer, gel, sulfide, oxide, and hybrid electrolytes, including mixtures of polymers with inorganic components.
- Advanced Ion Transport Simulation: Models both single-ion and dual-ion conductor mechanisms for accurate ion transport.
- Electrochemical-Mechanical Interaction: Captures stress buildup and pressure effects during cell assembly, affecting performance.
- Custom Cell Architecture: Allows assigning different electrolytes to separator, catholyte, and anolyte regions; supports bulk, hybrid solid-state batteries, and layered separators essential for lithium-metal interfaces.
- Reaction Surface Area Modeling: Considers reduced active surface area due to imperfect solid-solid contact.
- Ionic Diffusivity Estimation: Uses Nernst-Einstein relation to estimate diffusivity when experimental data is lacking.
- Extensive Electrolyte Database: Includes ~24 electrolyte types from literature and experiments (sulfide, oxide, polymer, hybrid).
- Experimentally Validated Model: The model is calibrated using experimental data from a 2 Ah pouch cell with an LCO/graphite/polymer solid electrolyte configuration (see Figure 2), and it shows good agreement with the measured results.
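The Nernst-Einstein estimate mentioned in the capability list above follows the standard dilute-solution form, D = σRT/(z²F²c). A minimal sketch; GT-AutoLion's exact implementation and any correction factors are not reproduced here:

```python
R = 8.314          # gas constant, J/(mol*K)
F = 96485.0        # Faraday constant, C/mol

def nernst_einstein_diffusivity(conductivity_s_m, conc_mol_m3, temp_k, z=1):
    """Estimate ionic diffusivity (m^2/s) from ionic conductivity.

    Standard dilute-solution Nernst-Einstein form: D = sigma*R*T / (z^2 F^2 c).
    """
    return conductivity_s_m * R * temp_k / (z**2 * F**2 * conc_mol_m3)

# Example: electrolyte at ~1 mS/cm (0.1 S/m), 1 M Li+ (1000 mol/m^3), 298 K
print(nernst_einstein_diffusivity(0.1, 1000.0, 298.0))  # ~2.7e-11 m^2/s
```

The relation is most useful when conductivity has been measured but tracer diffusivity has not, which is the data gap the capability above addresses.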
The Path Forward for Solid-State Batteries
Solid-state batteries hold great promise but introduce complexities that physical testing alone cannot fully address. As the demand for safer, higher-performance batteries grows, modeling and simulation become essential development tools.
Gamma Technologies’ simulation platform provides deep insights into SSB design and performance, from material selection to full-cell behavior. Virtual modeling allows you to safely explore “what-if” scenarios, reduce scale-up risks, and make faster, more informed decisions.
Whether you’re actively developing solid-state batteries or exploring their potential, integrating advanced modeling into your workflow can give you the competitive edge needed to succeed.
Read our Battery Simulation Solutions page to learn more about our battery modeling capabilities. You can also explore our blog How Multi-scale Models Are Enhancing Battery Performance and Design to learn how atomic-scale models can be integrated into continuum models for deeper insights. Or, click here to browse through more of our battery engineering blogs. Finally, feel free to contact us to see how Gamma Technologies can best support your needs.
Perfecting PEM Fuel Cell Water Management Strategies with GT-SUITE
The Rise of PEM Fuel Cell Technology in Mobility
The origin of fuel cells dates back to the 19th century, but interest in the technology has surged over the past decade, especially within the mobility industry. This can be attributed to their ability to generate power without harmful emissions, similar to batteries, while also maintaining large energy storage capacity via fuel tanks—like traditional gasoline and diesel engines. The term “fuel cell” in the mobility industry often implies low-temperature proton exchange membrane (PEM) fuel cells. These devices electrochemically react hydrogen and oxygen to produce water and generate electrical power. However, the role of water in a fuel cell system goes far beyond a simple exhaust product.
The Need for Water Management in PEM Fuel Cells
One of the most critical components to the effectiveness of a PEM fuel cell is the semi-permeable membrane which allows H+ ions (i.e. protons) to cross from the anode (hydrogen) side to the cathode (air) side but forces electrons to re-route through an external circuit. Of course, this does not happen magically. Water molecules facilitate the transport of protons across sulfonic acid groups within the membrane. Thus, sufficient membrane hydration is vital to the performance of PEM fuel cells. A dried-out membrane will result in less efficient operation and can also lead to degradation or eventual failure of the device.
You might think, “it’s a good thing that the reaction produces water!” While that’s true, it comes with a downside. Hydrogen and oxygen must diffuse through porous media to travel from the anode and cathode flow channels to the catalyst layer, where they react. If too much water accumulates in the porous layers, it can condense and block the reactants from reaching the catalyst layer, a condition known as flooding. Although water is produced on the cathode side, it also crosses over through the membrane to the anode side. Therefore, both sides of the cell are at risk of flooding without proper cell design and water management. Flooding, too, can reduce efficiency and lead to degradation.
Operation in subzero temperatures poses an additional challenge to fuel cell manufacturers and system engineers. Since water is the product of the reaction, it is susceptible to freezing inside the cell. Like flooding, this can prevent reactants from reaching the catalyst layer; as the water freezes and melts, it can even separate layers from each other, a failure mode known as delamination. Proper “freeze start” operation relies on good shutdown strategies to remove water from the cell and careful startup strategies to heat the cell without generating too much ice too quickly.
Addressing Water Challenges in PEM Fuel Cells with GT-SUITE
This may all seem very overwhelming, but rest assured, GT-SUITE equips engineers with the simulation tools needed to tackle these challenges effectively.
The GT-SUITE PEM fuel cell stack model can capture all these critical physical mechanisms: production of water, crossover through the membrane, and phase change within the porous media and channels. Moreover, the physical models are compatible with a pseudo 2D-1D hybrid paradigm, which allows users to discretize the cell flow paths in a 2D spatial configuration via a handy widget. This enables engineers to identify and prevent local areas of dry-out or flooding within the cell without needing 3D CAD or computationally expensive CFD.
This computational efficiency is key: it enables long, transient simulations, which are necessary to capture water transport mechanisms that occur on the order of seconds and minutes. These mechanisms (crossover, phase change, temperature change) all have transient options within the GT-SUITE model, so engineers can study the water distribution in the stack across long drive cycles for real-world predictions.

Figure 3: Comparison of steady and transient model response of membrane hydration due to load change
And don’t forget about freezing! Freeze start is one of the most expensive physical tests to run. Not only does preparing a chamber for freeze start conditions take up to 12 hours, but the formation of ice during unsuccessful attempts can break expensive stack prototypes. The ice model within GT-SUITE does not break any devices and can be simulated repeatedly without delay. It can even resolve ice layers only a few microns thick, which can be devastating to the device, all while maintaining real-time simulation speed!
If you are a system-level engineer who doesn’t design cells or stacks, you are not off the hook so easily. Proper water management within the fuel cell depends heavily on good balance-of-plant design and control. Anode flow paths are typically closed-loop circuits that recycle unused hydrogen, which carries along water and nitrogen that crossed over from the cathode side. Proper stoichiometry control to achieve good humidification depends on hydrogen blower operation, ejector pump performance, and purging strategy.
On the cathode side, which is typically open-loop, water is often recirculated by means of a humidifier which is a passive device that allows water to diffuse from the wet exhaust stream to the dry intake stream. This can be controlled via bypass circuits to prevent flooding. Additionally, cathode humidity can be indirectly affected by pressure, temperature, and flow rate, thus providing engineers with additional degrees of control via compressor and cooling system operation.
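The indirect humidity levers mentioned above follow from basic psychrometrics: relative humidity is the ratio of water vapor partial pressure to its saturation pressure, which rises steeply with temperature. A minimal sketch using the Magnus approximation for saturation pressure (an assumption for illustration; GT-SUITE uses its own internal property methods):

```python
import math

def p_sat_kpa(temp_c):
    """Saturation pressure of water (kPa), Magnus approximation (~0-100 C)."""
    return 0.61094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def relative_humidity(p_h2o_kpa, temp_c):
    """RH = water vapor partial pressure / saturation pressure at temp_c."""
    return p_h2o_kpa / p_sat_kpa(temp_c)

# Same 2.0 kPa of water vapor: hotter stack -> lower RH -> dry-out risk
print(relative_humidity(2.0, 40.0))
print(relative_humidity(2.0, 60.0))
```

Raising stack temperature at fixed vapor content lowers RH (dry-out risk), while higher operating pressure raises the vapor partial pressure and RH (flooding risk), which is why compressor and cooling system operation give engineers those extra degrees of control.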
Once again, the task may sound complex and daunting, but all the necessary physics are built into the GT-SUITE simulation environment. The template library provides building blocks for all relevant components to allow for proper design and selection. Model-based and general PID controls are supported, along with co-simulation with other tools such as Simulink for controls optimization. And critically, two-phase flow is supported not only in the fuel cell stack, but throughout the flow circuits for accurate tracking of water.
Conclusion
At stack and system levels, from components to controls, proper water management is the responsibility of all engineers involved. Effective water management is vital to ensuring the efficiency, durability, and performance of PEM fuel cells. Complex interdependency, expensive physical testing, and the fine line between dry-out and flooding make this a challenging task. To learn more about how GT-SUITE can help you overcome these challenges, please visit our webpage on the topic: Fuel Cell Simulation Solutions – Gamma Technologies, or contact us!
Everything You Wanted to Know About Sodium-Ion Batteries
From Lithium-Ion to Sodium-Ion Batteries: A New Era in Battery Technology
As the demand for energy storage continues to rise, sodium-ion batteries (NIBs) are gaining momentum as a compelling alternative to lithium-ion batteries (LIBs). Leveraging more abundant and cost-effective materials, NIBs are especially well-suited for low-speed electric vehicles—where range is less critical and affordability is key—as well as for renewable energy systems and other large-scale applications. Here’s why sodium-ion technology is drawing increasing interest:
- Cost-effective & abundant materials
Sodium is far more abundant and significantly cheaper than lithium. This makes NIBs highly attractive for large-scale use, particularly where affordability and raw material availability are key concerns, such as in grid storage and renewable integration.
- Environmentally friendly
Sodium is easier to extract and process, which results in a smaller environmental footprint. NIBs align well with global efforts to transition toward cleaner, more sustainable energy technologies.
- Safety & stability
NIBs offer better thermal stability than LIBs. They can safely be discharged to zero volts and use thermally stable sodium salts that generate fewer hazardous byproducts. Their slower heating rates and delayed self-heating under stress conditions make them a safer option, particularly in high-temperature or abusive environments.
- Cold climate performance
Na-ion cells maintain their performance in cold temperatures better than LIBs. They are less prone to electrolyte freezing and capacity loss, making them well suited for harsh environments.
How do Sodium-Ion Batteries Work?
Sodium-ion batteries (NIBs) operate on electrochemical principles similar to LIBs. During charging, Na⁺ ions move from the cathode to the anode; during discharge, they travel back to the cathode. This process closely resembles the ion movement in LIBs, as illustrated in Figure 1. The materials, however, differ. A typical Na-ion battery includes:
- Cathode: Common materials include layered metal oxides, polyanionic compounds, or Prussian blue analogs.
- Anode: Hard carbon is widely used due to its structural stability and compatibility with sodium.
- Electrolyte: Sodium salts like NaPF₆ or NaClO₄ in carbonate solvents.
- Separator: Same as in LIBs—allows Na-ion transfer while preventing short circuits.
- Current collectors: Aluminum is used for both anode and cathode, lowering material costs compared to copper-based Li-ion systems.
What are the Key Challenges Facing Sodium-Ion Battery Technology?
While sodium-ion technology is promising, it still faces several technical challenges before achieving widespread commercialization:
- Lower energy density: NIBs typically offer energy densities between 100-160 Wh/kg, which is lower than that of LIBs. This makes them less ideal for high-performance electric vehicles or aerospace applications, in which compact size and high energy-to-weight ratios are required.
- Cycle life: Sodium ions are larger and heavier than lithium ions, which causes more mechanical stress during cycling and leads to faster material degradation. Improving cycle durability is essential for NIBs to compete in long-term applications.
- Scaling production: As NIBs are still in the early stages of mass production, improving process efficiency and developing industry standards are key to driving down costs and accelerating adoption.
- Lower operating voltage: NIBs generally operate at lower cell voltages, reducing energy output per cell and often requiring more cells in series.
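The series-cell point above can be made concrete with a back-of-envelope count. The nominal cell voltages used here (~3.6 V for a typical LIB cell vs ~3.1 V for a representative NIB cell) are illustrative assumptions; actual values vary by chemistry:

```python
import math

def cells_in_series(pack_voltage_v, cell_voltage_v):
    """Number of series cells needed to reach a target nominal pack voltage."""
    return math.ceil(pack_voltage_v / cell_voltage_v)

# A nominal 400 V pack with illustrative cell voltages:
print(cells_in_series(400, 3.6))  # 112 cells (LIB-like)
print(cells_in_series(400, 3.1))  # 130 cells (NIB-like)
```

More series cells means more interconnects, monitoring channels, and balancing overhead, which is part of the system-level cost the lower cell voltage carries.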
How to Model Sodium-Ion Batteries using GT-AutoLion
At Gamma Technologies, we are enabling the advancement of Na-ion technology through high-fidelity electrochemical simulation with GT-AutoLion. Using a robust pseudo-2D (P2D) framework, GT-AutoLion allows users to simulate Na-ion cells with detailed physics-based models that help optimize performance, thermal behavior, and safety.
To illustrate sodium-ion behavior, AutoLion-1D includes an example model based on NVPF/hard carbon chemistry. Figure 2 shows the calibration of this model against experimental data at different C-rates.
The Future of Sodium-Ion Battery Technology
While LIBs dominate the market, NIBs are emerging as a strong competitor, especially in applications where cost, resource availability, and safety are top priorities. With strengths in large-scale energy storage and reliable cold-weather performance, NIBs represent a promising alternative for the future.
Rapid progress in NIB research is improving their performance and durability. Physics-based simulation tools like GT-AutoLion are essential for bridging the gap between NIBs and LIBs by helping engineers design safer, more efficient, and higher-performing NIBs for real-world use.
Ready to shape the future of energy storage? With GT-AutoLion, you can refine your NIB designs and stay ahead in this fast-evolving market. We’re here to support your journey and help push the boundaries of innovation in energy storage.
At Gamma Technologies, our GT-SUITE and GT-AutoLion simulations provide battery engineers and designers robust solutions for modeling and predicting battery performance throughout its lifecycle. Enjoy reading our battery-focused technical blogs to learn more and contact us to see how Gamma Technologies can support your battery development goals.
References
[1] Yu, Dandan, et al. “Low‐Temperature and Fast‐Charge Sodium Metal Batteries.” Small 20.30 (2024): 2311810.
[2] Zhao, Lina, et al. “Engineering of sodium-ion batteries: Opportunities and challenges.” Engineering 24 (2023): 172-183.
[3] Iwan, Agnieszka, et al. “The Safety Engineering of Sodium-Ion Batteries Used as an Energy Storage System for the Military.” Energies 18.4 (2025): 978.
Fuel Cell System Modeling: Powering the Future of Hybrid Locomotives
Transitioning from Diesel to Hydrogen Locomotive Power: Modeling the Future of Rail Transport
Can a fuel cell-powered locomotive haul freight as reliably as diesel while significantly reducing emissions?
As the transportation sector accelerates toward net-zero goals, hydrogen fuel cells are emerging as a clean alternative to conventional diesel locomotives. In fact, the hydrogen fuel cell train market is projected to generate $653.6 billion in cumulative revenue by 2038, with unit sales growing at a compound annual growth rate (CAGR) exceeding 100%, clearly signaling global momentum.
Yet, designing an efficient and reliable hybrid fuel cell-battery system for rail applications is no easy task.
This is where GT-SUITE plays a critical role. It offers a unified simulation platform to model, analyze, and optimize the dynamic interaction between fuel cells, batteries, cooling systems, and traction power demands under realistic operating conditions.
Customer Spotlight: Wabtec Corporation
Wabtec Corporation, a respected leader in the global locomotive industry, presented their innovative fuel cell powertrain work at Gamma Technologies’ technical conference. This blog summarizes their study to help you better understand the modeling objective, fuel cell-battery control strategy, and key insights gained using GT-SUITE. To access the complete presentation, click here.
Optimizing the Power Split Between Fuel Cells and Batteries in Hybrid Locomotives
The primary objective of this simulation study was to optimize the power split between the PEM (Proton Exchange Membrane) fuel cell system and the traction battery to fulfill the power demand of the locomotive’s traction motor under realistic route conditions.
The traction battery acts as a secondary energy source, supporting peak load requirements and supplying power during low-demand phases.
A 1D simulation model of the hybrid locomotive powertrain was developed in GT-SUITE to simulate power distribution and energy flow across a representative rail route.
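To show the kind of logic such a power split involves, here is a deliberately simple rule-based sketch. This is a hypothetical illustration, not Wabtec's actual control strategy: the fuel cell follows demand within its rating, the battery covers peaks when sufficiently charged, and dynamic braking (negative demand) recharges the battery within its limit.

```python
def split_power(demand_kw, soc, fc_max_kw, batt_max_kw,
                soc_lo=0.3, soc_hi=0.8):
    """Split traction demand between fuel cell and battery (hypothetical rule).

    Returns (fuel_cell_kw, battery_kw); negative battery power = charging.
    """
    if demand_kw < 0:  # dynamic braking: recover what the battery can absorb
        batt_kw = max(demand_kw, -batt_max_kw) if soc < soc_hi else 0.0
        return 0.0, batt_kw
    fc_kw = min(demand_kw, fc_max_kw)          # fuel cell follows demand
    deficit = demand_kw - fc_kw
    batt_kw = min(deficit, batt_max_kw) if soc > soc_lo else 0.0
    return fc_kw, batt_kw  # any remaining deficit is a power shortfall
```

In the actual study, stepping a rule like this through time-dependent notch profiles is what produces the route-level hydrogen consumption and power deficit metrics reported below.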
Simulation Framework for Hybrid Hydrogen Locomotives
The GT-SUITE model integrates multiple physical domains in a single simulation environment.
Key Inputs:
- Throttle and dynamic braking data for selected routes
- Ambient conditions (pressure and temperature) for each route
- Battery specifications, including charge/discharge limits
- Empirical model of fuel cell stack performance and control logic
Simulation Configuration:
- Route data: 5 rail routes selected based on typical duty cycles
- Power profiles: Time-dependent notch profiles for both throttling and dynamic braking
- Weather conditions: Summer and winter ambient profiles
- Fuel Cell Module Variants: 3 configurations delivering net power comparable to a diesel locomotive
- Battery Pack Options: 3 configurations plus one baseline case without a battery
Simulations for Route-Based Hydrogen Powertrain Performance
The following configurations were studied for a single route simulation:
- Fuel Cell Power Rating as a percentage of diesel engine equivalent: 80%, 100%, 120%
- Battery Power Rating as a percentage of diesel engine equivalent: 0%, 1.5%, 3%, and 6%
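The sweep above forms a 3 × 4 full-factorial matrix. Enumerating it as a case list, as one might when scripting batch simulation runs (a hypothetical workflow detail, not part of the published study):

```python
from itertools import product

# Ratings expressed as a percentage of the diesel-engine equivalent.
fc_ratings = [80, 100, 120]        # fuel cell power rating, %
batt_ratings = [0, 1.5, 3, 6]      # battery power rating, %

cases = [{"fc_pct": fc, "batt_pct": b}
         for fc, b in product(fc_ratings, batt_ratings)]
print(len(cases))  # 12 combinations to simulate
```

Twelve cases per route keeps the study tractable while still spanning the fuel-cell-heavy and battery-heavy corners of the design space.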
Simulation Outputs and KPIs
The model provided detailed insights into:
- Total hydrogen consumed
- Power supplied by the battery
- Power loss in the battery system
- Power generated by the fuel cell
- Power recovered through regenerative braking
- Power delivered to the traction motor
- Power loss at the traction motor
Key Insights from Fuel Cell Locomotive Modeling
- Increasing battery capacity helps shift the fuel cell’s operating region toward higher efficiency, improving route-specific hydrogen consumption.
- Power deficits decrease with larger battery configurations, as expected.
- A careful trade-off between fuel cell sizing and battery cost (initial and operational) can help determine optimal hybrid configurations.
- GT-SUITE enables estimation of fuel economy, power deficits, and component interactions, offering an efficient way to right-size the hybrid powertrain.
- The simulation framework is scalable to multiple routes and environmental conditions, making it highly adaptable for feasibility studies.
Why Use GT-SUITE for Hydrogen Train and Rail Electrification Projects?
GT-SUITE provides a holistic modeling platform for virtual prototyping and optimization of fuel cell-electric locomotives. Its capabilities include:
- Electrochemical modeling of hydrogen PEM fuel cells
- Battery system dynamics, including thermal and aging effects
- Mechanical and thermal subsystem modeling, such as cooling circuits and lubrication systems
- Full system-level simulation of train powertrains, including control strategies and energy management
This allows engineers to virtually test fuel cell vs battery performance and make informed design decisions before committing to physical prototypes.
Conclusion: Advancing Clean Rail Transportation with Simulation
Simulation accelerates innovation. With GT-SUITE, engineers can explore the full design space of hybrid hydrogen locomotives, optimizing component sizes, control logic, and energy flow management. This empowers rail operators to confidently pursue clean transportation technologies and reduce reliance on fossil fuels. Learn how simulation supports cleaner rail strategies in our blog “How to Model Fuel Reformers with Simulation,” watch the “GT Webinar – Fuel Cell Fault Simulation and Detection for OBD Using Real-Time Digital Twins” to see how digital twins enable predictive maintenance and regulatory compliance, or contact us to see how Gamma Technologies can support your fuel cell development goals.
How Multi-scale Models Are Enhancing Battery Performance and Design
Beyond Experimentation: Predicting Battery Performance with Multi-Scale Models
In today’s electrified world, designing better batteries goes far beyond trial-and-error testing. Engineers and researchers are increasingly turning to simulation to accelerate innovation and reduce development costs. Lithium-ion batteries power modern energy storage systems, from electric vehicles to grid storage. As demand grows for higher performance, longer lifespan, and improved safety, accurate battery modeling becomes increasingly important. A promising path forward involves uniting atomic-scale simulations with continuum models (multi-scale modeling) to enhance the fidelity of performance predictions.
Enhancing Battery Design Through Atomic and Continuum Model Integration
A multi-scale modeling approach accelerates the design of new battery materials, reducing reliance on time-consuming and expensive experimental testing. It enables:
- Faster material development: Atomic-scale simulations allow rapid exploration of new electrolytes and additives.
- Better battery optimization: Engineers can fine-tune battery performance for specific applications, such as high-power EV batteries or high-energy grid storage solutions.
- Improved safety and longevity: Accurate predictions help optimize electrolyte formulations, reducing risks associated with lithium plating and thermal runaway.
The Role of the Electrolyte in Battery Simulation
Achieving high predictability in battery modeling requires a deep understanding of each cell component, particularly the electrolyte, which is essential for lithium-ion transport between the electrodes. The electrolyte significantly impacts overall battery performance by influencing internal resistance, voltage behavior, and degradation over time. However, key electrolyte properties, such as ionic conductivity, lithium transference number, and diffusivity, are difficult to measure and highly sensitive to variables like temperature, concentration, and interactions with electrode materials.
Understanding Atomic-Scale Simulations in Battery Modeling
Continuum models—mathematical approaches that simulate battery behavior at the cell or system level by treating materials as continuous substances—such as the pseudo-two-dimensional (P2D) model, rely heavily on these properties to simulate ion transport and electrochemical behavior. Inaccurate values may lead to significant errors in predicting concentration gradients, voltage losses, and overall battery performance. Atomic-scale simulations—methods that simulate battery behavior at the molecular (or atomic) level by modeling individual particles or molecules—help estimate electrolyte properties across a wide range of conditions, improving the accuracy of continuum models and reducing reliance on experimental measurements.
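To make the continuum picture concrete, the kind of transport equation a P2D-style model solves can be illustrated with a minimal explicit finite-volume diffusion step for salt concentration across the three regions. This is a simplified sketch, not GT-AutoLion’s actual solver; the region sizes, diffusivity, and time step are illustrative values chosen for this example.

```python
# Minimal explicit finite-volume diffusion step for electrolyte salt
# concentration across anode / separator / cathode regions (illustrative
# sketch, not the GT-AutoLion solver). Uniform cell width; zero-flux
# outer boundaries, so total salt is conserved.

def diffuse(c, D, dx, dt):
    """Advance concentrations c (mol/m^3) by one explicit Euler step."""
    n = len(c)
    flux = [0.0] * (n + 1)               # face fluxes; outer faces stay 0
    for i in range(1, n):
        flux[i] = -D * (c[i] - c[i - 1]) / dx
    return [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

# 10 anode + 5 separator + 10 cathode control volumes, initial step profile
c = [1200.0] * 10 + [1000.0] * 5 + [800.0] * 10
D, dx, dt = 2.5e-10, 1e-6, 1e-3          # m^2/s, m, s (illustrative)

for _ in range(1000):
    c = diffuse(c, D, dx, dt)            # D*dt/dx^2 = 0.25, stable

print(round(sum(c), 6))                  # total salt unchanged by diffusion
```

Because the outer face fluxes are zero, the scheme conserves total salt exactly; the step profile relaxes toward a smooth concentration gradient, which is the behavior the P2D model resolves in each region.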
Combining Atomic-Scale and Continuum Models for Better Insight into Battery Performance
To generate accurate electrolyte property data across various conditions, Gamma Technologies has collaborated with Compular, a company specializing in atomic-scale simulations. By integrating Compular’s molecular dynamics (MD) simulations with our GT-AutoLion continuum model, it is possible to directly calculate fundamental electrolyte properties from atomic-level simulations.
To integrate Compular’s simulations with Gamma Technologies’ GT-AutoLion, an electrolyte with an additive (LiPF6 in EC:PC:EMC, 1:3:8 by volume, with 2% FEC) is simulated across varying salt concentrations and temperatures to study the performance of both high-energy and high-power cells.
GT-AutoLion utilizes a physics-based P2D model to simulate battery charging and discharging. It divides the battery into three main regions—anode, cathode, and separator—discretizing them to capture critical electrochemical interactions (Figure 1). However, to ensure the accuracy of these models, precise electrolyte data is necessary. This is where MD simulations come into play.
Compular Lab models electrolytes at the molecular level using MD simulations, simulating systems with about 5000 atoms. These atoms move according to the laws of physics, and we track their motion over time to observe how they interact. The simulations run at specific temperatures and salt concentrations, and for each condition, we record around 15 nanoseconds of ion movement.
To extract useful data from these simulations, Compular’s CHAMPION software analyzes how ions move together. It calculates Onsager coefficients, which describe how different ions affect each other’s motion (see Figure 2). From these, we derive four key transport properties:
- Ionic conductivity: how efficiently ions carry electric charge through the electrolyte
- Salt diffusivity: how fast ions spread out in the electrolyte
- Transference number: the fraction of the current carried by the cation
- Thermodynamic factor: how ion–ion interactions influence diffusion and concentration behavior
Typically, calculating these properties requires long simulations because random motion (or “noise”) from non-interacting ions makes it harder to get accurate results. But CHAMPION improves efficiency by focusing only on the meaningful interactions between nearby ions, reducing the required simulation time by about 90% to approximately 10 nanoseconds.

Figure 2: Schematic representation of the MD simulations; Key transport properties are calculated based on the motion of atoms governed by Newtonian mechanics
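For a binary 1:1 salt, the mapping from Onsager coefficients to two of the reported properties (conductivity and the cation transference number) can be sketched as below. These are the standard expressions for ion charges z+ = +1, z− = −1; the coefficient values are invented for illustration and are not CHAMPION output.

```python
# Transport properties of a binary 1:1 electrolyte from Onsager
# coefficients (sketch; the L values are made up, not CHAMPION output).
# L_pp, L_mm, L_pm couple cation/anion fluxes to driving forces; unit
# prefactors are absorbed so sigma is in arbitrary units here.

F = 96485.0  # Faraday constant, C/mol

def transport_properties(L_pp, L_mm, L_pm):
    """Conductivity (arb. units) and cation transference number
    for charges z+ = +1, z- = -1."""
    denom = L_pp + L_mm - 2.0 * L_pm
    sigma = F**2 * denom
    t_plus = (L_pp - L_pm) / denom
    return sigma, t_plus

sigma, t_plus = transport_properties(L_pp=3.0e-11, L_mm=4.0e-11, L_pm=0.5e-11)
print(t_plus)  # fraction of the current carried by Li+
```

The cross-term L_pm is what distinguishes this from an ideal (non-interacting) picture: correlated cation–anion motion lowers both the conductivity and the transference number, which is exactly the interaction effect CHAMPION isolates.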
The Workflow: From Atomic Interactions to Battery Performance
Here’s how the combined modeling approach works:
- Molecular Dynamics Simulations: Compular Lab runs MD simulations to extract crucial electrolyte properties like conductivity and diffusivity. These simulations provide insights into how electrolyte composition affects battery behavior, particularly under different temperatures and salt concentrations.
- Integrating Data into GT-AutoLion: A Python script transfers the extracted electrolyte data into the GT-AutoLion simulation framework. The data is structured into a reference object (XYZMap) that incorporates temperature/concentration-dependent electrolyte properties.
- Simulating Battery Performance: Using these electrolyte properties, GT-AutoLion predicts voltage vs. capacity curves for both energy-dense and power-dense cells. This allows for comparative analysis under varying temperatures and charging rates.
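Step 2 of the workflow can be pictured with a small Python sketch that packs MD-derived conductivity into a (temperature, concentration) lookup table with bilinear interpolation, mimicking the role an XYZMap reference object plays in GT-AutoLion. The grid, values, and function names are invented for illustration; this is not the actual GT API.

```python
# Sketch of the data-transfer step: MD-derived electrolyte conductivity
# packed into a (temperature, concentration) table with bilinear
# interpolation, standing in for GT-SUITE's XYZMap reference object.
# Grid and values are invented; this is not the GT API.
from bisect import bisect_right

temps = [253.0, 298.0, 333.0]            # K
concs = [0.5, 1.0, 1.5]                  # mol/L
# cond[i][j] is conductivity at (temps[i], concs[j]), S/m -- illustrative
cond = [[0.15, 0.25, 0.20],
        [0.60, 0.95, 0.80],
        [0.90, 1.30, 1.10]]

def interp2(T, c):
    """Bilinear interpolation on the (T, c) grid (inputs clamped to grid)."""
    i = min(max(bisect_right(temps, T) - 1, 0), len(temps) - 2)
    j = min(max(bisect_right(concs, c) - 1, 0), len(concs) - 2)
    tT = (T - temps[i]) / (temps[i + 1] - temps[i])
    tc = (c - concs[j]) / (concs[j + 1] - concs[j])
    top = cond[i][j] * (1 - tc) + cond[i][j + 1] * tc
    bot = cond[i + 1][j] * (1 - tc) + cond[i + 1][j + 1] * tc
    return top * (1 - tT) + bot * tT

print(interp2(298.0, 1.0))   # query on a grid point returns the table value
```

During a simulation, the continuum solver queries such a map at the local temperature and salt concentration of each control volume, so the MD results feed directly into the ion-transport equations.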
Simulation Findings: Electrolyte Properties and Their Impact on Battery Performance
When we tested these models, we found some interesting trends.
MD simulations reveal the following (Figure 3):
- Ionic conductivity and salt diffusivity decrease at lower temperatures, affecting overall battery efficiency.
- There is an optimal salt concentration (~1 M) that maximizes conductivity.
- Transport properties vary significantly with electrolyte composition and temperature, influencing battery performance.

Figure 3: Transport properties as a function of salt concentration at three temperatures, as predicted by using molecular dynamics simulations using Compular Lab
Electrochemical Battery Modeling using GT-AutoLion
By incorporating atomic-scale insights, GT-AutoLion enables:
- Accurate predictions of voltage vs. capacity for power-dense and energy-dense cells.
- Insights into how electrolyte behavior impacts capacity, especially under high C-rates and low temperatures.
- A better understanding of trade-offs between power and energy density in different applications.

Figure 4: Voltage vs. capacity for power-dense and energy-dense Li-Ion cells at different temperatures and C-rates
Conclusion: The Future of Battery Simulation
The progression of battery technology will rely heavily on the integration of multi-scale simulation techniques that connect molecular-level behavior with system-level performance. With enhanced prediction accuracy and faster innovation, the future of lithium-ion technology is brighter than ever. If you are interested in reading more about battery modeling, you can read our blogs on using simulation for battery engineering, watch our webinar “Machine Learning for Fast, Integrated Battery Modeling,” or contact us to see how Gamma Technologies can support your battery development goals.
Simulating the “Impossible”? Automation Meets Rotating Detonation Engines
Enabling Rotating Detonation Engine (RDE) Innovation Through Simulation and Automation
The Rotating Detonation Engine (RDE) stands out as a leading technology that can advance the performance and efficiency of future propulsion systems. Simulation software plays a critical role in accelerating development and tackling complex design challenges as engineers and researchers strive to unlock the full capabilities of this innovative technology. This blog explores how GT-SUITE, combined with automation, was used to develop detailed and precise RDE models containing 10,000 flow volumes within hours. We cover RDE fundamentals and modeling methods, and illustrate how automation turns a daunting task into an achievable one.
What is a Rotating Detonation Engine?
Some of you may read the title and wonder, “What is a Rotating Detonation Engine (RDE)?” An RDE is a groundbreaking advancement in propulsion technology that is being actively researched by NASA, universities and research labs, established jet engine companies, and start-ups. Unlike traditional engines that rely on a flame spreading through the combustion chamber at subsonic speeds, RDEs use controlled detonation waves that travel at supersonic speeds to burn the air-fuel mixture. This approach rapidly generates high-pressure, high-temperature gases, leading to significant improvements in thermal efficiency and, in turn, lower fuel consumption. Another key advantage of this technology is the significant reduction in moving parts, which simplifies maintenance. However, RDEs face challenges that require study and design work, such as maintaining stable detonation waves, managing extreme temperatures and pressures, and optimizing components like nozzles and injectors for better performance. The images below help illustrate the ideas behind the system.

Figure 2: Unwrapped view of the RDE, illustrating the various regions. Colors represent temperature ranging from ≈500 K (blue) to ≈3500 K (red)
We at Gamma Technologies have received several inquiries on whether GT-SUITE can be used to simulate such engines. We considered developing a detailed combustion model specifically for this combustion device, but it was estimated to take as much as a thousand hours to develop, and we were not sure whether the long-term demand was large enough to justify such an investment of time. However, with some creative thinking — leveraging existing capabilities in the GT-SUITE solver and features available in GT-Automation — we were able to develop a working solution in less than a hundred hours.
Simulation Methodology for Rotating Detonation Engines (RDE)
Simulation of the RDE involves complex interactions of high-speed fluid dynamics with fast chemical kinetics. The fully compressible, transient flow solver and the integrated chemical kinetics solver in GT-SUITE allow this intricate interplay to be captured, simulating the dynamics of the entire combustion process and the movement of the shock wave through the combustor. Using GT-SUITE’s modular architecture, we were able to build a comprehensive model of this combustor from scratch using existing capabilities in the software. The model discretizes the annular volume of the RDE into discrete flow volumes in both the circumferential and axial directions, as shown in Figure 3. This resulted in a total of about ten thousand flow volumes, all interconnected with each other and with parts simulating chemical kinetics.
Automation-Driven Modeling for RDEs: Building 10,000 Flow Volumes in Hours
You may have read about the ten thousand flow volumes in the model and wondered, “How long would it take to build that model?” or “Does your hand hurt after clicking the mouse that many times?” These are good questions, and you may be relieved to learn that no carpal tunnel syndrome was triggered while building this model. Instead, we turned to GT-Automation, which allows models to be built and modified in GT-ISE through Python scripting and APIs, letting us move forward quickly.
If you are not already familiar with it, GT-Automation is a time-saving enterprise package in GT-SUITE that enables Python scripting of GT-ISE and GT-POST operations, as well as process integration of modeling and simulation tasks. With GT-Automation, users can save time and eliminate errors that often come from repeated, tedious operations. In this instance, a Python script was written that automated the entire model building process, allowing us to quickly adapt to changes in discretization, geometry and operating conditions while significantly reducing the time and potential errors associated with manual modeling. This led to the development of an innovative quasi-3D modeling methodology using GT-SUITE, which has the potential for rapid simulations of RDEs at both the component and system levels. Also, by creative use of the existing capabilities of GT-SUITE and GT-Automation, this model was developed with no changes or additions to the physics-based solvers and completed in less than one hundred hours, providing a lot of cost-savings compared to a specialized development.
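To give a flavor of what the script automates, the sketch below generates the bookkeeping for an annulus of 100 × 100 flow volumes: a name for each volume, circumferential links that wrap around the annulus, and axial links between planes. The real script drives GT-ISE through GT-Automation’s Python API; the naming scheme and data structures here are purely illustrative.

```python
# Sketch of the automation idea: generate the quasi-3D connectivity of an
# annular RDE combustor as n_circ x n_axial flow volumes. The real script
# creates these parts and links in GT-ISE via GT-Automation; the names
# and tuples below are illustrative stand-ins.

n_circ, n_axial = 100, 100        # 10,000 flow volumes in total

volumes = [f"vol_{i}_{j}" for i in range(n_circ) for j in range(n_axial)]

links = []
for i in range(n_circ):
    for j in range(n_axial):
        # circumferential neighbor: index wraps around the annulus
        links.append((f"vol_{i}_{j}", f"vol_{(i + 1) % n_circ}_{j}"))
        # axial neighbor: no wrap at the inlet/outlet ends
        if j + 1 < n_axial:
            links.append((f"vol_{i}_{j}", f"vol_{i}_{j + 1}"))

print(len(volumes), len(links))
```

Changing the discretization is then a one-line edit to `n_circ` or `n_axial` and a re-run of the script, which is exactly the kind of rapid adaptation to geometry and discretization changes described above.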
Here is a video showing the building of the model in GT-ISE that results from running the Python script in GT-Automation:

Video showing the building of the model in GT-ISE that results from running the Python script in GT-Automation
To help you understand the model in relation to the device, Figure 2 is shown again with some flow parts overlaying the image.
What is predicted?
Some results of these simulations are shown in Figure 4, below.

Figure 4: (a) Unwrapped view showing single and dual detonation wave propagation patterns (co- and counter-rotating configurations); (b) 3D view of the RDE; colors in (a) and (b) represent temperature ranging from ≈500 K (blue) to ≈3500 K (red); (c) variation of thrust with injection area, parametrized by increasing injection pressure; (d) temporal evolution of pressure close to the injection plane, with injection pressure (dashed line) shown for comparison
These results demonstrate the model’s capability to capture realistic RDE behavior, including:
- Detonation Wave Motion: Both single and multiple waves (co- and counter-rotating) can be simulated effectively.
- Performance Influences: The impact of injection parameters on engine performance has been demonstrated.
- Limit Cycle Operation: The system can achieve a stable, periodic state, which is crucial for reliable engine operation.
As a bonus, the 3D animation capabilities of GT-SUITE were used to create this video for your viewing pleasure:
Accelerating RDE Model Development Using GT-SUITE and GT-Automation
This project turned out to be a great demonstration of the capabilities and flexibility of GT-SUITE as a simulation platform and multi-physics solver. A project that was intimidating in size, scope and effort at the beginning turned into a manageable task in the end, yielding realistic results and exciting animations. GT-Automation was a critical component in empowering the team to build this model with a relatively low effort. If you are interested in learning more about how GT-Automation can support your projects, please visit our web page on the topic (GT-Automation) or contact us! You can also watch our webinar on GT-Automation, and check out our blogs on Hydrogen-Powered Rocket simulation and how engine manufacturers leverage simulation to stay ahead of increasing regulations.
Digital Twin Simulation: Engineering Smarter, Faster, and More Reliable Products
Why Digital Twins are Necessary and Important
Imagine being able to predict equipment failures before they happen, optimize system performance in real time, and reduce expensive physical testing. This is the power of digital twins. A digital twin is a virtual replica of a physical asset, enabling real-time monitoring, simulation, and optimization.
With increasing system complexity and the demand for faster, more efficient product development, digital twin technology is revolutionizing engineering across industries.
How Digital Twins Help Address Major Challenges
Downtime and Maintenance: Unplanned machine downtime can disrupt workflows, delay production schedules, and lead to significant revenue losses. Unexpected failures not only impact profitability but also strain resources and damage customer trust. Digital twins help predict failures and optimize maintenance schedules, ensuring uninterrupted operations.
Fault Detection and Prediction: Late fault detection can result in system failures, escalating repair costs, and potential damage to other components. At the same time, excessive false alarms can interrupt production and reduce efficiency. Digital twins enable precise fault detection, balancing accuracy with minimal disruption.
Controls Optimization: Misconfigured control systems can lead to production stoppages and hardware degradation. Control software developed in lab conditions may not perform reliably in real-world applications. Digital twins allow engineers to test and optimize control strategies in a virtual environment before deployment, improving safety margins and efficiency.
What-if Scenarios: Physical testing of all possible operating conditions is expensive and time-consuming. Relying solely on sensor data and human intervention can introduce inconsistencies. Digital twins enable engineers to simulate countless real-world scenarios quickly, reducing reliance on costly prototype testing.
How Simulation is Making an Impact
GT-SUITE provides a comprehensive platform to develop digital twins, integrating robust multi-physics simulation with cutting-edge data science. By seamlessly connecting with a customer’s data collection system in a cloud-based environment, GT-SUITE enhances asset performance, minimizes downtime, and improves decision-making. Let’s explore how digital twins solve critical challenges and how you can build one to unlock the full potential of your engineering systems.
“How to Guide” for Building a Digital Twin
A digital twin functions by continuously updating a virtual replica of a physical asset, system, or process with real-time data. This enables simulation, analysis, optimization, and monitoring of the physical counterpart.
Here’s how to build one:
Step 1: Data Collection; Data can be collected from sensors on the physical asset (e.g., an engine, machine, or compressor), measuring parameters such as temperature, pressure, vibration, speed and more. Additionally, historical test data and field data from previous operations can be leveraged.
Step 2: Data Integration; All collected data—whether from sensors, past tests, or field operations—is aggregated, cleaned, and processed to ensure accuracy and usability for simulation and analysis.
Step 3: Virtual Model Creation;
- Physics-based modeling: GT-SUITE enables the creation of high-fidelity digital twins using physics-based models calibrated with real-world measured data, ensuring precise system behavior predictions.
- Data-driven machine learning models: GT-SUITE’s Machine Learning Assistant utilizes datasets from sensors, testing, field operations, and design of experiments to create fast-running mathematical models (metamodels). These models leverage real-time sensor data for enhanced predictive accuracy and can also integrate synthetic data from physics-based simulations, reducing reliance on physical testing while improving reliability.
Step 4: Real-Time Interaction; The virtual model continuously updates via live data streams and sensor feedback, which enables real-time monitoring, fault detection, and predictive analysis. This ensures that engineering teams can proactively optimize system performance, diagnose failures, and improve reliability.
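As a toy illustration of Step 4, the sketch below compares live measurements against a twin’s prediction and flags a fault when the residual exceeds a threshold. The stand-in “twin,” the threshold, and the data stream are all invented for illustration; a real deployment would use a calibrated GT-SUITE model and actual sensor feeds.

```python
# Sketch of real-time interaction: compare live sensor readings against
# a digital twin's prediction and flag a fault when the residual exceeds
# a threshold. The "twin" is a trivial stand-in model and the data
# stream is synthetic -- purely illustrative.

def twin_predict(load):
    """Stand-in twin: expected outlet temperature (C) vs. load (kW)."""
    return 40.0 + 0.05 * load

THRESHOLD = 5.0   # C, illustrative fault threshold

stream = [            # (load_kW, measured_outlet_C) samples
    (100.0, 45.2),
    (200.0, 50.1),
    (300.0, 62.8),    # measurement drifts away from the twin's prediction
]

faults = []
for step, (load, measured) in enumerate(stream):
    residual = abs(measured - twin_predict(load))
    if residual > THRESHOLD:
        faults.append(step)

print(faults)
```

The same residual-monitoring pattern scales to many channels at once, which is how a twin can balance fault-detection sensitivity against false alarms.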
Case Studies: Real-World Applications of Digital Twins
Case Study 1: Fuel Cell Fault Simulation and Detection for On-Board Diagnostics Using Real-Time Digital Twins
As electrified powertrains become more complex, developing reliable on-board diagnostic (OBD) systems is crucial for ensuring compliance with evolving regulations. However, the lack of prototype hardware makes traditional validation methods impractical. To address this, real-time digital twins enable virtual testing through model-in-the-loop (MiL), software-in-the-loop (SiL), and hardware-in-the-loop (HiL) simulations.
This approach involves creating high-fidelity models to identify and analyze failure modes in key fuel cell components such as the compressor, recirculation pump, humidifier, and cooling system. The workflow proceeds as follows:
- Stack calibration: The fuel cell stack is calibrated using measured polarization curves and predictive loss models.
- Balance-of-plant modeling: The stack is integrated with sub-systems such as the anode recirculation loop; the cathode system including the humidifier, intercoolers, compressor, turbine, and motor controls; and the cooling systems.
- Model speedup: The model is optimized using 0D and map-based approaches, ensuring fast simulation without sacrificing accuracy.
- Fault definition: Fault scenarios are defined, specifying key variables to be monitored by the control system.
- HiL execution: The fast-running fuel cell model developed in GT-SUITE is integrated with MATLAB Simulink for real-time execution. Performance is assessed by running real-time simulations, comparing speed and accuracy, and introducing faults to evaluate the system response alongside sensor data.
By leveraging real-time digital twins, engineers can efficiently develop and validate OBD systems, ensuring robust fault detection while reducing reliance on physical prototypes. This accelerates regulatory compliance and enhances the reliability of fuel cell powertrains.
Click here to access the complete webinar.
Case Study 2: Enhancing Cabin Thermal Management with Digital Twin Technology
A major automotive company leveraged digital twin technology to optimize cabin thermal management, improving energy efficiency and driver comfort in electric trucks. The challenge was to balance the battery thermal system and cabin climate control while maintaining efficient energy use.
To achieve this, the company calibrated high-fidelity models, dividing the cabin into flow volumes and creating surface mesh models for precise simulations. A co-simulation was then conducted by integrating GT-SUITE’s fluid solver with GT-TAITherm, leveraging key parameters. The model was rigorously validated against multiple real-world test datasets, ensuring accuracy and reliability.
Building on this foundation, the company aims to take its digital twin implementation to the next level by developing fast-running models for SIL and HIL applications and integrating internet of things (IoT) connectivity for real-time data exchange and enhanced predictive capabilities. These advancements will enable smarter automation, deeper system insights, and more responsive thermal management strategies, bringing them closer to a fully realized digital twin ecosystem.
Click here to access more digital twin related presentations.
Learn More About our Digital Twin Solutions
Digital twins are transforming engineering by enabling predictive maintenance, optimizing system performance, and accelerating product development. By combining physics-based modeling with data-driven machine learning, GT-SUITE provides a powerful platform for creating and deploying digital twins across industries.
Whether you are working on fuel cell powertrains, vehicle thermal management, or other complex systems, leveraging digital twins can drive efficiency, reduce costs, and improve reliability.
Ready to take your engineering processes to the next level? Start building your digital twin with GT-SUITE today! If you’d like to learn more about how GT-SUITE‘s capabilities, contact us!
Using Toshiba’s Battery Electrochemical Models to Make System-level Decisions Faster
Eliminate the Guesswork in Battery Model Selection
System-level design engineers have a difficult task. They face challenging questions such as: Which lithium-ion cells work best for a particular application? How many cells should be placed in series and parallel? What type of motor and drive technology should be used? They have to make these decisions with little information about the actual components and before any prototypes exist. Often, important decisions with long-lasting consequences are made with “rules of thumb” and shortcuts such as “let’s use the component-level data from the previous generation.”
This blog will focus on two of these challenges:
- Which lithium-ion cells work best for a particular application?
- How many cells should be placed in series and parallel?
To help engineers answer these questions, we’ve partnered with Toshiba Corporation. Toshiba’s SCiB™ lithium-ion cells use lithium titanium oxide (LTO) anodes for superior safety, long life, rapid charging, and excellent performance at low temperatures. Within the SCiB™ product lineup, Toshiba provides a full spectrum of cells, from high-power cells (2.9 Ah, 10 Ah) to high-energy cells (20 Ah, 23 Ah) and a combination cell (20 Ah-HP).
For cell selection and battery sizing, a system-level engineer might start by using models and data from batteries used in previous generations of similar products. Additionally, if specification sheets for Lithium-ion cells are available, they can be used to calibrate electrochemical models. However, Toshiba, a long-time user of Gamma Technologies’ (GT) GT-AutoLion, wanted to provide more than just a specification sheet and provide their clients with encrypted GT-AutoLion models.
Mission – Tugboat
Tugboats are key to the navigation of large and bulky vessels in the narrow water channels of typical ports. Today’s mission centers on a battery-electric tugboat responsible for the towage of large vessels; the mission consists of a series of phases:
- Transit phase: transit from the tugboat pier to the calling vessel
- Towing phase: towage of the vessel from the dock and out of the port
- Return phase: returning to the tugboat pier and charging station
We will take the view of an electrical system engineer at a tugboat building company, going through the cell selection and pack sizing process with these models provided by Toshiba. The system we are modeling is a battery-electric tugboat operating in a port, performing idling, transit, and towing maneuvers for multiple large vessels in a single day of operation.
To illustrate the mission, we’ve overlaid arrows over a Google Maps screenshot of the Hamburg harbor, along with the target speed vs. distance profile we’ve applied to the model (Figure 1).

Figure 1. Google Maps screenshot of the Hamburg harbor with the tugboat target speed vs. distance profile
This mission is repeated four times over the course of a 15-hour workday, meaning the tugboat has 3 hours and 45 minutes to accomplish each mission and recharge the battery before being sent on the next one.
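The mission cadence follows from simple arithmetic:

```python
# Mission cadence: four identical missions in a 15-hour workday leave
# 3 h 45 min per mission for towing plus recharging.
workday_min = 15 * 60
missions = 4
per_mission = workday_min // missions
print(per_mission // 60, "h", per_mission % 60, "min")
```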
Tugboat Simulation Model
The system-level model of the tugboat consists of the following components:
- 42m boat hull with a displacement of 280t
- Propulsion:
- Two 2.4 m azimuth thrusters
- Two 2.7 MW Permanent magnet synchronous machines
- Maximum bollard pull of 58 tons
- Genset:
- Diesel genset with 1000/1260 kW @ 50/60 Hz
- A battery that needs to be selected and sized (will be described in the text below)
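One observation falls straight out of these specs: at full simultaneous thruster load the genset alone cannot cover propulsion, so the battery must supply the balance. The quick check below assumes both machines draw their full rating at once, which is an assumption for illustration rather than a simulated operating point.

```python
# Quick power-balance check from the component specs above: at full
# simultaneous thruster load (an assumption, not a simulated operating
# point), the genset alone cannot cover propulsion, so the battery
# supplies the balance.
thruster_kw = 2 * 2700          # two 2.7 MW permanent magnet machines
genset_kw = 1260                # diesel genset rating at 60 Hz
battery_kw = thruster_kw - genset_kw
print(battery_kw, "kW from the battery at peak load")
```

This is one reason battery sizing matters so much for this application: peak towing power is several times what the genset can deliver on its own.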
These components are arranged according to the single-line diagram below (Figure 2). In this system design we assume a DC link between the components. In case of an AC link design, additional switchboards would need to be integrated. In the image, “ESS” represents the electrical storage system (battery), “Shore Supply” is the onshore power supply (for cold ironing and battery charging), and “Hotel Load” stands for the onboard electrical power consumers.
The model, built in the simulation platform GT-SUITE, is shown below (Figure 3). Please note the yellow links represent electrical connections and the black links represent mechanical connections. Additional thermal components and connections can be integrated to include thermal dependencies in the model and the warm-up of components.
Please note that the four bulk carriers displayed in the image are towed individually throughout a 15-hour workday. Each bulk carrier is modeled as a passive load weighing nearly 47,000 tons.
GT-AutoLion Model
The Toshiba-provided model of the SCiB™ cell is built using GT-AutoLion, which follows the principles of the pseudo-two-dimensional (P2D) model for lithium-ion batteries. The P2D model, based on the work of Doyle, Fuller, and Newman, captures the electrochemical reactions occurring inside the cell to predict terminal current, terminal voltage, power, heat generation, and lithium concentration gradients throughout the cell. As shown in the figure below, the model discretizes the lithium-ion cell using the finite control volume approach (Figure 4). The cathode, anode, and separator are discretized in the thickness direction; additionally, in each control volume of the cathode and anode, a spherical representation of the active material is discretized in the radial direction.
In addition to the P2D model, GT-AutoLion has built upon the original work from Doyle, Newman, and Fuller to also include capabilities to capture Li-ion degradation, swelling, and thermal runaway.
The performance of the cell model was calibrated by Toshiba between -30°C and 70°C, and the voltage results of putting the cell model through constant current discharge tests are shown below (Figure 5).

Figure 5. Toshiba battery cell model discharge test voltage results in temperatures between -30°C and 70°C
Integrated Model
As GT-AutoLion is available as a model template in GT-SUITE, integrating the model at the system level is very simple. The physics-based battery model received from Toshiba is simply linked to the electrical domain, replacing the existing electrical-equivalent battery model, as shown in the image below (Figure 6).

Figure 6. GT-SUITE and GT-AutoLion battery-electric tugboat system model integrating Toshiba’s battery models
The supervisory controls were defined to operate the tugboat in “battery-electric” mode until the battery state of charge falls below 20%, then switch on the genset to continue the maneuver in “diesel-electric” mode while keeping the battery state of charge at an almost constant level.
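The supervisory logic just described can be sketched as a simple state machine: below 20% state of charge the genset switches on and stays on (charge-sustaining). The SOC trace and the latch-on behavior shown here are invented for illustration; the actual controller is implemented inside the GT-SUITE model.

```python
# Sketch of the supervisory control described above: run battery-electric
# until state of charge drops below 20%, then switch on the genset and
# continue in diesel-electric (charge-sustaining) mode. The SOC trace is
# invented for illustration.

SOC_ON = 0.20     # genset switches on below this state of charge

def genset_command(soc, genset_on):
    if not genset_on and soc < SOC_ON:
        return True           # enter diesel-electric mode
    return genset_on          # otherwise keep the current mode

genset_on = False
log = []
for soc in [0.90, 0.60, 0.35, 0.19, 0.20, 0.21]:
    genset_on = genset_command(soc, genset_on)
    log.append(genset_on)

print(log)
```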
The results shown below are displayed for a single maneuver (one 3-hour-45-minute interval) with a battery sized at 250 of the 20 Ah SCiB™ cells in series and 170 in parallel (250S/170P) (Figure 7).

Figure 7. Simulation results of a single mission. The Toshiba 20 Ah SCiB™ cells are arranged in 250S/170P.
Battery Sizing Results
The next image compares results for two different battery sizes: 250S/170P is shown in light grey and 250S/142P in black (Figure 8). Notice how the state of charge decreases more slowly in the battery with more parallel cells, which allows the tugboat to complete more of its mission before needing to turn the genset on. Ultimately, this means that as battery size increases, less fuel is required to accomplish a single mission.

Figure 8. Simulation results of a single mission comparing two different battery designs. The Toshiba 20 Ah SCiB™ cells are arranged in 250S/170P (light grey) and 250S/142P (black).
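To put rough numbers on the two designs, the sketch below estimates nominal pack energy from the series/parallel counts, assuming a typical LTO nominal cell voltage of about 2.3 V (an assumption for illustration, not a figure from the study).

```python
# Rough nominal-energy comparison of the two pack designs (20 Ah cells).
# The 2.3 V nominal cell voltage is a typical LTO figure assumed here,
# not a value taken from the study.
def pack_energy_kwh(series, parallel, capacity_ah=20.0, v_nom=2.3):
    return series * parallel * capacity_ah * v_nom / 1000.0

big = pack_energy_kwh(250, 170)     # 250S/170P
small = pack_energy_kwh(250, 142)   # 250S/142P
print(round(big), round(small), "kWh")
```

Under these assumptions the larger pack holds roughly 20% more energy, which is consistent with its slower state-of-charge decline in Figure 8.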
Finally, we decided to sweep the number of parallel cells for a preliminary study on battery sizing for this tugboat application using the integrated parametrization and design of experiments features. The main results are summarized in the table below (Figure 9).
Learn More About our Gamma Technologies’ Battery Simulation Solution
As mentioned at the beginning of the blog, system-level design can be very challenging, especially when details of components such as batteries are largely unknown. However, with the ability to download and run calibrated, encrypted electrochemical battery models directly from Toshiba, system-level designers can have accurate representations of batteries to use in their early-stage modeling, even before any sample cells are available for testing.
After this battery sizing stage, engineers can also use these models to understand how batteries will age in their systems. Learn more about our battery system simulation by reading one of our technical blogs here and explore how Gamma Technologies is contributing to the maritime electrification sector. Also, in collaboration with the Maritime Battery Forum, watch this webinar on how Gamma Technologies and Toshiba are using the SCiB™ cell to accelerate maritime electrification!
Access the Toshiba, GT-AutoLion model
If you are interested in the Toshiba-supplied encrypted GT-AutoLion model of the 20Ah SCiB™ cell, fill out this request form here. Our team will carefully review all submissions.
Simulation for the HVACR Industry: How to Leverage Machine Learning
The HVACR Industry is Evolving
As we step into 2025, major trends in systems simulation are emerging. The heating, ventilation, air conditioning, and refrigeration (HVACR) industry is seeing accelerated growth in the adoption of simulation throughout the design and development process. The industry is looking to modernize and appeal to consumers, who demand energy-efficient, sustainable, and smart solutions. HVACR original equipment manufacturers (OEMs) are addressing these demands by implementing engineering processes supported by machine learning, the Internet of Things (IoT), and AI advancements.
How Systems Simulation Modeling & Machine Learning can Assist
The HVACR industry is being driven by new regulations enacted to combat climate change and to increase the efficiency of current and future vapor compression systems. These trends include:
- A strong emphasis on using low global warming potential (GWP) refrigerants
- Heat pumps becoming more popular within the industry
Achieving practical viability and broad adoption of heat pumps and low GWP refrigerants in HVAC systems within the next decade will require thousands of prototypes and experiments. Fast, accurate, and robust modeling and simulation can accelerate this process, alongside extensive collaboration between industry, academia, and policy makers. One important tool available to all three groups is the huge amount of data already available from different sources. This data can be leveraged using machine learning tools to aid and enhance modeling, as well as to provide new physical insight into HVAC systems.
Applying Machine Learning to Thermal Model Simulations
We at Gamma Technologies presented a paper at the Purdue Herrick Conferences on machine learning under the model speedup umbrella titled, ‘Application of Feedforward Neural Networks to Simulate Battery Electric Vehicle Air Conditioning Systems.’
In this paper, a representative model of the thermal systems of a battery electric vehicle (BEV) was built in GT-SUITE, and transient drive cycle simulations were carried out to compare the speed and accuracy of a machine-learning-based metamodel against a physics-based solution. The air-conditioning circuit in the EV thermal model was replaced with a feedforward neural network trained against physical data. The battery and cabin temperatures were tracked and compared during a heat-up (ambient temperature −10 °C) and cooldown (ambient temperature 30 °C) cycle.
We observed that the machine learning metamodel does a good job of capturing the battery and cabin temperatures, although there is some mismatch between the evaporator and condenser heat transfer rates during the heat-up simulation. In terms of speed, the metamodel achieves an RT factor of 0.17, compared to 0.37 for the already faster-than-real-time physics-based solution.
With this work we were able to show that we can successfully integrate machine learning based metamodels into GT system models using in-built tools and use them as alternatives to physics-based solutions to address various simulation needs.

Figure: EV thermal system model with physics-based solution (red) and feedforward neural net (green)
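To give a flavor of how such a metamodel is built, here is a minimal NumPy sketch of a one-hidden-layer feedforward network trained as a surrogate. The inputs, targets, and network size are illustrative stand-ins, not the model or data from the paper.

```python
import numpy as np

# Hypothetical training sketch: a one-hidden-layer feedforward network
# (NumPy only) learning a surrogate mapping from A/C boundary conditions
# to a heat rate. Samples below are synthetic, not GT-SUITE data.
rng = np.random.default_rng(0)

# Synthetic samples: [ambient temp (C), air mass flow (kg/s)] -> heat rate (kW)
X = rng.uniform([[-10, 0.05]], [[40, 0.20]], size=(200, 2))
y = (0.8 * X[:, 0] * X[:, 1] + 2.0).reshape(-1, 1)   # toy relationship

# Normalize inputs for stable training
Xn = (X - X.mean(0)) / X.std(0)

# One hidden layer with tanh activation
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y
    loss = float((err ** 2).mean())
    # backpropagation of the mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = Xn.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")
```

In the actual workflow the trained network would be exported and embedded in the GT system model in place of the physics-based A/C circuit.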
Link to Presentation
Learn More About our HVACR Simulation Solutions
See how GT-SUITE’s simulations solutions impact the HVACR industry here.
GT-SUITE doesn’t take hours or days to run complex simulations. HVACR engineers can run simulations in a matter of seconds or minutes, which allows for a more iterative design cycle and the ability to test more product possibilities.
Firms such as Trane have successfully unleashed the power of GT-SUITE and simulation on HVACR scroll compressor designs. Learn how a simple sensitivity analysis enabled Trane engineers to solve a challenging performance shortfall problem that resulted in an estimated savings of between $50,000 and $40 million in product development costs.
Making Music with Multiphysics
The capabilities of simulation software appear to be endless (not really, but you know what I mean…) when it comes to modeling systems that may not have been simulated before. This is especially true when considering how model-based systems engineering (MBSE) has advanced in the past few decades, from single-purpose tools focused on one system or set of physics, like engines or general fluid flow, to today's multi-physics simulation encompassing mechanical, fluid flow, electrical, thermal, and other domains. While looking at different possibilities considered by manufacturers, one system came to the attention of a few of us at Gamma Technologies and took us down an unexpected, yet interesting and even amusing path.
Making Noise to Hide Noise...Virtually
In the late 2010s, various automotive exhaust suppliers were developing active noise cancellation (ANC) systems that could reduce sound beyond what the passive mufflers and resonators, standard engine components for many decades, can achieve. Prompted by these developments, Gamma Technologies took some time to examine what was done and determine whether GT-SUITE could be used to simulate such a system and, if not, what might need to be added to make it possible.
Description of the System and Source of the Noise
The ANC system is made of a volume that is nearly spherical attached to a pipe that connects to the outflow of a muffler near the end of an automotive exhaust system. In the volume are speakers that produce the artificial noise that cancels or modifies the noise coming from the muffler. Engineers at the exhaust system maker Eberspaecher wrote a paper describing the system in more detail in 2017 that was published by the Society of Automotive Engineers. (“Active Cancellation of Exhaust Noise over Broad RPM Range with Simultaneous Exhaust Sound Enhancement”; Riddle, Bemman, Frei, Wu & Padalkar; SAE Technical Paper 2017-01-1753; 2017; doi:10.4271/2017-01-1753.) A simplified depiction of this system is shown in Figure 1 below.
Modeling Exhaust System Acoustics and Active Noise Cancellation
To model this system, engineers at GT considered the possibilities and the physics involved in the parts. GT-SUITE has been used for a few decades to model the pulsations of exhaust systems, and suppliers like Eberspaecher, Forvia (formerly Faurecia), and Tenneco have used simulation as part of their design process for much of that time. Included in the software is a virtual microphone that can calculate the sound at a distance from the exit pipe and store it in a sound file playable on typical computers and other devices, like phones, often in the WAV format. At the same time, GT-SUITE includes multiple multi-physics libraries for electrical and mechanical systems that can be incorporated into the fluid dynamic model to simulate the entire system. There is a template in GT-SUITE that can convert a current signal into a magnetic and mechanical force that moves a mechanical mass, whose motion can be limited by a spring and damper. This is the principle of a speaker, used in sound systems, phones, headphones, and so on. These parts are shown in the yellow part of the image below (Figure 2):
The mass of the speaker coil and cone is connected to the volume of the ANC system by a connection in GT-SUITE made for this purpose, MechFlowConn. This connection translates the motion of the mass into changes in the volume, which transmits the sound to the fluid system, shown in the blue region of the image. The sound travels down the pipe to the outlet, which is connected to the microphone and sound file generator in the pink region.
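The mechanical side of that speaker template boils down to a driven mass-spring-damper. As a conceptual sketch (not the GT-SUITE template itself), the cone can be simulated with a few lines of Python; all parameter values below are illustrative:

```python
import numpy as np

# Speaker cone as a mass-spring-damper driven by a coil force F = Bl * i(t).
# The cone displacement x(t) is what a MechFlowConn-style coupling would
# translate into a volume change on the fluid side. Values are illustrative.
m, c, k = 0.01, 2.0, 4000.0       # kg, N*s/m, N/m
Bl = 5.0                          # force factor, N/A
f_drive = 100.0                   # Hz drive tone

dt = 1e-5
x, v = 0.0, 0.0
xs = []
for n in range(int(0.1 / dt)):    # 0.1 s of simulated time
    t = n * dt
    i_coil = 0.5 * np.sin(2 * np.pi * f_drive * t)   # coil current, A
    a = (Bl * i_coil - c * v - k * x) / m            # Newton's second law
    v += a * dt                   # semi-implicit Euler step
    x += v * dt
    xs.append(x)

amp = max(abs(min(xs)), abs(max(xs)))
print(f"peak cone displacement: {amp * 1000:.2f} mm")
```

With these numbers the drive tone sits near the cone's natural frequency, so the displacement builds up to a damped steady amplitude of a couple of millimeters.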
In practice, one might produce a signal from another source, such as a control system model made in Matlab/Simulink, especially to test controls that take feedback from the system and adjust the signal going to the speaker. However, as this was a conceptual study and no realistic control system was available to GT for testing, it was decided to generate signals within the control system library of GT-SUITE and listen to the results. Tests with individual frequencies passed, and GT was satisfied with the results.
Engineers Having Fun
By good fortune, this study came to mind again in recent weeks, and the question was asked: "Could we make a little melody using this method?" The answer was YES!
In the spirit of the upcoming holidays, the main melody from Beethoven's Ninth Symphony (fourth movement) was selected. And so we present to you an original ANC recording, our own version of Ode to Joy, performed by GT-SUITE:
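For the curious, the essence of the exercise, a note sequence rendered as a speaker drive signal and written to a WAV file, can be sketched in a few lines of Python using only the standard library. The sample rate, note durations, and simple sine synthesis are our own illustrative choices, not the GT-SUITE implementation:

```python
import io
import math
import struct
import wave

# Opening phrase of "Ode to Joy" as note names; equal-temperament frequencies.
NOTES = {"E4": 329.63, "F4": 349.23, "G4": 392.00, "D4": 293.66, "C4": 261.63}
MELODY = ["E4", "E4", "F4", "G4", "G4", "F4", "E4", "D4", "C4", "C4", "D4", "E4"]

rate = 8000                       # samples per second
beat = 0.3                        # seconds per note
samples = []
for name in MELODY:
    f = NOTES[name]
    n = int(rate * beat)
    for i in range(n):
        # short fade in/out so consecutive notes don't click
        env = min(1.0, i / 200, (n - i) / 200)
        samples.append(env * math.sin(2 * math.pi * f * i / rate))

# 16-bit mono PCM, written to an in-memory buffer (a file path works the same)
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
    w.writeframes(b"".join(struct.pack("<h", int(s * 30000)) for s in samples))

print(f"wrote {len(samples)} samples")
```

In the GT-SUITE study, the equivalent signal drove the speaker model inside the full fluid-mechanical system, and the virtual microphone at the tailpipe produced the recording.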
Learn More About our Acoustic Simulation Solutions
If you are interested in learning more about GT-SUITE's intake and exhaust acoustic simulation offerings, visit this webpage here. If you'd like to learn more about GT-SUITE's capabilities, contact us!
How Simulation Accelerates the Development of eVTOL Aircraft for Taxi Services
Unlocking the Potential of eVTOL Aircraft for Taxi Services: Advancing On-Demand Transportation Safely and Efficiently
The world is rapidly advancing towards an integrated and accessible on-demand transportation network. Electric Vertical Takeoff and Landing (eVTOL) vehicles have emerged as the ideal solution for the near future, offering faster and more efficient travel options. However, ensuring the safety and reliability of these innovative aircraft is paramount. This blog explores how simulation models play a vital role in evaluating eVTOL designs, identifying potential issues early, and paving the way for reliable and sustainable air mobility solutions.
Understanding the Aging Challenges of Li-ion Batteries in eVTOL Aircraft
Limited durability of lithium-ion batteries poses a significant obstacle in developing long-lasting eVTOL aircraft. Lithium-ion batteries experience capacity decline, increased impedance, and decreased power output over time. A deep understanding of battery aging is crucial to predict lifespan accurately and optimize battery management systems (BMS) for longevity and reliability. Let’s learn how addressing these challenges is vital for ensuring the success of eVTOL technology.
How Simulation Plays a Role in Electric Aircraft Designs
Simulation platforms such as GT-SUITE offer practical, efficient, and reliable solutions for studying various aspects of electric aircraft design. This comprehensive simulation suite allows for in-depth exploration of critical areas such as aerodynamics, flight control, mission definition, propulsion, and battery pack systems (see Figure 1 below). With GT-SUITE, engineers can comprehensively assess and optimize each subsystem, ensuring optimal performance and safety throughout the eVTOL aircraft’s operation. Check out our blog on eVTOL design and this study co-authored with Advanced Rotorcraft Technology on a comprehensive simulation for eVTOL aircraft.
Real-World Aging Scenarios: Using Simulation for Battery Degradation Prediction
To ensure the economic viability of an eVTOL taxi service, maximizing the number of trips during peak traffic hours is crucial. However, it is essential to consider the limitations imposed by the battery pack. In a recent case study, we discussed the evolution of an eVTOL's range over a span of four years, exploring a scenario where ten trips were scheduled each weekday between 6 AM and 10 AM and between 4 PM and 8 PM. To maintain optimal performance, a 10-minute recharge was performed between each trip, with a full recharge between the morning and evening shifts. The proposed aircraft utilization was inspired by the UberAir Vehicle Requirements and Mission study, whose vehicle requirements and mission were developed through extensive analyses of current and predicted demand, an understanding of the capabilities of enabling technologies, and a focus on creating the optimal rider experience.
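The duty cycle described above can be captured with simple state-of-charge bookkeeping. The sketch below follows the morning-shift pattern (ten trips with a 10-minute partial recharge between them); the energy-per-trip and charge-rate numbers are made-up placeholders, not values from the study:

```python
# Illustrative bookkeeping of the morning-shift duty cycle: 10 trips with
# a 10-minute recharge between trips. Energy numbers are assumptions.
soc = 100.0                     # start fully charged, %
trip_use = 12.0                 # % SOC consumed per trip (assumed)
charge_rate = 1.0               # % SOC recovered per minute (assumed)

history = [soc]
for trip in range(10):
    soc -= trip_use             # fly one trip
    if trip < 9:                # 10-minute partial recharge between trips
        soc = min(100.0, soc + charge_rate * 10)
    history.append(soc)

print(f"SOC after morning shift: {soc:.1f}%")
```

Even this toy version shows the key effect: because the between-trip recharge recovers less energy than a trip consumes, the SOC ratchets down across the shift, which is why the full midday recharge matters.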
By leveraging the advanced capabilities of another simulation platform GT-AutoLion, we were able to extrapolate the battery power demand during a representative mission flight from GT-SUITE and simulate the long-term degradation of the battery pack in a real-world scenario. In Figure 2, the evolution of the state of charge (SOC) of the eVTOL battery pack during the morning shift is plotted.
The focus of this study was modeling the aging mechanisms using GT-AutoLion. GT-AutoLion offers solutions that empower engineers to leverage cycle and calendar aging data, extracting valuable insights into battery degradation in more realistic scenarios. GT-AutoLion's physics-based, predictive degradation models can be calibrated to align with this data and subsequently applied to predict the aging behavior of Li-ion cells in various applications. The mechanisms include lithium plating, active material isolation, and cathode electrolyte interphase (CEI) and solid electrolyte interphase (SEI) layer growth and cracking, all of which can be modeled and investigated in Gamma Technologies' software (see Figure 3).
The GT-AutoLion aging simulation generates an external file (.cellstate) capturing the Li-ion cell’s state at each cycle during the aging process. Each cycle represents a mission flight of the eVTOL aircraft. This external file serves as valuable input for the system-level models, enabling accurate predictions of how the aged battery will impact the product performance (see Figure 4).
By harnessing the power of this invaluable tool, we were able to project the anticipated range of the battery pack over a four-year operational period (see Figure 5).
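The projection step can be illustrated with a hedged sketch: map a per-cycle capacity-fade curve (here a generic square-root-of-cycles law standing in for the .cellstate output of a GT-AutoLion aging run) to usable flight range. All numbers below are assumptions for illustration only:

```python
import math

# Assumed pack parameters and a generic sqrt-of-cycles fade law, standing
# in for a calibrated GT-AutoLion degradation model.
Q0 = 50.0                         # fresh pack capacity, kWh (assumed)
range0 = 40.0                     # fresh-pack range, km (assumed)
fade_k = 0.004                    # fade coefficient (assumed)

trips_per_year = 10 * 5 * 52      # 10 trips per weekday, as in the schedule
for year in range(5):
    cycles = year * trips_per_year
    Q = Q0 * (1 - fade_k * math.sqrt(cycles))
    est_range = range0 * Q / Q0   # range assumed proportional to capacity
    print(f"year {year}: capacity {Q:.1f} kWh, range {est_range:.1f} km")
```

The real workflow replaces the toy fade law with the cell state tracked cycle-by-cycle in the aging simulation, so the range projection reflects the actual mix of lithium plating, SEI growth, and other mechanisms.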
Such insights provide a comprehensive understanding of the technology, aiding both technical exploration and business development of the latest eVTOL advancements.
With the ability to simulate and predict battery performance and aging, operators of eVTOL taxi services can make informed decisions about their operations, keeping an eye on profitability while maintaining reliable and sustainable service. By accurately estimating the range evolution over the course of four years, they can strategize the optimal scheduling of trips, maximizing efficiency during peak traffic hours, and mitigating the impact of battery pack limitations.
Learn More About our eVTOL Simulation Solutions
The future of on-demand transportation is bright with eVTOL vehicles leading the way. Through the use of cutting-edge simulation models and advanced battery management solutions like GT-SUITE and GT-AutoLion, the safety, reliability, and performance of eVTOL aircraft is elevated to new heights. Embracing innovation and overcoming obstacles, the sky’s the limit for this exciting and sustainable mode of transportation!
Watch this great video case study of coupling GT-SUITE and Advanced Rotorcraft Technology’s FLIGHTLAB simulation capabilities to provide an easy-to-use, holistic solution for simulation-supported system design during the early design stages of eVTOLs.
If you’d like to learn more about how GT-SUITE and GT-AutoLion are used to solve eVTOL simulation challenges, contact us!
Combining Measurements and Simulation to Streamline Combustion/Controls Development
How to use Simulation to Improve the Engine Development Process for Carbon Neutral Fuels
The need for clean, renewable energy sources requires exploring carbon-neutral fuels and their combustion behaviors. This is typically done using single-cylinder (SC) engines, which allow quick hardware changes, such as replacing the head or piston, and quick changes in fuel composition, while providing fuel cost savings compared to a multi-cylinder engine. Combustion control strategies, various air-fuel ratios, and the impact on emissions are studied using the simulation platform GT-SUITE and its real-time engine plant model solution, GT-POWER-xRT.
Based on this research, multi-cylinder (MC) engines are designed, simulated, and manufactured. The main problem with this methodology is the difference in behavior between a single- and multi-cylinder engine due to cylinder-to-cylinder and turbocharger interactions, which are not represented in the SC engine. Therefore, control strategies developed on the SC engine cannot be applied one-to-one to the MC engine. To mitigate this problem, engine simulations of an MC engine using combustion data from an SC engine are carried out to test and develop control strategies before the multi-cylinder engine is built and available on a test bench. The other drawback is the delay between taking SC measurements and applying them to the MC model, which can be weeks or even months. It is not uncommon to find issues, or at least to determine that some data are questionable, only after analyzing MC simulation results. If possible, SC measurements are taken again; otherwise, the project continues based on assumptions that might or might not be good.
Using Simulation to Model Varied Engine Configurations
A solution to both problems is running the MC model in parallel and in real time while measuring SC data. The real SC engine provides the combustion data, which can then be applied to all cylinders in the MC model. The differences in gas exchange for each cylinder, such as varying trapped mass and residual fractions, are captured, as is the interaction with the turbocharger. The MC model provides engine speed, crank-angle-resolved intake/exhaust pressures, and average intake temperature. The SC engine needs to be equipped with fast-acting valves (e.g., 10 kHz) on the intake and exhaust side to impose the conditions that come from the MC model. Similarly, changes in fuel composition and air-fuel ratio (AFR) can easily be studied, and control strategies for the MC engine can be developed.
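The exchange described above is, at its core, a per-cycle loop between the two sides. The sketch below is purely conceptual: the two functions are hypothetical placeholders for the actual test-bench and plant-model interfaces, with simple stand-in implementations so the loop runs:

```python
# Conceptual sketch of the real-time exchange between the multi-cylinder
# (MC) plant model and the single-cylinder (SC) test bench. Function names
# and the stand-in math are placeholders, not real APIs.

def mc_model_step(burn_rate):
    """Stand-in MC model: returns boundary conditions for the SC rig."""
    return {"speed_rpm": 1500.0,
            "intake_p_bar": 2.0 + 0.1 * burn_rate,
            "exhaust_p_bar": 1.8 + 0.1 * burn_rate,
            "intake_T_K": 320.0}

def sc_bench_measure(bounds):
    """Stand-in SC measurement: returns a normalized burn rate."""
    return 0.9 + 0.01 * bounds["intake_p_bar"]

burn = 1.0
for cycle in range(50):                 # one iteration per engine cycle
    bounds = mc_model_step(burn)        # MC supplies speed and pressures
    # fast-acting valves would impose 'bounds' on the real SC engine here
    burn = sc_bench_measure(bounds)     # SC supplies measured combustion

print(f"converged burn rate: {burn:.3f}")
```

In the real setup the loop is closed in hard real time: the MC model runs faster than real time on the bench machine, the valves impose its pressures on the SC engine, and the measured combustion feeds back into all MC cylinders each cycle.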
Why are Carbon Neutral Fuels Different?
Combining measurements and simulation for combustion/controls development is especially interesting for hydrogen/natural gas or methane blends. Combustion characteristics like the laminar flame speed strongly depend on the actual concentration, especially for hydrogen. Hydrogen's ability to burn at very lean conditions, combined with the fact that nitrogen oxide (NOx) formation reaches its peak at relative air-fuel ratios ("lambda") of ~2.0, makes it attractive to run at even leaner conditions. Current trends in engine development show that operation at lambda 3 is not uncommon, and some research indicates that this could go even higher. This requires a different approach to determining the charging system requirements compared to conventional fuels like gasoline, diesel, or natural gas: the charging system must deliver high boost pressure levels with low exhaust energy, due to the low combustion temperatures caused by excess air. Therefore, optimized turbos, electric turbos (eTurbos), and/or electric compressors (eCompressors) are considered, especially for on-highway applications.
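A back-of-the-envelope calculation shows why lean hydrogen operation stresses the charging system: the air the engine must breathe scales linearly with lambda, and hydrogen's stoichiometric air-fuel ratio is already high (about 34.3:1 by mass, versus roughly 14.7:1 for gasoline). The fuel flow below is an illustrative value:

```python
# Air demand scales linearly with lambda: m_air = lambda * AFR_stoich * m_fuel.
AFR_STOICH_H2 = 34.3             # stoichiometric AFR of hydrogen, by mass
fuel_flow = 1.0                  # kg/h of hydrogen (illustrative)

for lam in (1.0, 2.0, 3.0):
    air_flow = lam * AFR_STOICH_H2 * fuel_flow
    print(f"lambda {lam:.0f}: {air_flow:.1f} kg/h of air")
```

Tripling lambda triples the required air flow for the same fuel energy, while the cooler, leaner exhaust provides less energy to drive a conventional turbocharger, hence the interest in eTurbos and eCompressors.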
For power generation applications, the time-to-torque is essential. Coupling SC and MC enables control strategy development accounting for transient effects like turbo lag or fueling for non-direct injection (DI) applications.
Do I need Hardware in the Loop (HiL) Systems to Run the Real Time Model?
Depending on the test cell infrastructure, the MC model can be executed on the test bench machine; there is no need for a HiL system. The MC model can be linked directly to ETAS INCA, Vector CANape, or any system simulation tool that supports FMUs, like Synopsys Silver.
In Summary: No Combustion Analysis Test Bench Tool Available? No Problem!
If combustion data are not already available from the SC test cell software, Three-Pressure Analysis (TPA) in GT-SUITE can be integrated into the process. A TPA model typically consists of a single cylinder representing the test cell hardware, with dynamic intake, exhaust, and cylinder pressures used as model inputs. For this application, intake and exhaust pressures plus intake temperatures are extracted from the MC model. The output of the TPA model is a burn rate that describes how the fuel and air burn. This combustion profile can be directly imposed on the MC model cylinders.
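The core of what a pressure-based combustion analysis extracts can be illustrated with a single-zone first-law sketch: the apparent heat-release rate dQ = γ/(γ−1)·p·dV + 1/(γ−1)·V·dp, integrated over a crank-angle sweep. The pressure and volume traces below are synthetic stand-ins (a motored isentropic trace plus an artificial combustion bump), not TPA itself or measured data:

```python
import math

# Single-zone apparent heat release from a (synthetic) cylinder pressure
# trace: dQ = gamma/(gamma-1)*p*dV + 1/(gamma-1)*V*dp. Engine geometry and
# the combustion "bump" are illustrative.
gamma = 1.35
bore, stroke, conrod, cr = 0.08, 0.09, 0.15, 10.0   # m, m, m, compression ratio

def volume(theta):               # slider-crank cylinder volume, m^3
    a = stroke / 2
    s = a * math.cos(theta) + math.sqrt(conrod**2 - (a * math.sin(theta))**2)
    piston_area = math.pi * bore**2 / 4
    v_clear = piston_area * stroke / (cr - 1)
    return v_clear + piston_area * (conrod + a - s)

def pressure(theta):             # motored isentropic trace + combustion bump
    p_motored = 1e5 * (volume(math.pi) / volume(theta)) ** gamma
    burn = 25e5 * math.exp(-((theta - 0.2) / 0.3) ** 2)
    return p_motored + burn

dtheta = math.radians(0.5)
q_total, theta = 0.0, -math.pi / 2
while theta < math.pi / 2:
    p, v = pressure(theta), volume(theta)
    dV = volume(theta + dtheta) - v
    dp = pressure(theta + dtheta) - p
    q_total += gamma / (gamma - 1) * p * dV + 1 / (gamma - 1) * v * dp
    theta += dtheta

print(f"apparent heat release: {q_total:.1f} J")
```

The motored (isentropic) part of the trace integrates to essentially zero heat release, so what survives the integral is the energy of the artificial combustion bump, which is exactly the separation a TPA performs on real measured pressures.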
The whole process is described in the figure below:
Learn More About our Combustion and Controls Simulation Capabilities
If you are interested in applying this technique to your development process or have questions on the process, please contact us for specific comments or questions. Learn more about GT-SUITE and our propulsion systems applications.
How Vehicle Cabin Model Order Reduction Can Optimize Passenger Comfort and Range
Why Cabin Modelling?
Efficient cabin modelling has become crucial for the new generation of battery electric vehicles (BEVs), as every bit of conserved energy helps increase a BEV's range. The goal of the design engineer is to achieve an optimal balance between passenger comfort and the vehicle's energy consumption across ambient conditions, from hot to cold weather in different geographical locations. In the latest GT-SUITE v2025 release, Gamma Technologies has introduced a new feature that can automatically generate a physics-based, real-time-capable 1D reduced order model (ROM) from a detailed 3D cabin model. This feature helps users quickly explore various cabin geometries and develop accurate cabin comfort control strategies.
Main Components in the HVAC Circuit of a Vehicle
The complexity of a modern vehicle HVAC system has increased in recent years, as the refrigerant loop not only provides cabin cooling and heating in heat-pump operation but also helps maintain the temperature of critical powertrain components like batteries, electric motors, and power electronics.
A typical HVAC circuit consists of the following components:
- Compressor – compresses the refrigerant to the required condenser pressure
- Condenser – rejects the heat taken from the cabin air
- Expansion Valve (TXV/EXV) – throttles the refrigerant to the evaporator pressure
- Evaporator – cools down the hot cabin air
- HVAC Door – controls the air recirculation rate
- Blower – creates air flow in the system
- Heater – heats up the cold cabin air
- Cabin – accounts for the thermal inertia of the system
The multi-physics simulation platform GT-SUITE accurately models the interactions of these components, as in a real-world vehicle. These simulations allow engineers to develop an optimal design at the individual component level as well as at the system level. An illustration of a simple HVAC system in GT-SUITE is shown in Figure 1. To learn more about advanced features in HVAC modelling, please visit this webpage.
Different Cabin Modelling Fidelities in GT-SUITE
In GT-SUITE, cabin modelling can be performed at different levels of fidelity depending on the design need. On the bottom left of Figure 2, we have the 0D/1D cabin modelling capabilities using mono and multi-zones. This is a system-focused approach. As we move towards the right side in Figure 2, it becomes more comfort-focused with high-fidelity results.
In GT-SUITE, 3D cabin comfort modelling is possible through co-simulation with GT-TAITherm. In this approach, GT-SUITE solves the fluid domain inside the cabin, while GT-TAITherm solves the thermal solid structures of the cabin and the passenger comfort. To learn more about the GT-TAITherm modelling, please visit this webpage.
These cabin modelling methodologies in GT-SUITE are multiple orders of magnitude faster than performing a conventional 3D CFD. At the same time, GT-SUITE modelling offers the required accuracy to design an efficient and integrated HVAC system. Some of the key advantages of GT-SUITE cabin modelling and simulations are:
- Predict if new components in the vehicle meet comfort and energy use targets
- Improve control strategies in the HVAC systems
- Evaluate new technologies like low emissivity glass coatings and localized cooling/heating
- Optimize global and local passenger comfort
- Size different components such as a compressor, evaporator, blower and heater
- Investigate different boundary conditions like air recirculation rate, flap positions and inlet temperatures
Cabin Model Order Reduction
As shown in Figure 2, this new streamlined workflow can automatically create a fast-running 1D ROM from a detailed 3D GT-TAITherm cabin model. This feature can automatically extract all the required properties from the 3D model and generate the ROM using the flow and thermal primitives. It can also automatically prepare the ROM for calibration based on the 3D simulation results and automatically create plots to check the calibration results. This fast-running ROM can be easily integrated into system-level models and used in real-time applications.
A cabin is usually made up of different layers of solid structures. As shown by the illustration in Figure 3, each layer in a cabin part is modelled as a lumped thermal mass. The air inside the cabin is represented as a flow volume. The physics-based ROM gives accurate results by capturing the following modes of heat transfer:
- Conductive heat transfer between different layers in all the solid parts
- Convective heat transfer between the cabin solid parts and cabin air
- Convective heat transfer between the cabin solid parts and surroundings
- Radiative heat transfer between different solid parts
- Effects of solar heat flux
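The ROM structure described above can be sketched as a minimal lumped-parameter network: each solid layer a thermal mass, the cabin air a flow volume, and the nodes exchanging heat by convection and conduction. The two-node sketch below (cabin air plus one wall layer) uses purely illustrative coefficients and omits radiation and solar flux for brevity:

```python
# Minimal two-node lumped-parameter cabin cool-down: cabin air + one wall
# layer, explicit time stepping. All capacitances and UA values are assumed.
dt = 1.0                          # time step, s
C_air, C_wall = 50e3, 400e3       # thermal capacitances, J/K (assumed)
UA_vent = 300.0                   # vent air to cabin air, W/K (assumed)
UA_wall = 150.0                   # cabin air to wall, W/K (assumed)
UA_amb = 80.0                     # wall to ambient, W/K (assumed)

T_air, T_wall = 50.0, 50.0        # hot-soaked start, deg C
T_vent, T_amb = 10.0, 50.0        # cold vent air, hot ambient

for _ in range(3600):             # 60-minute cool-down
    q_vent = UA_vent * (T_vent - T_air)   # heat into cabin air from vents
    q_wall = UA_wall * (T_wall - T_air)   # heat into cabin air from wall
    q_amb = UA_amb * (T_amb - T_wall)     # heat into wall from ambient
    T_air += dt * (q_vent + q_wall) / C_air
    T_wall += dt * (q_amb - q_wall) / C_wall

print(f"after 60 min: air {T_air:.1f} C, wall {T_wall:.1f} C")
```

The full ROM generalizes this to one mass per layer per part, adds the radiative and solar terms listed above, and is generated and parameterized automatically from the 3D model.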
A case study has been performed to investigate the cabin ROM in heat up and cool down scenarios. The 3D GT-TAITherm cabin model used for this case study is shown in Figure 4. The cabin model consists of various parts representing the solid structures and a human manikin. Each solid structure in this cabin model is composed of multiple layers.
The boundary conditions for the cool down scenario are shown in Figure 5. The ambient temperature is 50 degrees Celsius and cold air is blown into the cabin through the dashboard vents. The total duration of the simulation is 60 minutes.
Simulation Results from the ROM
The temperature results of the 3D Model and ROM for the windshield, doors, floor, and cabin air are shown in Figure 6. The solid structure temperatures and cabin air temperature of the ROM match closely with the 3D results. In this workflow, the tool automatically prepares the ROM for calibration based on the 3D simulation results to further improve the accuracy. During the calibration process, various multipliers corresponding to the solid material properties and convective heat transfer coefficients (HTCs) were varied using the integrated design optimizer in GT-SUITE. The aim of the optimization was to minimize the difference between the 3D simulation and ROM results for three quantities, namely cabin air temperature, ambient side temperature of the solid parts, and cabin air side temperature of the solid parts. To learn more about the Integrated Design Optimizer and Machine Learning platform in GT-SUITE, please visit this webpage.
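The calibration step can be illustrated with a hedged sketch: vary a convective-HTC multiplier in a toy ROM so its cabin-air temperature trace best matches a reference trace. Here the reference is generated from the same toy model with a known "true" multiplier, standing in for the 3D GT-TAITherm result, and a grid search stands in for the integrated design optimizer:

```python
import numpy as np

# Toy ROM: first-order cool-down whose time constant shrinks as the
# convective HTC multiplier grows. Purely illustrative physics.
def rom_air_temp(htc_mult, minutes=60):
    T0, T_vent = 50.0, 10.0
    tau = 900.0 / htc_mult            # effective time constant, s (toy)
    t = np.arange(0, minutes * 60, 60.0)
    return T_vent + (T0 - T_vent) * np.exp(-t / tau)

reference = rom_air_temp(1.3)         # stand-in for the 3D reference trace

# Grid search over the multiplier, minimizing the squared trace mismatch
candidates = np.linspace(0.5, 2.0, 151)
errors = [np.sum((rom_air_temp(m) - reference) ** 2) for m in candidates]
best = candidates[int(np.argmin(errors))]
print(f"calibrated HTC multiplier: {best:.2f}")
```

The real workflow does the same thing at scale: multipliers on material properties and HTCs are varied by the optimizer until the ROM's air and surface temperature traces match the 3D simulation.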
Values Delivered by the Innovative Cabin Model Order Reduction Workflow
The key values delivered to the users by the cabin model order reduction workflow are listed below and illustrated in Figure 7.
- Automatic data extraction from the 3D GT-TAITherm vehicle cabin model
- Automatic creation of a physics-based ROM in a matter of seconds, allowing quick model setup for various geometries and levels of detail
- Automatic preparation of the ROM for calibration based on the 3D results, with automatic creation of plots to check the calibration results
- Applicable for both the cool down and heat up scenarios
- Simulation speed on the order of 100x faster than real time
- Seamless integration of the ROM with a system-level model
Learn More about our Vehicle Thermal Management Simulation Solutions
If you’d like to learn more or are interested in trying GT-SUITE and GT-TAITherm for thermal management and cabin comfort simulations, please view this webpage. To speak with a GT expert, contact us here!
Leveraging Machine Learning for Early Design Decisions on an Accessory Belt Drive Simulation
There are various challenges faced by an automotive engineer while designing a robust and optimized accessory drive system. Most original equipment manufacturers (OEMs) rely on different suppliers for their engine belt(s) and accessories (e.g., water pumps, alternators, A/C compressors, etc.), which makes it challenging to obtain a comprehensive set of input data to incorporate in a system-level simulation. When the analysis is performed by the belt supplier, there are often issues obtaining permission to share accessory input data for use in the simulation. Instead, it may be preferable for the OEMs to create their own common simulation platform and system-level models using data collected from their various suppliers.
When an issue arises in the accessory drive system, it is likewise challenging to analyze the entire system in one simulation platform, since sharing one supplier's input data with another often raises proprietary concerns. This is another reason for OEMs to maintain their own common simulation platform and system-level models built from supplier inputs.
A System-level Approach to Analyze Accessory Drives
On many occasions, due to simulation tool limitations, crank loading on the accessory drive is modeled as an imposed speed including torsional oscillation of the crank pulley. But this approach does not consider the dynamic coupling between cranktrain and accessory drive system dynamics. GT-SUITE’s multi-physics capabilities can model the system-level interactions between the torsional crankshaft model, excited by cylinder pressure traces, and the detailed belt model, including accessories and tensioners. This integrated model helps design a robust product and allows for system-level optimization. For example, in a single model, changes in a torsional vibration damper (TVD) on a crankshaft can be analyzed from a belt dynamics isolation standpoint.
Using GT-SUITE, it is also possible to model the accessory drive system for mild hybrid electric vehicle (HEV) applications. Mild hybrids are often fitted with a dual arm tensioner for boosting/re-generation or start/stop applications. These tensioners are easy to construct and incorporate into the system model to accommodate the change in tight and slack spans as the direction of load transfer changes during operation.
Note: Using GT-SUITE, detailed cranktrains, timing drives, and valvetrain subsystems can be integrated to perform a detailed system-level analysis of the entire powertrain (see Figure 1 below).
Longitudinal and Transverse Vibration in Belt Drive
In belt drive systems there are fundamentally two major vibration issues which can cause failure:
Torsional Vibration: A belt and pulley system can be conceptualized as a rotational system of torsional springs and rotational inertias. The span between two pulleys acts as a spring and the pulleys as rotational inertias in a simplified representation. This system can get excited at its torsional natural frequency due to the crank pulley excitation. When there is an alternator decoupler in the system, which has soft springs (used to decouple the heavy inertia of the generator), it can create a first natural frequency in the range of 10-20 Hz.
Transverse/Span Vibration: The belt segment between two pulleys has its own stiffness and can be conceptualized as a string with standing-wave dynamics. The span stiffness depends on the span length and the initial belt installation tension: the shorter the span, the higher the stiffness and the higher the span natural frequency. In many cases, packaging makes it difficult to reduce the span length, so installation tension becomes the critical parameter for reducing transverse vibration.
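The string analogy makes this quantitative: the n-th transverse natural frequency of a taut span is f_n = (n / 2L) · sqrt(T / μ), where T is the tension and μ the belt mass per unit length. The belt properties below are illustrative values:

```python
import math

# Transverse span natural frequency from the standing-wave relation
# f_n = (n / 2L) * sqrt(T / mu). Belt properties are illustrative.
T = 300.0       # installation tension, N
mu = 0.10       # belt mass per unit length, kg/m

for L in (0.10, 0.20, 0.30):            # span lengths, m
    f1 = (1 / (2 * L)) * math.sqrt(T / mu)
    print(f"span {L:.2f} m: first transverse frequency {f1:.0f} Hz")
```

This directly shows both levers mentioned above: halving the span length doubles the frequency, while raising the installation tension raises it with the square root of T.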
See Figure 2 to see the difference between torsional and transverse vibrations.
Main Simulation Results to be Analyzed
For analyzing the accessory drive system, below are the main results that need to be analyzed with defined design targets:
- Maximum and minimum belt tension
- Maximum and minimum hub loads
- Tensioner arm angular motion
- Slips at each pulley
- Transverse belt span deflections
Among these, maximum belt tension and tensioner arm motion are the most critical outputs, as failing these criteria can lead to mechanical failure of the belt. There is often a hard stop in the tensioner; once the angular arm motion reaches this limit, there is an abrupt increase in belt tension, which may lead to breakage of the belt. Hub load outputs are also important for bearing design and durability considerations. Minimum belt tension, hub load, slip, and span deflection are important for efficient design. For example, if the minimum belt tension gets too low, the belt may lose sufficient contact load with the pulley, reducing the system's efficiency through unwanted slip.
Figure 3 below shows example results in GT-SUITE (e.g., belt tension and global slip %). Note that this example is based on a genset application in which the front end accessory drive (FEAD) runs at a single engine speed.
Use of Metamodels to Provide Early Design Direction to the Product Team
Once a baseline model is defined, including the ability to apply design guidelines and constraints and to identify failing criteria, it is ready to leverage GT-SUITE’s built-in Machine Learning Assistant (MLA). The MLA can be used for the following scenarios:
- Cost Optimization (e.g. reducing the number of ribs on the belt, removing the idler, and so forth)
- Robust Design (e.g. choosing the appropriate belt type either as an aramid or polyester cord, setting up optimum installation belt tension, and more for optimum performance and durability)
In the plots below (Figure 4), parameters such as belt pre-tension, water pump/alternator torque, belt axial stiffness, idler diameter, and tensioner position are studied for a given model to evaluate their relative impact on the critical outputs. In general, these parameters have some flexibility to change while designing the system. Based on the sensitivity analysis, a design direction can be given to the product team.
A second group of plots (Figure 5) shows variational analysis results within the reasonable ranges specified for the different input parameters, helping the product team choose the right design. For example, based on the sensitivity analysis, a user can first try to optimize the system for belt dynamic tension and tensioner angular motion by changing the belt axial stiffness or water pump torque. The effect on the dynamic belt tension and tensioner motion within the range can be seen with the slider-bar results shown in Figure 5.
Video Demonstration of Kriging and Multilayer Perceptron (MLP) Metamodeling Methods
Learn More About our Accessory Drive Belt Dynamic Simulation
If you would like to learn more or are interested in trying accessory drive belt dynamic simulation, contact us!
Stay tuned for the next blog which will be focused on the details of the use of GT-SUITE’s Machine Learning Assistant for accessory drives!
Dynamic Machine Learning for Modeling and Simulation
Incorporating Dynamic Metamodeling Simulation
To save computational time, engineers are persistently trying to speed up physical models, and some situations absolutely require faster simulation speeds. These situations might include more advanced co-simulation tasks, performing model-based optimization on a slower physical model, or the need to have a surrogate model for XiL (X-in-the-Loop) applications or to flash onto a microcontroller, ECU, or other low-power (meaning low-memory and low-CPU) device. Machine learning (ML) models, or metamodels, are an obvious choice for creating fast surrogate models, but many ML model frameworks only work with “static” datasets consisting of scalar values. These metamodels are sufficient for datasets consisting of steady-state data, but to capture transient, inertial effects (whether they be flow, thermal, electrical, chemical, or other types), more advanced ML frameworks are needed.
GT-SUITE has recently enhanced its Machine Learning Assistant to support importing time-series datasets and training transient neural networks on them. These metamodels allow the output prediction to depend on previous time steps, providing the capacity to capture dynamic, inertial effects. The schematic below (Figure 1) shows the structure of a neural network in which the outputs from two previous time steps, along with the previous time step of input #2, serve as inputs.
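Conceptually, this is a NARX-style (nonlinear autoregressive with exogenous input) rollout: the trained network is evaluated step by step, feeding its own previous predictions back as inputs. A minimal sketch, with `step_fn` standing in for the trained network (hypothetical; not the Machine Learning Assistant's actual API):

```python
def predict_series(step_fn, u, y0):
    """Roll out a transient metamodel whose next output depends on the
    two previous outputs and the current and previous inputs.
    step_fn(y_prev1, y_prev2, u_now, u_prev) -> y_next stands in for the
    trained neural network."""
    y = [y0]
    y_prev2 = y0  # before the first step, both history slots hold y0
    for k in range(1, len(u)):
        y_next = step_fn(y[-1], y_prev2, u[k], u[k - 1])
        y_prev2 = y[-1]
        y.append(y_next)
    return y
```

With a simple first-order-lag `step_fn`, the rollout reproduces the exponential approach to a step input that a static (memoryless) metamodel could not capture.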
In addition, because neural networks are fast executing, these machine learning models can be exported as C-code, then compiled and run on microcontrollers and other low-power devices.
Real World Use Cases for Dynamic Machine Learning Modeling – CASE #1: Battery Modeling
One application that greatly benefits from dynamic machine learning is battery modeling. The state of charge (SOC) is an internal state variable that affects the voltage; static ML models are therefore insufficient for predicting voltage as a function of common inputs such as current, power request, and cell temperature.
Consider a battery undergoing hybrid pulse power characterization testing to determine its dynamic performance. In this test, a square step signal is applied to the current at different temperatures to evaluate the voltage response. With a detailed battery simulation tool such as GT-AutoLion, the SOC can be saved and used to help predict the voltage response.
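The need for an internal state can be seen with plain coulomb counting: the same current pulse applied at different times leaves the cell at a different SOC, and hence a different voltage, so the output cannot be a static function of the instantaneous inputs alone. A simplified sketch (not GT-AutoLion code; all values hypothetical):

```python
def simulate_soc(current_A, dt_s, capacity_Ah, soc0=1.0):
    """Coulomb counting: SOC is an internal state that integrates current
    over time (positive current = discharge). Returns the SOC trace,
    clipped to [0, 1]."""
    soc, trace = soc0, []
    for i in current_A:
        soc -= i * dt_s / (capacity_Ah * 3600.0)
        soc = min(1.0, max(0.0, soc))
        trace.append(soc)
    return trace
```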
A Design of Experiments (DOE) was run to vary the current level, initial SOC, and initial temperature. The input variable to the transient neural network was the current, and the outputs were voltage, SOC, and temperature. The voltage and SOC results from one time-series dataset not used for training are shown below (Figure 2).

Figure 2: Transient neural network predictions of voltage and state of charge (SOC) as the current cycles on and off
Real World Use Cases for Dynamic Machine Learning Modeling – CASE #2: Vehicle Thermal Management
Vehicle thermal management is another application that can benefit from dynamic machine learning, as thermal inertial behavior cannot be captured by static metamodels. Consider a model of a battery module containing 280 cells, where each cell’s thermal solution is calculated with a finite element mesh. The thermal performance of this module is characterized by running it through 82 different drive cycles in which the initial temperature, inlet coolant temperature, and initial SOC are varied (see Figure 3). Along with these three variables, the power request is also used as a metamodel input.

Figure 3: Battery module containing 280 cells testing thermal performance of initial temperature, inlet temperature, and initial SOC
The time-resolved maximum, average, and minimum temperatures within the module are the key transient results, along with the transient outlet coolant temperature. These four temperature variables are trained to a single transient neural network, with 15% of the drive cycle simulations set aside for testing. The following plots show the metamodel predictions vs. the simulation data for one of the test cases that was not used during training (Figure 4).

Figure 4: Metamodel predictions of battery module thermal performance vs. the simulation data for one of the test cases that was not used during training
Real World Use Cases for Dynamic Machine Learning Modeling – CASE #3: Exhaust Gas Aftertreatment
Another application that can benefit from dynamic machine learning is the modeling of exhaust gas aftertreatment reactors. Outlet emissions cannot be predicted with a static ML model because they strongly depend on the internal states of the reactors, such as catalyst storage (coverage) and wall temperature, which evolve slowly over long time scales.
A combination of dynamic and static ML has been applied to a selective catalytic reduction (SCR) reactor, where the training data was created by simulating a physics-based SCR model under real driving conditions (Figure 5). Validation and testing datasets were generated using different standard test cycles. A dynamic neural network was used to predict the ammonia coverage, average reactor wall temperature, and outlet gas temperature from the inlet flow variables. These three predicted variables were then fed, along with the inlet flow variables, to a static neural network to predict the outlet NH3, NO, and NO2 concentrations.
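The cascade described above can be sketched in a few lines: a dynamic model advances the slowly evolving internal states, and a static model maps those states plus the instantaneous inlet conditions to outlet predictions. Both models in the sketch are hypothetical stand-ins for the trained networks:

```python
def predict_outlet(dyn_model, static_model, inlet_series):
    """Dynamic/static cascade: dyn_model returns one internal-state value
    per time step (standing in for NH3 coverage, wall temperature, etc.);
    static_model maps that state plus the current inlet condition to an
    outlet prediction (standing in for outlet NH3/NO/NO2)."""
    states = dyn_model(inlet_series)
    return [static_model(s, u) for s, u in zip(states, inlet_series)]
```

The design choice mirrors the physics: the dynamic network carries the memory (storage and thermal inertia), so the second network can remain a fast, purely static map.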
The plots below highlight the predicted vs. target stored ammonia coverage and NO outlet mass flow rate for the test dataset, which was not used during training of the ML models (Figure 6).

Figure 6: Predicted vs. target stored ammonia coverage and NO outlet mass flow rate for the test dataset, which was not used during training of the ML models
Ready to Learn More About Machine Learning?
If you are interested in learning more about how you can implement machine learning in GT-SUITE, see our productivity capabilities. You can also contact us here.
If you’re curious to learn more about our static machine learning modeling capabilities, read this two-part blog series on enhancing model accuracy by replacing GT-SUITE’s lookup maps with machine learning models and optimizing neural networks for modeling and simulation!
Citations
- B. Sarkar, S.R. Gundlapally, P. Koutsivitis, S. Wahiduzzaman, Performance evaluation of neural networks in modeling exhaust gas aftertreatment reactors, Chemical Engineering Journal 433 (2022)
Gamma Technologies and GT-SUITE: Pioneering the Future of Simulation
Unveiling the Power of GT-SUITE
This year, Gamma Technologies celebrated a significant milestone: its 30th anniversary. Since its inception in 1994, Gamma Technologies has been at the forefront of engineering simulation, revolutionizing how industries approach design and innovation. At the heart of this transformation is GT-SUITE, the company’s flagship systems simulation software that has become a cornerstone in various fields, from automotive to aerospace, HVACR, energy, and beyond.
Gamma Technologies grew its prowess in the automotive industry with GT-POWER, the industry standard engine performance simulation tool used by most engine manufacturers and vehicle original equipment manufacturers (OEMs). GT has continuously expanded its simulation capabilities to meet consumer demands with extensive developments in batteries, electric motors, and more, with products such as GT-AutoLion, GT-PowerForge, GT-FEMAG, and others. GT continues to accelerate powertrain-agnostic systems development worldwide.

SOURCE: AFDC (n.d.a). National Academies of Sciences, Engineering, and Medicine. 2021. Assessment of Technologies for Improving Light-Duty Vehicle Fuel Economy—2025-2035. Washington, DC: The National Academies Press. https://doi.org/10.17226/26092.
GT-SUITE is more than just a simulation tool. It’s a comprehensive, multi-domain platform that empowers engineers to model, simulate, and analyze complex systems. With capabilities spanning mechanical, electrical, fluid, and thermal domains, GT-SUITE offers a holistic approach to understanding how different systems interact. This versatility is essential in today’s engineering landscape, where the integration of various technologies and systems is more crucial than ever.
In the transportation industries (automotive, on- and off-highway vehicles), GT-SUITE has made a substantial impact. The software allows for the creation of detailed simulations of vehicle systems, from powertrains to suspension systems.
Almost every vehicle on the road has components simulated and designed with one of Gamma Technologies’ simulations. Most major automotive original equipment manufacturers (OEMs) have used GT-SUITE for engine and vehicle development.
By providing a virtual environment to simulate and optimize designs, GT-SUITE helps manufacturers improve performance, reduce costs, and shorten turnaround times. To accelerate development time, GT’s XiL (X-in-the-Loop) modeling capabilities (method that combines virtual testing with real-world elements to validate components of an Electronic Control Unit or ECU) integrate seamlessly with industry tools, ensuring a streamlined and efficient product design cycle.
The ability to simulate real-world scenarios and interactions is particularly valuable for developing advanced technologies such as electric and hybrid vehicles, where precise predictions and optimization are critical.
Expanding Horizons: Aerospace, HVACR, Marine, Energy, and Beyond
The influence of GT-SUITE extends beyond automotive engineering.
In the HVACR (heating, ventilation, air conditioning, and refrigeration) industry, Gamma Technologies’ comprehensive set of validated 0D/1D/3D multi-physics component libraries has enabled HVACR engineers to tackle development challenges such as sustainability and efficiency, decarbonization, new refrigerants, and system complexity and controls. GT-SUITE combined with GT-TAITherm can model human comfort, giving the user an additional target besides traditional temperature and humidity. The human comfort model has localized comfort zones that can be used to determine if cabin insulation or HVAC settings need to be modified. Well-known brands such as Carrier, Copeland, Daikin, Sanden, Trane, Tecumseh, Rheem, and others have found tremendous benefits utilizing GT-SUITE. Learn more about these case studies here.
In aerospace, the software supports the design and analysis of complex systems like propulsion and avionics. Engineers from organizations such as NASA, Roush, SAFRAN, and others have used GT-SUITE to ensure that aircraft systems are both efficient and reliable, contributing to advancements in performance and safety. Some of the applications that Gamma Technologies’ simulation solutions have assisted in include cryogenic systems, propulsion system modeling, environmental controls systems (ECS), fuel cell simulation, thermal management, e-propulsion batteries, flight dynamics and controls, multi-body dynamics, landing gear development, and fuel tank modeling.
GT is proud to say that our solutions have already been used to support the future of urban air mobility, providing simulations for the development of electric aircraft and electric vertical take-off and landing (eVTOL) vehicles, including air taxis.
In the energy and oil & gas sectors, GT-SUITE aids in the development of innovative solutions for power generation and renewable energy. Our customers are already choosing GT for upstream, midstream, and downstream applications to optimize production. The ability to simulate energy systems helps companies enhance efficiency and sustainability, addressing some of the most pressing challenges in energy production and consumption.
It might be surprising that the marine industry is aggressively moving toward a sustainable future as well. GT is proud to have partnered and worked with organizations such as the Maritime Battery Forum, WIN GD, Yanmar R&D, and Toshiba. These firms have leveraged GT-SUITE’s solutions to simulate engine and drivetrain development for ship modeling and to create digital twins for electrified motors.
Gamma Technologies has been at the forefront of implementing AI (artificial intelligence) and ML (machine learning) technologies. These tools elevate simulation capabilities by allowing thousands of variables to be considered, helping designers engineer superior products. AI and ML enhance simulations by creating accurate and dynamic metamodels (mathematical models) that can adapt to complex, real-world scenarios in real time. These technologies also streamline the analysis of vast data sets, leading to more precise predictions and informed decision-making.
To learn more about our machine learning capabilities, read this two-part blog series on enhancing model accuracy by replacing GT’s lookup maps and optimizing neural networks.
A Legacy of Innovation
As Gamma Technologies celebrated its 30-year milestone, it’s clear that its impact on the engineering world is profound. GT-SUITE’s ability to provide detailed, multi-domain simulations has empowered engineers across industries to tackle complex problems and push the boundaries of what’s possible. This dedication has kept GT-SUITE at the cutting edge of simulation technology, ensuring that it meets the ever-changing demands of its diverse user base.
Looking Ahead
As we look to the future, Gamma Technologies is well-positioned to continue its legacy of pioneering simulation technology. With GT-SUITE leading the way, the company is set to drive further advancements in engineering and design, helping industries navigate the complexities of modern technology and innovate for a better tomorrow.
Learn More About Gamma Technologies’ Simulation Solutions
To learn more about our simulation capabilities, visit our website. Learn more about GT-SUITE here. Contact us here to speak to a GT expert!
Simulating Real Driving Maneuvers in Traffic using SUMO and GT-SUITE
The Need for Realistic Vehicle Operating Conditions
In an era where mobility is becoming increasingly electrified, new engineering strategies are needed to optimize an entire vehicle system for fuel- and energy-saving potential across a multitude of driving scenarios.
Especially during local commutes in traffic, being able to predict vehicle operational behavior, together with its sensitivity to variable human factors such as driver behavior, enables engineering teams to develop better vehicles.
Available traffic simulation tools like SUMO (Simulation of Urban MObility), VISSIM (Verkehr In Städten – SIMulationsmodell), AIMSUN (Advanced Interactive Microscopic Simulator for Urban and Non-Urban Networks), and others employ traffic flow theory, which describes drivers’ behavior and its impact on overall traffic performance. This approach typically investigates the behavior of the ego vehicle (a vehicle equipped with sensors that perceive the external environment) together with various traffic actors, while ignoring the predictive details of the vehicle dynamics.
By combining SUMO with GT-SUITE (a systems simulation platform), an engineer can study the behavior of a high-fidelity powertrain model within a real-world environment, considering the non-deterministic nature of traffic and driver behavior. This approach takes vehicle simulation to the next level, offering more realistic vehicle operation and assessment of fuel/energy-saving potential.
Coupling Multiple Simulation Solutions to Model Traffic Behavior
In this study, we coupled various simulation solutions to best model a realistic traffic scenario. We selected the open-source software, SUMO, which is a well-known tool in the field of traffic simulation. The coupling was implemented by leveraging SUMO’s TraCI (Traffic Control Interface) protocol, which allows users to establish interactions between other simulation tools.
Easy coupling of both GT-SUITE and SUMO with Simulink enables us to use Simulink as a lead tool controlling the co-simulation process and data exchange between the ego vehicle in SUMO, and its dynamic response modeled in GT-SUITE.
Once these solutions are combined, driver traffic decisions are managed entirely by the SUMO driver using the implemented car-following, lane-changing, and overtaking models, along with traffic constraints. A vehicle model in GT-SUITE replicates the speed trajectory while calculating energy consumption. The GT-SUITE vehicle model then estimates the potential acceleration and/or speed limits for future vehicle maneuvers. These vehicle dynamic limitations, shared with the SUMO driver, keep the ego vehicle operating within those limits (see Figure 1 below).
Communication between the traffic and vehicle dynamics simulations through Simulink enables the integration of additional subsystems or controls into the existing setup. Therefore, a vehicle model and the simulation of its dynamics can easily be used for numerous studies and development processes as part of more complex simulation platforms.
Simulating an Electric Vehicle (EV) in “New York City Traffic”
Let’s look at an example. We’ll use one of the available SUMO traffic examples to look at an electric vehicle (EV) model in GT-SUITE.
The main focus of this example model is to establish bidirectional communication between these simulation platforms while keeping the vehicle running within realistic powertrain operation. This was achieved by using available features of the powertrain physical and controls components to estimate limitations coming from different vehicle subsystems. Additional preprocessing of the powertrain limits on available traction force, coming from the electric motors and the battery management system (BMS), was also developed.
At each communication update interval, the GT-SUITE plant model provides the current acceleration/deceleration limits to the SUMO driver model. With this information, the SUMO driver achieves an additional level of fidelity, adjusting its driving style according to the vehicle powertrain limitations (see Figure 2).
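The exchange at each update interval amounts to clipping the traffic model's requested acceleration to the plant model's current capability. The sketch below is a mock of this handshake, not the actual TraCI/Simulink implementation; the traction-limit curve and all parameters are hypothetical:

```python
def cosim_step(requested_accel, powertrain_limit_fn, speed, dt):
    """One co-simulation exchange: the traffic model's requested
    acceleration is clipped to the plant model's current limit, then the
    ego-vehicle speed is advanced. Returns (new_speed, applied_accel)."""
    a_max = powertrain_limit_fn(speed)  # stands in for the GT-SUITE plant model
    a = min(requested_accel, a_max)
    return speed + a * dt, a

# Hypothetical traction limit: available acceleration falls with speed as the
# e-motor moves from constant-torque to constant-power operation.
def accel_limit(speed, p_max=100e3, f_max=4000.0, mass=1600.0):
    force = min(f_max, p_max / max(speed, 0.1))
    return force / mass
```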
Knowing the vehicle model limits allows the SUMO driver to command ego-vehicle acceleration and speed that are both realistic and achievable. The image below (Figure 3) shows the effects: in the co-simulation mode (SUMO and GT-SUITE), the driver is less aggressive and keeps the vehicle running within the available powertrain limits.
From the plot below, we see that driver behavior follows the same initial aggressiveness as long as the powertrain limitations are not active. With this approach, the SUMO driver makes decisions freely until the physical model’s powertrain limitations are reached.
In the example demonstrated, the limitations are imposed by the BMS, which enforces the minimum and maximum battery voltage and the maximum charge and discharge rates.
Other physical component limitations include:
- Energy source systems, including battery, super-capacitor, fuel cell, or other current-, voltage-, or temperature-related torque limitations
- E-motor mechanical, electrical, or temperature-related torque limitations
Learn More About our Co-Simulation and Driving Simulation Capabilities
In general, in addition to a vehicle’s physical component limitations, a user can impose acceleration or speed limits as commands from advanced driver assistance system (ADAS) controllers.
Here, we recognize a broad range of uses for integrated simulation solutions: more sophisticated energy consumption and emissions studies; development of hybrid energy system controls logic; ADAS; and applications in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
To learn more about our co-simulation capabilities visit this webpage here. Learn more about our hybrid and electric vehicle capabilities here. Contact us here to speak to an expert!
Simulating a NASA Hydrogen Powered Rocket
Propelling the Orion Spacecraft to the Moon
Images of the moon from the NASA Orion spacecraft remind us of how our technological advancements have made these wonders of the night sky reachable. The Artemis I mission marked an important milestone: it is the closest a human-rated spacecraft has come to the moon since the Apollo 17 mission in 1972. This mission is planned to be followed by the Artemis II launch, in which the Orion spacecraft will host a crew for a lunar flyby.
We celebrate our love for space exploration by highlighting GT-SUITE’s simulation capabilities in the aerospace field and sharing a model study. To demonstrate the integration of multi-physics domains, a GT-SUITE model was built to replicate the steady-state operation of the RL10A-3-3A hydrogen-powered rocket engine. The model was based on data and dimensions found in publications about the RL10A-3-3A rocket engine [1][2].
Modeling the rocket engine’s turbomachinery requires coupling the fluid, mechanical, and thermal domains, along with accurate two-phase fluid properties. The two-stage liquid hydrogen pump and the liquid oxygen pump are powered by the expansion of hydrogen across a turbine, made possible through fluid-mechanical coupling. Thermal energy is also recycled from the burnt gases flowing out of the rocket nozzle to help power the turbine; this is accomplished through a fluid-to-thermal structural connection. Energy from the gases within the rocket nozzle is transferred to the nozzle wall according to the Bartz heat transfer correlation. The nozzle wall is then cooled by hydrogen lines running through it, and the turbine uses the thermal energy added to the hydrogen to power the pumps, circulating energy from combustion back into the system. The hydrogen and oxygen properties are determined with the NIST subroutines from the REFPROP program to ensure accurate fluid behavior.
Combustion is modeled with an equilibrium chemistry solver that calculates the composition of the burned gases in the combustion chamber, so the effects of dissociation at high temperatures are considered. Solving the chemical kinetics to capture oxidation of radical species as the mixture expands through the nozzle’s diverging section was also found to improve the accuracy of the heat transfer and thrust predictions.
Results of Simulating Steady-State Rocket Engine Pressure and Temperature
The results of the GT-SUITE model allow for the visualization of pressure and temperature throughout the rocket engine during steady operation. The thrust and specific impulse of the RL10A-3-3A engine were also calculated from the flow conditions of the exhaust gases. The GT-SUITE results were shown to match the published RL10A-3-3A performance data well [1][2].
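The thrust and specific impulse calculation referenced here follows the standard rocket relations F = mdot*v_e + (p_e - p_a)*A_e and Isp = F/(mdot*g0). A minimal sketch of this step (the numbers in the checks are arbitrary, not the published RL10 data):

```python
G0 = 9.80665  # standard gravity [m/s^2]

def thrust_and_isp(mdot, v_exit, p_exit, p_amb, a_exit):
    """Thrust from the momentum and pressure terms, plus specific impulse.
    mdot [kg/s], v_exit [m/s], pressures [Pa], a_exit [m^2].
    Returns (thrust [N], Isp [s])."""
    thrust = mdot * v_exit + (p_exit - p_amb) * a_exit
    return thrust, thrust / (mdot * G0)
```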
Learn More about Our Aerospace Simulation Capabilities
To learn more about our aerospace capabilities, visit our aerospace industry page.
Contact us for specific comments or questions.
Citations
[1] M. Binder. A Transient Model of the RL10A-3-3A Rocket Engine. Contractor Report NASA CR-195478, NASA, July 1995. https://ntrs.nasa.gov/api/citations/19950022693/downloads/19950022693.pdf
[2] Matteo, Francesco Di, et al. “Transient Simulation of the RL-10A-3-3A Rocket Engine.” https://arc.aiaa.org/doi/abs/10.2514/6.2011-6032
This piece was originally written on December 12, 2022
How Will Electric and Hybrid Vehicle Development Be Impacted by the Softening of US Rules?
Governmental Regulations Impacting Automotive OEMs
In recent news, governmental tailpipe regulations for new vehicles in the United States have been softened, affecting original equipment manufacturers’ (OEMs’) development of electric vehicles (EVs) and hybrids (HEVs).
The Department of Energy has significantly slowed the phase-out of existing rules that give automakers extra fuel-economy credit for the electric and hybrid vehicles they currently sell. In practice, these complex regulations have helped U.S. automakers meet new federal standards for fleetwide fuel efficiency while continuing to sell traditional internal combustion engine (ICE) vehicles.
The Role Simulation Plays in New Vehicle Development
With these changes, it’s now imperative for the engineering community to leverage simulation platforms such as GT-SUITE in today’s automotive development for several reasons:
- Cost Reduction: Developing new automotive technologies, especially in the context of EVs and hybrids, can be expensive. Simulation allows OEMs to test various designs and configurations virtually, reducing the need for physical prototypes and costly trial-and-error processes.
- Time Efficiency: With simulation, OEMs can accelerate the development process. They can quickly assess the performance of different components and systems, identify potential issues, and iterate on designs much faster than with traditional methods. This agility is crucial in a competitive market where time-to-market can make a significant difference.
- Regulatory Compliance: Although regulations may slow down, they are unlikely to disappear. OEMs still need to meet stringent emissions standards and fuel efficiency requirements. Simulation enables them to explore different powertrain configurations, optimize efficiency, and ensure compliance with current and future regulations.
- Technology Exploration: Even as regulations ease, the demand for cleaner and more efficient vehicles continues to grow due to environmental concerns and consumer preferences. Simulation allows OEMs to experiment with emerging technologies, such as advanced battery chemistries or fuel cell systems, and stay ahead of the curve in the evolving automotive landscape.
- Risk Mitigation: Investing in new technologies carries inherent risks. Simulation helps OEMs mitigate these risks by providing insights into potential challenges and performance limitations before committing to large-scale production. This allows them to make informed decisions and allocate resources more effectively.
- Optimization and Innovation: Simulation enables OEMs to optimize the performance of electric powertrains, hybrid systems, and fuel cell technologies. By fine-tuning parameters such as energy efficiency, range, and power output, they can deliver vehicles that meet or exceed customer expectations while staying competitive in the market.
Learn More About Our Simulation Solutions
While phased-in regulations may temporarily ease the pressure on OEMs, simulation remains a crucial tool for innovation, efficiency, and competitiveness in the automotive industry, especially in the context of evolving technologies such as electric powertrains and fuel cells.
To learn more about GT-SUITE, visit our website here. Speak to a GT expert today here and see how to incorporate simulation into your vehicle development needs.
Understanding Fuel Cell Systems Simulation for Vehicle Integration
In Episode 3 of the Gamma Technologies Tech Talk podcast, the team delved into the world of fuel cell systems simulation and its integration at the vehicle level. Navin Fogla, PhD (Senior R&D Manager, Reactive Flow Systems) and Jake How (Senior Staff Application Engineer, Reactive Flow Systems) shared insights into the mechanics behind this advanced technology.
Watch the full episode on Gamma Technologies’ YouTube channel.
Mapping the Integrated Vehicle-Level Perspective & Achieving Fidelity in Simulation
Towards the end of the podcast, Jake provided a walkthrough of GT’s fuel cell modeling capabilities via the simulation platform, GT-SUITE. This walkthrough emphasized how it is possible to scale GT’s simulations from one level of fidelity to another, ensuring a comprehensive understanding of the fuel cell system’s behavior within an integrated vehicle model.
At the heart of the integrated vehicle system lies the fuel cell stack, but this is just one piece of the puzzle. The integrated vehicle-level map above showcases how the fuel cell stack is connected to various components such as hydrogen tanks, air handling systems, cooling systems, and an electrical powertrain. The powertrain also includes a DC-DC converter to regulate voltage and a motor that propels the vehicle. Fuel cell system simulation can also be conducted at two ends of the fidelity spectrum: at the lower end, models can be simplified to allow for faster simulations, optimization, and design of experiments.
On the other hand, more advanced simulations, such as pseudo-3D or 3D-1D modeling, provide a high-fidelity analysis of the fuel cell stack or individual cells. This level of detail allows for the investigation of coolant rates, local hotspots and cold spots, as well as oxygen and water distribution.
Learn More About Fuel Cell System Modeling Capabilities
If you’re curious about fuel cell technology, hydrogen safety, and system simulation, make sure to watch/listen to the full episode on the GT Tech Talk podcast on Gamma Technologies YouTube channel or on Spotify for Podcasters today!
To learn more about GT’s fuel cell system modeling capabilities, visit this webpage or speak to a GT expert today!
Subscribe to the GT Tech Talk podcast and learn more about the show as well as upcoming episodes here.
This blog was originally published on September 22, 2023.
How to Analyze Noise, Vibration and Harshness in Electric Powertrains (e-NVH) using Simulation
What are the Sources of Noise and Vibration in Electric Drives?
The general shift towards electrification in the electric vehicle (EV) market and beyond has created a need for higher fidelity simulation of electric powertrains. One aspect of this trend that has been getting attention is the desire for detailed analysis of an electric motor’s noise, vibration, and harshness (NVH) characteristics early in the design stage.
The sources of the characteristic high-pitch whine of electric motors are the interactions between different airgap field harmonics inside the machine, together with the switching voltage inputs from the inverter. These elements generate force waves in the airgap that can excite the structure of the motor and cause vibrations, particularly at specific resonant frequencies. Imperfect torque and stator load profiles cause further vibration of attached machinery components and the gearbox housing (e-axle). Unlike internal combustion engines, where the engine sound is often a prominent feature that we want to accentuate, any sound produced by an electric drive unit is usually undesirable, so the goal is to minimize it.
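As a rough illustration of how airgap field harmonics combine into force waves, the sketch below applies the Maxwell stress idea, p ≈ B²/(2μ0), to two assumed flux harmonics and reads off the resulting spatial force orders. All harmonic orders and amplitudes are invented for illustration and are not GT-FEMAG output.

```python
import numpy as np

# Radial magnetic pressure in the airgap from two assumed flux harmonics.
# Squaring the field makes harmonics of orders n1 and n2 interact, creating
# force waves at orders 2*n1, |n1 - n2|, n1 + n2, and 2*n2.
MU0 = 4e-7 * np.pi
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

n1, B1 = 4, 0.9    # fundamental field harmonic (spatial order, amplitude in T)
n2, B2 = 28, 0.08  # slot-order harmonic - both values illustrative

B = B1 * np.cos(n1 * theta) + B2 * np.cos(n2 * theta)
p = B**2 / (2 * MU0)  # radial magnetic pressure [N/m^2]

# A spatial FFT of the pressure reveals which force orders can excite the stator
spec = np.abs(np.fft.rfft(p)) / len(theta)
force_orders = [k for k in range(1, 100) if spec[k] > 1e-3 * spec.max()]
print(force_orders)  # -> [8, 24, 32, 56]: 2*n1, |n1-n2|, n1+n2, 2*n2
```

The low-order difference harmonic (here order 24) is typically the one most capable of deforming the stator and producing audible noise.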
A Complete & Fully Integrated Workflow
To properly analyze how this noise and vibration is created, and to mitigate it, a system-level simulation of the motor, the inverter, and the mechanical components is necessary. To capture the NVH characteristics of an electric drive unit, a new workflow was developed that spans GT-FEMAG’s electromagnetic finite element analysis and GT-SUITE’s electrical and mechanical transient simulations (noted in Figure 1 below).
Electrical Section
The first step in this process is to use GT-FEMAG, a finite element electromagnetic modeling tool built for motor design, to design a motor that meets the speed and torque requirements of the traction application. After the motor design has been finalized, GT-FEMAG can export a high-fidelity model of the motor that is used to populate the datasets of a lookup-table-based permanent magnet synchronous motor (PMSM) template in GT-SUITE, capturing the torque ripple and the spatial harmonics inside the machine. This motor template is coupled with a detailed 3-phase inverter and controlled with closed-loop feedback control (Figure 2).
This simulation outputs the 3-phase currents in the motor windings at multiple speeds (see Figure 3 below). These currents are used in the next step, the mechanical part of the workflow, to calculate the motor forces.
Mechanical Section
Moving on to the mechanical section of the workflow, GT-FEMAG uses the ABC currents calculated previously to evaluate the magnetic pressure as a function of space and time, which is then used to predict the forces generated in the motor, as a function of rotor position and stator tooth, for each operating speed and torque combination (Figure 4). These forces serve as the boundary conditions for the mechanical analysis in the next step.
Next, these excitation loads are used in GT-SUITE as the input for a forced frequency analysis to obtain the steady-state structural response of the overall gearbox housing. By performing a Fourier transformation of the loads from the previous step, we can obtain the amplitudes of the applied loads at each frequency, resulting from the various speed and order combinations of the simulated drivetrain. With this information, it is possible to directly identify areas that exhibit excessive surface vibration and react accordingly by modifying the system. Additionally, the surface vibration response can be used to perform an acoustic analysis using a rapid sound assessment method that provides the sound pressure level at any location around the structure (see Figure 5 below).

Figure 5: Surface Normal Velocity at a given frequency, Campbell diagram at a given node, and sound pressure of the powertrain in 3D space
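The Fourier transformation step described above can be sketched in a few lines: given a time-domain force signal, an FFT returns the amplitude of each excitation frequency. The two-tone signal below is an assumed stand-in, not output from the GT-SUITE workflow.

```python
import numpy as np

# Recover the amplitude of each excitation component from a time-domain
# force signal, as done before the forced frequency analysis.
fs = 20000.0                      # sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s window (2000 samples)
# Assumed tooth-force signal: 1 kHz slot-passing component + 2 kHz harmonic
force = 80.0 * np.sin(2 * np.pi * 1000 * t) + 15.0 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.fft.rfft(force)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
amplitude = 2 * np.abs(spectrum) / len(t)  # single-sided amplitude spectrum

peak = freqs[np.argmax(amplitude)]
print(peak)  # dominant excitation frequency -> 1000.0 Hz, amplitude ~80 N
```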
An All-in-One Package for e-NVH Analysis
This workflow offers a straightforward and convenient way to analyze the NVH performance of any electric powertrain. As a tightly linked system contained fully within GT’s library of tools, it enables users to run many iterations easily and quickly and to optimize their designs based on many parameters, such as the geometric characteristics of the motor, the switching frequency, or the modulation strategy of the inverter, and see how these changes affect NVH performance. The high degree of integration between the electromagnetic, electrical, and mechanical domains of this workflow provides a seamless user experience, without having to resort to multiple different simulation tools, as is typically the case.
Learn More About our e-Powertrain NVH Solutions
The full workflow is presented in more detail by GT’s experts in this 30-minute SAE webinar. If you’d like to learn more or are interested in trying GT-FEMAG and GT-SUITE for e-powertrain and NVH simulation, contact us!
How to Model Fuel Reformers with Simulation
Addressing the Evolving Needs of Powertrain Engineering Through Simulation
As the powertrain market begins to pivot from traditional diesel and gasoline engines towards hydrogen engines and fuel cells, there is the open question of how to provide hydrogen to these new powertrains. In the short term, it seems that converting an available hydrocarbon fuel to hydrogen will be needed. For mobile applications, methanol and ethanol as well as compressed natural gas are logical options. For stationary applications, using the natural gas supply infrastructure makes sense.
Gamma Technologies has created three example models of fuel reformers for methanol, ethanol, and methane in GT-SUITE activated with the GT-xCHEM product license. These example models help support research and development of H2 combustion engines and fuel cells, as well as expansion into general chemical processing. The results of each of these new fuel reformer example models are summarized in this blog.
Note that in the methanol and methane reformer sections, the X axis of the figures is the catalyst material load divided by the molar flow rate of the key reactant, W/Fm, which is sometimes referred to as the contact time. A low W/Fm value represents high flow (short residence time), and a high W/Fm represents low flow (long residence time).
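For readers who want to see the bookkeeping, here is a minimal calculation of the contact time W/Fm. The catalyst load and flow rate are illustrative numbers, not values from the referenced studies.

```python
# Contact time W/Fm = catalyst mass / molar flow rate of the key reactant
W_cat = 0.5          # catalyst load [g] (illustrative)
F_methanol = 0.05    # methanol molar flow rate [mmol/s] (illustrative)

W_over_Fm = (W_cat / 1000.0) / F_methanol   # [kg_cat-s/mmol]
print(W_over_Fm)  # 0.01 kg_cat-s/mmol: a moderate contact time
```

A lower flow rate at the same catalyst load raises W/Fm, giving the reactants a longer residence time on the catalyst.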
Methanol Reformer Model
The first model we’re simulating demonstrates a methanol steam reformer reactor. Methanol (CH3OH) and water (H2O) react over a CuO/ZnO/Al2O3 catalyst to form H2 and CO2. The reaction mechanism, input data, and measurement data for the reformer are from the reference Purnama et al1.
In this specific reaction mechanism, the methanol steam reforming reaction (reaction 1) is modeled in the forward direction only. The water gas shift (WGS) and reversible WGS are modeled as two separate reactions (reactions 2 and 3). At high temperature, the H2 and CO2 can react through the reverse WGS reaction to form CO and H2O.
Reaction 1: CH3OH + H2O → CO2 + 3H2
Reaction 2: H2 + CO2 → CO + H2O
Reaction 3: CO + H2O → H2 + CO2
This example model is designed to recreate several figures (Figures 1 and 2) from Purnama et al1. Methanol and water are supplied to a packed bed reactor at a 1:1 molar ratio. Four temperatures (230, 250, 270, and 300°C) are simulated, and the catalyst load to molar flow ratio W/Fm is varied from 0.0001 to 0.03 kgcat-s/mmolCH3OH. The result is a good correlation for both the overall methanol conversion efficiency and the prediction of the products, including hydrogen, as shown in the figures below.

Figure 1. Simulation results of methanol conversion efficiency vs. W/Fm for four temperatures: 230, 250, 270, 300°C
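To give a feel for the conversion-versus-contact-time trend in Figure 1, here is a deliberately simplified plug-flow sketch with a single ASSUMED first-order Arrhenius rate. The actual Purnama et al. mechanism has three reactions with fitted kinetics, so this sketch only reproduces the qualitative shape: conversion rises with W/Fm and with temperature.

```python
import math

# Toy plug-flow model: dX/d(W/Fm) = k(T) * (1 - X), which integrates to
# X = 1 - exp(-k * W/Fm). The pre-exponential factor and activation
# energy are assumed round numbers, not the published kinetics.
def conversion(w_over_fm, T_kelvin, k0=3.5e9, Ea=80e3):
    k = k0 * math.exp(-Ea / (8.314 * T_kelvin))  # Arrhenius rate constant
    return 1.0 - math.exp(-k * w_over_fm)        # methanol conversion X

for T_c in (230, 250, 270, 300):
    X = conversion(0.01, T_c + 273.15)
    print(f"{T_c} C: X = {X:.3f}")  # conversion increases with temperature
```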
Ethanol Reformer Model
The next model to simulate is the ethanol reformer. Three reactions were used to model the ethanol steam reforming process to produce H2 over an Rh-Pd/CeO2 catalyst. Reactions 2 and 3 are modeled as reversible reactions.
Reaction 1: C2H5OH → CH4 + H2 + CO
Reaction 2: CO + H2O ↔ CO2 + H2
Reaction 3: CH4 + H2O ↔ 3H2 + CO2
Simulations were run with an operating pressure of 4.5 bar, a steam-to-carbon ratio of 3, and an operating temperature of 500 to 1000 K. The results are shown in the figures below along with the measured data from Lopez et al2.

Figure 3. Ethanol conversion and H2 yield vs. temperature & product species molar flow rate vs. temperature
Figure 3 above shows that between 500 and 700 K the ethanol conversion rises steadily from near zero to 100%. However, not all ethanol is converted directly into H2, so the H2 yield does not follow the same pattern as the ethanol conversion. Reaction 3 is the steam methane reforming (SMR) reaction, which begins above 700 K and causes a distinct slope shift in H2 production as more H2 is produced from methane.
Regarding the species molar flow rates, the bottom figure shows that CH4 exhibits a unimodal curve with respect to operating temperature, peaking at 700 K, where ethanol breakdown reaches 100%. Above 700 K, ethanol decomposition can no longer increase CH4 generation, while SMR activates and consumes CH4 to produce additional H2, so the CH4 molar flow rate drops. The SMR reaction also accounts for the increased CO generation above 700 K, producing a bimodal CO curve whose first peak comes from the ethanol breakdown process combined with the water gas shift (WGS) reaction. WGS drives the rise in CO2 molar flow rate following the first CO peak. All of these patterns are well captured by the model.
Methane Reformer Model
The methane reformer uses a chemical process called steam methane reforming (SMR) to convert methane into hydrogen gas. The methane reformer model can be used to study the effect of temperature and pressure on the efficiency of the reformer. The reaction mechanism used in the model is shown below. All three reactions are modeled as reversible in this reaction mechanism.
Reaction 1: CH4 + H2O ↔ CO + 3H2
Reaction 2: CO + H2O ↔ CO2 + H2
Reaction 3: CH4 + 2H2O ↔ CO2 + 4H2
At higher temperatures (700-1100 K), these reactions proceed in the forward direction, resulting in the conversion of methane (CH4) and water (H2O) into carbon dioxide (CO2) and hydrogen (H2). If the operating temperature is reduced (450-650 K), the mechanism runs in the backward direction, producing CH4 and H2O from the reaction of CO2 and H2, also known as the methanation process.
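The temperature switchover between reforming and methanation can be sketched with a simple van ’t Hoff style estimate. The ΔH and ΔS values below are rounded, textbook-level numbers for the SMR reaction, not the Xu and Froment kinetics used in the model.

```python
import math

# For CH4 + H2O <-> CO + 3H2: assumed dH ~ +206 kJ/mol, dS ~ +216 J/mol-K.
# Keq = exp(-dG/(R*T)) with dG = dH - T*dS crosses 1 near T = dH/dS ~ 950 K,
# so low T favors methanation (reverse) and high T favors reforming (forward).
R, dH, dS = 8.314, 206e3, 216.0

def Keq(T):
    return math.exp(-(dH - T * dS) / (R * T))

print(Keq(500))   # << 1 : reverse direction (methanation) favored
print(Keq(1100))  # >> 1 : forward direction (reforming) favored
```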
The SMR model is made from information found in the reference Xu and Froment3 for a Ni/MgAl2O4 catalyst in a packed bed reactor. In the model, the inlet feed contains H2O, CH4, and H2 in the molar ratio 3:1.25:1, and the temperature is varied from 773 K to 848 K. For each temperature case, the methane contact time (W/FCH4) is varied from 0.01 to 0.425 gcat-hr/molCH4.
In Figure 4, shown below, the GT-xCHEM simulation results of the conversion of CH4 and production of CO2 and H2 are plotted along with the experimental results reported by Xu and Froment3. The GT-xCHEM simulation results correlate well with the experimental data as the conversion efficiency of the reformer increases with increasing contact time and increasing operating temperature.

Figure 4. Comparison of simulation results of conversion of CH4 and production of CO2 and H2 with experimental data
Learn More About Our Chemical Systems Modeling Solutions
In this blog we presented three fuel reforming example models available in GT-xCHEM. These example models help support research and development in H2 combustion engines and fuel cells, as well as expansion into general chemical processing. This study was also published in SAE’s technical papers publication in April 2024; access the paper here. Gamma Technologies will continue to add to the library of ready-to-use catalyst and reactor models available in the installation directory of GT-SUITE activated with the GT-xCHEM product license. You may need to install the newest build update to see them, or, if you are on an older version or build, you can request the models from [email protected]. If you have any questions or would like more information about fuel reformer modeling with GT-SUITE, please contact us here.
References
- “CO formation/selectivity for steam reforming of methanol with a commercial CuO/ZnO/Al2O3 catalyst,” Purnama, H., Ressler, T., Jentoft, R.E., Soerijanto, H., Schlögl, R., Schomäcker, R., 2004, Applied Catalysis A: General, v259, 83-94. https://doi.org/10.1016/j.apcata.2003.09.013
- “Ethanol steam reforming for hydrogen generation over structured catalysts,” López, E., Divins, N. J., Anzola, A., Schbib, S., Borio, D., & Llorca, J, 2013, International Journal of Hydrogen Energy, 38(11), 4418–4428. https://doi.org/10.1016/j.ijhydene.2013.01.174
- “Methane Steam Reforming, Methanation and Water-Gas Shift: I. Intrinsic Kinetics,” J. Xu, G.F. Froment, 1989, AIChE J., 35 (1), 88-96. https://doi.org/10.1002/aic.690350109
Top 10 Gamma Technologies Blogs of 2023!
From calculating EV range to heat pump design, there is a blog for every simulation!
As we kick off 2024, let’s look back at the best blogs of 2023! Since the inception of Gamma Technologies, GT-SUITE has optimized system simulation solutions for manufacturers! In no order, these are the top 10 blogs written in 2023 that highlight the vast application use cases and technical capabilities GT-SUITE can deliver!
- Decreasing Battery System Simulation Runtime using Distributed Computing
- Calculating Electric Vehicle Range with Simulation
- Engine Manufacturers Leverage Simulation to Engineer Ahead of Increasing Regulations
- Enhancing Model Accuracy by Replacing Lookup Maps with Machine Learning Models (Machine Learning Blog Part 1)
- Optimizing Neural Networks for Modeling and Simulation (Machine Learning Blog Part 2)
- Mitigating the Domino Effect of Battery Thermal Runaway with Simulation
- Designing Thermally Secured Electric Motors with Simulation
- Understanding Fuel Cell Systems Simulation for Vehicle Integration
- Addressing Heat Pump Challenges, from Home to Industry with Simulation
- Simulating Predictive Cruise Control for a Heavy-Duty Truck: Quickly and Easily
Shout-outs to our colleagues for their contributions!
Learn more about our simulation solutions!
If you’d like to learn more about how Gamma Technologies can be used to solve your engineering challenges, contact us here!
Wishing you a healthy & prosperous 2024!
Simulating Predictive Cruise Control for a Heavy-Duty Truck: Quickly and Easily
Enchanting World of Heavy-Duty Trucks
Start your engines, or in the case of an EV, activate your electric motors! Today, we’re diving headfirst into the intriguing world of predictive cruise control (or PCC for short) for a heavy-duty truck with a trailer.
“What is predictive cruise control? Is that when my truck predicts my next coffee break and drives itself to the nearest café?” Not quite, but we’re about to embark on a technical adventure that’s both fascinating and energy-saving! So, buckle up, because this blog will take you on a ride filled with science, strategy, and a pinch of humor.
Creating a Heavy-Duty Truck Road Trip with Simulation
Our story begins with a 36-ton diesel behemoth, the long-haul king of the road. Roughly 70% of freight in the United States is transported by heavy-duty trucks (see Figure 1).
Now, PCC isn’t a crystal ball telling the truck’s future, but it’s the next best thing. It takes a sneak peek at the road ahead, courtesy of some nifty navigation tools, and then flexes its mathematical muscles to figure out the optimal speed to tackle whatever terrain lies in its path. Why? The main goal of PCC is to make sure our truck sips the diesel efficiently and does not stress service brakes too much while navigating terrains that range from steep hills to gentle slopes (see Figure 2).
To make this all work, our PCC strategy has two functions: terrain awareness and speed optimization. The terrain awareness function slices and dices the upcoming road data, while the speed optimization function goes full Einstein, calculating costs and optimizing speeds.
We built functions in the GT-SUITE simulation platform that perform PCC in the background using inputs from the model (see Figure 3). A user just needs to drop the PCC compound into a model, interface three control signals with the driver (target speed, minimum speed limit, and maximum speed limit), and then watch as PCC optimizes the speed profile over a terrain and ekes out considerable fuel savings.
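For intuition, here is a toy dynamic-programming sketch of the two functions working together: terrain awareness supplies the upcoming grades, and speed optimization picks an exit speed per segment so that fuel spent (with braking wasting any surplus) plus a set-speed penalty is minimized. The truck mass matches the blog’s 36-ton example, but every coefficient, weight, and grade value is made up for illustration and is not GT-SUITE’s actual PCC implementation.

```python
SEG_LEN = 500.0                                    # segment length [m]
MASS, G = 36000.0, 9.81                            # 36 t truck
speeds = [v / 3.6 for v in (80, 85, 90, 95, 100)]  # candidate speeds [m/s]
v_set = 90 / 3.6
grades = [0.0, 0.02, 0.02, 0.0, -0.03, -0.03, 0.0]  # lookahead slope profile

def seg_cost(v_in, v_out, grade):
    v_avg = (v_in + v_out) / 2
    drag = 0.5 * 1.2 * 8.0 * 0.6 * v_avg**2 * SEG_LEN   # aero drag work [J]
    roll = 0.006 * MASS * G * SEG_LEN                   # rolling resistance [J]
    climb = MASS * G * grade * SEG_LEN                  # potential energy [J]
    dkin = 0.5 * MASS * (v_out**2 - v_in**2)            # kinetic energy change [J]
    fuel = max(0.0, drag + roll + climb + dkin)         # surplus is braked away
    return fuel + 2e4 * (v_out - v_set) ** 2            # stay near the set speed

# Backward dynamic programming: best[j] = cost-to-go entering at speeds[j]
best, policies = [0.0] * len(speeds), []
for grade in reversed(grades):
    new_best, policy = [], []
    for v_in in speeds:
        costs = [seg_cost(v_in, v_out, grade) + best[j]
                 for j, v_out in enumerate(speeds)]
        new_best.append(min(costs))
        policy.append(costs.index(min(costs)))
    best = new_best
    policies.append(policy)
policies.reverse()

# Roll the policy forward from the set speed to get the speed profile [km/h]
j = speeds.index(v_set)
profile = []
for policy in policies:
    j = policy[j]
    profile.append(round(speeds[j] * 3.6))
print(profile)
```

The interesting behavior is the kinetic-energy coupling between segments: slowing into a climb and letting gravity restore speed on the downgrade is exactly the trade a constant-speed cruise control cannot make.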
Energy Savings Results with Predictive Cruise Control
Now, the million-dollar question is, does PCC make a difference? Is it more than just a fancy set of algorithms and a slick name? GT-SUITE’s simulations have the answer.
In a world where standard drivers keep their cruise control locked at a constant speed, PCC stands out. It’s like having a driving coach whispering in your ear, telling you to speed up on that downhill slope and ease off the pedal when you’re climbing uphill.
The results? By applying the PCC strategy, this mighty truck, even with a 50% load, achieves an impressive 1% fuel savings. Actual fuel savings depend upon a lot of factors such as drive cycle, terrain profile, payload, lookahead distance, PCC speed offsets, and so forth. There is a considerable reduction in brake usage too, increasing the brake life. In the chart below (Figure 4), see the benefits of this PCC strategy on the US Interstate 5 route.
Leverage Trucking Simulation for Your Next Cruise!
In the realm of heavy-duty trucks traveling hundreds of miles every day, where every drop of fuel counts, 1% fuel savings can add up to lowering operating costs.
A PCC example model is available in GT-SUITE’s v2024 Build 0 which demonstrates how to use PCC compound in a vehicle model. Now you can simulate different speed and terrain profiles and analyze possible fuel savings on any route. It’s not just about getting from point A to point B; it’s about doing it smarter, more efficiently, and with a few technical tricks up your sleeve.
To learn more about our truck and commercial vehicle simulation capabilities, visit this webpage or speak to an expert today!
Addressing Heat Pump Challenges, from Home to Industry with Simulation
The topic of heat pumps has gained significant attention in recent years as the push for sustainable and energy-efficient solutions continues. In one of the latest episodes of Gamma Technologies’ Tech Talk podcast, GT experts, Jon Zenker (Head of Strategic Markets) and Rodrigo Aihara (Senior Staff Application Engineer, Thermal Fluid Systems), discuss the potential of electric-driven heat pumps in reducing carbon emissions and revolutionizing the HVACR industry.
Watch the full episode on Gamma Technologies’ YouTube channel.
Heat pump manufacturers face several challenges as they strive to meet the growing demand for sustainable heating and cooling solutions. In this blog, we will explore some of the key challenges that heat pump manufacturers are likely to face and discuss how simulation software can help overcome these issues.
Challenge 1: Technological Advancements
As the heat pump industry continues to evolve, manufacturers must keep up with technological advancements and find ways to improve the efficiency and performance of their products. This includes developing more advanced compressor technologies, optimizing heat transfer processes, and incorporating smart controls and automation. Manufacturers need to invest in research and development to stay ahead of the competition and meet the increasingly stringent energy efficiency standards and regulations.
Challenge 2: Market Awareness and Education
Traditionally, combustible fluids like natural gas and heating oil have been used for heating purposes. However, the introduction of heat pumps offers an alternative approach that relies on electricity instead of burning fuels, thus reducing CO2 emissions. While heat pumps work efficiently in moderate climates, they face challenges in extreme cold conditions. It is important to ensure a balance between heat pump usage and the need for auxiliary heating sources like natural gas to meet extreme winter demands.
Despite the benefits of heat pumps, many consumers are still unaware of their advantages and may have misconceptions about their performance and suitability for different climates. This lack of market awareness can limit the adoption of heat pumps. Manufacturers need to invest in marketing and educational campaigns to raise awareness about the benefits of heat pumps, such as energy savings, reduced carbon emissions, and improved indoor air quality. Collaborating with industry associations, participating in trade shows, and engaging with consumers through online platforms can help in educating the market.
Challenge 3: Policy and Regulatory Environment
The policy and regulatory landscape can influence the adoption and growth of heat pump technology. Heat pump manufacturers need to closely monitor and actively engage with policymakers and regulatory bodies to ensure favorable policies and standards that promote the use of heat pumps. This includes advocating for energy efficiency incentives, carbon reduction targets, and building regulations that support the deployment of heat pumps. By staying informed and actively participating in policy discussions, manufacturers can help shape a conducive environment for heat pump adoption.
Benefits for Consumers
One of the major benefits of using heat pumps is the ability to heat homes using electricity instead of natural gas. With variable prices of commodities like electricity and natural gas, consumers have the flexibility to choose the most cost-effective option based on market trends. This allows for greater control and optimization of energy usage, leading to potential cost savings. Additionally, advancements in smart thermostat technology enable consumers to monitor and adjust their energy consumption, supporting informed decision-making and efficient heating.
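The cost trade-off consumers face can be sketched with one line of arithmetic per fuel: divide the energy price by the efficiency of turning it into heat. The prices, COP, and furnace efficiency below are illustrative assumptions, not market data.

```python
# Cost of one kWh of delivered heat: heat pump vs. natural-gas furnace.
elec_price = 0.15      # $/kWh of electricity (assumed)
gas_price = 0.04       # $/kWh of natural gas energy content (assumed)
cop = 3.2              # heat pump coefficient of performance (assumed)
furnace_eff = 0.92     # condensing furnace efficiency (assumed)

heat_pump_cost = elec_price / cop        # $ per kWh of heat delivered
furnace_cost = gas_price / furnace_eff

print(f"heat pump: ${heat_pump_cost:.4f}/kWh, furnace: ${furnace_cost:.4f}/kWh")
```

With these particular numbers the furnace edges out the heat pump, which is exactly why the flexibility matters: as the electricity price, gas price, or COP shifts (the COP itself drops in extreme cold), the cheaper option can flip.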
Heat Pumps in Unconventional Applications
Heat pumps have found unexpected applications, such as in the beer industry, where companies are striving to become carbon neutral and emission-free. Heat pumps are being used to replace natural gas burners in heating water for the brewing process, contributing to the reduction of climate impact. This case study showcases the versatility and potential of heat pump technology beyond the traditional HVAC system.
The Future of HVACR and the Role Simulation Plays
As the HVACR industry continues to see huge growth, faster product development and improved product efficiency are key for success. Due to these pressures, the integration of modeling and simulation tools like GT-SUITE becomes crucial in designing and optimizing heat pump systems. By considering factors such as transient and steady-state behavior, refrigerants, control systems, and alternate technologies, experts can develop efficient and sustainable HVAC solutions. From -30°F to 100°F, GT’s simulations help design systems that work optimally in various conditions. GT-SUITE also enables experts to simulate various scenarios, such as different heat exchangers, compressors, and capacities, to determine their performance and make informed decisions.
Learn More About HVACR Simulation Solutions
Heat pumps are emerging as a game-changing solution in the HVACR industry, addressing the need for reduced energy consumption and lower carbon emissions. With the introduction of simulation platforms such as GT-SUITE, engineers can further optimize heat pump systems, ensuring maximum efficiency and cost-effectiveness. As the demand for sustainable solutions continues to grow, heat pumps are poised to play a vital role in achieving decarbonization goals worldwide.
If you’re curious about HVACR technology, make sure to watch/listen to the full episode on the GT Tech Talk podcast on Gamma Technologies YouTube channel or on Spotify for Podcasters today!
To learn more about GT’s HVACR modeling capabilities, visit this webpage or speak to a GT expert today!
Subscribe to the GT Tech Talk podcast and learn more about the show as well as upcoming episodes here.
Using Simulation for Battery Engineering: 15 Technical Blogs to Enjoy
At Gamma Technologies, our GT-SUITE and GT-AutoLion simulations provide battery engineers and designers robust solutions for modeling and predicting battery performance throughout its lifecycle.
Enjoy reading our battery-focused technical blogs to learn more about:
- Calculating electric vehicle (EV) range
- Decreasing battery system simulation runtime
- Vehicle modeling: ICEV & BEV correlation procedure
- Reducing battery charging time while maximizing battery life
- Reducing battery testing time and costs
- Predicting system performance with aged li-ion batteries
- Predicting lithium-ion cell swelling, strain, and stress
- Lithium-ion battery modeling for automotive engineers
- Non-automotive li-ion applications: aircraft, ships, power tools, cell phones and others
- Battery thermal runaway propagation
- Fuel cell system modeling
- Virtual calibration of fast charging strategies
- Parametric battery pack modeling for all existing cooling concepts
- Robust battery pack simulation by statistical variation analysis
- Sensitivity analysis: ranking the importance of battery model parameters
Since the inception of GT-SUITE, Gamma Technologies has recognized the transformation of automotive and non-automotive industries. Our solutions are powertrain and industry agnostic, and we are looking to guide customers and partners towards a sustainable world.
Learn more about our battery simulation solutions!
If you’d like to learn more about how GT-SUITE and GT-AutoLion can be used to solve your battery pack design challenges, contact us here!
This blog was initially published on June 10, 2022.
Designing Thermally Secured Electric Motors with Simulation
Component-Level Design & Analysis for Motor Thermal Security
Most of today’s traction motors in battery electric vehicles (BEVs) are permanent magnet synchronous machines (PMSMs) that use interior permanent magnet (IPM) rotors with rare-earth magnets embedded in the rotor (automotive companies are starting to explore or even build other technologies, but that is a topic for another blog). These magnets generate heat and tend to demagnetize if they reach critical temperatures; moreover, because they are embedded in the rotor, cooling these magnets can be challenging.
In the design process of a traction motor, a variety of stator and rotor cooling options should be studied, and simulation gives motor designers the ability to study trade-offs of different cooling strategies without having to build and test physical prototypes (saving both time and money). Traditionally, steady-state, component-level simulations that couple finite-element approaches for both electromagnetics and thermal conduction and convection are used to ensure the thermal security of the motor.
For an example of this, see the model results below that utilize both GT-FEMAG and GT-SUITE. These simulation solutions from Gamma Technologies were used to couple the electromagnetic and thermal finite element solutions to study different stator cooling topologies for a traction motor. The results below include the trade-offs of structure temperature (windings and magnets), coolant temperature rise, and coolant pressure drop.

Coupling GT-FEMAG electromagnetic and GT-SUITE thermal finite element solutions for motor cooling design analysis
System-Level BEV Design & Analysis
System-level engineering of BEVs, on the other hand, requires an understanding of the global energy management puzzle and temperature distribution of its components (such as batteries, motors, inverters, and the occupants) to predict detrimental hot spots and occupant comfort in either hot or cold ambient temperatures during transient events. For more on this topic, see a blog written by my colleague, Brad Holcomb.
In the case of automotive applications, the most common transient analyses performed are drive cycle tests that can be 30 minutes, or longer. For these long, transient simulations, the traditional finite element model of the motor (introduced earlier) would be too slow to be integrated into system-level simulation for hot spot prediction.
The challenge is how can system-level engineers have an accurate, fast-running representation of a traction motor capable of hot spot prediction that can be integrated into a system-level model? In other words, how can we blur the lines between component-level and system-level simulation to engineer better electric vehicles?
Blurring the Lines Between Component-Level & System-Level Simulation with GT-SUITE and GT-FEMAG
With GT-FEMAG and GT-SUITE, Gamma Technologies offers an innovative way to have physics-based models of these coupled electromagnetic-thermal models to be used in system-level simulation.
First, the electromagnetic solver of GT-FEMAG is integrated into GT-SUITE as a seamless pre-processor that can automatically generate either map-based versions of motors with detailed component losses (for example, winding, iron, or magnet losses) or equivalent circuit models of motors (commonly referred to as “Ld, Lq” models) for the transient solver of GT to use in system-level simulations.

GT-FEMAG as a pre-processor to GT-SUITE System-Level Simulation
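For readers unfamiliar with “Ld, Lq” models, the sketch below shows the dq-frame torque expression such an equivalent-circuit PMSM model evaluates: a magnet term plus a reluctance term from the Ld/Lq saliency. The parameter values are invented for illustration, not exported from GT-FEMAG.

```python
# Equivalent-circuit ("Ld, Lq") PMSM torque sketch with assumed parameters.
POLE_PAIRS = 4
LAMBDA_PM = 0.08            # permanent-magnet flux linkage [Wb] (assumed)
L_D, L_Q = 0.3e-3, 0.6e-3   # dq inductances [H]; Lq > Ld for an IPM rotor

def torque(i_d, i_q):
    # magnet torque + reluctance torque from the dq saliency
    return 1.5 * POLE_PAIRS * (LAMBDA_PM * i_q + (L_D - L_Q) * i_d * i_q)

print(torque(0.0, 200.0))     # magnet torque alone -> 96.0 Nm
print(torque(-100.0, 200.0))  # negative i_d adds reluctance torque -> 132.0 Nm
```

This is why IPM traction drives are operated with negative d-axis current: the saliency that makes the rotor cheap to magnetize also contributes usable torque.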
Second, the thermal solver of GT-SUITE has a generalized, physics-based, and one-click switch that automatically converts 3D finite element models into 1D lumped thermal network models.

Automatic model order reduction in GT-SUITE to reduce thermal finite element models to thermal network models
These two capabilities enable users to quickly traverse different modeling fidelities between fully detailed electromagnetic and thermal finite element and fast-running and accurate 1D models.
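To illustrate what a reduced 1D lumped thermal network looks like, here is a minimal two-node sketch (winding and stator iron against a fixed-temperature coolant). The capacitances and conductances are assumed round numbers, not values produced by GT-SUITE’s model order reduction.

```python
import numpy as np

# Two-node lumped thermal network: copper loss enters the winding node,
# conducts to the stator node, and is rejected to the coolant.
C = np.array([800.0, 2500.0])   # winding, stator heat capacity [J/K] (assumed)
G_ws = 15.0                     # winding->stator conductance [W/K] (assumed)
G_sc = 40.0                     # stator->coolant conductance [W/K] (assumed)
T_cool = 20.0                   # coolant temperature [C]
Q_loss = 300.0                  # copper loss injected at the winding [W]

T = np.array([20.0, 20.0])      # initial temperatures
dt, t_end = 0.1, 3600.0
for _ in range(int(t_end / dt)):            # explicit Euler in time
    q_ws = G_ws * (T[0] - T[1])
    q_sc = G_sc * (T[1] - T_cool)
    T = T + dt * np.array([(Q_loss - q_ws) / C[0], (q_ws - q_sc) / C[1]])

print(T)  # steady state: winding ~47.5 C, stator ~27.5 C above-coolant gradient
```

Two ODEs in place of a 3D conduction mesh is what makes hour-long drive-cycle transients affordable while still resolving the winding hot spot.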
Electric Motor Simulation Demonstration & Results
In a demonstration model we created, we combined these technologies to be able to run back-to-back Worldwide Harmonized Light-Duty Vehicle Test Cycles (WLTC) on an electro-thermal motor model by imposing speed and torque on the motor for one hour of simulation. To give the model more transient warm-up behavior, we connected the motor to a simplified cooling system that includes a thermostat, a pump, and a heat exchanger. We also modeled three different ambient and initial soak temperatures of the system, modeling the warmup of the motor at –10 °C, 0 °C, and 10 °C ambient conditions.
The simulations each only took 70 seconds to complete (over 50 times faster than real-time), but captured the transient warmup of the various components in the motor, including the windings and magnets in the rotor:

Transient warmup of various components in an electric motor
Below is an animation showing the transient coolant temperature through the stator for the -10 °C ambient case over the course of the simulation (please note contour scale is non-linear to help visualize results).
Transient stator coolant temperature through 2 WLTCs at -10 °C ambient conditions
Model Integration
Because the standalone electro-thermal motor model ran over 50 times faster than real time, we can bring it directly into a system-level model for integrated simulations. These simulations allowed us to have a deeper understanding of the energy management and trade-offs between different cooling strategies for the entire system, including the motor, inverter, battery, and cabin.

A complete multi-physics BEV model with GT-SUITE and GT-FEMAG
Learn More About our Electric Motor Simulation Solutions
If you’d like to learn more or are interested in trying GT-FEMAG or GT-SUITE for component-level or system-level simulation of electric vehicles, contact us!
Mitigating the Domino Effect of Battery Thermal Runaway with Simulation
What Happens During Battery Thermal Runaway?
As the world continues to move towards a more sustainable future, so does the popularity of electric vehicles. While the benefits of electric vehicles are many, one of the key challenges is ensuring that the battery packs used in these vehicles are safe and reliable. In the context of battery packs, thermal runaway stands out as an inherent hazard that can evoke profoundly negative media attention and public concern.
During a thermal runaway event, undesired exothermic side reactions occur. These reactions are the response of the battery components exposure to extreme operating conditions, including but not limited to: high operating temperatures, fast charging, cell fractures by external objects, and internal short circuits.
Just like the domino effect, a single cell entering thermal runaway can easily spread to the surrounding cells and cause fires and explosions in the whole pack. This is known as thermal runaway propagation.
How to Avoid Battery Thermal Runaway
The question arises: how can you safeguard your cells from entering thermal runaway? Some common triggers for thermal runaway include excessive heating, electrical faults such as short circuits, nail penetration, or even a faulty cell. Since these triggers can never be ruled out entirely, the goal should not be to ensure that no cell ever enters thermal runaway, but rather to design a battery pack that can withstand a single cell entering thermal runaway without the event propagating to the rest of the pack.
To effectively tackle this challenge, it is critical to develop precise models capable of predicting and mitigating thermal runaway propagation in battery packs. Well-designed battery pack models ensure appropriate cooling systems and safety features are engineered to minimize the risk of thermal runaway. Additionally, these models can be used to develop early warning systems that can detect when a pack is starting to overheat.
Experimental Costs of Testing Thermal Runaway
Experimentally analyzing thermal runaway propagation in lithium-ion battery cells and packs is ideal, but it requires significant resources of both time and money. The process requires designing and constructing different test scenarios and equipment, not to mention the experimental condition variations and the safety risks associated with intentionally inducing such events. Testing even a simple lithium-ion battery pack prototype for thermal runaway propagation can cost nearly $100k per test scenario. What's more, thermal runaway propagation is an inherently complex event. It can be influenced by a wide range of factors, such as overcharging, overheating, internal shorting, and nail penetration, and it is nearly impossible to replicate all the real-world scenarios in a laboratory setting. Meanwhile, cell manufacturers may experience major delays in the release schedule of their final product, whether it is an electric vehicle, an electric vertical takeoff and landing (eVTOL) aircraft, or another application, because physical testing is often conducted in the late stages of the development cycle.
History of Simulating Thermal Runaway Propagation with GT-SUITE
Cell and pack manufacturers have diligently turned their attention to computer simulation and modeling techniques to analyze thermal runaway propagation. Many cell manufacturers look to 3D computer aided engineering (CAE) simulation to avoid the challenges associated with experimental physical testing of thermal runaway propagation. The components for modeling thermal runaway propagation include:
- Pre-runaway battery model
- Thermal runaway trigger
- Cell-level thermal runaway model
- Heat transfer model
Using 3D CAE serves as an exemplary simulation and modeling technique and is known to provide intricate details regarding thermal runaway propagation. However, this method demands considerable time, produces models that are challenging to implement, and makes it difficult to test numerous “what if” scenarios.
In a previous blog, we demonstrated how the simulation platform GT-SUITE was employed to model the propagation effect of thermal runaway in a small battery module. GT-SUITE provides a 1D CAE solution that offers faster-running models than common 3D CAE models. We showcased how an equivalent circuit model can be used as the pre-runaway battery model, simple external heating as the thermal runaway trigger, a rule-based model as the cell-level thermal runaway model, and 1D thermal networks as the heat transfer model. Since then, GT-SUITE has been widely used by battery pack designers not only to predict lithium-ion battery performance metrics but also to simulate thermal runaway propagation and gain invaluable insights into the behavior of their battery packs under different conditions.
Cell thermal runaway events vary greatly depending on the conditions leading up to runaway. For instance, how quickly the cells were heated to a runaway state affects the mass of vent gases evolved and their composition. This becomes important because the commonly used rule-based models are not able to capture these detailed values. Within GT-SUITE’s battery modeling platform GT-AutoLion, the latest GT-SUITE development includes a unique 1D & 3D multi-physics model for thermal runaway propagation. This modeling approach not only provides fast-running models but also retains strong physical fidelity.
Cell-level Thermal Runaway Propagation Enabled by P2D Electrochemical Modeling Together with Chemical Reactions
The first step in modeling thermal runaway propagation is to have a pre-runaway model. Most commonly, equivalent circuit models (ECMs) have been used as pre-runaway models to predict the performance of lithium-ion batteries. However, this approach has shortcomings: ECMs cannot fully capture the complex electrochemical reactions occurring within a battery cell.
To address these limitations, we use GT-AutoLion, which is based on pseudo-two-dimensional (P2D) electrochemical modeling, to calibrate the pre-runaway lithium-ion battery performance, voltage, and heat generation during normal operation leading up to a thermal runaway event. Using Gamma Technologies’ physics-based modeling, GT-SUITE users now have access to more meaningful results while running different thermal runaway propagation scenarios.
In addition to the above capabilities, GT-AutoLion supports user-defined chemical side reactions for thermal runaway propagation modeling. In a use case based on an article by Feng et al., we modeled the thermal runaway reactions based on a set of reactions occurring in a lithium-ion battery cell:
- Solid electrolyte interphase (SEI) decomposition
- Anode – electrolyte interface
- Separator melting
- Cathode decomposition (2 reactions)
- Electrolyte vaporization and degradation
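The kinetics behind such side reactions are typically Arrhenius-rate expressions coupled to an energy balance. The sketch below illustrates the idea with a single decomposition reaction integrated by explicit Euler; every parameter value is a hypothetical round number, not a calibrated value from Feng et al. or GT-AutoLion.

```python
import math

# Illustrative single-reaction Arrhenius decomposition with self-heating.
# All parameters are hypothetical round numbers.
A = 1.0e12       # pre-exponential factor, 1/s
Ea = 1.35e5      # activation energy, J/mol
R = 8.314        # universal gas constant, J/(mol*K)
dH = 2.0e5       # heat release per unit normalized concentration, J/kg
cp = 1100.0      # cell specific heat, J/(kg*K)
h = 1.0e-3       # lumped Newton-cooling coefficient, 1/s
T_amb = 298.15   # ambient temperature, K

def simulate(T0=500.0, c0=1.0, dt=0.01, t_end=60.0):
    """Explicit-Euler integration of temperature and reactant concentration."""
    T, c = T0, c0
    history = [(0.0, T, c)]
    steps = int(t_end / dt)
    for n in range(1, steps + 1):
        rate = A * math.exp(-Ea / (R * T)) * c    # Arrhenius reaction rate, 1/s
        dc = min(rate * dt, c)                    # reactant consumed this step
        c -= dc
        T += dH * dc / cp - h * (T - T_amb) * dt  # self-heating minus cooling
        history.append((n * dt, T, c))
    return history

hist = simulate()
peak_T = max(T for _, T, _ in hist)
print(f"peak temperature: {peak_T:.0f} K, final reactant: {hist[-1][2]:.2e}")
```

Starting hot enough for the reaction to outpace cooling, the temperature runs away until the reactant is consumed, which is the qualitative signature a calibrated cell-level model reproduces quantitatively.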
The cell-level thermal runaway model we developed by utilizing the new capabilities of GT-AutoLion shows an excellent match with the findings documented in the literature. Below are some of the results that indicate the temperature rise and reactant concentrations for the cell entering the thermal runaway.

Comparing the cell-level electrochemical-thermal coupled modeling results by GT-AutoLion and experimental results by Feng et al. (a) temperature evolution over time, (b) changes in normalized concentration of reactants over time.
Simulating a Module-Level Thermal Runaway Propagation
Consider a simple battery module consisting of 20 cells in series, with fins between the cells connected to a cold plate to provide cooling. The GEM3D tool in GT-SUITE was used to convert the CAD components into a finite element mesh for the cell, fin, and cold plate material. The model also included a flow volume representing the air inside the module around the battery cells, connected to a burner where combustion reactions were defined, representing the combustion of chemicals released when cells enter thermal runaway.
Using GT solutions, we have a robust and fast-running model in which any cell in the module can be selected as the “trigger” cell by applying external heat until a certain trigger temperature is reached. For this example, thermal runaway was initiated in the center cell (through simple heating and vent gases, which were combusted in the burner).
Thermal Runaway Case Studies
Two case studies, using GT solutions, were carried out to observe the battery pack behavior during thermal runaway propagation: (i) without any coolant flow in the cold plate and (ii) with a coolant flow of 2 kg/s at 60 °C. Building this model took just a few hours from start to finish.
The 12-minute thermal runaway simulation took about 2 hours to calculate, including thermal, electrical, chemical, and flow physics.
The model results shown in the figures below indicate that when there is no coolant flow in the cold plate, case (i), every cell entered thermal runaway one after another, starting from the center cell and propagating to neighboring cells until all the cells reached high temperatures of 600 to 700 °C. If this were a physical test, the pack would have needed to be re-designed and re-tested! But since no real battery modules were destroyed in this virtual environment, the simulation could simply be repeated under different conditions.
Now consider the scenario in which the battery pack is equipped with the coolant flow configuration of case (ii). As indicated in the figures below, certain cells, primarily those adjacent to the trigger cell in the middle of the pack, may still undergo thermal runaway. However, the event was confined to the limited number of cells at the center, meaning that the battery pack would not be set on fire.
To see a full tutorial of building models for thermal runaway propagation using GT-SUITE and GT-AutoLion, watch this video here!
Learn More About our Battery Thermal Runaway Solutions
Lithium-ion batteries can experience thermal runaway from a variety of trigger events. Propagation of a thermal runaway event to other cells in the battery pack should be avoided for a safe pack design, but repeated physical testing is expensive and poses significant challenges. GT-SUITE offers a fast-running simulation approach to model this event, combining the electrical, chemical, thermal, and flow domains into a single model. This innovative 1D & 3D multiphysics model enables accurate prediction of the cell heat release under different operating conditions which allows different thermal runaway mitigation strategies to be simulated.
If you’d like to learn more or are interested in trying GT-SUITE and GT-AutoLion to virtually test a battery pack for thermal runaway propagation, view this webpage here. To speak with a GT expert, contact us here!
Optimizing Neural Networks for Modeling and Simulation (Machine Learning Blog Part 2)
Why Neural Networks are Effective in Machine Learning
Neural networks are powerful machine learning [ML] models that can capture highly nonlinear relationships between inputs and outputs within a dataset while being computationally inexpensive to execute. The benefits of neural networks for modeling and simulation activities, using the simulation platform GT-SUITE, were covered in part 1 of this blog series.
In this blog, we’ll look at how to optimize the predictive accuracy and training of neural networks. When utilizing neural networks, one barrier to overcome is hyperparameters: the configuration properties that affect predictive accuracy and training time. For example, two main hyperparameters that determine the neural network configuration, or architecture, are the number of hidden layers and the number of neurons in each hidden layer. Unfortunately, the user cannot know in advance the best combination of hyperparameters for a given dataset.
One common solution to determining the best set of hyperparameters for a given dataset, and thereby finding an optimal neural network, is to perform what is sometimes called grid search. Grid search consists of defining a list of values to try for each hyperparameter to be studied, then creating a list of neural networks that represents the full combination of all hyperparameters that are varied. The set of neural networks would be trained, and a metric such as validation root-mean-squared error would be used to choose a final metamodel with which to proceed.
For example, if the user wants to use a neural network with two hidden layers and test it with 5, 10, and 15 neurons in the first hidden layer, and 4 and 8 neurons in the second, six total neural networks would be created and trained:
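That enumeration can be sketched in a few lines; the candidate list is just the Cartesian product of the per-layer options:

```python
from itertools import product

# Enumerating the grid-search candidates from the example above: two hidden
# layers, with 5, 10, or 15 neurons in layer 1 and 4 or 8 in layer 2.
layer1_options = [5, 10, 15]
layer2_options = [4, 8]

architectures = list(product(layer1_options, layer2_options))
print(architectures)       # each tuple is one candidate network to train
print(len(architectures))  # 3 x 2 = 6
```

Each of the six architectures would then be trained and scored on the validation set, and the best one kept.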
Leverage GT-SUITE’s New Neural Network Capabilities in v2023
Manually creating dozens or hundreds of neural networks with different combinations of hyperparameters in GT-SUITE’s DOE post-processor would be impractically tedious and time-consuming. Fortunately, a new tool available in GT-SUITE v2023 generates these many neural network candidates with just a few mouse clicks.
In the Create Metamodels page of the DOE post-processor, a button is available to “Create Multiple multi-layer perceptrons (MLP) Metamodels.” The main dialog of this tool appears as follows, which lists hyperparameters and allows the user to choose multiple values to try for each (additional, more advanced hyperparameters are not shown here):
With the values shown in the screenshot, 108 total metamodels will be created, where the following hyperparameters are varied:
- Neural networks with 2 and 3 hidden layers are tested
- 3, 6, and 9 neurons in hidden layer 1 are tested
- 3 and 6 neurons in hidden layer 2 are tested
- 3 and 6 neurons in hidden layer 3 are tested
- Normalization and standardization scaling methods are tested
- To deal with the stochastic nature of the training process, each neural network will be trained 3 times, as entered in the Number of Repeated Trials
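As a sanity check on the candidate count, the combinations above can be enumerated directly (hypothetical bookkeeping for illustration, not the tool's internal logic). Note that 2-layer networks ignore the hidden-layer-3 setting, which is why the total is (3×2 + 3×2×2) architectures × 2 scalings × 3 trials:

```python
from itertools import product

# Reconstructing the 108-candidate count from the settings listed above.
layer1, layer2, layer3 = [3, 6, 9], [3, 6], [3, 6]
scalings = ["normalization", "standardization"]
trials = 3

candidates = []
for trial in range(trials):
    for scaling in scalings:
        # 2-hidden-layer architectures, then 3-hidden-layer architectures
        archs = list(product(layer1, layer2)) + list(product(layer1, layer2, layer3))
        for arch in archs:
            candidates.append((scaling, arch, trial))

print(len(candidates))  # 3 * 2 * (6 + 12) = 108
```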
GT users can preview a table listing the 108 metamodels before they are sequentially trained (the first 15 are shown below):
After training the 108 metamodels, determining the best one consists of simply sorting by the validation root-mean-square error in the Compare Metamodel Metrics page (assuming the dataset was partitioned into training, validation, and test samples).
Here’s a full video demonstration of creating MLP metamodels with these hyperparameters:
Learn More About Our Machine Learning Simulation Solutions
The next time a lookup table or lookup map is needed for your GT-SUITE model, consider adding more accuracy to the model by training the data to a neural network or other ML model. For a GT-SUITE model that already utilizes one or more lookups, one might also consider upgrading it with ML.
If you are interested in learning more about how you can implement machine learning in GT-SUITE, see our productivity abilities. You can also contact us here.
New Design of Experiments and Machine Learning Training!
If you’re a model builder and an active and/or new GT-SUITE user and would like to learn more about GT-SUITE’s DOE and machine learning capabilities, we now have a new training course!
Click here to access the training
The machine learning training content is divided into 16 separate videos ranging from 3 to 9 minutes each.
Training videos breakdown:
- The first two videos provide background information about the benefits and motivations of the Design of Experiments (DOE) and machine learning
- One video covers creating and running a DOE on a GT-SUITE model
- The remaining videos demonstrate how to use the machine learning tool that is integrated in GT-POST
NOTE: These trainings are only for those with GT-SUITE accounts
Enhancing Model Accuracy by Replacing Lookup Maps with Machine Learning Models (Machine Learning Blog Part 1)
Machine Learning and Modeling Simulation
Machine learning [ML] models, such as neural networks and other types of metamodels, are fast-executing mathematical representations of data that serve a variety of modeling and simulation purposes, including:
- Replacing computationally expensive physics-based sub-systems in integrated simulation models (for instance, we are using GT-SUITE simulation models for the HVACR industry)
- Utilizing a fast surrogate model in a hardware-constrained platform such as hardware-in-the-loop (HiL)
- Serving as a surrogate for optimization, particularly when a model is too computationally expensive to run directly for the number of design iterations needed
How Machine Learning Models are Used in GT-SUITE
In this blog post, we’ll focus on another common situation in which ML contributes to modeling: the use of lookup tables and maps. Depending on the application, GT-SUITE models can make extensive use of lookup tables and maps. These are often constructed from measurement data but can also consist of simulation data from other models. During simulations, lookup tables and maps are usually evaluated using multivariate linear interpolation, which can be adequate when the relationship between input and output variables is linear. However, linearity is not the norm in these situations, and linear interpolation can generate large errors when evaluating the table or map between sampled points. ML models, on the other hand, are adept at capturing nonlinearities in datasets and making accurate predictions between sampled points. In GT-SUITE, neural networks and Kriging metamodels are good candidates for replacing lookup tables and maps to produce more accurate models.
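The interpolation-error point is easy to see on a toy 1-D table: sampling f(x) = x² at coarse breakpoints and evaluating between them with linear interpolation gives a large mid-interval error:

```python
# A 1-D lookup "table" of f(x) = x**2 sampled at coarse breakpoints,
# evaluated with linear interpolation the way a lookup map would be.
xs = [0.0, 2.0, 4.0]        # breakpoints
ys = [x * x for x in xs]    # tabulated output values

def lookup(x):
    """Piecewise-linear table evaluation between breakpoints."""
    for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside table range")

print(lookup(1.0))  # 2.0 from the table, but the true value x**2 is 1.0
```

A 100% error halfway between samples from an only mildly nonlinear function; a metamodel trained on the same data can track the curvature between the breakpoints.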
For demonstration, consider a detailed lubrication bearing model which takes as inputs oil temperature, rotational speed, upstream oil pressure, and radial clearance. The bearing model will be run through a design of experiments (DOE) that varies these four inputs, and the predicted oil flow rate and predicted power consumption will be trained to neural networks so that the networks can be used in faster Mean Value models.
The four inputs are varied in a full-factorial DOE with 5 or 6 levels per input, yielding a total of 1080 samples. After running the 1080 simulations, we’ll have the equivalent of a lookup map for evaluating the oil flow and power consumption at any combination of the four inputs.
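A full-factorial design like this is straightforward to enumerate. The sketch below assumes three inputs at 6 levels and one at 5, which reproduces the 1080-sample count; the level values themselves are hypothetical:

```python
from itertools import product

# Full-factorial DOE sketch for the four bearing-model inputs.
# Only the level counts (6, 6, 6, 5 -> 1080 samples) come from the text;
# the level values are hypothetical placeholders.
levels = {
    "oil_temp_C":       [40, 60, 80, 100, 120, 140],
    "speed_rpm":        [500, 1500, 2500, 3500, 4500, 5500],
    "oil_pressure_bar": [1, 2, 3, 4, 5, 6],
    "clearance_um":     [20, 30, 40, 50, 60],
}

# One dict per simulation case, covering every combination of levels
samples = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(samples))  # 6 * 6 * 6 * 5 = 1080
```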
The 1080-sample dataset was then trained to a neural network in GT-POST consisting of two hidden layers with eight neurons each. To evaluate the predictive accuracy of the neural network in comparison to a lookup map that relies on linear interpolation, a second DOE was configured and run on the detailed bearing model, this time with 200 randomly chosen (via Latin Hypercube sampling) combinations of the four inputs.
For the 200 validation samples, the values of the four inputs were fed into both the neural network and a linear interpolating lookup map, and the predicted oil flows and power consumptions were recorded. Plots of predicted outputs vs. actual outputs are presented below.
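For readers unfamiliar with Latin Hypercube sampling, here is a minimal pure-Python sketch: each input range is split into n equal strata and every stratum contributes exactly one point, giving better space coverage than plain random sampling. The bounds for the four inputs are hypothetical; only the 200-sample count comes from the text.

```python
import random

def latin_hypercube(bounds, n, seed=0):
    """Draw n samples; each input gets exactly one point per stratum."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one random point inside each of the n strata
        pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)        # decouple the strata orderings across inputs
        columns.append(pts)
    return list(zip(*columns))  # n samples, one value per input in each

# Hypothetical ranges: oil temperature, speed, pressure, clearance
bounds = [(40.0, 140.0), (500.0, 5500.0), (1.0, 6.0), (20.0, 60.0)]
samples = latin_hypercube(bounds, 200)
print(len(samples))  # 200
```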

Maximum absolute errors are 0.9 L/min for flow rate (a 10% error at that sample) and 103 W for power (a 53% error at that sample)
As can be seen with the red points, linear interpolation tends to overpredict both flow rate and power. In contrast, the blue points lie on the ideal slope=1 line, showing that the neural network provides almost perfect predictions for flow rate and power.
Learn More About Our Machine Learning Simulation Solutions
The next time a lookup table or lookup map is needed for your GT-SUITE model, consider adding more accuracy to the model by training the data to a neural network or other ML model. For a GT-SUITE model that already utilizes one or more lookups, one might also consider upgrading it with ML.
If you are interested in learning more about how you can implement machine learning in GT-SUITE, see our productivity abilities. You can also contact us here.
Engine Manufacturers Leverage Simulation to Engineer Ahead of Increasing Regulations
Environmental agencies, such as the EPA in the United States, play a vital role in controlling nitrogen oxide (NOx) emissions by setting limitations on airborne pollutants that harm public health and the environment. These standards have grown more stringent in recent years for heavy-duty on-road diesel trucks, particularly for engine-out NOx emissions, as shown in Figure 1. It is essential for internal combustion engine (ICE) manufacturers to accurately predict and control engine-out NOx emissions under various operating conditions during the design phase to meet these standards.
Why GT-SUITE Should Be Your Go-To Simulation Platform for NOx Prediction
Accurate prediction of engine-out NOx emissions of IC engines requires capturing the in-cylinder interactions among fuel injection, turbulence, chemistry, piston motion, and wall heat transfer. These interactions can lead to the creation of in-cylinder stratification, as shown in Figure 2, which significantly impacts the engine’s NOx emission.

Figure 2: A conceptual model of diesel spray showing in-cylinder stratification in a conventional diesel engine
This is where simulation modeling comes into play. Here are some modeling methods:
3D computational fluid dynamics (CFD) simulations can provide the highest accuracy in capturing these interactions. However, these simulations can become computationally time-consuming, especially when evaluating hundreds (or thousands) of designs and operating conditions.
An alternative approach is to use reduced-dimensional models, such as zero-dimensional stochastic reactor models (0D-SRM). These models represent the engine cylinder by hundreds of notional particles, providing a high-fidelity framework to capture in-cylinder stratification using detailed chemistry and accurate mixing models. A 0D-SRM model runs much faster than 3D-CFD simulations, making it a more feasible option for evaluating large numbers of designs and operating conditions. These 0D-SRM models can provide a good trade-off between accuracy and computational cost, making them a useful tool for diesel engine designers and researchers.
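The mixing step at the heart of an SRM can be illustrated with a generic modified-Curl pair-mixing sketch: randomly chosen particle pairs relax toward their common mean, which conserves the ensemble mean while shrinking the scatter that represents in-cylinder stratification. This is a textbook scheme for intuition only, not GT-SUITE's actual mixing model.

```python
import random
import statistics

def curl_mixing_step(phi, n_pairs, alpha=0.5, rng=random):
    """Mix n_pairs randomly chosen particle pairs toward their pair mean."""
    phi = list(phi)
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(phi)), 2)
        pair_mean = 0.5 * (phi[i] + phi[j])
        phi[i] += alpha * (pair_mean - phi[i])  # each partner moves a
        phi[j] += alpha * (pair_mean - phi[j])  # fraction alpha to the mean
    return phi

rng = random.Random(1)
# 500 notional particles carrying, e.g., a local equivalence ratio
particles = [rng.uniform(0.0, 2.0) for _ in range(500)]
mixed = curl_mixing_step(particles, n_pairs=2000, rng=rng)

print(round(statistics.mean(particles), 3), round(statistics.mean(mixed), 3))
```

Running this, the mean is preserved while the particle-to-particle variance shrinks, which is how mixing gradually homogenizes the notional-particle ensemble between chemistry substeps.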
Powerful simulation software, such as Gamma Technologies’ GT-SUITE, can be used to predict engine performance and engine-out emissions. It implements a zero-dimensional (0D) stochastic reactor model (SRM) that can accurately predict engine performance and engine-out NOx emissions using detailed chemistry [1]. The animation in Figure 3 demonstrates how the 0D-SRM model captures in-cylinder inhomogeneity using hundreds of notional particles. This model has been extensively validated against experimental data, making it a reliable tool for predicting engine-out NOx emissions.

Figure 3b: Distribution of mass across different equivalence-ratio bins using the 0D-SRM model in GT-SUITE. Here the 0D-VCF-tPDF model represents the 0D-SRM model.
In addition, GT-SUITE offers the ability to optimize the chemical reaction rates during a simulation, which can be particularly useful for improving emission prediction under different designs and operating conditions. In a study [2], different approaches were proposed to improve the accuracy of engine-out NOx predictions by optimizing the chemical reaction rate parameters (see Figure 4).

Figure 4: GT-SUITE predicted the peak pressure, CA50, and NOx emissions for a GM diesel engine, and a comparison with 3D-CFD results is also provided [2].
Learn More About GT-SUITE’s Combustion Modeling
Learn more about combustion and emission simulation solutions here. If you are interested in using GT-SUITE for engine-out emission modeling needs, we encourage you to reach out and speak to a GT expert.
References
[1] Paul, C., Jin, K., Fogla, N., Roggendorf, K. et al., “A Zero-Dimensional Velocity-Composition-Frequency Probability Density Function Model for Compression-Ignition Engine Simulation,” SAE Int. J. Adv. & Curr. Prac. in Mobility 2(3):1443-1459, 2020, https://doi.org/10.4271/2020-01-0659.
[2] Paul C, Gao J, Jin K, Patel D, Roggendorf K, Fogla N, Parrish S E, Wahiduzzaman S, An indirect approach to optimize the reaction rates of thermal NO formation for diesel engines, Fuel 338 (2023) 127287, https://doi.org/10.1016/j.fuel.2022.127287.
How to Perform Battery Electric Vehicle Range Testing Using Simulation
Streamlining BEV Drive Cycles
Welcome to the second blog of this two-part series on how simulation platforms such as GT-SUITE can streamline the various drive cycles of battery electric vehicles (BEVs) to determine the range and adjustment factors.
Read part one here to learn more about how electric vehicle (EV) range guidelines are currently determined.
Building Vehicle Thermal Simulation Models with GT-SUITE
As noted in part one of this series, the BEV range test procedure outlined in the SAE J1634 standard involves varying test conditions and drive cycles to estimate the final range. Simulation platforms such as GT-SUITE offer engineers the chance to streamline the entire EV range estimation process.
Using GT-SUITE, the first step in BEV range testing simulation is creating a model of the vehicle thermal management system. The model needs to represent a system-level thermal management circuit of the electric vehicle and contain an integrated model of the following circuits:
- High-Temperature (HT) – Cooling circuit
- Low-Temperature (LT) – Cooling circuit
- Indirect refrigerant circuit
- Cabin air circuit
- Under-hood air circuit
This thermal management model is then integrated with a vehicle model, representing the full vehicle and electric powertrain. Multiple control elements are implemented to regulate and adjust inputs to the system components such as the electric pump/fan/compressor, controlled valves, and others.
Automating BEV drive cycles with GT-Automation
Once the thermal management model is integrated in the vehicle model, there are multiple cycles that need to be simulated to obtain the adjustment factor for a single vehicle configuration. This is where GT-SUITE’s built-in app, GT-Automation, can be leveraged to efficiently evaluate the 5-cycle testing process.
GT-Automation can be used to create a ProcessMap that allows configuring a workflow of individual processes that are executed in a specific order. It enables users to model the flow of interest to simulate all the required cycles without any manual intervention and extract the relevant energy consumption values into GT’s range calculation Excel tool.
As pointed out in part one of this series, the multi-cycle test (MCT) contains a mid-test constant-speed section whose duration depends on the electric vehicle and its battery pack size. GT-Automation can also automate the determination of this duration, eliminating the iterative process of selecting the steady-state speed duration for every vehicle configuration.
Calculating Range with GT’s Excel Tool
Lastly, engineers can use GT’s range calculation Excel tool which includes the required formulas from the SAE J1634 regulation embedded in the cells to estimate the adjustment factor and subsequently compute the label range.
Additionally, GT's range calculation Excel tool helps users manually tweak the energy consumption values of individual drive cycles to quickly understand the impact of different drive cycles and their dependencies on the adjustment factor and range. This further helps optimize control strategies and energy consumption for different vehicle configurations.
Example Results for Two Vehicle Configurations 
The range of a typical passenger BEV was simulated according to the 2-cycle and 5-cycle methodologies outlined in the SAE J1634 standard with different configurations. Utilizing the adjustment factors, both vehicle configurations gained approximately 5-7% in EPA-certified range, making 5-cycle testing a favorable option for manufacturers.
This does not mean, however, that the 5-cycle testing option will always improve the adjustment factor and range for any vehicle configuration. The outcome ultimately depends on the efficiencies of the different propulsion components and the electrical load requirements in the three additional drive cycles, which play a vital role in estimating the adjustment factor.
Learn More About our Battery Simulation Capabilities
If you’d like to learn more or are interested in trying GT for automating the 5-cycle test process flow for EV adjustment factor and range improvements, speak to a GT battery expert here.
If you missed part 1 of this series, read more here.
To learn more about battery simulation solutions, check out our battery modeling page and learn more about our hybrid and electric simulation solutions. Also, check our top 15 battery-related topics blogs in this list!
Calculating Electric Vehicle Range with Simulation
How is Electric Vehicle Range Tested?
Range anxiety is one of the biggest concerns of consumers when it comes to looking to purchase an electric vehicle (EV). Because of this, manufacturers of EVs need to have accurate range predictions to build trust and quell range anxiety.
The determination of range for battery electric vehicles (BEVs) has historically been performed using the 2-cycle test methodology from the SAE J1634 standard in North America. The 2-cycle test procedure, like the single-cycle test (SCT) and multi-cycle test (MCT), generally includes standalone or combined city and highway speed profiles.
The Five-Cycle Testing Guidelines for Electric Vehicles
The single-cycle test (SCT) is a full-deplete test, meaning that the vehicle is driven through a repeating city or highway drive cycle until the battery is fully depleted. This can take a long time and consume significant resources, placing significant logistical strains on test facilities. In addition, test cycles beyond the SCT are needed to better characterize the effects of temperature and accessory loads on EV range performance.
These constraints led the Environmental Protection Agency (EPA) to adopt new methodologies for testing and determining the range of BEVs, called the multi-cycle test (MCT), short multi-cycle test (SMCT+), and 5-cycle test procedures. The MCT and SMCT are full-deplete tests and combine standard dynamic drive cycles (UDDS, HFEDS, or US06) with constant-speed driving phases. The goal of using the standard dynamic drive cycles is to determine the energy consumption associated with specific, established driving patterns, while the goal of the constant-speed profiles is to rapidly discharge the battery, consuming less time and fewer resources than the SCT. The standard MCT procedure consists of four UDDS cycles and two HFEDS cycles in a specified sequence, including mid-test and end-of-test constant-speed “battery discharge phases” (CSC) whose durations vary depending on the vehicle and the size of its battery pack. The speed profile for the MCT is shown in Figure 1.
The SMCT includes AC energy consumption in its range determination by means of a shorter test compared to the MCT. It accomplishes this by changing the order of the cycles and including a US06 cycle. Unlike the MCT, the SMCT is not a full-deplete test, and the remaining battery energy must be depleted separately, often by simply driving the vehicle at a steady-state speed (a variant called SMCT+).
Lastly, the EPA 5-cycle procedure encompasses high vehicle speeds, aggressive vehicle accelerations, use of the climate control system, and cold ambient conditions in addition to the standard City and Highway drive cycles used in the SCT, MCT, or SMCT+. The 5-cycle test does a better job of reflecting typical driving conditions and styles, producing energy consumption ratings that are more representative of a vehicle’s on-road range. Different testing options for 5-cycle EV certification are shown in Figure 3.
Leveraging the Adjustment Factor to Improve EV label-range
Every original equipment manufacturer (OEM) is required to run at least the 2-cycle test to certify EV range in North America, but the drive cycles used in the 2-cycle test are low-speed tests that aren’t truly representative of the real world. The EPA therefore applies an adjustment factor to yield a range that better reflects what customers actually experience. The default adjustment factor is 0.7, which reduces the raw range by 30% when an OEM opts to certify range with just the 2-cycle methodology. For example, a car that achieves 500 miles of range during a 2-cycle test ends up with a 350-mile label range under the default adjustment factor. However, the EPA allows manufacturers the option to run three additional drive cycles (US06, SC03, and the FTP cold drive cycle) and use those results to earn a more favorable adjustment factor. The applied adjustment factor can never be less than 0.7: if the factor estimated from the 5-cycle test falls below 0.7, the default value of 0.7 is applied instead.
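The adjustment-factor arithmetic described above reduces to a one-line calculation. This is a sketch of the labeling rule as stated, not the full SAE J1634 computation:

```python
# Label-range arithmetic implied by the adjustment-factor rules above.
def label_range(raw_range_miles, adjustment_factor=0.7):
    applied = max(adjustment_factor, 0.7)  # the applied factor is floored at 0.7
    return raw_range_miles * applied

print(label_range(500))        # default 2-cycle factor: 500 * 0.7 = 350.0
print(label_range(500, 0.75))  # a more favorable 5-cycle factor: 375.0
```

A 5-cycle result can therefore only help: a computed factor above 0.7 raises the label range, while one below 0.7 simply falls back to the default.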
Using GT-SUITE to predict the 5-cycle adjustment factor
GT-SUITE, a multi-physics simulation software, can automate the entire 5-cycle process outlined in the SAE J1634 standard to predict adjustment factors for various vehicle configurations, giving users a basis for choosing a testing option for EV range certification. In addition, a Python script in GT eliminates the iterative steady-state calculations involved in the MCT and the manual extraction of the energy consumption data required to estimate the adjustment factor. The test conditions and cycle information for the MCT and the standalone 5-cycle test are highlighted in Fig. 4.
Fig. 4: Range testing driving profile and test conditions
The BEV range test procedure outlined in SAE J1634 involves varying test conditions and drive cycles to estimate the final range. GT offers users the chance to streamline the entire EV range estimation process and takes it a step further to automate the required drive cycles to compute the adjustment factor.
Learn More About our Battery Simulation Capabilities
Stay tuned for Part 2 of this blog series, where we will discuss more about the implementation and automation of various 5-cycle test conditions in GT-SUITE to calculate the 5-cycle adjustment factor in EVs.
If you are interested in learning more about battery simulation, check out our battery modeling page and learn more about our hybrid and electric simulation solutions. Also, check our top 15 battery-related topics blogs in this list!
If you would like to reach out, email [email protected] or contact us here.
Decreasing Battery System Simulation Runtime using Distributed Computing
At Gamma Technologies, the goal of our battery suite simulation solutions, through GT-SUITE and GT-AutoLion, is to provide accurate, high-fidelity battery simulation capabilities for reliable prediction of real-world performance. In this blog, I investigate how battery simulation runtime can be reduced when running hundreds of design optimizations using distributed computing. Depending on the modeling requirements, some optimizations can benefit greatly from scaling the simulation runs over a high-performance computing (HPC) cluster to accelerate turnaround time or greatly expand the design space.
With GT-AutoLion, you can take actual cells and use our unique, fully physical, pseudo-two-dimensional (P2D) models to predict their performance. Different types of analysis are possible, covering voltage, temperature rise, current, power, and several other metrics.
Additionally, GT-SUITE and GT-AutoLion can create physics-based models to help predict the aging of a cell over time or over a certain number of cycles, and that cell can be placed in a system-level simulation for more meaningful aging predictions. GT battery simulations can provide insights such as the range of an electric vehicle or the years of operation of a power tool.
Simulating Electrochemical Models such as Cell Performance and Cell Aging
In an electrochemical model, parameters such as cell dimensions, cell chemistry, and various other material properties can be varied to match the experimental behavior of the cell.
With the use of GT-SUITE’s design optimizer, these parameters can be varied to calibrate model behavior by minimizing the error between the experimental data and the simulated GT-AutoLion results.
To match data to constant current discharge, calendar aging, or cycle aging for instance, we can iterate hundreds of different designs on an HPC setup or through cloud computing.
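Conceptually, the calibration loop minimizes the error between measured and simulated curves over many candidate designs. Below is a minimal sketch of that loop, with a deliberately trivial stand-in for the cell model (the real P2D simulation lives inside GT-AutoLion, and the "measured" numbers here are made up):

```python
import math

def rmse(sim, exp):
    """Root-mean-square error between simulated and measured points."""
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(exp))

def simulate_voltage(resistance_ohm, currents):
    # Stand-in for a GT-AutoLion run: a trivial OCV-minus-IR cell model.
    return [3.7 - i * resistance_ohm for i in currents]

# "Measured" discharge voltages at three constant currents (made-up data)
currents = [1.0, 2.0, 5.0]
measured = [3.65, 3.60, 3.45]

# Sweep candidate resistances, as the optimizer iterates designs,
# and keep the one that minimizes the error against the measurement
candidates = [0.02 + 0.005 * k for k in range(20)]
best_r, best_err = min(
    ((r, rmse(simulate_voltage(r, currents), measured)) for r in candidates),
    key=lambda pair: pair[1],
)
print(f"best resistance = {best_r:.3f} ohm, RMSE = {best_err:.4f} V")
```

The real optimizer is far smarter than a brute-force sweep, but the structure is the same: each design is an independent simulation, which is exactly why the workload parallelizes so well.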
The images below show the optimized GT-AutoLion results for constant-current discharge voltage curves, calendar aging and cycle aging data.
Leveraging the Cloud
With distributed computing, we can take a model which would normally be run locally on a machine and send it to a cluster using multiple cores. All that is required is additional solver licenses to increase the number of jobs. If a cluster is not readily available on-site, it is possible to access the cluster of a regional partner.
Likewise, a cloud server can be used to speed up simulation time, letting you run long simulations or models that require high computing power. Cluster hours can be purchased from providers such as AWS, Google Cloud, and Microsoft Azure to enable distributed computing.
See figures below that demonstrate these typical use cases.
Faster Runtimes of Complex Battery Simulation Models
One example of an intricate model is a performance calibration exercise (included in the GT software installation). In this model, 600 designs are run using the design optimizer in GT-SUITE. The designs vary factors including the heat transfer coefficient; the thicknesses of the cathode, anode, and separator; and the particle sizes of the active materials across 4 cases of varying constant-current discharges. For more information on why these factors were selected and to see this 600-design example model, GT’s own Ryan Dudgeon has written a blog post that explains more.
One design where all four cases are run locally on a standard work machine (with one logical core active) takes about 10 seconds to finish. During optimization, where we run 600 of these designs, this can increase the total runtime to roughly an hour.
However, with the aid of a distributed cluster, running 5 designs in parallel across 5 solvers decreases the runtime by 14 minutes. Cloud computing is faster still, since its computing resources are fully elastic and allow essentially unlimited parallelization: the same 5 parallel designs can be run in about half the time, saving 29 minutes. Increasing the number of parallel designs to 25 with cloud computing decreased the runtime to just over 15 minutes, a 46-minute time savings!
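As a rough sketch of how parallel width trades against batch runtime, the following idealized estimate assumes perfect load balancing and ignores queueing and license-checkout overhead, so it will not match the measured times above exactly, but it shows the shape of the scaling:

```python
import math

def batch_runtime_min(n_designs, sec_per_design, n_parallel):
    """Ideal runtime for a batch of independent design runs.

    Assumes perfect load balancing and no queueing or license
    overhead; real clusters will come in somewhat slower.
    """
    waves = math.ceil(n_designs / n_parallel)  # how many "waves" of jobs run
    return waves * sec_per_design / 60.0       # seconds -> minutes

for n in (1, 5, 25):
    print(f"{n:>2} parallel solvers -> {batch_runtime_min(600, 10, n):6.1f} min")
```

Because the designs are fully independent, the ideal runtime divides almost linearly by the number of solvers, which is why elastic cloud capacity pays off so quickly for optimization studies.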
It’s also interesting to look at an application where the speed increase brings more value, such as aging calibration models. The aging of a cell can be modeled as either calendar aging or cycle aging. For more information on how simulation can be used to predict aging, refer to this blog by my colleague Joe Wimmer.
Calendar aging involves taking a cell and measuring the loss in capacity over time. Running an optimization on a cell to calibrate calendar aging can take quite a bit of time if we are aging for months or years or looking at aging for different temperatures.
The calendar aging calibration model I explored ran for 360 days at 2 different temperatures. As shown in the table below, distributed computing can improve the simulation runtime of the model by nearly 2 and a half hours!
Another typical example is cycle aging calibration. Our cycle aging calibration example model undergoes a constant-current-constant-voltage (CCCV) charge after a constant current discharge.
This cycling protocol is repeated 1000 times at 3 different temperatures. Of course, we are not limited to just CCCV charging. Several different charging profiles such as boost charging and pulse charging can also be implemented, as mentioned in this blog. Since there is a varied load applied to the cell, we can’t run the model with large timesteps like with the calendar aging model, which has no load applied to the cell.
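To illustrate what a single CCCV cycle involves, here is a minimal sketch using a toy equivalent-circuit cell. Every parameter value below is made up for illustration; this is not the calibrated GT-AutoLion model, which resolves the electrochemistry in far more detail:

```python
def cccv_charge(capacity_ah=5.0, r_ohm=0.05, v_max=4.2,
                i_cc=5.0, i_cutoff=0.25, dt_h=0.001):
    """Charge a toy equivalent-circuit cell with a CC-CV protocol.

    OCV is a crude linear function of state of charge; every number
    here is illustrative, not a calibrated GT-AutoLion model.
    """
    def ocv(soc):
        return 3.0 + 1.2 * soc   # linear open-circuit voltage

    soc, i = 0.0, i_cc
    while i >= i_cutoff and soc < 1.0:
        if ocv(soc) + i * r_ohm >= v_max:
            # CV phase: hold the terminal voltage at v_max, taper the current
            i = (v_max - ocv(soc)) / r_ohm
        soc += i * dt_h / capacity_ah   # coulomb counting
    return soc, i

soc, i_final = cccv_charge()
print(f"final SOC = {soc:.3f}, taper current = {i_final:.3f} A")
```

Note how the timestep has to resolve the changing current during the taper; this is the same reason the cycle aging model cannot use the large timesteps that the zero-load calendar aging model can.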
Running all 3 cases (a total of 3000 cycles across 3 different temperatures) takes roughly 16 and a half minutes. Because of this, running the whole optimization of 600 designs locally takes nearly a whole week! However, with the help of distributed computing, we were able to reduce the total runtime by a whole business week!
Learn More About Our Battery Simulation Solutions
These models can all be found in the GT installation as part of our GT-AutoLion calibration tutorials.
If you are interested in learning more about how you can implement distributed computing to improve your simulation speeds, you can reach out to [email protected] or contact us here.
Top 10 Gamma Technologies Blogs of 2022!
From battery thermal runaway to fleet route optimization, there is a blog for every simulation!
Since the inception of GT-SUITE, Gamma Technologies has offered state-of-the-art simulation solutions for manufacturers. Our simulation solutions help guide customers and partners toward highly optimized products.
In no order, these are the top 10 blogs of 2022!
- Simulating Your Way to HVACR Innovation
- How a Catastrophic Ship Fire Reminded us Why Battery Thermal Runaway Simulation is Important
- Reducing Costs & Increasing Efficiency in Power Converter Design
- Using Simulation to Model Closed-Cycle Argon Hydrogen Engines
- Sensitivity Analysis: How to Rank the Importance of Battery Model Parameters Using Simulation
- Accelerate Electric Aircraft Design Certification with Systems Simulation
- Vehicle Modeling and Simulation: ICEV & BEV Correlation Procedure
- How to Automate Real World Vehicle Route Generation Using Simulation
- A Look Inside Large-Scale Electrochemical Storage Systems Simulation
- Simulating a NASA Hydrogen Powered Rocket
Other Gamma Technologies Blogs to check out in 2022!
- Using Simulation for Battery Engineering: 12 Technical Blogs to Enjoy
- Machine Learning Simulation: HVACR Industry
- Fast, Accurate Full Vehicle Thermal Management Simulation with GT-SUITE and TAITherm
- Using Simulation To Predict Battery Aging for Real World Applications
- How Simulation Can Increase Productivity in Electric Vehicle Thermal Management Design
- Using Simulation to Optimize Driving Routes and Vehicle Emissions
- How Simulation Is Used To Design ICE vs. Battery Electric Vehicle Thermal Management Systems
- Are Your Vehicle Passengers Comfortable? How to Validate An Accurate, Thermal Cabin Management Simulation Solution
Shout-outs to our colleagues for their contributions!
Learn more about our simulation solutions!
If you’d like to learn more about how Gamma Technologies can be used to solve your engineering challenges, contact us here!
Have a great holiday season, and we wish you a healthy & prosperous 2023!
A Look Inside Large-Scale Electrochemical Storage Systems Simulation
Why Are Redox Flow Batteries Important?
Renewable electricity produced by solar and wind energy is taking an ever-increasing share of the total electricity generated. However, the fluctuating nature of these renewable energy sources makes grid management challenging without a reliable energy storage system. Electricity production from solar and wind generators is often curtailed when the supply exceeds the demand during the day[i]. Large-scale electrochemical storage systems are expected to play a critical role in managing grid demand fluctuations.
Among the various types of electrochemical storage systems, such as lithium-ion and lead-acid batteries, one that is particularly well suited for grid energy storage is the vanadium redox flow battery (VRFB). VRFBs are attractive due to their fast response rate, long charging/discharging cycle lives, and non-flammable aqueous electrolytes [ii].
Modeling and Simulation of Redox Flow Batteries
Simulation platforms such as GT-SUITE can be used to model different aspects of VRFBs both at the cell level and systems level using components from different physical domains covering fuel cells, battery modeling, fluid flow, thermal management, control, and chemistry applications. Simulations can be used to perform a variety of virtual experiments to assess the performance of VRFBs with different design and operating parameters such as size of tanks and stack, electrolyte flow rate, vanadium concentration, and temperature. In this blog, we will show the effect of a few of these variables on battery performance.
Figure 1 shows the main components and operating principles of VRFBs. The anolyte tank stores the solution containing V²⁺ and V³⁺ ions, and the catholyte tank stores the solution containing VO²⁺ (V⁴⁺) and VO₂⁺ (V⁵⁺) ions. Pumps circulate the electrolyte solution through the electrochemical cell, which consists of carbon-based positive and negative electrodes separated by a proton exchange membrane.
With GT-SUITE, we have built a model of VRFBs as shown in Figure 2. This model accurately calculates the cell voltage by considering different physical and chemical processes such as:
- Varying concentrations of different vanadium ions within the electrodes and tanks
- Activation losses using the Butler-Volmer equation with proper dependence on vanadium ion concentrations and cell temperature
- Ohmic losses in the electrolyte (using Bruggeman correction), Nafion membrane (empirical relationship with water content and temperature as parameters), and current collectors
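To give a flavor of the underlying electrochemistry, the open-circuit voltage of a VRFB can be sketched with the Nernst equation, lumping both half-cell concentration ratios into a single state-of-charge term. The formal potential of about 1.4 V is a typical literature value, not a number taken from the GT model:

```python
import math

R, F = 8.314, 96485.0   # gas constant J/(mol*K), Faraday constant C/mol

def vrfb_ocv(soc, temp_k=298.15, e_formal=1.4):
    """Nernst open-circuit voltage of a vanadium redox flow cell,
    lumping both half-cell concentration ratios into one SOC term.

    e_formal (~1.4 V) is a typical literature value, an assumption here.
    """
    return e_formal + (2 * R * temp_k / F) * math.log(soc / (1 - soc))

for soc in (0.2, 0.5, 0.8):
    print(f"OCV at {soc:.0%} SOC: {vrfb_ocv(soc):.3f} V")
```

The full model layers the Butler-Volmer activation losses and the ohmic terms listed above on top of this OCV to obtain the net cell voltage.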
Results of Simulating Redox Flow Batteries
We used this model to study how different operating parameters affect battery performance. Figure 3 shows the net voltage and open cell voltage (OCV) during the charging and discharging cycle for two different temperatures. The battery performs better at 313K than 283K primarily due to lower activation losses (i.e., faster reactions at electrodes) at higher temperatures.
Figure 4 shows the voltage for two different volumetric flow rates. A higher flow rate leads to slightly better battery performance because vanadium ion concentrations in the electrodes are rapidly replenished by the flow from tanks.
Finally, the total vanadium concentration is varied between 1200 mol/m³ and 1600 mol/m³ while keeping all other parameters the same. As shown in Figure 5, a higher vanadium concentration allows the battery to be charged and discharged for longer durations (i.e., higher capacity).
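The capacity trend follows directly from Faraday's law: each vanadium ion exchanges one electron, so the theoretical tank capacity scales linearly with concentration. A back-of-envelope sketch (the 50 L tank volume is an assumed example value, not the model's):

```python
F = 96485.0  # Faraday constant, C/mol

def tank_capacity_ah(conc_mol_m3, tank_vol_m3):
    """Theoretical charge capacity of one electrolyte tank,
    assuming one electron exchanged per vanadium ion."""
    return F * conc_mol_m3 * tank_vol_m3 / 3600.0  # coulombs -> Ah

for c in (1200, 1600):
    print(f"{c} mol/m3 -> {tank_capacity_ah(c, 0.05):.0f} Ah")
```

Raising the concentration from 1200 to 1600 mol/m³ increases the theoretical capacity by a third, consistent with the longer charge/discharge durations seen in the simulation.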
Learn How to Simulate a Variety of Electrochemical Devices
Explore the domain libraries and capabilities GT-SUITE has to offer to model a variety of electrochemical devices such as fuel cells, electrolyzers, and batteries.
Contact us to learn more.
Citations
[i] California’s curtailments of solar electricity generation continue to increase. (n.d.). Retrieved October 28, 2022, from https://www.eia.gov/todayinenergy/detail.php?id=49276
[ii] Kebede, A. A., Kalogiannis, T., Van Mierlo, J., & Berecibar, M. (2022). A comprehensive review of stationary energy storage devices for large scale renewable energy sources grid integration. Renewable and Sustainable Energy Reviews, 159, 112213. https://doi.org/10.1016/j.rser.2022.112213
How to Automate Real World Vehicle Route Generation Using Simulation
Gamma Technologies’ Solutions for Creating Driving Routes with Simulation
Ding! Your package is eight stops away. Thanks to smartphones and the internet, you can view a live map and watch with excitement as the delivery vehicle arrives at your house. This is quite a transformation from a decade ago, when all you’d receive was a tracking number and infrequent updates with nothing more than a location and expected delivery date.
Gamma Technologies is looking to transform simulations of real driving routes by offering GT-RealDrive. Gone are the days of having to physically drive a vehicle and record its GPS data in order to re-create a driving route. Instead, this can now all be done virtually using GT-SUITE with GT-RealDrive and an internet connection.
To get started, simulation engineers using Gamma Technologies’ GT-SUITE open GT-RealDrive and simply enter the start and end locations, just as they would in a navigation application. They then press the “Calculate Route” button, and the route is ready to be simulated!
GT-RealDrive takes care of the rest by generating an optimal route from the start and end locations and estimating local traffic conditions. Users may apply this newly calculated route in an existing GT-SUITE vehicle model simulation.
Also within GT-SUITE, to further automate and simulate numerous route conditions, users may use GT’s productivity tool, GT-Automation. With GT-Automation, simulation engineers may instantly create GT-RealDrive route options with the use of Python scripting.
In the example below, a delivery truck in Midtown Manhattan, New York City going through a last mile delivery route (dropping off packages at customer locations) is simulated. This simulation explores real world vehicle performance using GT-RealDrive and GT-Automation.
STEP 1
First, we need to start with a list of the warehouse and then all the addresses (the manifest) that packages need to be delivered to (last mile delivery). This data can be in any format that is easily read by Python (e.g., csv, txt, xlsx, or py).
STEP 2
Next, write a short Python script that will read in these addresses and create a ProfileGPSRoute object for each leg (from one address to the next). The legs are then combined in a ProfileGPSRouteMulti object with stop times at each address to represent the driver stopping the vehicle and getting out to deliver the package. Writing such a script simply requires some basic knowledge of Python and referencing the GT-SUITE Python API documentation.
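The bookkeeping part of such a script is plain Python. In the sketch below, the GT-specific calls are left as comments, since the exact ProfileGPSRoute signatures live in the GT-SUITE Python API documentation rather than here, and the addresses are made up:

```python
import csv

def read_manifest(path):
    """Read the warehouse plus delivery addresses from a CSV file,
    one address per row, warehouse first."""
    with open(path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]

def build_legs(addresses):
    """Pair consecutive addresses into (start, end) route legs."""
    return list(zip(addresses, addresses[1:]))

stops = ["Warehouse, W 34th St", "Stop A, 5th Ave", "Stop B, Broadway"]
for start, end in build_legs(stops):
    # A real script would create one ProfileGPSRoute object per leg here,
    # then collect all legs into a ProfileGPSRouteMulti with a dwell time
    # at each stop (see the GT-SUITE Python API documentation).
    print(f"leg: {start} -> {end}")
```

Keeping the leg construction in a small helper makes it trivial to swap in a different manifest file or reorder stops without touching the GT-Automation calls.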
STEP 3
Once the script is written, all that is left to do is run the script and sit back as GT-Automation & GT-RealDrive create all the route legs and assemble them all into a single, optimized route (shown below). The delivery route is then ready to be applied and used on any GT-SUITE vehicle model.

Copyright of Mapbox. You may not remove any proprietary notices or product identification labels from Mapbox’s services.
How Simulation Can Be Used Beyond Last Mile Delivery
While we showcased an example using a last mile delivery route, similar challenges are faced with any route that has a lot of stops like buses, garbage trucks, ride shares, and so forth. The same process can easily be applied to any of these situations.
If you are interested in learning more, please contact us.
How Simulation Is Used To Design ICE vs. Battery Electric Vehicle Thermal Management Systems
Understanding Vehicle Thermal Management
Vehicle electrification across the transportation industry is being driven by demands for reducing emissions and increasing fuel economy. However, engineering these electrified vehicles comes with a new set of challenges for thermal management of the powertrain and cabin. In this blog I will discuss some of these new challenges for battery electric vehicle thermal management and how it compares to combustion engine vehicles. But first, I’ll discuss some common traits between thermal management of both vehicle types.
Similarities Between ICE vs. Battery Electric Vehicles Thermal Management Systems
The goals for thermal management system design remain the same regardless of the powertrain: to keep the powertrain components in their desired temperature range, and to provide a comfortable cabin for the occupants. The optimal design should balance energy usage, system cost, and reliability. In cold environments, the thermal management system should enable fast warmup of the vehicle. Both battery electric vehicles (BEV) and internal combustion engine (ICE) vehicles are less efficient at cold temperatures. In warm environments, excess heat from the powertrain needs to be rejected to the environment to prevent damage to the components. In addition, the cabin temperature needs to be controlled for a comfortable driving experience.
Similar types of components are used between combustion engine vehicles and battery electric vehicles. A single-phase coolant loop would likely use an ethylene glycol and water mixture for the working fluid, with a pump, liquid-to-air heat exchanger, and control valve to manage the coolant flow. A cooling fan is used to enhance the air flow through the heat exchanger at low vehicle speeds. Previously, mechanically driven pumps and fans were standard, but electrically driven components are now used for greater system control. A two-phase refrigeration system is necessary for providing additional cooling below the environment temperature.
The integration of other systems is also an important consideration for transient analysis and controls. Different thermal strategies may be needed depending on the powertrain demands, component temperatures, and environment temperatures. For both a combustion engine and battery electric vehicle, a system that performs well at steady state conditions may not be sufficient to manage temperatures for transient driving cycles. The heat produced by powertrain components at ideal operating temperatures will be different than the heat generated at warmer or colder temperatures, and de-rating of the powertrain may be necessary to prevent component damage. In both types of vehicles, the demands for heating or cooling the cabin will impact the cooling circuit temperatures.
Differences Between ICE vs. Battery Electric Vehicles Thermal Management Systems
The most obvious difference between the combustion engine vehicle and the battery electric vehicle is the heat source. In the electric vehicle, the primary waste heat to the coolant comes from the motor, power electronics, and battery. If this waste heat is not sufficient, an auxiliary heater or two-phase system can be used to add heat and bring the components up to their operating temperature. In the combustion engine vehicle, by contrast, the primary heat source is the combustion process, with additional heat added to the coolant from friction in the engine and transmission oil.
These differences in the heat sources lead to differences in the operating temperatures of the components. The combustion engine operates at high temperatures, which allows the coolant to be used to warm the cabin in cold environments or rejected to the environment at higher temperatures. In more complicated combustion engine cooling systems, a separate lower-temperature loop may be used to provide coolant for a charge air cooler or water-cooled condenser. This separate coolant loop also operates above ambient temperature and can reject heat to the environment using a coolant-to-air heat exchanger. In the battery electric vehicle, the motor and power electronics can operate at higher temperatures, but the ideal battery temperature range is between 20 °C and 40 °C. This requires a refrigeration system to provide additional cooling for the battery, because the ambient air may not be enough in warm environments.
The differences in temperature requirements and operating conditions among the components in the BEV increase the complexity of its cooling system. Additional cooling is only required for the battery, so a separate cooling loop could be utilized for the battery linked to the refrigeration system. Cooling this smaller loop below ambient rather than the full cooling loop would require less energy to run the compressor, which increases the vehicle range. The requirement to heat the battery in cold environments would require either an auxiliary heater, operating the refrigeration system in a heat pump mode, utilizing waste heat from the motor and power electronics, or some combination of these strategies. To achieve these goals using a single system, multiple pumps and valves are necessary. More complex controls to route the coolant and optimize the pump speeds are required for efficient operation. In contrast, the combustion engine cooling system can typically be satisfied with a single coolant loop unless a charge-air-cooler requires additional cooling at a lower temperature.
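The supervisory logic described above can be sketched as a simple mode selector. The 20-40 °C target band comes from the discussion here, but the decision rule itself (and the 5 °C margin for usable ambient cooling) is an illustrative assumption, not GT's controller:

```python
def battery_thermal_mode(batt_temp_c, ambient_c, lo=20.0, hi=40.0):
    """Pick a coolant-routing mode for the battery loop.

    The 20-40 C target band is from the text; the decision rule and
    the 5 C ambient margin are illustrative assumptions.
    """
    if batt_temp_c < lo:
        # Below target: waste heat, heat pump, or auxiliary heater
        return "heat"
    if batt_temp_c > hi:
        # Above target: ambient air only helps if it is clearly cooler,
        # otherwise engage the refrigeration (chiller) loop
        return "ambient_cool" if ambient_c < batt_temp_c - 5.0 else "chiller_cool"
    return "bypass"  # in band: no active conditioning needed

print(battery_thermal_mode(15.0, 0.0))    # heat
print(battery_thermal_mode(45.0, 30.0))   # ambient_cool
print(battery_thermal_mode(45.0, 42.0))   # chiller_cool
print(battery_thermal_mode(30.0, 25.0))   # bypass
```

A production controller layers hysteresis, pump/valve coordination, and energy-cost tradeoffs on top of this kind of rule, which is exactly where integrated simulation earns its keep.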
How Simulation Is Used For Thermal Management System Designs
With the increased interaction between the vehicle systems in a BEV, an integrated system simulation is necessary for optimal design. Over a transient driving cycle, the thermal management of the battery and cabin need to be energy efficient to maximize the vehicle range. During a fast-charging event, the battery temperature needs to be carefully managed to prevent unnecessary cell aging. For a rapid acceleration or towing event, the motor and inverters need to be properly cooled to prevent component damage. GT-SUITE is the optimal simulation platform to manage these simulation needs by providing:
- Industry leading sub-system models
GT-SUITE simulations are recognized across the automotive industry for their accuracy and flexibility. Our publications page highlights customer use cases for every vehicle system across the electrical, mechanical, thermal, fluid, chemical, and controls domains.
- Detailed component models and real-time capability
GT-SUITE provides detailed simulations for individual components that will greatly enhance the model capabilities. For the battery and motor, the temperature distributions over a driving cycle or fast-charging event in a 3D finite element model can predict hot spots and the effects of different cooling strategies. Electrochemical models of the battery can predict the cell aging over a vehicle life cycle. In addition, the 3D cabin comfort model linked to GT-TAITherm can accurately predict occupant comfort over a wide range of vehicle conditions. These detailed models can be reduced to a real-time capable model for software- or hardware-in-the-loop simulations.
- Robust model integration
GT-SUITE is designed to properly model the interaction between vehicle systems in an integrated model. For example, the heat generated within the motor and battery can be added as a source term in the thermal component models, with individual component temperatures used to calculate the correct performance within the electrical and mechanical system models. By building these sub-system models in the same tool, it is easy to model the interaction between them and change the simulation parameters for different analyses.
Closing Thoughts on Thermal Management System Design
The design of electric vehicles requires additional complexity for properly managing the battery, motor, power electronics, and cabin temperatures. The interaction between the single-phase and two-phase systems must be included to accurately predict the battery temperatures over a range of operating conditions. More complex controls are needed to create a robust and efficient system. Because of these complexities and enhanced interactions, simulation is necessary for system design. We will be expanding on these topics to discuss the component and system models in subsequent blog posts.
If you’d like to learn more or are interested in trying GT-SUITE to understand thermal management in ICE or xEV, Contact us!
Written by Brad Holcomb
This blog was originally published on May 26, 2021
Using Simulation to Optimize Driving Routes and Vehicle Emissions
Setting Up An Integrated Vehicle and Aftertreatment Simulation
Ever wondered what kind of emissions your car produces when driving to work or your favorite restaurant? With GT-SUITE and a few hours of hard work, you can have your answer! Learn how to set up such a simulation using our software, as detailed in our SAE paper, A Study Examining the Effects of Driver Profile and Route Characteristics on Vehicle Performance and Tailpipe Emissions under Virtual Real Driving Scenarios, which is summarized in this blog.
Because GT-SUITE is a versatile multi-physics simulation platform, an integrated vehicle model can be set up containing the vehicle, powertrain, and aftertreatment system. We used this capability to create a model of a turbo-diesel passenger car with a complete aftertreatment system, as shown below. This allowed us to simulate the final emissions the vehicle produces, often referred to as ‘tailpipe-out’.
Next, we needed to create the real driving routes. While we couldn’t agree on whose favorite restaurant to simulate driving to, we did decide that a long cruise around the Los Angeles area was a great choice (see below), along with a coast-to-coast drive. The two routes were then created using GT-RealDrive, a built-in application within GT-SUITE for creating real driving routes. It works nearly identically to navigation apps like Google Maps; the only difference is that instead of physically driving the route, we’re doing it virtually. It considers live traffic conditions, stop lights, and elevation, all of which have an impact on the vehicle’s emissions.
Lastly, we realized that who was driving the car would impact the emissions. Rather than pick only one driver, we set up two different drivers representing a range from conservative to aggressive, reflecting how we all drive slightly differently. With our model and routes set up, the simulations were run and produced a variety of results, some of which are highlighted below.
Results of Simulation: Tailpipe Emissions
As expected, the more conservative driver generated lower emissions than the aggressive driver and was generally more fuel efficient. During the cruise around Los Angeles, the aggressive driver used 6% more fuel and produced nearly twice the amount of nitrogen oxides (NOx), a pollutant linked to smog and acid rain.
However, for the coast-to-coast route, this difference was quite small as both drivers used cruise control for the majority of the 2,817-mile drive which minimized the impact of their behavior. As a result, the fuel economy and emissions were similar for the two drivers. When normalized on a distance basis, the New York to California route produced less emissions and was more fuel efficient than the Los Angeles cruise.
Further results and more in-depth details such as the impacts of traffic light duration, the start-stop system, and effects of sulfur poisoning and platinum oxidation on the emissions are available in the paper.
Learn More About Generating Real Driving Routes and Predicting Emissions
Be it for regulatory compliance assessment, initial design, robustness, or simply curiosity, GT-SUITE can simulate the emissions produced by vehicles while driving real routes. GT-RealDrive simplifies this further by generating the routes virtually rather than requiring recorded GPS data.
For further information about this paper or about emissions simulations in GT-SUITE, please contact us here.
How Simulation Can Increase Productivity in Electric Vehicle Thermal Management Design
What To Consider When Designing Thermal Management Systems in Electric Vehicles
Thermal management system design in the electrification era requires a complex approach to ensure vehicle performance and customer satisfaction. An electric vehicle’s (EV) system design and controls influence the range of the vehicle, cabin comfort, and performance, and it is important to use a robust approach to understand how relationships and tradeoffs within these systems affect targets. Robust virtual analysis allows such studies to occur efficiently, from fast-running system-level models to detailed thermal analysis of subsystems.
Simulation Can Assist EV Thermal Management Design Productivity
Critical systems in an EV, such as battery packs and integrated motors, require proper cooling to meet performance and range requirements. With the increasing scope of model capabilities comes the need to edit boundary conditions and test cases quickly and efficiently for multiple parties’ needs. GT-Play, a web-based interface for GT-SUITE, directly assists with this by providing a central location for model download, analysis, and design decisions, with a select group of experts overseeing the model’s uses and capabilities.
Here’s a walkthrough of an integrated thermal model that has been uploaded to GT-Play for three different end users:
1. Vehicle Test Engineer: The test engineer needs an efficient way to compare experimental results with simulation results to validate the system-level model. This requires the ability to change test conditions and edit boundary parameters, but the engineer does not have experience with GT. To do this, they reached out to the modeling expert to build such a study within the GT-Play platform, as highlighted below:
2. CAE Engineer: In this case, a design engineer needs to understand how their decisions regarding a battery cold plate affect thermal system performance. With the previous design already built into a system model, they communicated with the model expert to quantify the differences between the original design and the new layout. They then received access to review this study in GT-Play, which highlights the most important outputs. The general problem and setup are shown in the images below:
3. Calibration Engineer: This engineer needs to understand how different thermal control parameters affect vehicle performance. Specifically, they need to know how changing valve switching inputs affects the cooling of critical components (battery, motors) versus overall energy efficiency. Since testing this use case physically would be time intensive, the engineer asked the model expert to build the study virtually and specified the results they would need to make a decision. The controls that will be analyzed and reviewed are highlighted in the image below:
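The calibration engineer's question above is, at its core, a classic control tradeoff: tighter valve switching thresholds keep critical components cooler, but the valve (and its associated pump flow) is active more often, costing energy. The sketch below illustrates that tradeoff with a deliberately simplified stand-in model; this is not GT-SUITE code, and every plant value (thermal capacitance, heat generation, pump power) is a made-up illustrative number.

```python
def simulate(t_on, t_off, dt=1.0, horizon=3600.0,
             T0=25.0, q_gen=800.0, C=60_000.0,
             UA_cool=120.0, T_coolant=20.0, pump_power=150.0):
    """Lumped battery node with a bang-bang coolant valve.

    The valve opens when temperature rises to t_on [degC] and
    closes when it falls to t_off. All plant values are
    hypothetical. Returns (peak temperature, pump energy [J]).
    """
    T, valve_open = T0, False
    peak, energy = T0, 0.0
    for _ in range(int(horizon / dt)):
        # Hysteresis logic on the valve switching thresholds.
        if T >= t_on:
            valve_open = True
        elif T <= t_off:
            valve_open = False
        # Cooling only when coolant is flowing through the plate.
        q_cool = UA_cool * (T - T_coolant) if valve_open else 0.0
        T += (q_gen - q_cool) * dt / C
        if valve_open:
            energy += pump_power * dt
        peak = max(peak, T)
    return peak, energy

# Tighter thresholds: cooler battery, but more pump energy.
peak_a, e_a = simulate(t_on=32.0, t_off=30.0)
peak_b, e_b = simulate(t_on=38.0, t_off=36.0)
```

Sweeping the two thresholds and plotting peak temperature against pump energy is exactly the kind of study the calibration engineer would instead run against the full system model, where the plant dynamics come from the validated GT-SUITE model rather than a single lumped node.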
See How These Simulations Can Be Applied to EV Thermal Design in a Live Webinar
On September 7th, SAE and Gamma Technologies will offer a FREE, live webinar: ‘How to Increase Productivity in EV Design by Leveraging Thermal Simulation.’ In this webinar, we will discuss how critical systems, such as battery packs and integrated motors, can meet performance and range requirements through simulation. The process starts with component selection and moves forward to detailed CFD analysis before the components are merged into a larger system using unique productivity tools.
This webinar will help you understand the process of building, uploading, and analyzing such a complex model in GT-Play. You will also learn how these capabilities can be used within GT-SUITE through the three real-world use cases described above.
Register today: https://hubs.ly/Q01jRcZb0
Learn More about our Thermal Simulation Applications
View this curated page on our thermal simulation applications here.
Bio of Author:
Joseph Solomon is a Solutions Consultant at Gamma Technologies, focusing on electrification solutions. Joseph assists GT users in battery design, e-powertrain system analysis, thermal management, and controls development. In 2021, Joseph applied GT-AutoLion to complete his master's thesis in mechanical engineering at the University of Michigan, titled ‘Investigating Lithium-Ion Battery Performance with an Electrochemical-Mechanical Model.’ Contact Joseph here!