Everything You Wanted to Know About Sodium-Ion Batteries
From Lithium-Ion to Sodium-Ion Batteries: A New Era in Battery Technology
As the demand for energy storage continues to rise, sodium-ion batteries (NIBs) are gaining momentum as a compelling alternative to lithium-ion batteries (LIBs). Leveraging more abundant and cost-effective materials, NIBs are especially well-suited for low-speed electric vehicles—where range is less critical and affordability is key—as well as for renewable energy systems and other large-scale applications. Here’s why sodium-ion technology is drawing increasing interest:
- Cost-effective & abundant materials
Sodium is far more abundant and significantly cheaper than lithium. This makes NIBs highly attractive for large-scale use, particularly where affordability and raw material availability are key concerns, such as in grid storage and renewable integration.
- Environmentally friendly
Sodium is easier to extract and process, which results in a smaller environmental footprint. NIBs align well with global efforts to transition toward cleaner, more sustainable energy technologies.
- Safety & stability
NIBs offer better thermal stability than LIBs. They can safely be discharged to zero volts and use thermally stable sodium salts that generate fewer hazardous byproducts. Their slower heating rates and delayed self-heating under stress conditions make them a safer option, particularly in high-temperature or abusive environments.
- Cold climate performance
Na-ion cells retain more of their performance at low temperatures than LIBs. They are less prone to electrolyte freezing and capacity loss, making them well suited to harsh environments.
How do Sodium-Ion Batteries Work?
Sodium-ion batteries (NIBs) operate on electrochemical principles similar to those of LIBs. During charging, Na⁺ ions move from the cathode to the anode; during discharge, they travel back to the cathode. This process closely resembles the ion movement in LIBs, as illustrated in Figure 1. The materials, however, differ. A typical Na-ion battery includes:
- Cathode: Common materials include layered metal oxides, polyanionic compounds, or Prussian blue analogs.
- Anode: Hard carbon is widely used due to its structural stability and compatibility with sodium.
- Electrolyte: Sodium salts like NaPF₆ or NaClO₄ in carbonate solvents.
- Separator: Same as in LIBs—allows Na-ion transfer while preventing short circuits.
- Current collectors: Aluminum can be used for both the anode and the cathode (sodium, unlike lithium, does not alloy with aluminum at low potentials), lowering material costs compared with the copper anode current collectors required in Li-ion cells.
What are the Key Challenges Facing Sodium-Ion Battery Technology?
While sodium-ion technology is promising, it still faces several technical challenges before achieving widespread commercialization:
- Lower energy density: NIBs typically offer energy densities of 100–160 Wh/kg, lower than those of LIBs. This makes them less suitable for high-performance electric vehicles or aerospace applications, where compact size and high energy-to-weight ratios are required.
- Cycle life: Sodium ions are larger and heavier than lithium ions, which causes more mechanical stress during cycling and leads to faster material degradation. Improving cycle durability is essential for NIBs to compete in long-term applications.
- Scaling production: As NIBs are still in the early stages of mass production, improving process efficiency and developing industry standards are key to driving down costs and accelerating adoption.
- Lower operating voltage: NIBs generally operate at lower cell voltages, reducing energy output per cell and often requiring more cells in series to reach a given pack voltage (see the rough illustration below).
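As a rough illustration of the voltage point, assuming a nominal 3.7 V Li-ion cell and a nominal 3.1 V Na-ion cell (actual values vary by chemistry), a 400 V pack would need roughly

$$ N_{\text{series}}^{\text{Li}} \approx \frac{400\,\text{V}}{3.7\,\text{V}} \approx 108 \text{ cells} \qquad \text{vs.} \qquad N_{\text{series}}^{\text{Na}} \approx \frac{400\,\text{V}}{3.1\,\text{V}} \approx 129 \text{ cells} $$

so the Na-ion pack needs around 20% more cells in series to reach the same voltage.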
How to Model Sodium-Ion Batteries using GT-AutoLion
At Gamma Technologies, we are enabling the advancement of Na-ion technology through high-fidelity electrochemical simulation with GT-AutoLion. Using a robust pseudo-2D (P2D) framework, GT-AutoLion allows users to simulate Na-ion cells with detailed physics-based models that help optimize performance, thermal behavior, and safety.
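For context, a P2D formulation of this kind typically solves a set of coupled conservation equations; the generic form below is a sketch of the standard Doyle–Fuller–Newman-style equations adapted to Na⁺, not GT-AutoLion's exact implementation. Sodium diffuses in the active-material particles, Na⁺ is transported through the electrolyte, and Butler–Volmer kinetics couple the two at the particle surface:

$$ \frac{\partial c_s}{\partial t} = \frac{D_s}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial c_s}{\partial r}\right), \qquad \varepsilon_e\frac{\partial c_e}{\partial t} = \frac{\partial}{\partial x}\left(D_e^{\text{eff}}\frac{\partial c_e}{\partial x}\right) + \frac{(1-t_+)\,a\,j}{F} $$

$$ j = i_0\left[\exp\left(\frac{\alpha_a F\,\eta}{RT}\right) - \exp\left(-\frac{\alpha_c F\,\eta}{RT}\right)\right], \qquad \eta = \phi_s - \phi_e - U_{\text{ocp}}\!\left(c_s^{\text{surf}}\right) $$

Here c_s and c_e are the sodium concentrations in the solid and electrolyte phases, j is the local reaction current density, and η is the overpotential relative to the electrode's open-circuit potential.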
To illustrate sodium-ion behavior, AutoLion-1D includes an example model based on NVPF (sodium vanadium fluorophosphate)/hard carbon chemistry. Figure 2 shows the calibration of this model against experimental data at different C-rates.
The Future of Sodium-Ion Battery Technology
While LIBs dominate the market, NIBs are emerging as a strong competitor, especially in applications where cost, resource availability, and safety are top priorities. With strengths in large-scale energy storage and reliable cold-weather performance, NIBs represent a promising alternative for the future.
Rapid progress in NIB research is improving their performance and durability. Physics-based simulation tools like GT-AutoLion are essential for bridging the gap between NIBs and LIBs by helping engineers design safer, more efficient, and higher-performing NIBs for real-world use.
Ready to shape the future of energy storage? With GT-AutoLion, you can refine your NIB designs and stay ahead in this fast-evolving market. We’re here to support your journey and help push the boundaries of innovation in energy storage.
At Gamma Technologies, our GT-SUITE and GT-AutoLion simulations provide battery engineers and designers robust solutions for modeling and predicting battery performance throughout its lifecycle. Enjoy reading our battery-focused technical blogs to learn more and contact us to see how Gamma Technologies can support your battery development goals.
How Multi-scale Models Are Enhancing Battery Performance and Design
Beyond Experimentation: Predicting Battery Performance with Multi-Scale Models
In today’s electrified world, designing better batteries goes far beyond trial-and-error testing. Engineers and researchers are increasingly turning to simulation to accelerate innovation and reduce development costs. Lithium-ion batteries power modern energy storage systems, from electric vehicles to grid storage. As demand grows for higher performance, longer lifespan, and improved safety, accurate battery modeling becomes increasingly important. A promising path forward involves uniting atomic-scale simulations with continuum models (multi-scale modeling) to enhance the fidelity of performance predictions.
Enhancing Battery Design Through Atomic and Continuum Model Integration
A multi-scale modeling approach accelerates the design of new battery materials, reducing reliance on time-consuming and expensive experimental testing. It enables:
- Faster material development: Atomic-scale simulations allow rapid exploration of new electrolytes and additives.
- Better battery optimization: Engineers can fine-tune battery performance for specific applications, such as high-power EV batteries or high-energy grid storage solutions.
- Improved safety and longevity: Accurate predictions help optimize electrolyte formulations, reducing risks associated with lithium plating and thermal runaway.
The Role of the Electrolyte in Battery Simulation
Achieving high predictability in battery modeling requires a deep understanding of each cell component, particularly the electrolyte, which is essential for lithium-ion transport between the electrodes. The electrolyte significantly impacts overall battery performance by influencing internal resistance, voltage behavior, and degradation over time. However, key electrolyte properties, such as ionic conductivity, lithium transference number, and diffusivity, are difficult to measure and highly sensitive to variables like temperature, concentration, and interactions with electrode materials.
Understanding Atomic-Scale Simulations in Battery Modeling
Continuum models—mathematical approaches that simulate battery behavior at the cell or system level by treating materials as continuous substances—such as the pseudo-two-dimensional (P2D) model, rely heavily on these properties to simulate ion transport and electrochemical behavior. Inaccurate values may lead to significant errors in predicting concentration gradients, voltage losses, and overall battery performance. Atomic-scale simulations—methods that simulate battery behavior at the molecular (or atomic) level by modeling individual particles or molecules—help estimate electrolyte properties across a wide range of conditions, improving the accuracy of continuum models and reducing reliance on experimental measurements.
Combining Atomic-Scale and Continuum Models for Better Insight into Battery Performance
To generate accurate electrolyte property data across various conditions, Gamma Technologies has collaborated with Compular, a company specializing in atomic-scale simulations. By integrating Compular’s molecular dynamics (MD) simulations with our GT-AutoLion continuum model, it is possible to directly calculate fundamental electrolyte properties from atomic-level simulations.
To integrate Compular’s simulation with Gamma Technologies’ (GT) GT-AutoLion, an electrolyte with an additive (LiPF₆ in EC:PC:EMC, 1:3:8 by volume, with 2% FEC) is simulated across varying salt concentrations and temperatures to study the performance of both energy-dense and power-dense cells.
GT-AutoLion utilizes a physics-based P2D model to simulate battery charging and discharging. It divides the battery into three main regions—anode, cathode, and separator—discretizing them to capture critical electrochemical interactions (Figure 1). However, to ensure the accuracy of these models, precise electrolyte data is necessary. This is where MD simulations come into play.
Compular Lab models electrolytes at the molecular level using MD simulations, simulating systems with about 5000 atoms. These atoms move according to the laws of physics, and we track their motion over time to observe how they interact. The simulations run at specific temperatures and salt concentrations, and for each condition, we record around 15 nanoseconds of ion movement.
To extract useful data from these simulations, Compular’s software tool CHAMPION analyzes how ions move together. It calculates Onsager coefficients, which describe how different ions affect each other’s motion (see Figure 2). From these, we derive four key transport properties:
- Ionic conductivity: how efficiently ions carry electric charge through the electrolyte
- Salt diffusivity: how fast ions spread out in the electrolyte
- Transference number: the fraction of the current carried by the cation
- Thermodynamic factor: how ion–ion interactions influence diffusion and concentration behavior
Typically, calculating these properties requires long simulations because random motion (or “noise”) from non-interacting ions makes it harder to get accurate results. But CHAMPION improves efficiency by focusing only on the meaningful interactions between nearby ions, reducing the required simulation time by about 90% to approximately 10 nanoseconds.

Figure 2: Schematic representation of the MD simulations; Key transport properties are calculated based on the motion of atoms governed by Newtonian mechanics
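For background, one common convention in the MD-transport literature for writing these properties in terms of the Onsager coefficients L_ij is sketched below; CHAMPION's exact definitions and prefactors may differ:

$$ L_{ij} = \frac{1}{6\,k_B T\,V}\lim_{t\to\infty}\frac{\mathrm{d}}{\mathrm{d}t}\left\langle \sum_{\alpha\in i}\Delta\mathbf{r}_\alpha(t)\cdot\sum_{\beta\in j}\Delta\mathbf{r}_\beta(t)\right\rangle $$

$$ \sigma = e^2\sum_{i,j} z_i z_j L_{ij}, \qquad t_+ = \frac{\sum_j z_+ z_j L_{+j}}{\sum_{i,j} z_i z_j L_{ij}} $$

The salt diffusivity and thermodynamic factor follow from similar combinations of the L_ij together with the concentration dependence of the salt chemical potential. Note that the off-diagonal (cross) terms capture exactly the ion–ion correlations that the idealized Nernst–Einstein approximation neglects.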
The Workflow: From Atomic Interactions to Battery Performance
Here’s how the combined modeling approach works:
- Molecular Dynamics Simulations: Compular Lab runs MD simulations to extract crucial electrolyte properties like conductivity and diffusivity. These simulations provide insights into how electrolyte composition affects battery behavior, particularly under different temperatures and salt concentrations.
- Integrating Data into GT-AutoLion: A Python script transfers the extracted electrolyte data into the GT-AutoLion simulation framework. The data is structured into a reference object (XYZMap) that incorporates temperature- and concentration-dependent electrolyte properties (a simplified sketch of this step follows the list below).
- Simulating Battery Performance: Using these electrolyte properties, GT-AutoLion predicts voltage vs. capacity curves for both energy-dense and power-dense cells. This allows for comparative analysis under varying temperatures and charging rates.
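As a simplified illustration of the data-integration step above, the sketch below packs temperature- and concentration-dependent conductivity values into an interpolating lookup table that a continuum solver could query. The numbers and the XYZMap-style structure are placeholders for illustration only, not GT-AutoLion's actual input format or API.

```python
# Hypothetical sketch: package MD-derived electrolyte properties into a
# temperature/concentration lookup table for use by a continuum model.
# The values and the XYZMap-style structure are assumptions, not
# GT-AutoLion's actual input format.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Example grid of conditions the MD simulations were run at (assumed values)
temperatures_K = np.array([263.15, 298.15, 318.15])      # -10, 25, 45 degC
concentrations_M = np.array([0.5, 1.0, 1.5, 2.0])        # mol/L of salt

# Ionic conductivity in S/m at each (T, c) point -- placeholder numbers only
conductivity = np.array([
    [0.25, 0.45, 0.40, 0.30],
    [0.60, 0.95, 0.85, 0.65],
    [0.80, 1.20, 1.05, 0.85],
])

# Build an interpolator that plays the role of a reference map (an "XYZMap")
conductivity_map = RegularGridInterpolator(
    (temperatures_K, concentrations_M), conductivity,
    bounds_error=False, fill_value=None,
)

# A continuum solver would query the map at its local temperature and
# electrolyte concentration during the simulation:
print(conductivity_map([[298.15, 1.1]]))   # ~0.93 S/m by linear interpolation
```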
Simulation Findings: Electrolyte Properties and Their Impact on Battery Performance
When we tested these models, we found some interesting trends.
MD simulations reveal the following (Figure 3):
- Ionic conductivity and salt diffusivity decrease at lower temperatures, affecting overall battery efficiency.
- There is an optimal salt concentration (~1 M) that maximizes conductivity.
- Transport properties vary significantly with electrolyte composition and temperature, influencing battery performance.

Figure 3: Transport properties as a function of salt concentration at three temperatures, as predicted by molecular dynamics simulations in Compular Lab
Electrochemical Battery Modeling using GT-AutoLion
By incorporating atomic-scale insights, GT-AutoLion enables:
- Accurate predictions of voltage vs. capacity for power-dense and energy-dense cells.
- Insights into how electrolyte behavior impacts capacity, especially under high C-rates and low temperatures.
- A better understanding of trade-offs between power and energy density in different applications.

Figure 4: Voltage vs. capacity for power-dense and energy-dense Li-Ion cells at different temperatures and C-rates
Conclusion: The Future of Battery Simulation
The progression of battery technology will rely heavily on the integration of multi-scale simulation techniques that connect molecular-level behavior with system-level performance. With enhanced prediction accuracy and faster innovation, the future of lithium-ion is brighter than ever. If you are interested in reading more about battery modeling, you can read our blogs on using simulation for battery engineering, watch our webinar on “Machine Learning for Fast, Integrated Battery Modeling,” or contact us to see how Gamma Technologies can support your battery development goals.
Digital Twin Simulation: Engineering Smarter, Faster, and More Reliable Products
Why Digital Twins are Necessary and Important
Imagine being able to predict equipment failures before they happen, optimize system performance in real time, and reduce expensive physical testing. This is the power of digital twins. A digital twin is a virtual replica of a physical asset, enabling real-time monitoring, simulation, and optimization.
With increasing system complexity and the demand for faster, more efficient product development, digital twin technology is revolutionizing engineering across industries.
How Digital Twins Help Address Major Challenges
Downtime and Maintenance: Unplanned machine downtime can disrupt workflows, delay production schedules, and lead to significant revenue losses. Unexpected failures not only impact profitability but also strain resources and damage customer trust. Digital twins help predict failures and optimize maintenance schedules, ensuring uninterrupted operations.
Fault Detection and Prediction: Late fault detection can result in system failures, escalating repair costs, and potential damage to other components. At the same time, excessive false alarms can interrupt production and reduce efficiency. Digital twins enable precise fault detection, balancing accuracy with minimal disruption.
Controls Optimization: Misconfigured control systems can lead to production stoppages and hardware degradation. Control software developed in lab conditions may not perform reliably in real-world applications. Digital twins allow engineers to test and optimize control strategies in a virtual environment before deployment, improving safety margins and efficiency.
What-if Scenarios: Physical testing of all possible operating conditions is expensive and time-consuming. Relying solely on sensor data and human intervention can introduce inconsistencies. Digital twins enable engineers to simulate countless real-world scenarios quickly, reducing reliance on costly prototype testing.
How Simulation is Making an Impact
GT-SUITE provides a comprehensive platform to develop digital twins, integrating robust multi-physics simulation with cutting-edge data science. By seamlessly connecting with a customer’s data collection system in a cloud-based environment, GT-SUITE enhances asset performance, minimizes downtime, and improves decision-making. Let’s explore how digital twins solve critical challenges and how you can build one to unlock the full potential of your engineering systems.
“How to Guide” for Building a Digital Twin
A digital twin functions by continuously updating a virtual replica of a physical asset, system, or process with real-time data. This enables simulation, analysis, optimization, and monitoring of the physical counterpart.
Here’s how to build one:
Step 1: Data Collection; Data can be collected from sensors on the physical asset (e.g., an engine, machine, or compressor), measuring parameters such as temperature, pressure, vibration, speed and more. Additionally, historical test data and field data from previous operations can be leveraged.
Step 2: Data Integration; All collected data—whether from sensors, past tests, or field operations—is aggregated, cleaned, and processed to ensure accuracy and usability for simulation and analysis.
Step 3: Virtual Model Creation;
- Physics-based modeling: GT-SUITE enables the creation of high-fidelity digital twins using physics-based models calibrated with real-world measured data, ensuring precise system behavior predictions.
- Data-driven machine learning models: GT-SUITE’s Machine Learning Assistant utilizes datasets from sensors, testing, field operations, and design of experiments to create fast-running mathematical models (metamodels). These models leverage real-time sensor data for enhanced predictive accuracy and can also integrate synthetic data from physics-based simulations, reducing reliance on physical testing while improving reliability (a minimal illustration of the metamodel idea follows Step 4 below).
Step 4: Real-Time Interaction; The virtual model continuously updates via live data streams and sensor feedback, which enables real-time monitoring, fault detection, and predictive analysis. This ensures that engineering teams can proactively optimize system performance, diagnose failures, and improve reliability.
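To make the metamodel idea concrete, here is a minimal, generic sketch of training a fast-running surrogate on tabular sensor-style data with scikit-learn. It illustrates the concept only; the inputs, outputs, and data are synthetic, and this is not GT-SUITE's Machine Learning Assistant or its API.

```python
# Minimal illustration of a data-driven metamodel (surrogate) trained on
# sensor/test data. Inputs, outputs, and data are synthetic placeholders;
# this is not GT-SUITE's Machine Learning Assistant.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "sensor" data: speed [rpm], load [%], coolant temperature [degC]
X = np.column_stack([
    rng.uniform(800, 6000, 2000),
    rng.uniform(5, 100, 2000),
    rng.uniform(60, 110, 2000),
])
# Synthetic response: a quantity we want to predict quickly (e.g., a temperature)
y = 0.01 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 2.0, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fast-running metamodel: scale inputs, then fit a small neural network
metamodel = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
metamodel.fit(X_train, y_train)

print("R^2 on held-out data:", metamodel.score(X_test, y_test))
```

Once trained, a surrogate like this evaluates almost instantly, which is what makes it usable inside a real-time digital twin loop.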
Case Studies: Real-World Applications of Digital Twins
Case Study 1: Fuel Cell Fault Simulation and Detection for On-Board Diagnostics Using Real-Time Digital Twins
As electrified powertrains become more complex, developing reliable On-board diagnostic (OBD) systems is crucial for ensuring compliance with evolving regulations. However, the lack of prototype hardware makes traditional validation methods impractical. To address this, real-time digital twins enable virtual testing through model-in-the-loop (MiL), software-in-the-loop (SiL), and hardware-in-the-loop (HiL) simulations.
This approach involves creating high-fidelity models to identify and analyze failure modes in key fuel cell components such as the compressor, recirculation pump, humidifier, and cooling system. The process begins with fuel cell stack calibration using measured polarization curves and predictive loss models (a generic form of such a loss model is sketched below). This is followed by balance-of-plant modeling that integrates the stack with subsystems such as the anode recirculation loop; the cathode system, including the humidifier, intercoolers, compressor, turbine, and motor controls; and the cooling system. To enhance efficiency, the model is optimized using 0D and map-based approaches, ensuring fast simulation without sacrificing accuracy. Fault scenarios are then defined, specifying key variables to be monitored by the control system. In the HiL phase, the fast-running fuel cell model developed in GT-SUITE is integrated with MATLAB Simulink for real-time execution. The model’s performance is assessed by running real-time simulations, comparing speed and accuracy, and introducing faults to evaluate the system response alongside sensor data. By leveraging real-time digital twins, engineers can efficiently develop and validate OBD systems, ensuring robust fault detection while reducing reliance on physical prototypes. This accelerates regulatory compliance and enhances the reliability of fuel cell powertrains.
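For reference, a generic polarization-curve loss model of the kind used for such a calibration can be written as follows; the exact formulation in GT-SUITE's stack templates may differ:

$$ V_{\text{cell}} = E_{\text{rev}} \;-\; \underbrace{\frac{RT}{\alpha F}\ln\frac{i}{i_0}}_{\text{activation}} \;-\; \underbrace{i\,R_{\Omega}}_{\text{ohmic}} \;-\; \underbrace{\frac{RT}{nF}\ln\frac{i_{\text{lim}}}{i_{\text{lim}} - i}}_{\text{concentration}} $$

where i is the current density, i₀ the exchange current density, R_Ω the area-specific ohmic resistance, and i_lim the limiting current density; the loss parameters are what get fitted to the measured polarization curves.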
Click here to access the complete webinar.
Case Study 2: Enhancing Cabin Thermal Management with Digital Twin Technology
A major automotive company leveraged digital twin technology to optimize cabin thermal management, improving energy efficiency and driver comfort in electric trucks. The challenge was to balance the battery thermal system and cabin climate control while maintaining efficient energy use.
To achieve this, the company calibrated high-fidelity models, dividing the cabin into flow volumes and creating surface mesh models for precise simulations. A co-simulation was then conducted by integrating GT-SUITE’s fluid solver with GT-TAITherm, leveraging key parameters. The model was rigorously validated against multiple real-world test datasets, ensuring accuracy and reliability.
Building on this foundation, the company aims to take its digital twin implementation to the next level by developing fast-running models for SIL and HIL applications and integrating internet of things (IoT) connectivity for real-time data exchange and enhanced predictive capabilities. These advancements will enable smarter automation, deeper system insights, and more responsive thermal management strategies, bringing them closer to a fully realized digital twin ecosystem.
Click here to access more digital twin related presentations.
Learn More About our Digital Twin Solutions
Digital twins are transforming engineering by enabling predictive maintenance, optimizing system performance, and accelerating product development. By combining physics-based modeling with data-driven machine learning, GT-SUITE provides a powerful platform for creating and deploying digital twins across industries.
Whether you are working on fuel cell powertrains, vehicle thermal management, or other complex systems, leveraging digital twins can drive efficiency, reduce costs, and improve reliability.
Ready to take your engineering processes to the next level? Start building your digital twin with GT-SUITE today! If you’d like to learn more about GT-SUITE‘s capabilities, contact us!
Making Music with Multiphysics
The capabilities of simulation software appear to be endless (not really, but you know what I mean…) when it comes to modeling different systems and things that may not have been simulated before. This can be especially true when considering how model-based systems engineering (MBSE) has advanced in the past few decades from single-purpose tools focused on one system or set of physics, like engines or general fluid flow simulation, to the current state of multi-physics simulation that encompasses mechanical, fluid flow, electrical, thermal, and other domains. While looking at different possibilities considered by manufacturers, one system came to the attention of a few of us at Gamma Technologies and took us down an unexpected, yet interesting and even amusing path.
Making Noise to Hide Noise...Virtually
In the late 2010s, different automotive exhaust suppliers were developing active noise cancellation (ANC) systems that could further reduce the sound coming from the passive mufflers and resonators that have been standard engine components for many decades. Prompted by these developments, Gamma Technologies took some time to examine what had been done and determine whether GT-SUITE could be used to simulate such a system and, if not, what might need to be added to make it possible.
Description of the System and Source of the Noise
The ANC system consists of a nearly spherical volume attached to a pipe that connects to the outflow of a muffler near the end of an automotive exhaust system. Inside the volume are speakers that produce the artificial noise that cancels or modifies the noise coming from the muffler. Engineers at the exhaust system maker Eberspaecher described the system in more detail in a 2017 paper published by the Society of Automotive Engineers (“Active Cancellation of Exhaust Noise over Broad RPM Range with Simultaneous Exhaust Sound Enhancement”; Riddle, Bemman, Frei, Wu & Padalkar; SAE Technical Paper 2017-01-1753; 2017; doi:10.4271/2017-01-1753). A simplified depiction of this system is shown in Figure 1 below.
Modeling Exhaust System Acoustics and Active Noise Cancellation
To model this system, engineers at GT considered the possibilities and the physics involved among the parts. GT-SUITE has been used for a few decades to model the pulsations of exhaust systems, and suppliers like Eberspaecher, Forvia (formerly Faurecia), and Tenneco have been using simulation as part of their design process for much of that time. Included in the software is a virtual microphone that can calculate the sound at a distance from the exit pipe and store it in a sound file playable on typical computers and other devices, like phones, often in the WAV format. At the same time, GT-SUITE includes multiple multi-physics libraries for electrical and mechanical systems that can be incorporated into the fluid dynamic model to simulate the entire system. There is a template in GT-SUITE that converts a current signal into a magnetic and mechanical force that can move a mechanical mass, whose motion can be limited by a spring and damper. This is the principle of a speaker, used in sound systems, phones, headphones, and so on. These parts are shown in the yellow part of the image below (Figure 2):
The mass of the speaker coil and cone is connected to the volume of the ANC system by a connection in GT-SUITE made for this purpose, MechFlowConn. This connection translates the motion of the mass into changes in the volume, which transmit the sound to the fluid system, shown in the blue region of the image. The sound travels down the pipe to the outlet, which is connected to the microphone and sound file generator in the pink region.
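In generic loudspeaker terms, the chain described above amounts to a voice-coil force driving a mass-spring-damper (a schematic of the physics only, not GT-SUITE's exact template equations):

$$ F(t) = Bl\,i(t), \qquad m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = Bl\,i(t) $$

where i(t) is the coil current, Bl the force factor of the voice coil, and x(t) the cone displacement; the cone motion then changes the ANC volume and radiates sound into the exhaust flow.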
In practice, one might produce a signal from another source, such as a control system model made in Matlab/Simulink, especially to test the controls that would take feedback from the system and change the signal going to the speaker system. However, as this was just a conceptual study, a realistic system was not available to GT to test it and so it was decided to generate signals within the control system library of GT-SUITE and listen to the results. Tests with individual frequencies passed and GT was satisfied with the results.
Engineers Having Fun
By good fortune, it was recently remembered that this study had been performed, and the question was asked, “Could we make a little melody using this method?” The answer was YES!
In the spirit of the upcoming holidays, the main melody from Beethoven’s Ninth Symphony (fourth movement) was selected. And so we present to you an original ANC recording, and our own version of Ode to Joy, performed by GT-SUITE:
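Outside GT-SUITE, the same basic idea (a schedule of note frequencies driving a tone generator) can be sketched in a few lines of standard-library Python. The note sequence below is a rough approximation of the Ode to Joy opening phrase and is unrelated to the actual signal generated inside the GT-SUITE model:

```python
# Toy illustration: write a simple sine-tone rendition of the Ode to Joy
# opening phrase to a WAV file. This only mimics the idea of feeding a
# frequency schedule to a "speaker"; it is unrelated to the GT-SUITE model.
import math
import struct
import wave

NOTES = {"C4": 261.63, "D4": 293.66, "E4": 329.63, "F4": 349.23, "G4": 392.00}
MELODY = ["E4", "E4", "F4", "G4", "G4", "F4", "E4", "D4",
          "C4", "C4", "D4", "E4", "E4", "D4", "D4"]

RATE = 44100          # samples per second
NOTE_LEN = 0.4        # seconds per note
AMPLITUDE = 0.3       # relative amplitude (0..1)

samples = []
for name in MELODY:
    freq = NOTES[name]
    n = int(RATE * NOTE_LEN)
    for k in range(n):
        # Short linear fade in/out to avoid clicks between notes
        env = min(1.0, k / 500, (n - k) / 500)
        samples.append(AMPLITUDE * env * math.sin(2 * math.pi * freq * k / RATE))

with wave.open("ode_to_joy.wav", "w") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(RATE)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    wav.writeframes(frames)
```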
Learn More About our Acoustic Simulation Solutions
If you are interested in learning more about GT-SUITE’s intake and exhaust acoustic simulation offerings, visit this webpage. If you’d like to learn more about GT-SUITE‘s capabilities, contact us!
How Simulation Accelerates the Development of eVTOL Aircraft for Taxi Services
Unlocking the Potential of eVTOL Aircraft for Taxi Services: Advancing On-Demand Transportation Safely and Efficiently
The world is rapidly advancing towards an integrated and accessible on-demand transportation network. Electric Vertical Takeoff and Landing (eVTOL) vehicles have emerged as the ideal solution for the near future, offering faster and more efficient travel options. However, ensuring the safety and reliability of these innovative aircraft is paramount. This blog explores how simulation models play a vital role in evaluating eVTOL designs, identifying potential issues early, and paving the way for reliable and sustainable air mobility solutions.
Understanding the Aging Challenges of Li-ion Batteries in eVTOL Aircraft
The limited durability of lithium-ion batteries poses a significant obstacle to developing long-lasting eVTOL aircraft. Lithium-ion batteries experience capacity decline, increased impedance, and decreased power output over time. A deep understanding of battery aging is crucial to predict lifespan accurately and optimize battery management systems (BMS) for longevity and reliability. Let’s explore why addressing these challenges is vital for ensuring the success of eVTOL technology.
How Simulation Plays a Role in Electric Aircraft Designs
Simulation platforms such as GT-SUITE offer practical, efficient, and reliable solutions for studying various aspects of electric aircraft design. This comprehensive simulation suite allows for in-depth exploration of critical areas such as aerodynamics, flight control, mission definition, propulsion, and battery pack systems (see Figure 1 below). With GT-SUITE, engineers can comprehensively assess and optimize each subsystem, ensuring optimal performance and safety throughout the eVTOL aircraft’s operation. Check out our blog on eVTOL design and this study co-authored with Advanced Rotorcraft Technology on a comprehensive simulation for eVTOL aircraft.
Real-World Aging Scenarios: Using Simulation for Battery Degradation Prediction
To ensure the economic viability of an eVTOL taxi service, maximizing the number of trips during peak traffic hours is crucial. However, it is essential to consider the limitations imposed by the battery pack. In a recent case study, we discussed the evolution of an eVTOL’s range over a span of four years, exploring a scenario where ten trips were scheduled each weekday between 6 AM and 10 AM and between 4 PM and 8 PM. To maintain optimal performance, a 10-minute recharge was performed between each trip, with a full recharge between the morning and evening shifts. The proposed aircraft utilization was inspired by the UberAir Vehicle Requirements and Mission study, in which the vehicle requirements and mission were developed through extensive analyses of current and predicted demand, an understanding of the capabilities of enabling technologies, and a focus on creating the optimal rider experience.
By leveraging the advanced capabilities of another simulation platform, GT-AutoLion, we were able to take the battery power demand during a representative mission flight from GT-SUITE and simulate the long-term degradation of the battery pack in a real-world scenario. In Figure 2, the evolution of the state of charge (SOC) of the eVTOL battery pack during the morning shift is plotted.
The focus of this study was the ability to model the aging mechanisms using GT-AutoLion. GT-AutoLion offers solutions that empower engineers to leverage cycle and calendar aging data, extracting valuable insights into battery degradation within more realistic scenarios. GT-AutoLion’s physics-based, predictive degradation models can be calibrated to align with this data and subsequently applied to predict the aging behavior of Li-ion cells in various applications. The aging mechanisms that can be modeled and investigated in Gamma Technologies’ software include lithium plating, active material isolation, and cathode electrolyte interphase (CEI) and solid electrolyte interphase (SEI) layer growth and cracking (see Figure 3).
The GT-AutoLion aging simulation generates an external file (.cellstate) capturing the Li-ion cell’s state at each cycle during the aging process. Each cycle represents a mission flight of the eVTOL aircraft. This external file serves as valuable input for the system-level models, enabling accurate predictions of how the aged battery will impact the product performance (see Figure 4).
By harnessing the power of this invaluable tool, we were able to project the anticipated range of the battery pack over a four-year operational period (see Figure 5).
Such insights provide a comprehensive understanding of the technology, aiding both technical exploration and business development of the latest eVTOL advancements.
With the ability to simulate and predict battery performance and aging, operators of eVTOL taxi services can make informed decisions about their operations, keeping an eye on profitability while maintaining reliable and sustainable service. By accurately estimating the range evolution over the course of four years, they can strategize the optimal scheduling of trips, maximizing efficiency during peak traffic hours, and mitigating the impact of battery pack limitations.
Learn More About our eVTOL Simulation Solutions
The future of on-demand transportation is bright, with eVTOL vehicles leading the way. Through the use of cutting-edge simulation models and advanced battery management solutions like GT-SUITE and GT-AutoLion, the safety, reliability, and performance of eVTOL aircraft are elevated to new heights. Embracing innovation and overcoming obstacles, the sky’s the limit for this exciting and sustainable mode of transportation!
Watch this great video case study of coupling GT-SUITE and Advanced Rotorcraft Technology’s FLIGHTLAB simulation capabilities to provide an easy-to-use, holistic solution for simulation-supported system design during the early design stages of eVTOLs.
If you’d like to learn more about how GT-SUITE and GT-AutoLion are used to solve eVTOL simulation challenges, contact us!
Gamma Technologies and GT-SUITE: Pioneering the Future of Simulation
Unveiling the Power of GT-SUITE
This year, Gamma Technologies celebrated a significant milestone: its 30th anniversary. Since its inception in 1994, Gamma Technologies has been at the forefront of engineering simulation, revolutionizing how industries approach design and innovation. At the heart of this transformation is GT-SUITE, the company’s flagship systems simulation software that has become a cornerstone in various fields, from automotive to aerospace, HVACR, energy, and beyond.
Gamma Technologies grew its prowess in the automotive industry with GT-POWER, the industry-standard engine performance simulation tool used by most engine manufacturers and vehicle original equipment manufacturers (OEMs). GT has continuously expanded its simulation capabilities to meet consumer demands with extensive developments in batteries, electric motors, and more, with products such as GT-AutoLion, GT-PowerForge, GT-FEMAG, and others. GT continues to accelerate powertrain-agnostic and systems development worldwide.

[Figure source: AFDC (n.d.); National Academies of Sciences, Engineering, and Medicine, 2021. Assessment of Technologies for Improving Light-Duty Vehicle Fuel Economy—2025-2035. Washington, DC: The National Academies Press. https://doi.org/10.17226/26092]
GT-SUITE is more than just a simulation tool. It’s a comprehensive, multi-domain platform that empowers engineers to model, simulate, and analyze complex systems. With capabilities spanning mechanical, electrical, fluid, and thermal domains, GT-SUITE offers a holistic approach to understanding how different systems interact. This versatility is essential in today’s engineering landscape, where the integration of various technologies and systems is more crucial than ever.
In the transportation industries (automotive, on- and off-highway vehicles), GT-SUITE has made a substantial impact. The software allows for the creation of detailed simulations of vehicle systems, from powertrains to suspension systems.
Almost every vehicle on the road has components simulated and designed with one of Gamma Technologies’ simulations. Most major automotive original equipment manufacturers (OEMs) have used GT-SUITE for engine and vehicle development.
By providing a virtual environment to simulate and optimize designs, GT-SUITE helps manufacturers improve performance, reduce costs, and shorten turnaround times. To accelerate development time, GT’s XiL (X-in-the-Loop) modeling capabilities (a method that combines virtual testing with real-world elements to validate components of an electronic control unit, or ECU) integrate seamlessly with industry tools, ensuring a streamlined and efficient product design cycle.
The ability to simulate real-world scenarios and interactions is particularly valuable for developing advanced technologies such as electric and hybrid vehicles, where precise predictions and optimization are critical.
Expanding Horizons: Aerospace, HVACR, Marine, Energy, and Beyond
The influence of GT-SUITE extends beyond automotive engineering.
In the HVACR (heating, ventilation, air conditioning, and refrigeration) industry, Gamma Technologies’ comprehensive set of validated 0D/1D/3D multi-physics component libraries has enabled HVACR engineers to tackle development challenges such as sustainability and efficiency, decarbonization, new refrigerants, and system complexity and controls. GT-SUITE combined with GT-TAITherm can model human comfort, which gives the user an additional target beyond traditional temperature and humidity. The human comfort model has localized comfort zones that can be used to determine whether cabin insulation or HVAC settings need to be modified. Well-known brands such as Carrier, Copeland, Daikin, Sanden, Trane, Tecumseh, Rheem, and others have found tremendous benefits utilizing GT-SUITE. Learn more about these case studies here.
In aerospace, the software supports the design and analysis of complex systems like propulsion and avionics. Engineers from organizations such as NASA, Roush, SAFRAN, and others have used GT-SUITE to ensure that aircraft systems are both efficient and reliable, contributing to advancements in performance and safety. Some of the applications that Gamma Technologies’ simulation solutions have assisted in include cryogenic systems, propulsion system modeling, environmental controls systems (ECS), fuel cell simulation, thermal management, e-propulsion batteries, flight dynamics and controls, multi-body dynamics, landing gear development, and fuel tank modeling.
GT is proud to say that our solutions have already been used to support the future of urban air mobility by providing simulations for electric aircraft and electric vertical take-off and landing (eVTOL) vehicles (including air taxis) development.
In the energy and oil & gas sectors, GT-SUITE aids in the development of innovative solutions for power generation and renewable energy. Our customers are already choosing GT for upstream, midstream, and downstream applications to optimize production. The ability to simulate energy systems helps companies enhance efficiency and sustainability, addressing some of the most pressing challenges in energy production and consumption.
It might be surprising that the marine industry is aggressively moving towards a sustainable future as well. GT is proud to have partnered and worked with organizations such as the Maritime Battery Forum, WIN GD, Yanmar R&D, and Toshiba. These firms have leveraged GT-SUITE’s solutions to simulate engine and drivetrain development for ship modeling and to create digital twins for electrified motors.
Gamma Technologies has been at the forefront of implementing AI (artificial intelligence) and ML (machine learning) technologies. These tools elevate simulation capabilities by allowing thousands of variables to be considered, helping designers engineer superior products. AI and ML enhance simulations by creating accurate and dynamic metamodels (mathematical models) that can adapt to complex, real-world scenarios in real time. These technologies also streamline the analysis of vast data sets, leading to more precise predictions and informed decision-making.
To learn more about our machine learning capabilities, read this two-part blog series on enhancing model accuracy by replacing GT’s lookup maps and optimizing neural networks.
A Legacy of Innovation
As Gamma Technologies celebrated its 30-year milestone, it’s clear that its impact on the engineering world is profound. GT-SUITE’s ability to provide detailed, multi-domain simulations has empowered engineers across industries to tackle complex problems and push the boundaries of what’s possible. This dedication has kept GT-SUITE at the cutting edge of simulation technology, ensuring that it meets the ever-changing demands of its diverse user base.
Looking Ahead
As we look to the future, Gamma Technologies is well-positioned to continue its legacy of pioneering simulation technology. With GT-SUITE leading the way, the company is set to drive further advancements in engineering and design, helping industries navigate the complexities of modern technology and innovate for a better tomorrow.
Learn More About Gamma Technologies’ Simulation Solutions
To learn more about our simulation capabilities, visit our website. Learn more about GT-SUITE here. Contact us here to speak to a GT expert!
Simulating Real Driving Maneuvers in Traffic using SUMO and GT-SUITE
The Need for Realistic Vehicle Operating Conditions
In an era where mobility is becoming increasingly electrified, new engineering strategies are needed that account for a multitude of driving scenarios in order to properly optimize an entire vehicle system for its fuel- and energy-saving potential.
Especially during local commutes in traffic, being able to predict vehicle operational behavior together with its sensitivity to variable human factors, such as driver behavior, enables engineering teams to develop better vehicles.
Available traffic simulation tools like SUMO (Simulation of Urban MObility), VISSIM (Verkehr In Städten – SIMulationsmodell), AIMSUN (Advanced Interactive Microscopic Simulator for Urban and Non-Urban Networks), and others employ traffic flow theory, which describes drivers’ behavior and its impact on overall traffic performance. This approach typically investigates the behavior of the ego vehicle (a vehicle that contains sensors that perceive the external environment) together with various traffic actors, while ignoring the predictive details of the vehicle dynamics.
By combining SUMO with GT-SUITE (a systems simulation platform), an engineer can study the behavior of a high-fidelity powertrain model within a real-world environment, considering the non-deterministic nature of traffic and driver behavior. This approach leverages vehicle simulation to the next level, offering more realistic vehicle operation and assessment of fuel/energy saving potential.
Coupling Multiple Simulation Solutions to Model Traffic Behavior
In this study, we coupled various simulation solutions to best model a realistic traffic scenario. We selected the open-source software, SUMO, which is a well-known tool in the field of traffic simulation. The coupling was implemented by leveraging SUMO’s TraCI (Traffic Control Interface) protocol, which allows users to establish interactions between other simulation tools.
Easy coupling of both GT-SUITE and SUMO with Simulink enables us to use Simulink as a lead tool controlling the co-simulation process and data exchange between the ego vehicle in SUMO, and its dynamic response modeled in GT-SUITE.
Once these solutions are combined, driver traffic decisions are completely managed by the SUMO driver using the implemented car-following, lane-changing, and overtaking models and traffic constraints. A vehicle model in GT-SUITE replicates the speed trajectory while calculating energy consumption. The GT-SUITE vehicle model then estimates the potential limits of acceleration and/or speed for future vehicle movements and maneuvers. These vehicle dynamics limitations, shared with the SUMO driver, keep the ego vehicle operating in accordance with those limits (see Figure 1 below).
Communication between the traffic and vehicle dynamics simulations through Simulink enables the integration of additional subsystems or controls into the existing setup. Therefore, a vehicle model and the simulation of its dynamics can easily be used for numerous studies and development processes as part of more complex simulation platforms.
Simulating an Electric Vehicle (EV) in “New York City Traffic”
Let’s look at an example. We’ll use one of the available SUMO traffic examples to look at an electric vehicle (EV) model in GT-SUITE.
The main focus of this example model is to establish bidirectional communication between these simulation platforms while keeping the vehicle running within realistic powertrain operation. This was achieved by using available features of the powertrain physical and controls components to estimate the limitations coming from different vehicle subsystems. Additional preprocessing of the powertrain limits coming from the electric motor and battery management system (BMS), regarding the available traction force, was also developed.
At each communication update interval, the GT-SUITE plant model provides the current acceleration/deceleration limits to the SUMO driver model. Knowing this information, the SUMO driver achieves an additional level of fidelity and adjusts its driving style according to the vehicle powertrain limitations (see Figure 2).
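A minimal sketch of such a coupling loop, written directly against SUMO's Python TraCI client, is shown below. In the actual study the exchange is orchestrated through Simulink, so the GT-SUITE plant model is represented here by a placeholder function returning dummy limit values:

```python
# Conceptual sketch of the SUMO side of the coupling using the TraCI Python
# client. The GT-SUITE plant model is represented by a placeholder function;
# in the actual workflow the exchange is orchestrated through Simulink.
import traci

EGO_ID = "ego"  # assumed vehicle id defined in the SUMO scenario

def get_powertrain_limits(speed_mps):
    """Placeholder for the GT-SUITE vehicle model: returns the currently
    available acceleration/deceleration limits in m/s^2 (dummy values here)."""
    max_accel = max(0.5, 2.5 - 0.02 * speed_mps)   # e.g., less torque at high speed
    max_decel = 4.0
    return max_accel, max_decel

traci.start(["sumo", "-c", "scenario.sumocfg"])   # assumed SUMO configuration file
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        if EGO_ID in traci.vehicle.getIDList():
            speed = traci.vehicle.getSpeed(EGO_ID)
            accel_limit, decel_limit = get_powertrain_limits(speed)
            # Constrain the SUMO driver model by the plant-model limits
            traci.vehicle.setAccel(EGO_ID, accel_limit)
            traci.vehicle.setDecel(EGO_ID, decel_limit)
finally:
    traci.close()
```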
Providing the vehicle model limits to the SUMO driver allows us to keep the ego vehicle’s acceleration and speed both realistic and achievable. In the image below (Figure 3), we can see the effects. In the Co-Sim mode (SUMO and GT-SUITE), the driver is less aggressive and keeps the vehicle running within the available powertrain limits.
From the plot below, we see that when the powertrain limitations are not imposed, the driver behavior simply follows the initial aggressiveness setting. With this approach, the SUMO driver makes decisions freely until the physical model’s powertrain limitations are reached.
In the example demonstrated, the limitations are imposed by the BMS, which takes care of the minimum and maximum battery voltage and the maximum charge and discharge rates.
Other vehicle component physical limitations include:
- Energy source systems, including the battery, supercapacitor, fuel cell, or other current-, voltage-, or temperature-related torque limitations
- E-motor mechanical, electrical, or temperature-related torque limitations
Learn More About our Co-Simulation and Driving Simulation Capabilities
In general, in addition to a vehicle’s physical component limitations, a user can impose acceleration or speed limits as a command from the advanced driving assistance systems (ADAS) controllers.
Here, we recognize a broad usage of integrated simulation solutions for more sophisticated energy consumption and emissions studies; development of hybrid energy system controls logic; ADAS; and applications in the fields of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
To learn more about our co-simulation capabilities visit this webpage here. Learn more about our hybrid and electric vehicle capabilities here. Contact us here to speak to an expert!
How Will Electric and Hybrid Vehicle Development Be Impacted by the Softening of US Rules
Governmental Regulations Impacting Automotive OEMs
In recent news, new vehicle tailpipe regulations in the United States have been softened, easing the pressure on original equipment manufacturers (OEMs) in their development of electric vehicles (EVs) and hybrids (HEVs).
The Department of Energy has significantly slowed the phase-out of existing rules that give automakers extra fuel-economy credit for the electric and hybrid vehicles they currently sell. In practice, these complex regulations have helped U.S. automakers meet new federal standards for fleetwide fuel efficiency while continuing to sell traditional internal combustion engine (ICE) vehicles.
The Role Simulation Plays in New Vehicle Development
With these changes, it’s now imperative for the engineering community to leverage simulation platforms such as GT-SUITE in today’s automotive development for several reasons:
- Cost Reduction: Developing new automotive technologies, especially in the context of EVs and hybrids, can be expensive. Simulation allows OEMs to test various designs and configurations virtually, reducing the need for physical prototypes and costly trial-and-error processes.
- Time Efficiency: With simulation, OEMs can accelerate the development process. They can quickly assess the performance of different components and systems, identify potential issues, and iterate on designs much faster than with traditional methods. This agility is crucial in a competitive market where time-to-market can make a significant difference.
- Regulatory Compliance: Although regulations may slow down, they are unlikely to disappear. OEMs still need to meet stringent emissions standards and fuel efficiency requirements. Simulation enables them to explore different powertrain configurations, optimize efficiency, and ensure compliance with current and future regulations.
- Technology Exploration: Even as regulations ease, the demand for cleaner and more efficient vehicles continues to grow due to environmental concerns and consumer preferences. Simulation allows OEMs to experiment with emerging technologies, such as advanced battery chemistries or fuel cell systems, and stay ahead of the curve in the evolving automotive landscape.
- Risk Mitigation: Investing in new technologies carries inherent risks. Simulation helps OEMs mitigate these risks by providing insights into potential challenges and performance limitations before committing to large-scale production. This allows them to make informed decisions and allocate resources more effectively.
- Optimization and Innovation: Simulation enables OEMs to optimize the performance of electric powertrains, hybrid systems, and fuel cell technologies. By fine-tuning parameters such as energy efficiency, range, and power output, they can deliver vehicles that meet or exceed customer expectations while staying competitive in the market.
Learn More About Our Simulation Solutions
While phased-in regulations may temporarily ease the pressure on OEMs, simulation remains a crucial tool for innovation, efficiency, and competitiveness in the automotive industry, especially in the context of evolving technologies such as electric powertrains and fuel cells.
To learn more about GT-SUITE, visit our website here. Speak to a GT expert today here and see how to incorporate simulation into your vehicle development needs.
Understanding Fuel Cell Systems Simulation for Vehicle Integration
In Episode 3 of the Gamma Technologies Tech Talk podcast, the team delved into the world of fuel cell systems simulation and its integration at the vehicle level with Gamma Technologies (GT) tools. Navin Fogla, PhD (Senior R&D Manager, Reactive Flow Systems) and Jake How (Senior Staff Application Engineer, Reactive Flow Systems) shared insights into the mechanics behind this advanced technology.
Watch the full episode on Gamma Technologies’ YouTube channel.
Mapping the Integrated Vehicle-Level Perspective & Achieving Fidelity in Simulation
Towards the end of the podcast, Jake provided a walkthrough of GT’s fuel cell modeling capabilities via the simulation platform, GT-SUITE. This walkthrough emphasized how it is possible to scale GT’s simulations from one level of fidelity to another, ensuring a comprehensive understanding of the fuel cell system’s behavior within an integrated vehicle model.
At the heart of the integrated vehicle system lies the fuel cell stack, but this is just one piece of the puzzle. The integrated vehicle-level map above showcases how the fuel cell stack is connected to various components such as hydrogen tanks, air handling systems, cooling systems, and an electrical powertrain. The powertrain also includes a DC-DC converter to regulate voltage and a motor that propels the vehicle. Fuel cell system simulation can also be conducted at two ends of the fidelity spectrum. At the lower end, models can be simplified to allow for faster simulations, optimization, and design of experiments.
On the other hand, more advanced simulations, such as pseudo-3D or 3D-1D modeling, provide a high-fidelity analysis of the fuel cell stack or individual cells. This level of detail allows for the investigation of coolant rates, local hotspots and cold spots, as well as oxygen and water distribution.
Learn More About Fuel Cell System Modeling Capabilities
If you’re curious about fuel cell technology, hydrogen safety, and system simulation, make sure to watch/listen to the full episode on the GT Tech Talk podcast on Gamma Technologies YouTube channel or on Spotify for Podcasters today!
To learn more about GT’s fuel cell system modeling capabilities, visit this webpage or speak to a GT expert today!
Subscribe to the GT Tech Talk podcast and learn more about the show as well as upcoming episodes here.
This blog was originally published on September 22, 2023.
How to Analyze Noise, Vibration and Harshness in Electric Powertrains (e-NVH) using Simulation
What are the Sources of Noise and Vibration in Electric Drives?
The general shift towards electrification in the electric vehicle (EV) market and beyond has created a need for higher fidelity simulation of electric powertrains. One aspect of this trend that has been getting attention is the desire for detailed analysis of an electric motor’s noise, vibration, and harshness (NVH) characteristics early in the design stage.
The sources of the characteristic high-pitch whine of electric motors are the interaction between different airgap field harmonics inside the machine, as well as the switching voltage inputs from the inverter. These elements generate force waves in the airgap, which can excite the structure of the motor and cause vibrations, particularly at specific resonant frequencies. Imperfect torque and stator load profiles cause further vibration of attached machinery components and the gearbox housing (e-axle). Unlike internal combustion engines, where the engine sound is often a prominent feature that we want to accentuate, with electric drive units, any sound that is produced is usually undesirable, so the goal is to minimize it.
A Complete & Fully Integrated Workflow
To properly analyze how this noise and vibration is created and to mitigate it, a system-level simulation of the motor, the inverter, and the mechanical components is necessary. To capture the NVH characteristics of an electric drive unit, a new workflow was developed that spans GT-FEMAG’s electromagnetic finite element analysis simulations and GT-SUITE’s electrical and mechanical transient simulations (shown in Figure 1 below).
Electrical Section
The first step in this process is to use GT-FEMAG, a finite element electromagnetic modeling tool, to design a motor that meets the speed and torque requirements for the traction application. After the motor design has been finalized, GT-FEMAG can export a very high-fidelity model of the motor, which is used to populate the datasets of a new lookup-table-based permanent magnet synchronous motor (PMSM) template in GT-SUITE that can capture the torque ripple and the spatial harmonics inside the machine. This motor template is coupled with a detailed 3-phase inverter and controlled with closed-loop feedback control (Figure 2).
This simulation outputs the 3-phase currents in the motor windings at multiple speeds (see Figure 3 below). These currents are used in the next step, the mechanical part of the workflow, to calculate the motor forces.
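For reference, the idealized dq-frame PMSM equations underlying such a motor model are shown below; the lookup-table template goes beyond this average-value form by including the spatial harmonics and torque ripple exported from GT-FEMAG:

$$ v_d = R_s i_d + L_d\frac{\mathrm{d}i_d}{\mathrm{d}t} - \omega_e L_q i_q, \qquad v_q = R_s i_q + L_q\frac{\mathrm{d}i_q}{\mathrm{d}t} + \omega_e\left(L_d i_d + \lambda_m\right) $$

$$ T_e = \frac{3}{2}\,p\left[\lambda_m i_q + (L_d - L_q)\,i_d i_q\right] $$

where λ_m is the permanent-magnet flux linkage, p the number of pole pairs, and ω_e the electrical speed.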
Mechanical Section
Moving on to the mechanical section of the workflow, GT-FEMAG uses the ABC currents calculated previously to evaluate the magnetic pressure as a function of space and time, which we can then use to predict the forces generated in the motor, as a function of rotor position and stator tooth, for each operating speed and torque combination (Figure 4). These forces are the boundary conditions for the mechanical analysis in the next step.
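The magnetic pressure on the stator teeth is commonly estimated from the airgap flux density via the Maxwell stress tensor; the dominant radial component takes the generic form

$$ p_r(\theta, t) \approx \frac{B_r^2(\theta, t) - B_t^2(\theta, t)}{2\mu_0} $$

where B_r and B_t are the radial and tangential airgap flux density components and μ₀ is the permeability of free space. The space-time harmonics of this pressure wave are what excite the stator structure.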
Next, these excitation loads are used in GT-SUITE as an input for a forced frequency analysis to get the structural steady-state response of the overall gearbox housing. By performing a Fourier transformation of the loads from the previous step, we can obtain the amplitudes of the applied loads at each frequency, resulting from the various speed and order combinations of the simulated drivetrain. With this information, it is possible to directly identify areas that exhibit excessive surface vibration and react accordingly by modifying the system. Additionally, the surface vibration response can be used to perform an acoustic analysis using a rapid sound assessment method that will provide the sound pressure level at any location around the structure (see Figure 5 below).

Figure 5: Surface Normal Velocity at a given frequency, Campbell diagram at a given node, and sound pressure of the powertrain in 3D space
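To make the frequency-decomposition step concrete, here is a minimal NumPy sketch, assuming the force on one stator tooth is available as a uniformly sampled signal over one mechanical revolution at a fixed operating point; the signal, speed, and orders are illustrative placeholders, not GT-SUITE output.

```python
import numpy as np

# Illustrative only: radial force on one stator tooth sampled over one
# mechanical revolution at a fixed operating point (hypothetical data).
n_samples = 1024
rotor_angle = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
rpm = 6000.0
t_rev = 60.0 / rpm                      # time for one revolution [s]
dt = t_rev / n_samples

# Placeholder force signal [N]: a few spatial orders plus a mean component.
force = (200.0
         + 40.0 * np.cos(8 * rotor_angle)     # e.g. an 8th-order component
         + 15.0 * np.cos(48 * rotor_angle))   # e.g. a slot-passing order

# One-sided FFT gives the amplitude of the load at each frequency, which
# becomes the excitation for the forced frequency response analysis.
spectrum = np.fft.rfft(force) / n_samples
freqs = np.fft.rfftfreq(n_samples, d=dt)      # [Hz]
amplitudes = 2.0 * np.abs(spectrum)
amplitudes[0] /= 2.0                          # the mean term is not doubled

for f, a in zip(freqs, amplitudes):
    if a > 1.0:                               # print the dominant orders
        print(f"{f:8.1f} Hz  amplitude {a:6.1f} N")
```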
An All-in-One Package for e-NVH Analysis
This workflow offers a straightforward and convenient way to analyze the NVH performance of any electric powertrain. As a tightly linked system, contained fully within GT’s library of tools, it enables users to run many iterations easily and quickly and to optimize their designs based on many parameters, such as the geometric characteristics of the motor, the switching frequency, or the modulation strategy of the inverter, and to see how these changes affect the NVH performance. The high degree of integration between the electromagnetic, electrical, and mechanical domains of this workflow provides a seamless user experience, without having to resort to multiple different simulation tools, as is typically the case.
Learn More About our e-Powertrain NVH Solutions
The full workflow is presented in more detail by GT’s experts in this 30-minute SAE webinar. If you’d like to learn more or are interested in trying GT-FEMAG and GT-SUITE for e-powertrain and NVH simulation, contact us!
Top 10 Gamma Technologies Blogs of 2023!
From calculating EV range to heat pump design, there is a blog for every simulation!
As we kick off 2024, let’s look back at the best blogs of 2023! Since the inception of Gamma Technologies, GT-SUITE has provided optimized system simulation solutions for manufacturers! In no particular order, these are the top 10 blogs written in 2023 that highlight the vast application use cases and technical capabilities GT-SUITE can deliver!
- Decreasing Battery System Simulation Runtime using Distributed Computing
- Calculating Electric Vehicle Range with Simulation
- Engine Manufacturers Leverage Simulation to Engineer Ahead of Increasing Regulations
- Enhancing Model Accuracy by Replacing Lookup Maps with Machine Learning Models (Machine Learning Blog Part 1)
- Optimizing Neural Networks for Modeling and Simulation (Machine Learning Blog Part 2)
- Mitigating the Domino Effect of Battery Thermal Runaway with Simulation
- Designing Thermally Secured Electric Motors with Simulation
- Understanding Fuel Cell Systems Simulation for Vehicle Integration
- Addressing Heat Pump Challenges, from Home to Industry with Simulation
- Simulating Predictive Cruise Control for a Heavy-Duty Truck: Quickly and Easily
Shout-outs to our colleagues for their contributions!
Learn more about our simulation solutions!
If you’d like to learn more about how Gamma Technologies can be used to solve your engineering challenges, contact us here!
Wishing you a healthy & prosperous 2024!
Using Simulation for Battery Engineering: 15 Technical Blogs to Enjoy
At Gamma Technologies, our GT-SUITE and GT-AutoLion simulations provide battery engineers and designers robust solutions for modeling and predicting battery performance throughout its lifecycle.
Enjoy reading our battery-focused technical blogs to learn more about:
- Calculating electric vehicle (EV) range
- Decreasing battery system simulation runtime
- Vehicle modeling: ICEV & BEV correlation procedure
- Reducing battery charging time while maximizing battery life
- Reducing battery testing time and costs
- Predicting system performance with aged li-ion batteries
- Predicting lithium-ion cell swelling, strain, and stress
- Lithium-ion battery modeling for automotive engineers
- Non-automotive li-ion applications: aircraft, ships, power tools, cell phones and others
- Battery thermal runaway propagation
- Fuel cell system modeling
- Virtual calibration of fast charging strategies
- Parametric battery pack modeling for all existing cooling concepts
- Robust battery pack simulation by statistical variation analysis
- Sensitivity analysis: ranking the importance of battery model parameters
Since the inception of GT-SUITE, Gamma Technologies has recognized the transformation of automotive and non-automotive industries. Our solutions are powertrain and industry agnostic, and we are looking to guide customers and partners towards a sustainable world.
Learn more about our battery simulation solutions!
If you’d like to learn more about how GT-SUITE and GT-AutoLion can be used to solve your battery pack design challenges, contact us here!
This blog was initially published June 10th, 2022
Designing Thermally Secured Electric Motors with Simulation
Component-Level Design & Analysis for Motor Thermal Security
Most of today’s traction motors in battery electric vehicles (BEVs) are permanent magnet synchronous machines (PMSMs) that use interior permanent magnet (IPM) rotors with rare-earth magnets embedded in the rotor (automotive companies are starting to explore or even build other technologies, but that is a topic for another blog). These magnets generate heat and tend to demagnetize if they reach critical temperatures; moreover, because they are embedded in the rotor, cooling these magnets can be challenging.
In the design process of a traction motor, a variety of stator and rotor cooling options should be studied, and simulation gives motor designers the ability to study trade-offs of different cooling strategies without having to build and test physical prototypes (saving both time and money). Traditionally, steady-state, component-level simulations that couple finite-element approaches for both electromagnetics and thermal conduction and convection are used to ensure the thermal security of the motor.
For an example of this, see the model results below that utilize both GT-FEMAG and GT-SUITE. These simulation solutions from Gamma Technologies were used to couple the electromagnetic and thermal finite element solutions to study different stator cooling topologies for a traction motor. The results below include the trade-offs of structure temperature (windings and magnets), coolant temperature rise, and coolant pressure drop.

Coupling GT-FEMAG electromagnetic and GT-SUITE thermal finite element solutions for motor cooling design analysis
System-Level BEV Design & Analysis
System-level engineering of BEVs, on the other hand, requires an understanding of the global energy management puzzle and temperature distribution of its components (such as batteries, motors, inverters, and the occupants) to predict detrimental hot spots and occupant comfort in either hot or cold ambient temperatures during transient events. For more on this topic, see a blog written by my colleague, Brad Holcomb.
In the case of automotive applications, the most common transient analyses performed are drive cycle tests that can last 30 minutes or longer. For these long, transient simulations, the traditional finite element model of the motor (introduced earlier) would be too slow to integrate into a system-level simulation for hot spot prediction.
The challenge is how can system-level engineers have an accurate, fast-running representation of a traction motor capable of hot spot prediction that can be integrated into a system-level model? In other words, how can we blur the lines between component-level and system-level simulation to engineer better electric vehicles?
Blurring the Lines Between Component-Level & System-Level Simulation with GT-SUITE and GT-FEMAG
With GT-FEMAG and GT-SUITE, Gamma Technologies offers an innovative way to bring physics-based, coupled electromagnetic-thermal motor models into system-level simulation.
First, the electromagnetic solver of GT-FEMAG is integrated into GT-SUITE as a seamless pre-processor that can automatically generate either map-based versions of motors with detailed component losses (for example, winding, iron, or magnet losses) or equivalent circuit models of motors (commonly referred to as “Ld, Lq” models) for the transient solver of GT to use in system-level simulations.

GT-FEMAG as a pre-processor to GT-SUITE System-Level Simulation
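The “Ld, Lq” equivalent-circuit representation mentioned above reduces, in steady state, to the standard dq-frame torque relation. Below is a minimal sketch of that calculation with illustrative machine parameters; this is the textbook equation, not the model GT-FEMAG exports.

```python
# Minimal sketch of a standard "Ld, Lq" PMSM torque calculation in the
# dq frame (textbook equation; parameter values are illustrative only).
def pmsm_torque(i_d, i_q, L_d, L_q, lambda_pm, pole_pairs):
    """Electromagnetic torque [Nm]: magnet torque plus reluctance torque."""
    return 1.5 * pole_pairs * (lambda_pm * i_q + (L_d - L_q) * i_d * i_q)

# Example operating point (hypothetical machine parameters).
torque = pmsm_torque(
    i_d=-50.0,          # d-axis current [A], negative for field weakening
    i_q=120.0,          # q-axis current [A]
    L_d=0.30e-3,        # d-axis inductance [H]
    L_q=0.55e-3,        # q-axis inductance [H]
    lambda_pm=0.085,    # permanent-magnet flux linkage [Wb]
    pole_pairs=4,
)
print(f"Torque: {torque:.1f} Nm")
```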
Second, the thermal solver of GT-SUITE has a generalized, physics-based, and one-click switch that automatically converts 3D finite element models into 1D lumped thermal network models.

Automatic model order reduction in GT-SUITE to reduce thermal finite element models to thermal network models
These two capabilities enable users to quickly traverse different modeling fidelities, from fully detailed electromagnetic and thermal finite element models to fast-running, accurate 1D models.
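To make the idea of a reduced 1D lumped thermal network concrete, here is a generic three-node sketch (winding, stator iron, magnets) integrated with explicit Euler time stepping; the node layout, capacitances, conductances, and heat inputs are illustrative assumptions, not the output of GT-SUITE’s automatic model order reduction.

```python
import numpy as np

# Generic lumped thermal network sketch (not GT-SUITE's reduced model):
# nodes = winding, stator iron, magnets; coolant held at a fixed temperature.
C = np.array([800.0, 2500.0, 400.0])          # thermal capacitance [J/K]
T = np.array([20.0, 20.0, 20.0])              # initial temperatures [degC]
T_cool = 20.0                                 # coolant temperature [degC]

# Conductances between nodes [W/K]: winding<->stator, stator<->magnets,
# stator<->coolant (illustrative values), plus heat input per node [W].
G_ws, G_sm, G_sc = 15.0, 5.0, 25.0
Q = np.array([1200.0, 300.0, 80.0])

dt, t_end = 0.1, 1800.0                       # explicit Euler time stepping
for _ in range(int(t_end / dt)):
    Tw, Ts, Tm = T
    dTw = (Q[0] - G_ws * (Tw - Ts)) / C[0]
    dTs = (Q[1] + G_ws * (Tw - Ts) - G_sm * (Ts - Tm)
           - G_sc * (Ts - T_cool)) / C[1]
    dTm = (Q[2] + G_sm * (Ts - Tm)) / C[2]
    T = T + dt * np.array([dTw, dTs, dTm])

print("Temperatures after 30 min [degC]:", np.round(T, 1))
```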
Electric Motor Simulation Demonstration & Results
In a demonstration model we created, we combined these technologies to run back-to-back Worldwide Harmonized Light-Duty Vehicle Test Cycles (WLTC) on an electro-thermal motor model by imposing speed and torque on the motor for one hour of simulation. To give the model more transient warm-up behavior, we connected the motor to a simplified cooling system that includes a thermostat, a pump, and a heat exchanger. We also ran three different ambient and initial soak temperatures of the system, capturing the warmup of the motor at –10 °C, 0 °C, and 10 °C ambient conditions.
The simulations each only took 70 seconds to complete (over 50 times faster than real-time), but captured the transient warmup of the various components in the motor, including the windings and magnets in the rotor:

Transient warmup of various components in an electric motor
Below is an animation showing the transient coolant temperature through the stator for the -10 °C ambient case over the course of the simulation (please note contour scale is non-linear to help visualize results).
Transient stator coolant temperature through 2 WLTCs at -10 °C ambient conditions
Model Integration
Because the standalone electro-thermal motor model runs over 50 times faster than real time, it can be brought directly into a system-level model for integrated simulations. These simulations gave us a deeper understanding of the energy management and trade-offs between different cooling strategies for the entire system, including the motor, inverter, battery, and cabin.

A complete multi-physics BEV model with GT-SUITE and GT-FEMAG
Learn More About our Electric Motor Simulation Solutions
If you’d like to learn more or are interested in trying GT-FEMAG or GT-SUITE for component-level or system-level simulation of electric vehicles, contact us!
Mitigating the Domino Effect of Battery Thermal Runaway with Simulation
What Happens During Battery Thermal Runaway?
As the world continues to move towards a more sustainable future, the popularity of electric vehicles continues to grow. While the benefits of electric vehicles are many, one of the key challenges is ensuring that the battery packs used in these vehicles are safe and reliable. In the context of battery packs, thermal runaway stands out as an inherent hazard that can evoke profoundly negative media attention and public concern.
During a thermal runaway event, undesired exothermic side reactions occur. These reactions are triggered when battery components are exposed to extreme operating conditions, including but not limited to high operating temperatures, fast charging, cell fractures caused by external objects, and internal short circuits.
Just like the domino effect, a single cell entering thermal runaway can easily spread to the surrounding cells and cause fires and explosions in the whole pack. This is known as thermal runaway propagation.
How to Avoid Battery Thermal Runaway
The question arises: how can you safeguard your cells from entering thermal runaway? Common triggers include excessive heating, electrical faults such as short circuits, nail penetration, or even a faulty cell, and not all of them can be prevented in the field. Therefore, the goal should not be to guarantee that no cell ever enters thermal runaway, but rather to design a battery pack that can withstand a single cell entering thermal runaway without the event propagating to the rest of the pack.
To effectively tackle this challenge, it is critical to develop precise models capable of predicting and mitigating thermal runaway propagation in battery packs. Well-designed battery pack models ensure appropriate cooling systems and safety features are engineered to minimize the risk of thermal runaway. Additionally, these models can be used to develop early warning systems that can detect when a pack is starting to overheat.
Experimental Costs of Testing Thermal Runaway
Experimentally analyzing thermal runaway propagation in lithium-ion battery cells and packs is the most direct approach, but it requires significant time and money. The process requires designing and constructing different test scenarios and equipment, not to mention the variations in experimental conditions and the safety risks associated with intentionally inducing such events. Testing even a simple lithium-ion battery pack prototype for thermal runaway propagation can cost nearly $100k per test scenario. What’s more, thermal runaway propagation is an inherently complex event. It can be influenced by a wide range of factors, such as overcharging, overheating, internal shorting, and nail penetration, and it is nearly impossible to replicate all real-world scenarios in a laboratory setting. In the meantime, manufacturers may experience major delays in the release schedule of their final product, whether it is an electric vehicle, an electric vertical takeoff and landing (eVTOL) aircraft, or another application, because physical testing is often conducted at late stages of the development cycle.
History of Simulating Thermal Runaway Propagation with GT-SUITE
Cell and pack manufacturers have diligently turned their attention to computer simulation and modeling techniques to analyze thermal runaway propagation. Many cell manufacturers look to 3D computer aided engineering (CAE) simulation to avoid the challenges associated with experimental physical testing of thermal runaway propagation. The components for modeling thermal runaway propagation include:
- Pre-runaway battery model
- Thermal runaway trigger
- Cell-level thermal runaway model
- Heat transfer model
3D CAE is a well-established simulation and modeling technique that provides intricate detail about thermal runaway propagation. However, these models take considerable time to build and run, are challenging to implement, and make it difficult to test numerous “what if” scenarios.
In a previous blog, we demonstrated how the simulation platform GT-SUITE was employed to model the propagation effect of thermal runaway in a small battery module. GT-SUITE provides a 1D CAE solution that offers faster-running models than common 3D CAE models. We showcased how an equivalent circuit model can be used as the pre-runaway battery model, with simple external heating as the thermal runaway trigger, a rule-based model as the cell-level thermal runaway model, and 1D thermal networks as the heat transfer model. Since then, GT-SUITE has been widely used by battery pack designers not only to predict lithium-ion battery performance metrics but also to simulate thermal runaway propagation and gain invaluable insights into the behavior of their battery packs under different conditions.
Cell thermal runaway events vary greatly based on the conditions leading to runaway. For instance, how quickly the cells were heated to a runaway state affects the mass and composition of the vent gases evolved. This matters because the commonly used rule-based models are not able to capture these details. Within GT-SUITE’s battery modeling platform, GT-AutoLion, the latest development includes a unique 1D & 3D multi-physics model for thermal runaway propagation. This modeling approach provides fast-running models while remaining strongly physics-based.
Cell-level Thermal Runaway Propagation Enabled by P2D Electrochemical Modeling Together with Chemical Reactions
The first step for modeling thermal runaway propagation is to have a pre-runaway model. Most commonly, equivalent circuit models (ECMs) have been used as pre-runaway models to predict the performance of lithium-ion batteries. However, this approach has multiple shortcomings, as ECMs cannot fully capture the complex electrochemical reactions occurring within a battery cell.
To address these limitations, we use GT-AutoLion, which is based on a pseudo-two-dimensional (P2D) electrochemical model, to calibrate the pre-runaway lithium-ion battery performance, voltage, and heat generation during normal operation leading up to a thermal runaway event. Using Gamma Technologies’ physics-based modeling, GT-SUITE users now have access to more meaningful results while running different thermal runaway propagation scenarios.
In addition to the above capabilities, GT-AutoLion supports user-defined chemical side reactions for thermal runaway propagation modeling. In a use case based on an article by Feng et al., we modeled thermal runaway using a set of reactions occurring in a lithium-ion battery cell (a kinetic sketch follows the list below):
- Solid electrolyte interphase (SEI) decomposition
- Anode–electrolyte interface reaction
- Separator melting
- Cathode decomposition (2 reactions)
- Electrolyte vaporization and degradation
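Each of these side reactions is typically described with Arrhenius-type kinetics coupled to an energy balance on the cell. The sketch below integrates a single illustrative reaction of that form; the pre-exponential factor, activation energy, heat of reaction, and heat-loss coefficient are placeholder values, not the parameters from Feng et al. or GT-AutoLion.

```python
import numpy as np

# Minimal sketch of one Arrhenius-type side reaction (e.g. SEI decomposition)
# heating a lumped cell; all parameters are illustrative placeholders.
R = 8.314                    # gas constant [J/(mol K)]
A = 1.0e15                   # pre-exponential factor [1/s]
Ea = 1.35e5                  # activation energy [J/mol]
H = 2.0e6                    # heat of reaction per kg of reactant [J/kg]
m_reactant = 0.3             # reactant mass [kg]
m_cell, cp = 1.0, 1100.0     # cell mass [kg], specific heat [J/(kg K)]
h_loss, T_amb = 0.5, 298.0   # lumped heat loss [W/K], ambient [K]

T, c = 400.0, 1.0            # externally triggered hot start [K]; c = normalized concentration
dt = 0.01                    # time step [s]
for _ in range(int(600.0 / dt)):          # simulate 10 minutes
    k = A * np.exp(-Ea / (R * T))         # Arrhenius rate constant [1/s]
    reacted = min(k * c * dt, c)          # reactant fraction consumed this step
    q_rxn = reacted * m_reactant * H      # heat released this step [J]
    q_loss = h_loss * (T - T_amb) * dt    # heat lost to surroundings this step [J]
    c -= reacted
    T += (q_rxn - q_loss) / (m_cell * cp)

print(f"Final temperature: {T - 273.15:.0f} degC, remaining reactant: {c:.2f}")
```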
The cell-level thermal runaway model we developed by utilizing the new capabilities of GT-AutoLion shows an excellent match with the findings documented in the literature. Below are some of the results that indicate the temperature rise and reactant concentrations for the cell entering the thermal runaway.

Comparing the cell-level electrochemical-thermal coupled modeling results by GT-AutoLion and experimental results by Feng et al. (a) temperature evolution over time, (b) changes in normalized concentration of reactants over time.
Simulating a Module-Level Thermal Runaway Propagation
The example uses a simple battery module consisting of 20 cells in series, with fins between the cells that are connected to a cold plate to provide cooling. The GEM3D tool in GT-SUITE was used to convert the CAD components into a finite element mesh for the cell, fin, and cold plate material. The model also includes a flow volume that represents the air inside the module around the battery cells and is further connected to a burner where combustion reactions are defined. This represents the potential combustion of the chemicals released when cells enter thermal runaway.
Using GT solutions, we have a robust and fast-running model in which any cell in the module can be selected as the “trigger” cell by applying external heat until a certain trigger temperature is reached. For this example, thermal runaway was initiated in the center cell (through simple heating, with the vent gases combusted in the burner).
Thermal Runaway Case Studies
Two case studies, using GT solutions, were carried out to observe the battery pack behavior during thermal runaway propagation: (i) without any coolant flow in the cold plate and (ii) with a coolant flow of 2 kg/s at 60 °C. Building this model took just a few hours from start to finish.
The 12-minute thermal runaway simulation took about 2 hours to calculate, including thermal, electrical, chemical, and flow physics.
The model results shown in the figures below indicate that when there is no coolant flow in the cold plate, case (i), every cell entered thermal runaway one after another, starting from the center cell and propagating to neighboring cells until all the cells reached high temperatures of 600 to 700 °C. If this were a physical test, the pack would have needed to be re-designed and re-tested! But since no real battery modules were destroyed in this virtual environment, the simulation could simply be repeated under different conditions.
Now consider case (ii), where the battery pack is equipped with coolant flow through the cold plate. As indicated in the figures below, certain cells, primarily those adjacent to the trigger cell in the middle of the module, may still undergo thermal runaway. However, the event remains confined to the limited number of cells at the center, meaning that the battery pack as a whole would not be set on fire.
To see a full tutorial of building models for thermal runaway propagation using GT-SUITE and GT-AutoLion, watch this video here!
Learn More About our Battery Thermal Runaway Solutions
Lithium-ion batteries can experience thermal runaway from a variety of trigger events. Propagation of a thermal runaway event to other cells in the battery pack should be avoided for a safe pack design, but repeated physical testing is expensive and poses significant challenges. GT-SUITE offers a fast-running simulation approach to model this event, combining the electrical, chemical, thermal, and flow domains into a single model. This innovative 1D & 3D multiphysics model enables accurate prediction of the cell heat release under different operating conditions which allows different thermal runaway mitigation strategies to be simulated.
If you’d like to learn more or are interested in trying GT-SUITE and GT-AutoLion to virtually test a battery pack for thermal runaway propagation, view this webpage here. To speak with a GT expert, contact us here!
Engine Manufacturers Leverage Simulation to Engineer Ahead of Increasing Regulations
Environmental agencies, such as the EPA in the United States, play a vital role in controlling nitrogen oxide (NOx) emissions by setting limitations on airborne pollutants that harm public health and the environment. These standards have grown more stringent in recent years for heavy-duty and/or road diesel trucks, particularly for engine-out NOx emissions, as shown in Figure 1. It is essential for internal combustion engine (ICE) manufacturers to accurately predict and control engine-out NOx emissions under various operating conditions during the design phase to meet these standards.
Why GT-SUITE Should Be Your Go-To Simulation Platform for NOx prediction
Accurate prediction of engine-out NOx emissions of IC engines requires capturing the in-cylinder interactions among fuel injection, turbulence, chemistry, piston motion, and wall heat transfer. These interactions can lead to the creation of in-cylinder stratification, as shown in Figure 2, which significantly impacts the engine’s NOx emission.

Figure 2: A conceptual model of diesel spray showing in-cylinder stratification in a conventional diesel engine
This is where simulation modeling comes into play. Here are some modeling methods:
3D computational fluid dynamics (CFD) simulations can provide better accuracy in capturing these interactions. However, these simulations can become computationally time-consuming, especially when trying to evaluate hundreds (or thousands) of designs and operating conditions.
An alternative approach is to use reduced-dimensional models, such as zero-dimensional stochastic reactor models (0D-SRM). These models represent the engine cylinder by hundreds of notional particles, providing a high-fidelity framework to capture in-cylinder stratification using detailed chemistry and accurate mixing models. A 0D-SRM model runs much faster than 3D-CFD simulations, making it a more feasible option for evaluating large numbers of designs and operating conditions. These 0D-SRM models can provide a good trade-off between accuracy and computational cost, making them a useful tool for diesel engine designers and researchers.
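To make the notional-particle idea concrete, here is a toy sketch of a particle ensemble with a simple Curl-type pairwise mixing step; it is purely illustrative and is not the VCF-tPDF formulation implemented in GT-SUITE.

```python
import numpy as np

# Illustrative notional-particle ensemble with a simple Curl-type mixing
# step; a toy sketch, not GT-SUITE's 0D-SRM (VCF-tPDF) model.
rng = np.random.default_rng(0)
n_particles = 500

# Each particle carries an equivalence ratio; start with a stratified
# (bimodal) charge: a rich spray core plus lean surrounding gas.
phi = np.concatenate([rng.normal(2.0, 0.2, 150),     # rich particles
                      rng.normal(0.3, 0.1, 350)])    # lean particles

dt, t_end = 1.0e-4, 2.0e-3       # [s]
tau_mix = 5.0e-4                 # mixing time scale [s]

for _ in range(int(t_end / dt)):
    # Number of particle pairs to mix this step (proportional to dt/tau).
    n_pairs = int(0.5 * n_particles * dt / tau_mix)
    idx = rng.choice(n_particles, size=(n_pairs, 2), replace=False)
    mean = 0.5 * (phi[idx[:, 0]] + phi[idx[:, 1]])
    phi[idx[:, 0]] = mean        # Curl mixing: both partners jump to the
    phi[idx[:, 1]] = mean        # pair mean, narrowing the distribution

print(f"mean phi = {phi.mean():.2f}, std = {phi.std():.2f}")
```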
Powerful simulation software, such as Gamma Technologies’ GT-SUITE, can be used to predict engine performance and engine-out emissions. It includes a zero-dimensional (0D) stochastic reactor model (SRM) that can accurately predict engine performance and engine-out NOx emissions using detailed chemistry [1]. The animation in Figure 3 demonstrates how the 0D-SRM model captures the in-cylinder inhomogeneity using hundreds of notional particles. This model has been extensively validated against experimental data, making it a reliable tool for predicting engine-out NOx emissions.

Figure 3b: Distribution of mass in different equivalence ratio bins using the 0D-SRM model of the GT-SUITE software. Here the 0D-VCF-tPDF model represents the 0D-SRM model.
In addition, GT-SUITE offers the ability to optimize the chemical reaction rates during a simulation, which can be particularly useful for improving emission prediction under different designs and operating conditions. In a study [2], different approaches were proposed to improve the accuracy of engine-out NOx predictions by optimizing the chemical reaction rate parameters (see Figure 4).

Figure 4: GT-SUITE predicted the peak pressure, CA50, and NOx emissions for a GM diesel engine, and a comparison with 3D-CFD results is also provided [2].
Learn More About GT-SUITE’s Combustion Modeling
Learn more about combustion and emission simulation solutions here. If you are interested in using GT-SUITE for engine-out emission modeling needs, we encourage you to reach out and speak to a GT expert.
References
[1] Paul, C., Jin, K., Fogla, N., Roggendorf, K. et al., “A Zero-Dimensional Velocity-Composition-Frequency Probability Density Function Model for Compression-Ignition Engine Simulation,” SAE Int. J. Adv. & Curr. Prac. in Mobility 2(3):1443-1459, 2020, https://doi.org/10.4271/2020-01-0659.
[2] Paul C, Gao J, Jin K, Patel D, Roggendorf K, Fogla N, Parrish S E, Wahiduzzaman S, An indirect approach to optimize the reaction rates of thermal NO formation for diesel engines, Fuel 338 (2023) 127287, https://doi.org/10.1016/j.fuel.2022.127287.
How to Perform Battery Electric Vehicle Range Testing Using Simulation
Streamlining BEV Drive Cycles
Welcome to the second blog of this two-part series on how simulation platforms such as GT-SUITE can streamline the various drive cycles of battery electric vehicles (BEVs) to determine the range and adjustment factors.
Read part one to learn more about how electric vehicle (EV) range guidelines are currently determined here.
Building Vehicle Thermal Simulation Models with GT-SUITE
As noted in part one of this series, the BEV range test procedure outlined in the SAE J1634 standard involves varying test conditions and drive cycles to estimate the final range. Simulation platforms such as GT-SUITE offer engineers the chance to streamline the entire EV range estimation process.
Using GT-SUITE, the first step in BEV range testing simulation is creating a model of the vehicle thermal management system. The model represents the system-level thermal management circuit of the electric vehicle and contains integrated models of the following circuits:
- High-Temperature (HT) – Cooling circuit
- Low-Temperature (LT) – Cooling circuit
- Indirect refrigerant circuit
- Cabin air circuit
- Under-hood air circuit
This thermal management model is then integrated with a vehicle model, representing the full vehicle and electric powertrain. Multiple control elements are implemented to regulate and adjust inputs to the system components such as the electric pump/fan/compressor, controlled valves, and others.
Automating BEV drive cycles with GT-Automation
Once the thermal management model is integrated in the vehicle model, there are multiple cycles that need to be simulated to obtain the adjustment factor for a single vehicle configuration. This is where GT-SUITE’s built-in app, GT-Automation, can be leveraged to efficiently evaluate the 5-cycle testing process.
GT-Automation can be used to create a ProcessMap that allows configuring a workflow of individual processes that are executed in a specific order. It enables users to model the flow of interest to simulate all the required cycles without any manual intervention and extract the relevant energy consumption values into GT’s range calculation Excel tool.
As pointed out in part one of this series, the multi-cycle test (MCT) contains a mid-test constant-speed section whose duration depends on the electric vehicle and its battery pack size. GT-Automation can also automate the determination of this mid-test steady-state phase duration, thereby eliminating the iterative process of selecting the steady-state speed duration for every vehicle configuration.
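As a rough illustration of what such an automated flow does, the plain-Python sketch below loops over a hypothetical 5-cycle test matrix and collects energy consumption values into a CSV for the range calculation step. The run_cycle and extract_energy helpers, the model file name, and the cycle/ambient pairings are placeholders, not the GT-Automation ProcessMap API.

```python
import csv

# Hypothetical placeholders standing in for the automated workflow:
# run_cycle() would launch one drive-cycle simulation, extract_energy()
# would pull the DC energy consumption [Wh/mi] from its results.
def run_cycle(vehicle_model: str, cycle: str, ambient_c: float) -> str:
    ...  # launch the solver for this cycle (placeholder)
    return f"results_{cycle}_{ambient_c:+.0f}C.out"

def extract_energy(result_file: str) -> float:
    ...  # parse the result file (placeholder)
    return 0.0

# Illustrative 5-cycle test matrix: cycle name and ambient temperature [degC];
# see the SAE J1634 text for the governing definitions.
test_matrix = [
    ("UDDS", 25.0), ("HFEDS", 25.0), ("US06", 25.0),
    ("SC03", 35.0), ("FTP_cold", -7.0),
]

rows = []
for cycle, ambient in test_matrix:
    result = run_cycle("bev_thermal_model.gtm", cycle, ambient)
    rows.append({"cycle": cycle, "ambient_C": ambient,
                 "energy_Wh_per_mi": extract_energy(result)})

# Hand the collected energies to the range-calculation spreadsheet step.
with open("cycle_energies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```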
Calculating Range with GT’s Excel Tool
Lastly, engineers can use GT’s range calculation Excel tool which includes the required formulas from the SAE J1634 regulation embedded in the cells to estimate the adjustment factor and subsequently compute the label range.
Additionally, GT’s range calculation Excel tool lets users manually tweak the energy consumption values of individual drive cycles to quickly understand the impact of different drive cycles and their dependencies on the adjustment factor and range. This further helps optimize control strategies and energy consumption for different vehicle configurations.
Example Results for Two Vehicle Configurations 
The range of a typical passenger BEV was simulated according to both the 2-cycle and 5-cycle methodologies outlined in the SAE J1634 standard for two different configurations. Using the 5-cycle adjustment factors, both vehicle configurations gained approximately 5-7% in EPA-certified range, making 5-cycle testing a favorable option for manufacturers.
This does not mean, however, that the 5-cycle testing option will always improve the adjustment factor and range for every vehicle configuration. The outcome ultimately depends on the efficiencies of the propulsion components and the electrical load requirements over the three additional drive cycles, which play a vital role in determining the adjustment factor.
Learn More About our Battery Simulation Capabilities
If you’d like to learn more or are interested in trying GT for automating the 5-cycle test process flow for EV adjustment factor and range improvements, speak to a GT battery expert here.
If you missed part 1 of this series, read more here.
To learn more about battery simulation solutions, check out our battery modeling page and learn more about our hybrid and electric simulation solutions. Also, check out our top 15 battery-related blogs in this list!
Calculating Electric Vehicle Range with Simulation
How is Electric Vehicle Range Tested?
Range anxiety is one of the biggest concerns of consumers looking to purchase an electric vehicle (EV). Because of this, EV manufacturers need accurate range predictions to build trust and quell range anxiety.
The range of battery electric vehicles (BEVs) has historically been determined in North America using the 2-cycle test methodology from the SAE J1634 standard. The 2-cycle test procedure, like the single-cycle test (SCT) and multi-cycle test (MCT), generally uses standalone city and highway speed profiles, or sometimes a combination of the two.
The Five-Cycle Testing Guidelines for Electric Vehicles
The single-cycle test (SCT) is a full-deplete test, meaning that the vehicle is driven in a repeating city or highway drive cycle until the battery dies. This can take a long time and consumes significant resources, placing logistical strains on test facilities. Also, additional test cycles beyond the SCT are needed to better characterize the effects of temperature and accessory loads on EV range performance.
These constraints led the Environmental Protection Agency (EPA) to adopt new methodologies for testing and determining the range of BEVs: the multi-cycle test (MCT), the short multi-cycle test (SMCT+), and the 5-cycle test procedures. The MCT and SMCT are full-deplete tests and combine standard dynamic drive cycles (UDDS, HFEDS, or US06) with constant-speed driving phases. The goal of the standard dynamic drive cycles is to determine the energy consumption associated with specific, established driving patterns, while the constant-speed profiles rapidly discharge the battery, consuming less time and fewer resources than the SCT. The standard MCT procedure consists of four UDDS cycles and two HFEDS cycles in a specified sequence, including mid-test and end-of-test constant-speed “battery discharge phases” (CSC) whose duration varies with the vehicle and the size of its battery pack. The speed profile for the MCT is shown in Fig. 1.
The SMCT includes AC energy consumption in its range determination by means of a shorter test compared to the MCT. It accomplishes this by changing the order of the cycles and including a US06 cycle. Unlike the MCT, the SMCT is not a full-deplete test; the remaining battery energy must be depleted separately, which is often done by simply driving the vehicle at a steady-state speed (this variant is called SMCT+).
Lastly, the EPA 5-cycle procedure encompasses high vehicle speeds, aggressive vehicle accelerations, use of the climate control system, and cold ambient conditions in addition to the standard city and highway drive cycles used in the SCT, MCT, or SMCT+. The 5-cycle test does a better job of reflecting typical driving conditions and styles, producing energy consumption ratings that are more representative of a vehicle’s on-road range. The different testing options for 5-cycle EV certification are shown in Fig. 3.
Leveraging the Adjustment Factor to Improve EV label-range
Every original equipment manufacturer (OEM) is required to run at least the 2-cycle test to certify EV range in North America, but the drive cycles in the 2-cycle test are low-speed tests that aren’t truly representative of real-world driving. The EPA therefore applies an adjustment factor to yield a range closer to what customers actually experience. The default adjustment factor is 0.7, which reduces the raw range by 30% when an OEM opts to certify range with just the 2-cycle methodology. For example, a car that achieves 500 miles of range during a 2-cycle test ends up with a 350-mile label range using the default adjustment factor. However, the EPA allows manufacturers the option to run three additional drive cycles (US06, SC03, and the cold FTP drive cycle) and use those results to earn a more favorable adjustment factor. The applied adjustment factor can never be less than 0.7; if the factor estimated from the 5-cycle test comes out below 0.7, the default of 0.7 is applied instead.
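The label-range arithmetic itself is simple; the short sketch below reproduces the 500-mile example above, assuming the applied factor is simply the larger of the 5-cycle estimate and the 0.7 default.

```python
def label_range(raw_range_mi: float, adjustment_factor: float = 0.7) -> float:
    """Label range = raw (2-cycle) range x adjustment factor.

    The factor defaults to 0.7; a 5-cycle-derived factor may replace it,
    but the applied value never drops below 0.7.
    """
    return raw_range_mi * max(adjustment_factor, 0.7)

print(label_range(500.0))        # default factor      -> 350.0 miles
print(label_range(500.0, 0.75))  # 5-cycle factor 0.75 -> 375.0 miles
```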
Using GT-SUITE to predict the 5-cycle adjustment factor
GT-SUITE, a multi-physics simulation platform, can automate the entire 5-cycle process outlined in the SAE J1634 regulation to predict the adjustment factors for various vehicle configurations, giving users a basis for choosing a testing option for EV range certification. In addition, by using a Python script, GT helps users eliminate the iterative steady-state calculations involved in the MCT and the manual extraction of the energy consumption data required to estimate the adjustment factor. The test conditions and cycle information for the MCT and the standalone 5-cycle tests are highlighted in Fig. 4.
Fig. 4: Range testing driving profile and test conditions
The BEV range test procedure outlined in SAE J1634 involves varying test conditions and drive cycles to estimate the final range. GT offers users the chance to streamline the entire EV range estimation process and takes it a step further to automate the required drive cycles to compute the adjustment factor.
Learn More About our Battery Simulation Capabilities
Stay tuned for Part 2 of this blog series, where we will discuss more about the implementation and automation of various 5-cycle test conditions in GT-SUITE to calculate the 5-cycle adjustment factor in EVs.
If you are interested in learning more about battery simulation, check out our battery modeling page and learn more about our hybrid and electric simulation solutions. Also, check out our top 15 battery-related blogs in this list!
If you would like to reach out, email [email protected] or contact us here.
Top 10 Gamma Technologies Blogs of 2022!
From battery thermal runaway to fleet route optimization, there is a blog for every simulation!
Since the inception of GT-SUITE, Gamma Technologies has offered state-of-the-art simulation solutions for manufacturers. Our simulation solutions help guide customers and partners toward highly optimized products.
In no order, these are the top 10 blogs of 2022!
- Simulating Your Way to HVACR Innovation
- How a Catastrophic Ship Fire Reminded us Why Battery Thermal Runaway Simulation is Important
- Reducing Costs & Increasing Efficiency in Power Converter Design
- Using Simulation to Model Closed-Cycle Argon Hydrogen Engines
- Sensitivity Analysis: How to Rank the Importance of Battery Model Parameters Using Simulation
- Accelerate Electric Aircraft Design Certification with Systems Simulation
- Vehicle Modeling and Simulation: ICEV & BEV Correlation Procedure
- How to Automate Real World Vehicle Route Generation Using Simulation
- A Look Inside Large-Scale Electrochemical Storage Systems Simulation
- Simulating a NASA Hydrogen Powered Rocket
Other Gamma Technologies Blogs to check out in 2022!
- Using Simulation for Battery Engineering: 12 Technical Blogs to Enjoy
- Machine Learning Simulation: HVACR Industry
- Fast, Accurate Full Vehicle Thermal Management Simulation with GT-SUITE and TAITherm
- Using Simulation To Predict Battery Aging for Real World Applications
- How Simulation Can Increase Productivity in Electric Vehicle Thermal Management Design
- Using Simulation to Optimize Driving Routes and Vehicle Emissions
- How Simulation Is Used To Design ICE vs. Battery Electric Vehicle Thermal Management Systems
- Are Your Vehicle Passengers Comfortable? How to Validate An Accurate, Thermal Cabin Management Simulation Solution
Shout-outs to our colleagues for their contributions!
Learn more about our simulation solutions!
If you’d like to learn more about how Gamma Technologies can be used to solve your engineering challenges, contact us here!
Have a great holiday season and wishing you a healthy & prosperous 2023!
How to Automate Real World Vehicle Route Generation Using Simulation
Gamma Technologies’ Solutions for Creating Driving Routes with Simulation
Ding! Your package is eight stops away. Thanks to smartphones and the internet, you can view a live map and watch with excitement as the delivery vehicle arrives at your house. This is quite a transformation from a decade ago, when all you’d receive was a tracking number and infrequent updates with nothing more than a location and an expected delivery date.
Gamma Technologies is looking to transform simulations of real driving routes by offering GT-RealDrive. Gone are the days of having to physically drive a vehicle and record its GPS data in order to re-create a driving route. Instead, this can now all be done virtually using GT-SUITE with GT-RealDrive and an internet connection.
To get started, simulation engineers using Gamma Technologies’ GT-SUITE open GT-RealDrive and simply enter the “Start and End” locations, just as they would in a navigation application. They then press the “Calculate Route” button, and the route is ready to be simulated!
GT-RealDrive takes care of the rest by generating an optimal route using the “Start and End” locations and estimating local traffic conditions. Users may apply this newly calculated route in an existing GT-SUITE vehicle model simulation.
Also within GT-SUITE, to further automate and simulate numerous route conditions, users may use GT’s productivity tool, GT-Automation. With GT-Automation, simulation engineers can instantly create GT-RealDrive route options with the use of Python scripting.
In the example below, a delivery truck in Midtown Manhattan, New York City going through a last mile delivery route (dropping off packages at customer locations) is simulated. This simulation explores real world vehicle performance using GT-RealDrive and GT-Automation.
STEP 1
First, we start with a manifest: the warehouse address followed by all the addresses where packages need to be delivered (last mile delivery). This data can be in any format that is easily read by Python (e.g., csv, txt, xlsx, py, etc.).
STEP 2
Next, write a short Python script that reads in these addresses and creates a ProfileGPSRoute object for each leg (from one address to the next). The legs are then combined into a ProfileGPSRouteMulti object with stop times at each address to represent the driver stopping the vehicle and getting out to deliver the package. Writing such a script simply requires some basic knowledge of Python and the GT-SUITE Python API documentation.
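A minimal sketch of such a script is shown below. The ProfileGPSRoute and ProfileGPSRouteMulti object names come from the description above, but the create_gps_route and combine_legs helpers and the example addresses are hypothetical stand-ins; refer to the GT-SUITE Python API documentation for the actual calls.

```python
# Sketch of the address-to-route-legs loop described above. The helpers
# below are hypothetical placeholders, not the GT-SUITE Python API.
def create_gps_route(start: str, end: str):
    ...  # placeholder: build one ProfileGPSRoute leg from start to end
    return (start, end)

def combine_legs(legs, stop_time_s: float):
    ...  # placeholder: assemble the legs into a ProfileGPSRouteMulti object
    return {"legs": legs, "stop_time_s": stop_time_s}

# In practice these would be read from the delivery manifest (csv, txt, xlsx);
# they are hard-coded here so the sketch stands alone. First entry = warehouse.
addresses = [
    "Warehouse, W 34th St, New York, NY",
    "Customer A, 5th Ave, New York, NY",
    "Customer B, Broadway, New York, NY",
]

# One leg per consecutive address pair, then combine with a stop at each drop.
legs = [create_gps_route(a, b) for a, b in zip(addresses, addresses[1:])]
route = combine_legs(legs, stop_time_s=120.0)   # 2-minute stop per delivery
print(f"Built {len(legs)} legs for the delivery route.")
```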
STEP 3
Once the script is written, all that is left to do is run the script and sit back as GT-Automation & GT-RealDrive create all the route legs and assemble them all into a single, optimized route (shown below). The delivery route is then ready to be applied and used on any GT-SUITE vehicle model.

Copyright of Mapbox. You may not remove any proprietary notices or product identification labels from Mapbox’s services.
How Simulation Can Be Used Beyond Last Mile Delivery
While we showcased an example using a last mile delivery route, similar challenges arise for any route with many stops, such as buses, garbage trucks, ride shares, and so forth. The same process can easily be applied to any of these situations.
If you are interested in learning more, please contact us.
How Simulation Is Used To Design ICE vs. Battery Electric Vehicle Thermal Management Systems
Understanding Vehicle Thermal Management
Vehicle electrification across the transportation industry is being driven by demands for reducing emissions and increasing fuel economy. However, engineering these electrified vehicles comes with a new set of challenges for thermal management of the powertrain and cabin. In this blog I will discuss some of these new challenges for battery electric vehicle thermal management and how it compares to combustion engine vehicles. But first, I’ll discuss some common traits between thermal management of both vehicle types.
Similarities Between ICE vs. Battery Electric Vehicles Thermal Management Systems
The goals for thermal management system design remain the same regardless of the powertrain: to keep the powertrain components in their desired temperature range, and to provide a comfortable cabin for the occupants. The optimal design should balance energy usage, system cost, and reliability. In cold environments, the thermal management system should enable fast warmup of the vehicle. Both battery electric vehicles (BEV) and internal combustion engine (ICE) vehicles are less efficient at cold temperatures. In warm environments, excess heat from the powertrain needs to be rejected to the environment to prevent damage to the components. In addition, the cabin temperature needs to be controlled for a comfortable driving experience.
Similar types of components are used in combustion engine vehicles and battery electric vehicles. A single-phase coolant loop would likely use an ethylene glycol and water mixture as the working fluid, with a pump, a liquid-to-air heat exchanger, and a control valve to manage the coolant flow. A cooling fan is used to enhance the air flow through the heat exchanger at low vehicle speeds. Previously, mechanically driven pumps and fans were standard, but electrically driven components are now used for greater system control. A two-phase refrigeration system is necessary to provide additional cooling below the environment temperature.
The integration of other systems is also an important consideration for transient analysis and controls. Different thermal strategies may be needed depending on the powertrain demands, component temperatures, and environment temperatures. For both a combustion engine and battery electric vehicle, a system that performs well at steady state conditions may not be sufficient to manage temperatures for transient driving cycles. The heat produced by powertrain components at ideal operating temperatures will be different than the heat generated at warmer or colder temperatures, and de-rating of the powertrain may be necessary to prevent component damage. In both types of vehicles, the demands for heating or cooling the cabin will impact the cooling circuit temperatures.
Differences Between ICE vs. Battery Electric Vehicles Thermal Management Systems
The most obvious difference between the combustion engine vehicle and the battery electric vehicle is the heat source. In the electric vehicle, the primary waste heat to the coolant is from the motor, power electronics, and battery. If this waste heat is not sufficient, an auxiliary heater or two-phase system can be used to add heat and bring the components up to their operating temperature. Whereas in the combustion engine, the primary heat source is from the combustion process. Additional heat is added to the coolant from the engine and transmission oil caused by friction in those components.
These differences in the heat sources lead to differences in the operating temperatures of the components. The combustion engine operates at high temperatures, which allows the coolant to be used to warm the cabin in cold environments or to reject heat to the environment at higher temperatures. In more complicated combustion engine cooling systems, a separate lower-temperature loop may be used to provide coolant for a charge air cooler or a water-cooled condenser. This separate coolant loop also operates above ambient temperature and can reject heat to the environment using a coolant-to-air heat exchanger. In the battery electric vehicle, the motor and power electronics can operate at higher temperatures, but the ideal battery temperature range is between 20 °C and 40 °C. This requires a refrigeration system to provide additional cooling for the battery, because the ambient air may not be enough in warm environments.
The differences in temperature requirements and operating conditions among the components in the BEV increase the complexity of its cooling system. Additional cooling is only required for the battery, so a separate cooling loop linked to the refrigeration system can be utilized for the battery. Cooling this smaller loop below ambient, rather than the full cooling loop, requires less energy to run the compressor, which increases the vehicle range. The requirement to heat the battery in cold environments calls for an auxiliary heater, operation of the refrigeration system in heat pump mode, use of waste heat from the motor and power electronics, or some combination of these strategies. To achieve these goals with a single system, multiple pumps and valves are necessary, and more complex controls to route the coolant and optimize the pump speeds are required for efficient operation. In contrast, the combustion engine cooling system can typically be satisfied with a single coolant loop unless a charge air cooler requires additional cooling at a lower temperature.
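As a rough illustration of the sizing trade-off for such a dedicated battery loop, the sketch below applies the standard coolant energy balance Q = m_dot * cp * dT; the heat load, coolant properties, and allowed temperature rise are illustrative assumptions, not values from any specific vehicle.

```python
# Rough sizing check for a dedicated battery coolant loop (illustrative
# values): Q = m_dot * cp * dT, with a 50/50 ethylene glycol-water mix.
q_battery = 4000.0      # heat to remove during fast charging [W]
cp_coolant = 3300.0     # specific heat of 50/50 EG-water [J/(kg K)], approx.
dT_allowed = 5.0        # allowed coolant temperature rise [K]

m_dot = q_battery / (cp_coolant * dT_allowed)   # required mass flow [kg/s]
print(f"Required coolant flow: {m_dot:.2f} kg/s "
      f"({m_dot * 60.0 / 1.07:.0f} L/min approx.)")
```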
How Simulation Is Used For Thermal Management System Designs
With the increased interaction between the vehicle systems in a BEV, an integrated system simulation is necessary for optimal design. Over a transient driving cycle, the thermal management of the battery and cabin need to be energy efficient to maximize the vehicle range. During a fast-charging event, the battery temperature needs to be carefully managed to prevent unnecessary cell aging. For a rapid acceleration or towing event, the motor and inverters need to be properly cooled to prevent component damage. GT-SUITE is the optimal simulation platform to manage these simulation needs by providing:
- Industry leading sub-system models
GT-SUITE simulations are recognized across the automotive industry for their accuracy and flexibility. Our publications page highlights customer use cases for every vehicle system across the electrical, mechanical, thermal, fluid, chemical, and controls domains.
- Detailed component models and real-time capability
GT-SUITE provides detailed simulations for individual components that will greatly enhance the model capabilities. For the battery and motor, the temperature distributions over a driving cycle or fast-charging event in a 3D finite element model can predict hot spots and the effects of different cooling strategies. Electro-chemical models of the battery can predict the cell aging over a vehicle life cycle. In addition, the 3D cabin comfort model linked to GT-TAITherm can accurately predict occupant comfort over a wide range of vehicle conditions. These detailed models can be reduced to a real-time capable model for software or hardware in the loop simulations.
- Robust model integration
GT-SUITE is designed to properly model the interaction between vehicle systems in an integrated model. For example, the heat generated within the motor and battery can be added as a source term in the thermal component models, with individual component temperatures used to calculate the correct performance within the electrical and mechanical system models. By building these sub-system models in the same tool, it is easy to model the interaction between them and change the simulation parameters for different analyses.
Closing Thoughts on Thermal Management System Design
The design of electric vehicles requires additional complexity for properly managing the battery, motor, power electronics, and cabin temperatures. The interaction between the single-phase and two-phase systems must be included to accurately predict the battery temperatures over a range of operating conditions. More complex controls are needed to create a robust and efficient system. Because of these complexities and enhanced interactions, simulation is necessary for system design. We will be expanding on these topics to discuss the component and system models in subsequent blog posts.
If you’d like to learn more or are interested in trying GT-SUITE to understand thermal management in ICE or xEV, Contact us!
Written by Brad Holcomb
This blog was originally published on May 26, 2021
Using Simulation to Optimize Driving Routes and Vehicle Emissions
Setting Up An Integrated Vehicle and Aftertreatment Simulation
Ever wondered what kind of emissions your car produces when driving to work or your favorite restaurant? With GT-SUITE and a few hours of hard work, you can have your answer! Learn how to set up such a simulation using our software, as detailed in our SAE paper entitled, A Study Examining the Effects of Driver Profile and Route Characteristics on Vehicle Performance and Tailpipe Emissions under Virtual Real Driving Scenarios, which is summarized in this blog.
Because GT-SUITE is a versatile multi-physics simulation platform, an integrated model can be set up containing the vehicle, powertrain, and aftertreatment system. We used this capability to create a model of a turbo diesel passenger car with an entire aftertreatment system, as shown below. This allowed us to simulate the final emissions that the vehicle produces, often referred to as ‘tailpipe-out’ emissions.
Next, we needed to create the real driving routes. While we couldn’t agree on whose favorite restaurant to simulate driving to, we did decide that a long cruise around the Los Angeles area was a great choice (see below), along with a coast-to-coast drive. The two routes were then created using GT-RealDrive, a built-in application within GT-SUITE for creating real driving routes. It works nearly identically to navigation apps like Google Maps; the only difference is that instead of physically driving the route, we do it virtually. It accounts for live traffic conditions, stop lights, and elevation, all of which have an impact on the vehicle’s emissions.
Lastly, we realized that who was driving the car would impact the emissions. Rather than pick a single driver, we set up two different drivers spanning a range from conservative to aggressive, reflecting how we all drive slightly differently. With our model and routes set up, the simulations were run and produced a variety of results, some of which are highlighted below.
Results of Simulation: Tailpipe Emissions
As expected, the more conservative driver generated lower emissions than the aggressive driver and was generally more fuel efficient. During the cruise around Los Angeles, the aggressive driver used 6% more fuel and produced nearly twice the amount of nitrogen oxides (NOx), a pollutant linked to smog and acid rain.
However, for the coast-to-coast route, this difference was quite small, as both drivers used cruise control for the majority of the 2,817-mile drive, which minimized the impact of their behavior. As a result, the fuel economy and emissions were similar for the two drivers. When normalized on a distance basis, the New York to California route produced lower emissions and was more fuel efficient than the Los Angeles cruise.
Further results and more in-depth details such as the impacts of traffic light duration, the start-stop system, and effects of sulfur poisoning and platinum oxidation on the emissions are available in the paper.
Learn More About Generating Real Driving Routes and Predicting Emissions
Be it for regulatory compliance assessment, initial design, robustness, or simply curiosity, GT-SUITE can simulate the emissions produced by vehicles while driving real routes. This is further simplified by GT-RealDrive, which generates the routes virtually rather than relying on recorded GPS data.
For further information about this paper or about emissions simulation in GT-SUITE, please contact us here.
How Simulation Can Increase Productivity in Electric Vehicle Thermal Management Design
What To Consider When Designing Thermal Management Systems in Electric Vehicles
Thermal management system design in the electrification era requires a complex approach to ensure vehicle performance and customer satisfaction. An electric vehicle’s (EV) system design and controls influence the vehicle’s range, cabin comfort, and performance, and it is important to use a robust approach to understand how the relationships and tradeoffs within these systems affect targets. Robust virtual analysis allows such studies to occur efficiently, from fast-running system-level models to detailed thermal analysis of subsystems.
Simulation Can Assist EV Thermal Management Design Productivity
Critical systems in an EV, such as battery packs and integrated motors, require proper cooling to meet performance and range requirements. With the increasing scope of model capabilities comes the need to edit boundary conditions and test cases quickly and efficiently for multiple parties’ needs. GT-Play, a web-based interface for GT-SUITE, directly assists with this by providing a central location for model download, analysis, and design decisions, with a select group of experts overseeing the model’s uses and capabilities.
Here’s a walkthrough of an integrated thermal model that has been uploaded to GT-Play for three different end users:
1. Vehicle Test Engineer: The test engineer needs an efficient way to compare experimental results with simulation results to validate the system-level model. This requires the ability to change test conditions and edit boundary parameters, but the engineer has no experience with GT. To do this, they reached out to the modeling expert to set up such a comparison within the GT-Play platform, as highlighted below:
2. CAE Engineer: In this case, a design engineer needs to understand how their decisions regarding a battery cold plate affect the thermal system performance. With the previous design already built into a system model, they reached out to the model expert to quantify the differences between the original design and the new layout, and received access to the study in GT-Play, which highlights the most important outputs. The general problem and setup are shown in the images below:
3. Calibration Engineer: This engineer needs to understand how different thermal control parameters affect vehicle performance. Specifically, they need to understand how changing valve switching inputs affects the cooling of critical components (battery, motors) versus how it affects energy efficiency. Since testing this use case would be time intensive, the engineer reached out to the model expert to build this study virtually and specified the results they would need to make a decision. The controls that will be analyzed and reviewed are highlighted in the image below:
See How These Simulations Can Be Applied to EV Thermal Design in a Live Webinar
On September 7th, SAE and Gamma Technologies will offer a FREE, live webinar: ‘How to Increase Productivity in EV Design by Leveraging Thermal Simulation.’ In this webinar, we will discuss how critical systems, such as battery packs and integrated motors, can meet performance and range requirements through simulation. This starts with component selection and moves forward with detailed CFD analysis before the components are merged into a larger system using unique productivity tools.
This webinar will help you understand the process of building, uploading, and analyzing such a complex model in GT-Play. You will also learn how these capabilities can be utilized within GT-SUITE through the three real-world use cases mentioned earlier.
Register today: https://hubs.ly/Q01jRcZb0
Learn More about our Thermal Simulation Applications
View this curated page on our thermal simulation applications here.
Bio of Author:
Joseph Solomon is a Solutions Consultant at Gamma Technologies, focusing on electrification solutions. Joseph assists GT users in battery design, e-powertrain system analysis, thermal management, and controls development. In 2021, Joseph applied GT-AutoLion to complete his master's thesis in mechanical engineering at the University of Michigan, titled ‘Investigating Lithium-Ion Battery Performance with an Electrochemical-Mechanical Model.’ Contact Joseph here!
Using Simulation To Predict Battery Aging for Real World Applications
What is The Warranty On a Lithium-ion Battery?
Over the years, lithium-ion technology has expanded into numerous applications, ranging from products as large as planes and ships to products as small as power tools and cell phones, and everything in between. Because of the inevitable degradation of lithium-ion cells, the lifespan of these products will likely be limited by the degradation of the Li-ion batteries they use. In many instances, products may have special warranties for their battery system. For instance, the Tesla Model S and Model X have a special battery warranty of 8 years or 150,000 miles; Dell laptops offer 1- or 3-year battery warranties; and Makita offers 3-year warranties on most of the batteries in its power tools.
Determining how to warranty a battery is no easy task. If the warranty is too short, there is a risk that fewer consumers will purchase your product; conversely, if the warranty is too long, there is a risk of a significant number of warranty claims down the road. Mistakes in either direction are expensive.
To compound the difficulty of warrantying a battery pack, most companies selling products with Li-ion batteries do not manufacture the Li-ion cells. They simply buy cells from cell suppliers and package them into their systems. The engineers at these companies are focused on developing battery electric vehicles (BEVs), electric vertical take-off and landing (EVTOL) aircraft, power tools, or consumer electronics, not Li-ion cells. Therefore, the people tasked with warrantying a battery are often not equipped with the proper information or knowledge about Li-ion technology to accurately predict battery lifetime. However, by utilizing a minimal amount of available data combined with physics-based simulation software, these engineers can predict battery lifetime and therefore make more confident battery warranty decisions.
How is Battery Degradation Measured & How Simulation Can Predict Battery Aging
To help cell buyers determine how to warranty a battery, cell suppliers often quantify battery degradation in two ways: calendar degradation and cycle degradation.
Calendar degradation measures how a cell degrades while it is exposed to zero current for an extended period of time. These are sometimes referred to as “shelf life” tests because the cells can simply be placed on a shelf, forgotten about, and periodically tested.
Cycle degradation measures how a cell degrades while being cycled between fully charged and fully discharged over and over at constant currents.
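As a rough illustration of how these two aging modes are often summarized, here is a toy empirical fade model in Python, assuming square-root-of-time calendar fade, cycle fade proportional to the number of full cycles, and a simple temperature acceleration. The functional form and all coefficients are illustrative assumptions, not GT-AutoLion's physics-based degradation models.

```python
import numpy as np

def capacity_retention(days_stored, full_cycles, temp_c,
                       k_cal=0.0045, k_cyc=4e-5, ref_temp_c=25.0):
    """Toy estimate of capacity retention (fraction of beginning-of-life capacity).

    Assumes square-root-of-time calendar fade plus cycle fade proportional to the
    number of full cycles, both accelerated with temperature (roughly doubling
    every 10 C). Coefficients are illustrative placeholders, not measured data."""
    accel = 2.0 ** ((temp_c - ref_temp_c) / 10.0)
    calendar_fade = k_cal * accel * np.sqrt(days_stored)
    cycle_fade = k_cyc * accel * full_cycles
    return max(0.0, 1.0 - calendar_fade - cycle_fade)

# Two years of storage at 35 C plus 500 full cycles
print(f"retention ~{capacity_retention(730, 500, 35.0):.1%}")
```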
Generally, cell suppliers inform their customers about their cell's degradation by including calendar and cycle life data in detailed cell documentation, where calendar and cycle aging are given in plots with capacity retention (% of beginning-of-life capacity) on the Y-axis and cycle count or calendar days on the X-axis. As seen in the example images below, both tests can be run at different temperatures.
Neither of these tests is truly representative of what a Li-ion cell will experience in the real world, so an engineer responsible for determining the warranty of a battery in a system (such as a cell phone, hybrid vehicle, or plane) may not find this data very helpful.
No BEV, EVTOL, or other product will be used in scenarios reflective of calendar or cycle aging tests. BEVs will be used to commute back and forth to work, pick up groceries from supermarkets, and go on the occasional road trip. EVTOLs will subject their batteries to demanding loads to transport people around large cities. Power tools and consumer electronics may see extended periods of rest with occasional intense usage. Additionally, due to weather patterns and varying usage and charging patterns between different consumers, the realistic demands that a battery may see in its lifetime can be very difficult to replicate in laboratory conditions.
Simulation tools like GT-AutoLion and GT-SUITE provide a unique solution that enables engineers to use the provided cycle and calendar aging to gain meaningful insights into battery aging under more realistic scenarios.
As shown in many technical papers, physics-based models of Li-ion battery performance and aging in GT-AutoLion can be calibrated to match experimental data, such as capacity fade and resistance growth during calendar and cycle aging.
Because the degradation models in GT-AutoLion are physics-based, they can be calibrated to match this type of data and then used to predict how Li-ion cells may age under other conditions, in any application. In the case of a power tool supplier, these conditions can include typical usage patterns for various tools, including chain saws, drills, and even rotary tools.
Using Simulation to Predict Battery Degradation of Battery Electric Vehicles (BEV)
In the case of a complex system like a battery electric vehicle (BEV), it's important to model interactions between the battery and other systems, such as thermal management, in order to accurately capture how the complete system affects battery life. For example, to predict the power demand on a BEV's Li-ion battery during a drive cycle representing a typical owner's commute, a system-level model of the vehicle can be built using GT-DRIVE+. These drive cycles can then be repeatedly applied to a GT-AutoLion model, along with realistic rest times, in order to predict how a pack degrades in a real-world scenario. On top of this, other variables can be studied to understand their effect on battery degradation, such as weather patterns and even charging patterns and strategies.
By combining the power of GT-DRIVE+ and GT-AutoLion, engineers have more meaningful aging predictions that can be quantified in terms of “Miles” or “Years of Operation” as opposed to “Cycles,” which then increases confidence when determining how to effectively warranty batteries.

Once Aging Models are calibrated, system behavior can be incorporated to predict more meaningful aging metrics.
In my next blog, I’ll discuss how GT-AutoLion predicts not only the capacity fade of a cell, but also predicts the performance of a system after a battery has begun degrading.
NOTE: This piece was originally published in June 2020.
Vehicle Modeling and Simulation: ICEV & BEV Correlation Procedure
Vehicle-level simulation is a rapidly expanding technique that most OEMs are exploring to help reduce testing cost and time. To be effective, these simulations must accurately represent the vehicle being simulated. This can be achieved in two steps: first, by collecting the data required for modeling and feeding it into a simulation tool to create a virtual replica; second, by making sure that the model is well validated, or correlated. In this blog, we highlight some tips that simplify the model validation process.
Conventional Vehicle (ICEV) Correlation Procedure:
Let us consider that user A is working on a conventional vehicle and trying to extract a fuel economy output. The user fed in all the inputs required by the vehicle model, including some assumptions made due to missing data, and found a slight mismatch between the simulated mileage and test results. For background, mileage is a simulation output indicating how much distance a vehicle travels, on average, per unit volume of fuel consumed, typically in units of mpg or kmpl. This final outcome depends on many quantities: the BSFC/fuel-rate map input (for a map-based engine) and the engine operating points, which in turn depend on drag coefficient, frontal area, rolling resistance, tire rolling radius, gear reductions, driveline efficiencies, inertias, effective mass, GVW, shift pattern, and more. It can be hard to identify which of these quantities is the culprit for the mismatch with respect to test results. Hence, we have come up with a procedure that helps you eliminate a few parameters at a time, making your model correlation task easier, with the ultimate goal of creating a virtual replica of your actual system or subsystem. Additionally, calibrated GT predictive engine (GT-POWER) models can be used to generate engine maps such as BMEP and BSFC.
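To illustrate how these quantities chain together, here is a back-of-the-envelope Python sketch of a steady-cruise mileage estimate built from road load and a single BSFC point. All parameter values are illustrative placeholders; a full vehicle model sweeps the complete BSFC map over the entire drive cycle instead of using one operating point.

```python
def steady_cruise_mileage_kmpl(v_kph, bsfc_g_per_kwh=240.0, fuel_density_g_per_l=745.0,
                               mass_kg=1200.0, cd=0.32, frontal_area_m2=2.2,
                               c_rr=0.011, driveline_eff=0.90, rho_air=1.2, g=9.81):
    """Back-of-the-envelope steady-cruise mileage (km/l) from road load and a single
    BSFC point. All parameter values are illustrative placeholders."""
    v = v_kph / 3.6                                                 # vehicle speed, m/s
    road_load_n = (0.5 * rho_air * cd * frontal_area_m2 * v ** 2    # aerodynamic drag
                   + c_rr * mass_kg * g)                            # rolling resistance
    brake_power_kw = road_load_n * v / driveline_eff / 1000.0
    fuel_rate_l_per_h = brake_power_kw * bsfc_g_per_kwh / fuel_density_g_per_l
    return v_kph / fuel_rate_l_per_h

print(f"~{steady_cruise_mileage_kmpl(80.0):.1f} km/l at a steady 80 km/h")
```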
Battery Electric Vehicle (BEV) Correlation Procedure:
On the other hand, let us consider that user B is working on an electrified vehicle and trying to extract the current and voltage response of a battery to a standard drive cycle like the IDC or NEDC. They found a mismatch between the simulated current and voltage results and test data. For background on a typical vehicle model workflow: the driver decides the power demand based on the target drive cycle and the corresponding resistive forces (aerodynamic drag, rolling resistance, road grade, driveline inefficiency, etc.). Motor speed primarily depends on the gear reductions, tire rolling radius, and vehicle speed at that instant. From the power demand and current operating speed, the torque demand that the motor is asked to deliver to meet the cycle is derived. The power delivered by the motor is known as brake power. The motor and inverter efficiencies introduce losses, and the sum of these losses, the auxiliary loads, and the brake power is what the battery needs to deliver, also known as electrical power. Finally, based on the OCV-IR maps defined for an ECM (equivalent circuit model) and the fundamental equations of electrical circuits, we arrive at the current and voltage. The current and voltage response is thus one of the final outcomes and depends on many other factors, which makes it hard to pinpoint the exact reason for a mismatch between simulation results and test data. As for the ICEV, we have listed a procedure that simplifies this task and helps you eliminate and verify a few parameters at a time.
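To make this chain concrete, here is a minimal Python sketch of the same bookkeeping: road load to brake power to electrical power, followed by a pack current from a simple OCV-IR relation. All parameter values are illustrative placeholders, not GT-SUITE defaults.

```python
import math

def battery_electrical_power_w(v_mps, accel_mps2, grade=0.0, mass_kg=1800.0,
                               cd=0.30, frontal_area_m2=2.3, c_rr=0.010,
                               driveline_eff=0.92, motor_inverter_eff=0.90,
                               aux_load_w=500.0, rho_air=1.2, g=9.81):
    """Electrical power (W) the battery must deliver at one drive-cycle instant:
    road load -> brake power at the motor -> electrical power including aux loads."""
    f_road = (0.5 * rho_air * cd * frontal_area_m2 * v_mps ** 2   # aerodynamic drag
              + c_rr * mass_kg * g                                # rolling resistance
              + mass_kg * g * grade                               # road grade
              + mass_kg * accel_mps2)                             # inertia
    brake_power_w = f_road * v_mps / driveline_eff
    return brake_power_w / motor_inverter_eff + aux_load_w

def ocv_ir_current_a(power_w, ocv_v=350.0, r_ohm=0.08):
    """Solve P = (OCV - I*R) * I for the pack current (smaller quadratic root);
    assumes the requested power is within the pack's capability."""
    return (ocv_v - math.sqrt(ocv_v ** 2 - 4.0 * r_ohm * power_w)) / (2.0 * r_ohm)

p_elec = battery_electrical_power_w(v_mps=60 / 3.6, accel_mps2=0.5)
i_pack = ocv_ir_current_a(p_elec)
print(f"electrical power ~{p_elec/1000:.1f} kW, pack current ~{i_pack:.0f} A, "
      f"terminal voltage ~{350.0 - i_pack * 0.08:.0f} V")
```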
As described in the correlation flowcharts, at the initial stage of vehicle development, users might not have all the data required for vehicle modeling. Hence, to assist them, we have many options available within GT-SUITE. Some of the relevant ones are discussed below:
- Characterization: for ECM (Equivalent Circuit Model) model creation using test data.
- ECM Database: for ECM model creation in the absence of test data, built using the GT-AutoLion database.
- Static Analysis: for gear shift pattern generation.
Characterization:
GT offers a quick, easy-to-use tool capable of parameter estimation for electrical equivalent circuit models with 0 to 3 RC branches, called the Characterization tool. A glimpse is shown below:
Electrical Equivalent Model aka ECM Database:
Starting with v2022B1, you can find a database of 30 coin cells covering different chemistries and applications. This was developed using actual test data and is quite reliable in the absence of data at the initial stages of the development process. The database comprises 10 different chemistry combinations and 3 variants of each based on application (power dense, energy dense, or balanced). It can easily be scaled up to a cylindrical, pouch, or prismatic cell, and even up to a pack, by following the instructions in the template help of our electrical equivalent battery template. It comes as part of the installation and can be located in the following directory: %GTIHOME%\v2022\resrc\BatteryLibrary\ElecEq.
Static Analysis:
GT offers two modes for vehicle-level simulations: dynamic and kinematic. The kinematic (static) mode is used to perform the following tasks:
- Imposed speed analysis for drive cycle demand calculations.
- Grade Climbing Ability: Gradeability analysis over different gears, Tractive Force Calculations (gross and net), Tractive power calculations (gross and net), N-V curve, Acceleration potential, and others.
- Shift Strategy Generation and Optimization: Generation of shift strategy based on acceleration potential curves and drivability involving additional FE constraints.
- Controls Optimization for HEVs: ECMS, DP and Dynamic ECMS.
View Testimonials from the New Energy Leadership Summit in Bengaluru, India
Gamma Technologies partnered with ET Auto at a one-day summit in Bengaluru, India, that brought together eminent leaders from various firms (vehicle OEMs, component manufacturers, R&D and CAE leaders, simulation professionals, testing agencies, and others) who shared their thoughts on the new energy ecosystem, the challenges associated with it, and the role of simulation in tackling these challenges.
Click here to view our series of testimonials from this event on our YouTube channel!
Acronyms:
- RPM: Revolutions Per Minute
- MPG: Miles Per Gallon
- KMPL: Kilometers Per Liter
- BSFC: Brake Specific Fuel Consumption
- GVW: Gross Vehicle Weight
- BMEP: Brake Mean Effective Pressure
- IDC: Indian Drive Cycle
- NEDC: New European Driving Cycle
- OCV: Open Circuit Voltage
- IR: Internal Resistance
- VKA: Vehicle Kinematic Analysis
- ECM: Equivalent Circuit Models (Resistive/Thevenin)
- N-V: Motor/Engine RPM vs Vehicle Speed curve
- FE: Fuel Economy
- HEV: Hybrid Electric Vehicles
- ECMS: Equivalent Consumption Minimization Strategy
- DP: Dynamic Programming
- RC: Resistance and Capacitance
- FDR: Final Drive Ratio or Sprocket and Chain Ratio
- PGR: Primary Gear Reduction if present
- GR: Gear Ratio of transmission if present
Using Simulation to Model Closed-Cycle Argon Hydrogen Engines
Evaluating Renewable Hydrogen Fuel
With the increasing demand for fuel-efficient and low- to zero-emissions technologies in the automotive industry, renewable hydrogen fuel is regarded as a promising energy storage form for vehicles. Pure hydrogen combustion emits no CO2, and replacing nitrogen with a noble gas can eliminate pollutants such as NOx. As a result, hydrogen combustion in a noble gas is expected to eliminate both carbon emissions and other pollutant emissions.
Additionally, a higher ratio of specific heats, k = Cp/Cv, increases the theoretical thermal efficiency of an Otto cycle, given by the relation below:
η_th = 1 - 1/CR^(k-1)
where CR is the compression ratio.
Therefore, the use of a monoatomic working gas with a high specific heat ratio, such as argon, would ideally achieve much higher thermal efficiency than a conventional internal combustion engine using air (nitrogen) as the working gas. Figure 1 shows the relationship between theoretical thermal efficiency and the specific heat ratio of the working gas [1]. For a compression ratio of 10, the thermal efficiency improves by about 30% on a relative basis (roughly 18 percentage points on an absolute basis) when moving from k = 1.4 to k = 1.67.
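As a quick sanity check on these numbers, the ideal Otto relation can be evaluated directly; the short Python snippet below reproduces the approximate efficiencies for air and argon at a compression ratio of 10.

```python
# Ideal Otto-cycle efficiency: eta = 1 - 1 / CR**(k - 1)
CR = 10.0
for k, gas in [(1.40, "air (mostly diatomic N2)"), (1.67, "argon (monoatomic)")]:
    eta = 1.0 - 1.0 / CR ** (k - 1.0)
    print(f"k = {k:.2f} ({gas}): eta = {eta:.3f}")
# ~0.602 vs ~0.786: about 18 points absolute, or ~30% relative improvement
```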
Simulating Hydrogen Engines with GT-SUITE
To demonstrate that GT-SUITE can model a closed-cycle argon-hydrogen engine, Gamma Technologies has built and included a model of such a system in the recently released v2022 build 1. In the example model ‘Ar-H2_ClosedCycle_Engine’, several advanced modeling concepts are applied, including condensation of combustion products, removal of water from the system, and a semi-predictive condenser. Argon is recirculated in the system and oxygen is supplied via an injector object; hydrogen, the only fuel in this system, is injected directly into the engine cylinder. Upon completion of combustion, the exhaust gas passes through the condenser to convert the water vapor to liquid water, which is then removed from the system.
A critical aspect of the closed-cycle simulation is reaching a steady solution, which requires a strict balance of system mass: the mass entering the system must equal the mass exiting it. Otherwise, the constantly changing system mass prevents the simulation from converging on a steady result. Consequently, the oxygen supply and hydrogen injection must be carefully controlled to keep the simulation stable.
The example model is run at different argon fractions in the working gas, i.e., the ratio of argon in the argon-oxygen mixture. The trends of thermal efficiency and specific heat ratio of the in-cylinder gas vary with the argon fraction as shown in Figure 3, consistent with the theory described above. The slopes of efficiency and specific heat ratio shown in Figure 3 depend on the condenser design parameters in the example model; engine efficiency and specific heat ratio should be more sensitive to changes in argon fraction with a better-performing condenser.
Benefits of Hydrogen/Noble Gas Engines Simulation
The Ar-H2 closed-cycle engine simulation discussed in this blog uses a non-predictive combustion model in GT-SUITE. It allows the user to perform a similar proof-of-concept hydrogen/noble gas engine simulation and serves as a reference example for closed-cycle engine simulations. The research and development work on this topic can be extended in the future by combining it with a predictive combustion model, such as the SI Turbulent Flame Combustion Model, which has been widely used for gasoline combustion engine simulations.
References
[1] Kuroki, R., Kato, A., Kamiyama, E., and Sawada, D., “Study of High Efficiency Zero-Emission Argon Circulated Hydrogen Engine,” SAE Technical Paper 2010-01-0581, 2010, https://doi.org/10.4271/2010-01-0581.
Reducing Costs & Increasing Efficiency in Power Converter Design
Optimizing Power Converter Design through Simulation
Optimal power converter design requires a fine balance between design efficiency and physical testing costs. Finding the right efficiency point can be expensive for companies — simply jumping from 98% to 99% efficiency could double the converter cost. In this blog, we’re going to explore the technical approaches power converter design engineers may take to optimize their product designs as well as highlight simulation solutions such as GT-PowerForge.
Reducing Switching Losses
In power electronic conversion, losses are created by power semiconductor devices in conduction and switching. Conduction losses can be reduced by using a larger semiconductor section, but costs will increase as a bigger die or module needs to be used. Moreover, this solution negatively impacts the switching losses as the parasitic elements are increased.
Here is how the main levers for reducing switching losses impact cost (a simple first-order loss estimate follows the list below):
- Lowering the switching frequency: this increases the filtering needs, since more inductance and capacitance are required to keep the same ripple level; the inductors and capacitors therefore become more expensive.
- Using wide-bandgap (WBG) semiconductor devices: switching speed increases, but at a higher device cost (up to 5 to 10 times the cost of silicon devices).
- Using multi-level topologies: the voltage switched by each semiconductor is lower, so more devices are used but with lower voltage ratings. Since device cost is non-linear with voltage rating, the overall semiconductor cost depends on the application. Multilevel topologies usually need larger input filters (Flying Capacitor, NPC, T-type topologies) but smaller output filters, and the added complexity also carries a cost.
- Reconsidering cooling device size: proactively analyzing and optimizing the power converter to reduce losses also reduces the cooling demand, resulting in a lower-cost device.
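To make the conduction/switching trade-off concrete, here is a minimal first-order loss estimate in Python for a single switch, using a datasheet-style conduction drop plus switching energies scaled with voltage and current. The loss model and all numeric values are illustrative assumptions, not GT-PowerForge's loss models.

```python
def device_losses(i_rms_a, i_avg_a, v_dc, f_sw_hz,
                  v_drop=1.6, r_on_ohm=0.004,
                  e_on_j=0.005, e_off_j=0.004, v_ref=600.0, i_ref=100.0):
    """First-order conduction + switching loss estimate (W) for one switch.
    Conduction is modeled as a fixed drop plus a resistive term; switching energies
    are scaled linearly with voltage and current from a datasheet reference point.
    All numbers are illustrative placeholders, not real device data."""
    p_cond = v_drop * i_avg_a + r_on_ohm * i_rms_a ** 2
    scale = (v_dc / v_ref) * (i_avg_a / i_ref)
    p_sw = f_sw_hz * (e_on_j + e_off_j) * scale
    return p_cond, p_sw

p_cond, p_sw = device_losses(i_rms_a=180, i_avg_a=115, v_dc=800, f_sw_hz=10e3)
print(f"conduction ~{p_cond:.0f} W, switching ~{p_sw:.0f} W per device")
```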
Electric Vehicle Power Converter Simulation Case Study
To illustrate these previous assertions, let's explore these solutions on an electric vehicle's (EV) 3-phase inverter with a sine filter using power converter design software such as GT-PowerForge. (NOTE: for this example, the inverter nominal operating point will be 200 kW with a DC bus of 800 V, 400 V between phases, a frequency of 50 Hz, cos(φ) = 0.8, and space vector modulation.)
How Using Simulation Can Find the Best Trade-Off Between Cost and Efficiency
Si IGBT vs SiC MOSFET
Plotting the Si IGBT and SiC MOSFET solutions in the efficiency-versus-cost plane allows us to separate the plane in two:
- Solutions with efficiency below 98.3% equipped with Si IGBT devices have the lowest cost.
- Solutions above this efficiency equipped with SiC MOSFETs have the lowest cost. Although SiC MOSFETs are more expensive, increasing the switching frequency has a small impact on their losses while greatly reducing the sine filter cost. At very high switching frequencies, however, the gain on the filter is offset by additional heatsink cost and additional losses.

2 Level topologies vs Multilevel topologies
The results of this comparison are not intuitive at first glance. Below 98.7% efficiency, the 4-level flying capacitor topology with Si IGBTs is less expensive. While it costs more in semiconductor devices and capacitors, the AC filter is greatly reduced thanks to the higher apparent output switching frequency, and at low switching frequency the Si IGBT performs well in terms of both efficiency and cost. In this application, T-type and NPC topologies are not competitive because they need a large amount of DC-bus capacitance to keep the imposed 1% voltage ripple between the midpoint and the bus voltage, which greatly impacts cost.

2 Level topologies vs Multilevel topologies vs Si IGBT vs SiC MOSFET
The combinations of topologies and semiconductor device technologies have been tested as shown in Fig. 3. In terms of efficiency vs. cost, the flying capacitor topology with Si devices and the 2-level topology with SiC devices form the best solutions.
The solutions presented do not exclude the existence of a better one. In particular, loosening the ripple constraints on the NPC and T-type midpoint, evaluating the influence of the modulation strategy or of other semiconductor devices, and refining the optimal switching frequencies could all improve the result. Finally, the conclusions presented here only hold for this specific converter specification and cannot be generalized.
Figure 3: Efficiency vs. cost, 2-level vs. multilevel topologies and Si vs. SiC devices
Now that all dimensions have been explored and Pareto fronts identified, it is easier to pick a preferred solution for our converter efficiency vs. cost trade-off. But while efficiency and cost are key, we also need to dive deeper into estimating the concrete impacts of this solution on our vehicle system.
Power Converter Design and Vehicle Systems Simulation
GT-PowerForge offers the ability to export a loss map to be integrated into a GT-SUITE model in GT-ISE. This loss map evaluates the power converter as its operating conditions vary, effectively allowing a GT-ISE simulation to take more dimensions into account, such as input power or temperature fluctuations. This allows for a more precise and detailed evaluation of vehicle capabilities, such as range estimation, battery sizing, and powertrain optimization.
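As a sketch of how such an exported loss map might be consumed in a system-level workflow, the snippet below interpolates a small, hypothetical loss table over output power and temperature using SciPy. The table values and axes are invented for illustration and do not come from GT-PowerForge.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical loss map: converter losses (W) tabulated over output power (kW)
# and coolant temperature (C), in the spirit of an exported loss map.
power_kw = np.array([0, 50, 100, 150, 200])
temp_c = np.array([25, 45, 65])
loss_w = np.array([[150, 170, 200],
                   [900, 980, 1100],
                   [1700, 1850, 2100],
                   [2600, 2850, 3200],
                   [3600, 3950, 4400]])

loss_map = RegularGridInterpolator((power_kw, temp_c), loss_w)

# Query the map as operating conditions vary during a drive cycle
print(f"{loss_map([[120.0, 50.0]])[0]:.0f} W loss at 120 kW, 50 C")
```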
Interested in power converter design software?
If you found this piece insightful, see how GT-PowerForge's simulation capabilities can assist with your power converter design needs.
How a Catastrophic Ship Fire Reminded us Why Battery Thermal Runaway Simulation is Important
A few weeks ago, a 656-foot-long ship known as the Felicity Ace caught fire in the North Atlantic as it was transporting its cargo, including various automotive luxury brands, from Germany to Rhode Island. The fire broke out in the ship’s hold and continued to spread. Thankfully, all 22 crew members safely abandoned the vessel.
Of particular concern was the large number of electric vehicles on board. Experts are still speculating whether EV batteries caused the fire; nonetheless, unexpected battery fires are always a concern. The batteries complicated efforts to extinguish the blaze, said SMIT Salvage, the Dutch experts contracted to salvage the vessel. As the ship eventually cooled down near a safe area off the Azores and was being towed, it “lost stability and sank,” according to the Portuguese Navy.
Why Battery Pack Safety for Electrified Vehicles (EVs) is Important
For years, there have been concerns over lithium-ion battery safety due to highly publicized thermal runaway events. This recent event brought to light the importance of battery safety and the need to use state-of-the-art engineering to mitigate the risk of thermal runaway.
Battery engineers are tasked with this challenging problem: to package a set of lithium-ion (Li-ion) cells as tightly as possible and minimize the amount of non-cell weight in the battery. In order to achieve this, engineers need to maintain proper temperature levels of cells, protect against premature cell degradation, and ensure safe operations.
How to Cost-Effectively Mitigate Battery Thermal Runaway
Historically, battery thermal runaway has been evaluated using physical testing—an expensive and dangerous endeavor.
With physical testing, costly prototype versions of the battery packs are assembled and thermal runaway is intentionally induced on a selected Li-ion cell (either with a nail or by heating it to extreme temperatures).
A palatable alternative is simulation. The use of simulation throughout the design and development process allows for extensive testing, immediate results and analysis, and actionable feedback to ensure battery packs are engineered for optimal safety.
Here at Gamma Technologies, we offer GT-SUITE and GT-AutoLion, the ideal simulation design platforms to run virtual thermal runaway propagation tests. To learn more about our thermal runaway simulation capabilities, read this April 2021 blog written by GT’s own Joe Wimmer.
If you would like to learn more or are interested in trying GT-SUITE to virtually test a battery pack for thermal runaway propagation, Contact us!
Lithium-ion Battery Modeling for the Automotive Engineer
Most people are aware of the dramatic and fast-paced trend towards electrification within the automotive industry which has led to the increased use of Lithium-ion (Li-ion) batteries for energy storage in electrified vehicles. Here at Gamma Technologies, we work with many modeling, simulation, and controls engineers in the automotive industry, and one thing I have noticed is that this trend towards electrification is not only a shift in the products that OEMs sell, but also a personal shift for many automotive engineers. I’ve spoken to many engineers who have historically worked on conventional vehicles and are now being asked to focus more on electrified vehicles. Because of this, I decided to write a blog introducing the basics of Li-ion technology and popular modeling techniques for these automotive engineers trying to keep up with this major trend in the industry.
There are many ways to model a battery, but the two most commonly-used are electrical-equivalent behavioral models and electrochemical physical models. Both methods provide value for engineers, and each one takes a different approach to modeling the behavior of batteries. In this blog, you’ll learn about the basic operation of Li-ion batteries, how these different approaches model the behavior of a battery, and the benefits and tradeoffs associated with each.
How does a Lithium-ion battery work?
Before I go into detail about different modeling methods, let’s make sure we understand how a Li-ion battery operates. Figure 1 shows a cross-sectional view of a Li-ion cell.
Shown on the left and right sides of this cross-sectional view, the positive and negative terminals allow electrical connections between them. This is where battery or cell voltage can be measured. During charging and discharging, Li-ions (Li+) shuttle back and forth between the cathode and anode through the separator (from the cathode to anode while charging, and from the anode to cathode while discharging). Within the cathode and anode, Li+ reacts with the active materials, releasing or absorbing electrons in the process that are then able to flow between positive and negative terminals.
What’s Important in a Battery Model?
For most applications using battery models, it is generally important to accurately predict the electrical characteristics of the battery, including the voltage across it and the current flowing through it. Additionally, an estimation of State of Charge (SOC, a normalized estimation of how much chemical energy is stored in a cell) is required. If battery models are linked to thermal management simulations, accurate heat dissipation from the battery must be captured.
As anyone with a cell phone, tablet, or laptop can tell you – Li-ion batteries degrade over time. There are many electrochemical factors that lead to the degradation of a Li-ion battery. For instance, the active materials in the cathode and anode tend to crack over time and there are undesirable side reactions that lead to growth of films on active material particles in both the cathode and anode. Additionally, under extreme conditions, Li+ may react with electrons to form Lithium metal in the anode during a process referred to as Lithium plating, which can be very detrimental to the health of a battery.
Safety simulation and testing is also an important topic. Abuse conditions such as nail penetration, as well as extremely hot or cold environments could be catastrophic for the battery. This is where predictive 3D models of cells and the conditions around them are required in order to capture the electrothermal response of the battery.
Electrical-Equivalent Modeling
Now that we understand more about Li-ion batteries, let’s take a look at the different modeling methods used, starting with electrical-equivalent modeling. This modeling method is done primarily to give electrical representations of how batteries react under certain loading conditions.
Non-Dynamic (Resistive Models)
The simplest test done on a battery or cell is a constant-current discharge/charge test, where cells are discharged and charged to and from 0% and 100% state of charge at a constant current. During these tests, the terminal voltage of the battery is observed to be similar to Figure 2.
Similarly, these discharge/charge tests can be done at multiple currents. Figure 3 shows what a typical Li-ion cell’s terminal voltage may be for constant-current discharge tests at multiple currents.
The behavior observed here is that the terminal voltage changes as the battery depletes, shifting up or down depending on the direction of the current. Additionally, the terminal voltage drops further as the discharge current increases.
With electrical-equivalent modeling, this type of behavior is represented in an electrical circuit using the resistive battery model, pictured in Figure 4. In this model, the open circuit voltage and internal resistance are often characterized as functions of temperature and state of charge. Using this model, the behavior observed in constant-current charge and discharge cycles can be replicated quite accurately.
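Below is a minimal Python sketch of such a resistive model, with the open circuit voltage tabulated against state of charge and the SOC tracked by simple coulomb counting (discussed further below). The OCV curve, resistance, and capacity are illustrative placeholders, not values from any real cell.

```python
import numpy as np

class ResistiveECM:
    """Minimal resistive equivalent-circuit model: V = OCV(SOC) - I*R,
    with SOC tracked by coulomb counting. All values are illustrative."""

    def __init__(self, capacity_ah=5.0, r_ohm=0.02, soc=1.0):
        self.capacity_as = capacity_ah * 3600.0   # charge in ampere-seconds
        self.r_ohm = r_ohm
        self.soc = soc
        # Simple OCV-vs-SOC lookup (placeholder values for a generic Li-ion cell)
        self._soc_pts = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
        self._ocv_pts = np.array([3.0, 3.45, 3.6, 3.7, 3.85, 4.05, 4.2])

    def ocv(self):
        return np.interp(self.soc, self._soc_pts, self._ocv_pts)

    def step(self, current_a, dt_s):
        """Advance one timestep; positive current = discharge. Returns terminal voltage."""
        self.soc -= current_a * dt_s / self.capacity_as      # coulomb counting
        self.soc = min(max(self.soc, 0.0), 1.0)
        return self.ocv() - current_a * self.r_ohm

cell = ResistiveECM()
for _ in range(600):                       # 10 minutes at a 1C discharge (5 A)
    v = cell.step(5.0, 1.0)
print(f"SOC = {cell.soc:.3f}, V = {v:.3f} V")
```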
Dynamic (Thevenin Models)
The resistive models are good representations of a battery’s electrical behavior observed in steady-state conditions; however, if loads on a battery are more dynamic, the resistive model may not follow observed behavior. For instance, when Li-ion batteries are loaded with pulses of current, the voltage response can be very non-linear. Figure 5, below, shows how a typical Li-ion cell reacts to a pulse of discharge current and a pulse of charge current.
With a resistive electrical-equivalent model, the non-linear, exponential-decay portion of the voltage response would not be captured. In order to be able to capture the exponential decay behavior in the voltage response, one or multiple resistor-capacitor (RC) branches can be included in the electrical-equivalent model. See Figure 6 for a circuit diagram of such a battery model, often referred to as a Thevenin battery model.
With this type of model, the step changes in the voltage captured are represented with the internal ohmic resistance (R0) and the exponential decay of the voltage response is captured with the RC branches.
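Here is a minimal Python sketch of a one-RC-branch Thevenin update, advancing the branch voltage with the exact exponential solution for a constant current over each timestep. Parameter values are illustrative placeholders; the short pulse test at the bottom shows the instantaneous I*R0 step followed by the slower exponential relaxation described above.

```python
import math

def thevenin_voltage(current_a, dt_s, v1_state, ocv_v=3.7,
                     r0_ohm=0.015, r1_ohm=0.01, c1_farad=2000.0):
    """One timestep of a 1-RC Thevenin model (illustrative parameter values).

    The RC branch voltage follows dV1/dt = -V1/(R1*C1) + I/C1; it is advanced
    here with the exact exponential solution for a constant current over dt.
    Positive current = discharge. Returns (terminal voltage, new branch voltage)."""
    tau = r1_ohm * c1_farad
    alpha = math.exp(-dt_s / tau)
    v1 = alpha * v1_state + r1_ohm * (1.0 - alpha) * current_a
    v_term = ocv_v - current_a * r0_ohm - v1
    return v_term, v1

# A 30 s, 10 A discharge pulse followed by rest: the step drop (I*R0) appears
# immediately, while the RC branch adds the slower exponential decay.
v1 = 0.0
for t in range(60):
    i = 10.0 if t < 30 else 0.0
    v, v1 = thevenin_voltage(i, 1.0, v1)
    if t in (0, 29, 30, 59):
        print(t, round(v, 4))
```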
SOC Estimation and Aging
In each of these electrical-equivalent models, there are a few ways to estimate SOC of a cell or battery. The simplest and most common way is a process called “Coulomb Counting” where the model counts the charge (integral of current over time) moving across the battery and very simply adds or subtracts that charge from the initial charge of the battery.
With these electrical-equivalent models, heat generation of the battery is calculated as the sum of the power losses across each resistor (I²R losses). Aging phenomena of Li-ion cells are often characterized as capacity fade and resistance growth over time using non-physics-based empirical models.
Summary
It’s important to note that electrical-equivalent battery modeling is not inherently a physics-based approach. Both the dynamic and non-dynamic approaches are electrical representations of how the terminal voltage of a battery may react to different battery loads. The capacitors and resistors are not meant to represent physical capacitances and resistances but the dynamics of the battery.
The circuit parameters are often characterized at multiple temperatures and states of charge by matching experimental results. This means that electrical-equivalent models are not predictive outside of the calibrated temperature or SOC range.
Electrochemical Modeling
If predictive capability outside of calibrated temperatures and states of charge is needed, or if insight into cell operation is desired, physics-based electrochemical models are available for modeling Li-ion batteries. For Li-ion batteries, the widely accepted electrochemical modeling approach is often referred to as the “Newman Pseudo 2D model,” named after John Newman, the creator of this model.
In the Newman Pseudo 2D (P2D) model, illustrated below in Figure 7, the cathode, separator, and anode are discretized in the thickness direction (horizontal direction in Figure 7) using a finite control volume approach. Additionally, in each sub-volume of the cathode and anode, there is a spherical representation of an active particle, each of which is discretized using the finite control volume approach in the radial direction. This combination of through-thickness and radial discretization is where the term “Pseudo 2D” comes from. The governing equations for charge transfer and Li+ diffusion are all solved using this finite control volume approach.
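For reference, two of the core governing equations of the P2D framework, written here in their standard textbook form rather than GT-AutoLion's exact implementation, are the spherical solid-phase diffusion equation and the Butler-Volmer kinetics relating the local reaction current density to the overpotential:

```latex
% Solid-phase Li diffusion in each spherical active-material particle (Fick's second law)
\frac{\partial c_s}{\partial t}
  = \frac{D_s}{r^2}\,\frac{\partial}{\partial r}\!\left( r^2\,\frac{\partial c_s}{\partial r} \right)

% Butler-Volmer kinetics: local reaction current density vs. overpotential \eta
j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```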
In these electrochemical battery models, the concentration of Li+ in different parts of the cell is solved, ultimately leading to a physics-based estimation of state of charge. These models are also able to predict the heat generated by Li-ion batteries, including heat from ohmic losses, reaction losses, and entropic heating.
With electrochemical models, physics-based representations of aging mechanisms are used. For instance, the SEI layer, cathodic film layer, and their growth over time can be modeled. Additionally, Lithium plating and Li+ isolation due to active material cracking can also be modeled.
These physics-based aging models have many benefits, including:
- Decreasing the amount of battery testing time required
- Understanding how batteries age in real-world scenarios
- Understanding how system-level performance degrades over time
- Understanding how charging strategies can affect battery life
How Do These Methods Compare?
Now that we have an understanding of how each method works, let’s compare the two to understand the advantages and tradeoffs associated with each.
First up – electrical-equivalent models. With this method of modeling, it is very easy to build and calibrate a battery model. There are standard curve-fitting methods to calibrate electrical-equivalent battery models to match measured voltage data. Because these models have a simple calibration process, they usually give good results for voltage, current, SOC, and heat rate.
Next – electrochemical models. Unlike electrical-equivalent models, electrochemical models give insight into what occurs inside the cell and they can be used outside the calibrated temperature range. Another key difference is the ability to use physics-based (as opposed to empirical) aging models, which give insight into growth rate of the SEI and cathodic film layers, material isolation due to cracking, and Lithium plating. These models also give cell designers the ability to quickly iterate on cell design parameters without having to re-calibrate an electrical-equivalent model.
I know that’s a lot of information, so some advantages of each modeling approach are summarized in Figure 8.
When Do I Use These Models and How Do I Use Them?
As you can see, both modeling methods offer their own unique set of advantages and tradeoffs, but it still may not be clear when electrical-equivalent models and electrochemical models should be used.
For many system-level and thermal engineers, electrical-equivalent models are all that is required because these models give accurate results for battery voltage, current, SOC, and heat rates. However, depending on the questions that are trying to be answered with simulation, electrical-equivalent models may not be sufficient.
Electrochemical models are often used in more advanced applications of battery modeling. For instance, if a model needs to predict the aging of a Li-ion cell, physics-based aging models offer a more flexible and predictive solution than the empirical aging models available in electrical-equivalent models. Additionally, when designing a cell, choosing a charging strategy, or studying battery behavior in extreme climates, electrical-equivalent models may not provide enough insight into the behavior of the cell, which is often required for these applications.
Because both the electrical-equivalent and the electrochemical approaches to modeling Li-ion batteries provide value to simulation engineers, Gamma Technologies offers accurate capabilities for both methods. When electrical-equivalent models are required, engineers are able to use GT-SUITE’s flexible solution for a resistive or Thevenin (flexible number of R-C branches) battery model. When electrochemical models are required, AutoLion provides cell designers and system simulation engineers accurate and fast-running electrochemical models. These two tools can also be combined in order to incorporate electrochemical battery models into multi-domain system-level models.
What’s Next?
I know that this was a long read, but I hope this blog helped make Li-ion batteries feel less like “black magic” and that it built up an understanding of both electrical-equivalent and electrochemical modeling approaches to them.
If you’d like to learn more or are interested in trying GT-SUITE or AutoLion for battery simulation, Contact us!
NOTE: This piece was originally published in July 2018.
Robust Battery Pack Simulation by Statistical Variation Analysis
Simulating Battery Packs – Not All Cells are the Same
When simulating a large battery module, we typically assume that all the cells in the module are the same. However, that is not always the case: factors such as cell capacity and resistance can vary from cell to cell. This brings up the question: how can we model the variation of different cells within the module?
In v2021 of GT-SUITE, we added some new features to help model this cell-to-cell difference. One new feature is a new statistical variance analysis tool.
This new tool opens a wizard that walks users through selecting an object and specifying the mean and standard deviation for the attribute to be varied. It then creates unique parameters for each part associated with that object. Put simply, part overrides are used to define a different capacity and resistance for each cell in the battery module.
Another new feature is in our design of experiments (DOE) setup. The Monte Carlo method has been added as a new distribution method in DOE Setup, allowing users to vary any parameter according to a normal distribution with a specified mean and standard deviation.
Let’s take a look at this example below.
In this example, we have a battery module with 444 cylindrical cells: 74 in series and 6 in parallel. We were told that the nominal cell capacity was 3.2 Ah. Additionally, we were given measured distributions of the capacity and resistance of 200 cells of the same type; the values followed a normal distribution, and we were given its mean and standard deviation.
With the statistical variance tool, we can add unique parameters for the capacity and resistance multiplier of each of the 444 cells using the simple wizard. Once that is done, we can open DOE Setup and select how many experiments we want to run to see how the distribution of capacity and resistance affects our module. GT's Monte Carlo solution enables a normal distribution to be set up for the capacity and resistance multiplier parameters.
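For intuition, here is a small Python sketch of how such a Monte Carlo parameter set could be generated outside of GT-SUITE. The spread values are illustrative; in the study above, the mean and standard deviation came from the 200 measured cells.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n_cells = 444                      # 74 in series x 6 in parallel
nominal_capacity_ah = 3.2

# Illustrative spread values; in practice the mean and standard deviation
# come from the measured distribution of the sample cells.
capacity_ah = rng.normal(loc=nominal_capacity_ah, scale=0.03, size=n_cells)
resistance_multiplier = rng.normal(loc=1.0, scale=0.02, size=n_cells)

print(f"capacity: min {capacity_ah.min():.3f} Ah, max {capacity_ah.max():.3f} Ah")
print(f"resistance multiplier: min {resistance_multiplier.min():.3f}, "
      f"max {resistance_multiplier.max():.3f}")
```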
After running the model, we can look at the responses for each individual cell in the module, including the state of charge and total energy dissipated. The boundary conditions for this model include discharging at a 2C rate and a simple thermal model with a 1-D convective boundary condition at an ambient temperature of 25 °C and a convective heat transfer coefficient of 10 W/m²K. The total energy dissipated varies by around 3% and the SOC varies by around 1% across the cells in our module.
Though these results may seem small, depending on the ambient temperature, discharging/charging protocols, or other factors, they can have a significant effect on the battery module. In some instances, one section of the module may heat much faster than another, meaning some cells might need to be replaced sooner or be designed with more cooling than other areas of the module.
With these new tools and with GT-AutoLion, users can take large battery modules like these and analyze the responses of each individual cell and extrapolate to various C-rates and temperatures with physics-based modeling – allowing the user to gain more insight into their battery module, evaluate the battery degradation, and improve their BMS in the process.
Written by Vivek Pisharodi
Simulating Battery Thermal Runaway Propagation with GT-SUITE
Learn how to use a model to evaluate battery pack safety during thermal runaway events
Designers of battery packs are tasked with a very challenging problem: to package a set of cells as tightly as possible and minimize the amount of non-cell weight in the battery while maintaining proper temperature levels of cells, protecting against premature cell degradation, and ensuring safe operation. The last item on the list is, of course, the most important: ensuring that the battery pack is safe under any circumstance.
The most common challenge to ensuring battery safety is thermal runaway. Thermal runaway is a phenomenon that occasionally occurs in Lithium-ion (Li-ion) cells when extreme temperatures are reached. During thermal runaway, undesired exothermic side reactions heat up the cell, and as the cell heats up, the rate at which the undesired reactions occur accelerates, eventually causing a catastrophic loop of events that concludes with a destroyed Li-ion cell and a lot of heat released. This loop of events is summarized in the image below.
There are many potential causes of thermal runaway. For example, if a cell is heated to extreme temperatures, thermal runaway can occur. If a cell is pierced by a nail or crushed, this can cause an internal short which eventually leads to thermal runaway. Other times, thermal runaway can occur for seemingly no reason at all – in these cases it is often manufacturing issues or even internal dendrite growth that lead to internal shorts inside the Li-ion cell.
With all these potential causes for thermal runaway, as a pack designer, how are you supposed to protect your cells from entering thermal runaway? The unfortunate answer: you can’t.
In fact, this is the wrong question for a pack designer to be asking. Because thermal runaway can occur for so many different reasons, and occasionally for no apparent reason, pack designers must assume that at some point a cell in a battery pack will enter thermal runaway. The correct question to be asking is, “Is my pack designed well enough to withstand one cell entering thermal runaway without starting a chain reaction of neighboring cells entering thermal runaway?”
Thermal Runaway Propagation
Thermal runaway propagation is the key phenomenon to consider when designing a safe battery pack. It refers to the event of a single cell entering thermal runaway, releasing a large quantity of heat, and heating neighboring cells to the point of thermal runaway, essentially starting a chain reaction in which all cells in a battery pack are eventually destroyed.
There are various levels of success for this type of thermal runaway propagation scenario. There are the intuitive “pass” or “fail” results, where a “pass” means that after a cell enters thermal runaway it does not cause a chain reaction, and a “fail” means that it does. There is also a less intuitive middle ground. For instance, maybe a chain reaction is set off, but the time delay between the first cell entering thermal runaway and the entire battery pack being destroyed is long; this may also be a “passing” result, depending on the application. If a cell is sensed to have entered thermal runaway while a vehicle is at highway speeds, does a family have enough time to stop and safely exit the vehicle before a fast-moving chain reaction is set off? If a cell enters thermal runaway during an EVTOL flight, is the pilot able to land before the chain reaction becomes unstable?
Without Simulation
Testing the design of a battery pack against thermal runaway propagation is an expensive and dangerous endeavor. First, expensive prototype versions of battery packs must be assembled, then a single cell is selected (either at random or with engineering discretion to determine which would be most likely to cause the undesired chain reaction), and finally thermal runaway is intentionally induced on the selected cell (either with a nail or by heating it to extreme temperatures). After that, it is up to the design of the pack to determine whether or not neighboring cells enter thermal runaway, and if they do, how fast.
This experimental setup has two major downsides. First, battery packs are expensive, and prototype versions of battery packs are even more expensive. To build these and intentionally destroy them can result in a high cost for battery safety testing. Second, this physical test is often done very late in the development cycle for the battery-powered product (e.g., battery electric vehicle, EVTOL, electric bicycle). If a battery pack fails this test, it can be a major setback for the release schedule of the product, which can be detrimental to businesses.
With Simulation
Using simulation to run virtual thermal runaway propagation tests for Li-ion battery packs is a great way to avoid the costs and risks associated with experimental testing. In addition, single cells do not need to be picked out at random; instead, multiple tests can be set up to examine the “what if” scenario for every cell in a battery pack.
GT-SUITE is the ideal platform to run virtual thermal runaway propagation tests.
Modeling Thermal Runaway Propagation in GT-SUITE
In a paper published with NASA, which has extensive experimental data on thermal runaway of Li-ion cells, GT-SUITE was used to model the propagation effect of thermal runaway in a small battery module. The thermal runaway propagation model was built by converting CAD geometry and was validated with experimental data.
Nominal Electrothermal Model of Battery Module
The study includes a number of test cases, two of which show the battery module during normal operation, with no cells entering thermal runaway. The animation below shows one of these tests: a battery module discharging at a C-rate of 1C. The blue-to-red contour shows local temperatures, where blue indicates cool regions and red indicates hot regions. From the animation, we can see the battery slowly warming up while being discharged at 1C.
To take this electrothermal battery model and set up the thermal runaway propagation model, a few extra steps were required.
Cell-Level Experimental Thermal Runaway Tests
NASA has created specialized bomb calorimeters that impose thermal runaway on a single cell through a variety of causes (internal short, nail penetration, excessive heating). With this type of cell-level testing, NASA was able to measure the amount of energy released during a thermal runaway event. Some example results from their testing of cylindrical cells are shown below.
Alterations to Battery Model
The nominal battery model that was set up for the previous electrothermal study was upgraded to include a model of thermal runaway (a minimal sketch of this logic follows the list below). This included the following changes:
- The Trigger: If any jelly roll temperature rose above 180°C, the cell would immediately enter thermal runaway
- The Heat Release: Once a cell entered thermal runaway, the cell would release energy in the form of heat (in this case 70 kJ)
- 40% of the heat released would be absorbed by the jelly roll in 1.5 seconds
- 60% of the heat released would be released as ejecta in 1.5 seconds
- The Electrical Disconnection: Once a cell entered thermal runaway, it would no longer participate in the module, which means the neighboring cells, which are placed in parallel, would have more current flowing through them.
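Here is the minimal lumped-cell Python sketch of this trigger and heat-release logic referenced above. It is only a single-cell illustration with an assumed thermal mass; it does not model cell-to-cell conduction, the ejecta heat path, or the electrical disconnection, all of which the GT-SUITE module model includes.

```python
def runaway_step(temp_c, external_heat_w, dt_s, state,
                 trigger_temp_c=180.0, total_release_j=70e3,
                 self_heat_fraction=0.40, release_time_s=1.5,
                 thermal_mass_j_per_k=100.0):
    """One timestep of a single lumped cell with the trigger / heat-release logic
    described above. `state` is (triggered, seconds of release so far). The
    thermal mass is an illustrative placeholder."""
    triggered, released_s = state
    if not triggered and temp_c >= trigger_temp_c:
        triggered = True                      # the trigger: jelly roll reaches 180 C
    q_self_w = 0.0
    if triggered and released_s < release_time_s:
        # 40% of the 70 kJ is absorbed by the jelly roll over 1.5 seconds
        q_self_w = self_heat_fraction * total_release_j / release_time_s
        released_s += dt_s
    temp_c += (external_heat_w + q_self_w) * dt_s / thermal_mass_j_per_k
    return temp_c, (triggered, released_s)

# Externally heat the trigger cell at 500 W until it runs away, then stop heating.
temp_c, state = 25.0, (False, 0.0)
for _ in range(1000):                         # 100 s at dt = 0.1 s
    heater_w = 500.0 if not state[0] else 0.0
    temp_c, state = runaway_step(temp_c, heater_w, 0.1, state)
print(f"triggered: {state[0]}, final temperature ~{temp_c:.0f} C")
```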
Module-Level Model of Thermal Runaway Propagation
Once these alterations were made to the battery model, any cell in the module can be selected as the “trigger” cell by applying external heat until the trigger temperature of 180°C is reached.
In the first study, a cell in the corner of the module was selected to be the trigger cell. It was artificially heated to its runaway temperature of 180°C and it immediately entered thermal runaway. The animation below shows the results of the thermal runaway simulation with the corner cell (top of the image) selected as the trigger cell. Once again, blue cells are relatively cool and red cells are hot. From viewing the animation, we can see that the corner cell does not cause a chain reaction of neighboring cells entering thermal runaway.
Since no real battery modules were destroyed in this simulation, this simulation can be repeated under different conditions. The next study conducted was to test how the module behaves when a cell in the center of the module enters thermal runaway. The image below shows how the module reacts when a cell in the center of the module is the trigger cell for thermal runaway. Once again, we can see that the thermal runaway event does not propagate to neighboring cells.
The virtual thermal runaway propagation tests shown above both show “passing” results. The trigger cell self-heats to extremely high temperatures; however, the neighboring cells do not pass the 180°C threshold to enter thermal runaway.
In order to illustrate a “failing” test result, some changes were made to the module to make it more likely to propagate thermal runaway to neighboring cells. The busbar in the module was included, which increased the amount of heat conducted between neighboring cells. Additionally, the ratio of self-heat and heat released as ejecta was altered to be 30%-70% instead of the 40%-60% previously mentioned.
With these changes, the following results were observed. In this case, the trigger cell very quickly causes a chain reaction among neighboring cells and causes a much more catastrophic event than the previous two test cases presented.
Time-to-Results
Because time is one of the most important resources of battery pack designers, one of the key considerations when faced with a modeling challenge such as thermal runaway propagation is the total time that it takes to get results. The time that it takes to get results is the sum of the time it takes to build a model (“time-to-model”) and the time that it takes the model to run (“time-to-run”). With GT-SUITE, both time-to-model and time-to-run are minimized.
Time-to-Model
In the examples given above, the CAD geometry was converted into a GT model using GT’s built-in CAD geometry pre-processing tool GEM3D and models were further setup using GT’s integrated simulation environment in roughly half of a day.
Time-to-Run
In the examples given above, the models run roughly 2-4 times faster than real-time on a laptop PC, resulting in a 30-minute simulation taking 7-15 minutes to run. The finite element structure in this model consisted of 6,000 nodes and 13,000 elements.
This fast time-to-run enables users to experiment with some of the uncertainty that comes with battery thermal runaway propagation. Which cell initiates thermal runaway? How much heat does it release? How is that heat released? How much material is ejected from it? All of these are sources of variability that can be explored with the help of fast-running models (look for a future blog on this specific topic!). This type of variability analysis would not be possible with extremely detailed 3D CFD models.
Conclusion
When designing a battery module or a battery pack, the battery’s response to a cell entering thermal runaway needs to be studied to analyze whether or not the cell causes a chain reaction of cells entering thermal runaway, known as thermal runaway propagation. This can be done experimentally by building prototype modules and packs and imposing thermal runaway on a trigger cell; however, this can be extremely expensive and if the pack fails the test, can be a substantial setback in the development of the battery.
With GT-SUITE, these thermal runaway propagation tests can be done virtually. This provides a number of advantages, including the large cost advantage and the ability to run any number of hypothetical thermal runaway propagation tests.
If you’d like to learn more or are interested in trying GT-SUITE to virtually test a battery pack for thermal runaway propagation, Contact us!
Written by Joe Wimmer and Jon Harrison
How to Optimize Electric Vehicle (EV) Drivetrains in Less Than 1 Day Using Simulation
Improving Hybrid Electric Vehicle Controls:
The recent proliferation of hybrid electric vehicles has greatly complicated the world of vehicle controls engineers. Multiple energy sources and propulsion systems applied to sophisticated hybrid drivetrains necessitate a much more intricate controls strategy than conventionally powered vehicles.
Determining when to distribute power to the engine, motor(s), or both is no simple task, and the time typically taken to develop these controls strategies reflects that. Even developing controls for simple hybrid vehicle models can take precious time away from the rest of the design process, and cutting corners can lead to sub-optimal fuel economy and vehicle performance results during simulation. Fortunately, GT-SUITE’s embedded tools include two different methods to automatically generate optimized, charge-sustaining hybrid controls strategies on a per drive cycle basis:
- Equivalent Consumption Minimization Strategy (ECMS)
- Dynamic Programming
Using these tools allows for quick evaluation of a hybrid system’s peak capabilities without the hassle of developing and testing multiple controls options.
Model Generation & Evaluation In Minutes Vs. Days:
In Part 1 of this blog series, we employed GT-DRIVE+, Integrated Design Optimizer, and JMAG-Express to properly size and characterize an electric motor for a P4 hybrid system in a compact passenger car. These tools streamlined a traditionally time-consuming design process, with model generation and evaluation taking minutes rather than days. The goal was to select a motor for a P4 hybrid to meet the following requirements:
Metric | Requirement
---|---
Acceleration (0-60 mph) | 8.5 seconds
Fuel Economy (City/Highway) | 50/52 mpg
This blog will build upon our previous work, applying two of GT-SUITE’s hybrid controls optimization solutions to evaluate the previously selected motor’s impact on drive cycle fuel economy. Applying these tools within our workflow allows us to evaluate estimated fuel economy under optimized control without spending time developing complex hybrid controls.
Previous evaluation of our example model revealed that our 27.5 kW motor selection met the acceleration and highway fuel economy requirements but could not meet the city fuel economy demand. These tests, however, were performed using a rule-based control strategy that was not necessarily optimized for city or highway driving. Applying ECMS and Dynamic Programming to the city drive cycle should provide a better idea of this configuration’s fuel economy capabilities.
Equivalent Consumption Minimization Strategy (ECMS)
ECMS in GT-SUITE assigns a “fuel consumption” rate to energy pulled from the vehicle’s battery. Calculation of this energy-equivalent rate is influenced by several user-defined parameters including:
- Equivalence Factor – this represents the relationship between battery energy and fuel energy
- Target State of Charge – this sets a target SOC to develop a charge-sustaining strategy
- Penalty Function Exponent – this influences a penalty function that increasingly penalizes battery energy consumption as the battery deviates farther from the target state of charge
For an ECMS run, the user specifies a variety of independent control variables that are altered at every timestep with the goal of minimizing combined ‘fuel’ consumption from both the engine and the battery. For our example, the following variables were selected:
Variable | Values
---|---
P4 Motor Torque (27.5 kW motor) | -105 Nm to 105 Nm
Transmission Gear Number | 1st to 6th Gear
Vehicle Mode | Hybrid, Electric, or Conventional
At every timestep, all combinations of the independent control variable values are considered. Any combinations that can meet the drive cycle power demand while obeying the defined constraints are evaluated to determine total fuel consumption. This calculation is heavily influenced by the battery energy-equivalent rate parameters. For example, if the SOC deviates too far from its target, then a larger penalty will be levied on battery consumption to incentivize a charge-sustaining strategy – this means scenarios where more engine power and less motor power is used may be deemed more favorable at that timestep. The variable combination that locally optimizes fuel consumption is then selected, and the process repeats for the remaining timesteps. The process at each timestep is summarized below:
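To make the per-timestep logic concrete, here is a minimal, illustrative Python sketch of an ECMS-style evaluation. The equivalence factor, target SOC, penalty exponent, and the `evaluate` helper (which stands in for the vehicle model and constraint checks) are assumptions for illustration, not GT-SUITE's internal implementation.

```python
import itertools

# Illustrative ECMS timestep: enumerate candidate controls and pick the
# combination with the lowest equivalent fuel consumption.
FUEL_LHV = 43.0e6     # lower heating value of gasoline [J/kg] (assumed)
EQUIV_FACTOR = 2.5    # equivalence factor relating battery energy to fuel energy (assumed)
SOC_TARGET = 0.55     # target state of charge for a charge-sustaining strategy (assumed)
PENALTY_EXP = 3       # penalty function exponent (assumed)

def soc_penalty(soc):
    """Penalize battery use more heavily as SOC falls farther below its target."""
    return 1.0 - ((soc - SOC_TARGET) / 0.5) ** PENALTY_EXP   # 0.5 = assumed SOC window

def equivalent_fuel_rate(engine_fuel_rate_kg_s, battery_power_w, soc):
    """Combined 'fuel' rate: real fuel plus battery power expressed as fuel."""
    battery_as_fuel = EQUIV_FACTOR * battery_power_w / FUEL_LHV
    return engine_fuel_rate_kg_s + soc_penalty(soc) * battery_as_fuel

def ecms_step(demand_w, soc, motor_torques, gears, modes, evaluate):
    # evaluate(demand_w, torque, gear, mode) -> (feasible, fuel_rate_kg_s, battery_power_w)
    best, best_cost = None, float("inf")
    for tq, gear, mode in itertools.product(motor_torques, gears, modes):
        ok, fuel_rate, batt_power = evaluate(demand_w, tq, gear, mode)
        if not ok:
            continue   # cannot meet the drive-cycle power demand or violates limits
        cost = equivalent_fuel_rate(fuel_rate, batt_power, soc)
        if cost < best_cost:
            best, best_cost = (tq, gear, mode), cost
    return best
```

An odd penalty exponent makes battery energy cheaper when the SOC sits above the target and more expensive when it falls below, which is what steers the strategy toward charge sustaining.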
Applying an ECMS control strategy to our city driving cycle, we will see a significant improvement in fuel economy that meets our initial requirements:
Control Strategy | FTP-75 (City) Minimum Fuel Economy Requirement | Reported FTP-75 (City) Fuel Economy
---|---|---
Heuristic Control | 50 mpg | 42.93 mpg
ECMS Local Optimization | 50 mpg | 58.30 mpg

Figure 3. ECMS and Dynamic Programming runs vary the selected variables at every timestep to minimize fuel consumption
Despite evaluating 612 different control scenarios at every timestep, this ECMS run completed in less than 3 minutes. After completion, we can see that our motor selection will be sufficient to meet the initial fuel economy requirements – all it needed was a better control strategy. However, optimizing locally at each timestep will likely result in slightly sub-optimal performance over the entire drive cycle.
In other words: This is good, but we can do even better.
Dynamic Programming (Global Optimization)
Dynamic Programming will provide an even clearer picture of our example vehicle’s fuel economy capabilities under optimal control. Dynamic Programming uses similar strategies to minimize fuel consumption but seeks to do so in the context of an entire drive cycle. A global cost function is created and minimized using similar parameters to those defined for ECMS. The run begins at the end of the drive cycle and marches backwards in time to the initial state, where the fuel costs for all possible states and controls are calculated and saved. By referencing these saved values, a controls solution is determined by computing the ‘optimal cost-to-go’. This may not necessarily minimize fuel consumption at every timestep but will produce a solution that cumulatively has the lowest fuel consumption from start to finish.
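The backward recursion can be sketched in a few lines of Python. Here the state is a discretized SOC grid, and `step_cost` and `next_soc` are hypothetical placeholders for the vehicle model; this illustrates the principle, not GT-SUITE's solver.

```python
import numpy as np

# Illustrative backward dynamic programming over a discretized SOC grid.
# step_cost(k, soc, u) -> fuel used at timestep k; next_soc(k, soc, u) -> resulting SOC.
def dp_optimal_policy(n_steps, soc_grid, controls, step_cost, next_soc):
    n_soc = len(soc_grid)
    cost_to_go = np.zeros(n_soc)                   # terminal cost assumed to be zero
    policy = np.zeros((n_steps, n_soc), dtype=int)

    for k in range(n_steps - 1, -1, -1):           # march backwards in time
        new_cost = np.full(n_soc, np.inf)
        for i, soc in enumerate(soc_grid):
            for j, u in enumerate(controls):
                soc_next = next_soc(k, soc, u)
                if soc_next < soc_grid[0] or soc_next > soc_grid[-1]:
                    continue                       # infeasible state transition
                # interpolate the saved optimal cost-to-go at the resulting SOC
                future = np.interp(soc_next, soc_grid, cost_to_go)
                total = step_cost(k, soc, u) + future
                if total < new_cost[i]:
                    new_cost[i], policy[k, i] = total, j
        cost_to_go = new_cost
    return policy, cost_to_go
```

Sweeping forward from the initial SOC and reading the stored policy at each timestep then recovers the control trajectory with the lowest cumulative fuel consumption over the whole cycle.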
Applying dynamic programming to our city driving cycle, we will see fuel economy further improve to 62.3 mpg:
This blog series has demonstrated 5 different GT-SUITE tools that will significantly streamline your design process. In our motor sizing example, this increased efficiency was apparent:
- GT-DRIVE+ instantly generated a P4 HEV vehicle model to use for evaluation – 5 minutes
- Integrated Design Optimizer automatically selected the correct motor size to meet our acceleration requirements – 20 minutes
- JMAG-Express instantly created an efficiency map from our selected motor characteristics – 10 minutes
- Optimization Tools generated controls for our drive cycles to understand motor/vehicle performance under optimal control – 2 hours
One iteration of this design process could conceivably take less than one day. If we are unhappy with the results after evaluating this final design, we can easily iterate through again – tweaking our initial model and motor characteristics and applying all the tools again with relatively little time lost. If you are interested in learning more about any of these tools, feel free to contact us for additional information!
How to Optimize Hybrid Vehicle Design using Simulation
As emissions regulations tighten and an electrified future grows more and more imminent, automotive engineers have been tasked with applying decades of traditional vehicle engineering knowledge to increasingly complex xEV architectures. The good news is that automakers have become very proficient at building conventionally powered vehicles – so much so that they still comprise the basic underpinnings for most electric propulsion architectures. But even implementing hybrid-electric components into an existing conventional vehicle architecture introduces a long list of questions, such as:
- Where in my driveline should I integrate my motor?
- What size battery do I need to meet design requirements?
- And how should I optimally distribute power between the engine and (potentially multiple) motors?
Fortunately, GT-SUITE offers a variety of embedded tools that help answer these questions.
This is the first entry in a two-part blog series highlighting solutions for hybrid-electric component sizing, assisted electric motor design, power split controls optimization, and much more. Proper use of these tools will significantly aid development and increase efficiency throughout various stages of the design process.
Hybrid Component Integration and Optimization
To demonstrate the power of incorporating these tools into your hybrid vehicle design process, we will walk through a simple example of hybrid component integration and optimization. The goal of the exercise will be to appropriately size and optimize control of an electric motor in a standard compact passenger vehicle.
This process can traditionally be burdensome. Assessing multiple motor configurations often requires repeated manual manipulation of models. Defining motor characteristics typically relies on data from expensive testing, and developing effective hybrid controls strategies is often a time-intensive, trial-and-error process. Fortunately, GT-SUITE’s embedded tools address these problems within one easy workflow. The efficiency of these tools also allows extra time to continually iterate and fine-tune your design.
Requirements:
For this example, we will focus on performance requirements that are reasonable for a compact hybrid sedan. Selecting a smaller, lower-output motor will save costs, so we will seek to minimize motor size while still meeting these requirements.
Metric | Requirement
---|---
Acceleration (0-60 mph) | 8.5 seconds
Fuel Economy (City/Highway) | 50/52 mpg
Quickly generate a hybrid vehicle model
GT-DRIVE+ starts with a model generator that allows for easy generation of vehicle systems and quick evaluation of multiple architectures and components. When creating a hybrid vehicle, the user is presented with several options for vehicle architectures as well as a large library of pre-defined engines, transmissions, electric motors, batteries, and drive types.
The components were selected in the model generator to closely match many of the hybrid vehicle offerings currently on the market:
Component | Selection
---|---
Hybrid Configuration | P4
Vehicle | Compact Car
Drive Type | FWD
Battery | Lithium-Ion, 247 V, 5 Ah
Engine | Gasoline Direct Injection, 1.4 L Turbocharged I4
E-machine | To be configured within model
Transmission | 6-Speed Automated Manual
Now, after just a few simple selections, a full vehicle model is generated – complete with test cases for city/highway driving cycles, and a pre-configured hybrid control strategy. If necessary, the exact same components can also be generated within a P0, P2, or P0/P4 hybrid architecture to compare how motor placement impacts vehicle performance.
Evaluate motor sizing
Next, maximum and minimum motor torque curves can be modulated to determine the minimum acceptable motor output. This can be done by configuring GT’s Integrated Design Optimizer to sweep through one or more parameters to target a specific output. Here, the 8.5-second acceleration performance requirement is targeted. The design optimizer can be set up to modify torque values, run the model, evaluate the outcome, then modify torque values again, rerunning until the results converge on the requirement. In this case, 151 possibilities were automatically evaluated in under 10 minutes. Upon completion, the design optimizer outputs several plots that help visualize the evaluation process, along with an updated model containing the final optimized parameter values.
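Conceptually, the optimizer's search resembles the simple bisection sketch below, which scales a baseline torque curve until the predicted 0-60 mph time converges on the 8.5-second target. The `predict_0_60` function is a hypothetical stand-in for running the vehicle model; the bounds and tolerance are assumptions, and GT's Integrated Design Optimizer uses its own search algorithms.

```python
# Illustrative sketch of converging a motor torque scale factor onto a 0-60 mph target.
def size_motor(predict_0_60, target_s=8.5, tol=0.05):
    lo, hi = 0.5, 3.0                   # assumed bounds on the torque scale factor
    for _ in range(50):
        scale = 0.5 * (lo + hi)
        t = predict_0_60(scale)         # run the vehicle model with the scaled torque curve
        if abs(t - target_s) < tol:
            break
        if t > target_s:
            lo = scale                  # too slow: need more torque
        else:
            hi = scale                  # faster than required: the motor can shrink
    return scale
```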
After doing so, a motor with the following characteristics was selected to meet the acceleration performance requirement:
Note that while two of the requirements are satisfied, city fuel economy does not meet the 50 mpg requirement. This is not cause for immediate concern. GT-DRIVE+ automatically generates a rule-based hybrid controls strategy that effectively follows standard drive cycles but is not necessarily optimized for city or highway driving. A hybrid control optimization tool should be applied to this model to better understand city/highway fuel consumption. The second blog posting in this series will demonstrate the application of these optimization tools to this example.
Generate motor efficiency maps
GT has partnered with JSOL to bring their motor design tool, JMAG-Express, directly into GT-SUITE. The approximated motor characteristics determined in our initial investigation can be input directly into this integrated tool to calculate a complete motor efficiency map, as well as estimate many other fundamental motor characteristics, in only a few minutes.
This efficiency map can be pulled into our initial model to more accurately characterize the efficiency and power demands of a motor this size.
Instantly generating efficiency and loss maps eliminates the need to choose between over-simplifying motor efficiency for convenience or spending time and money to test motor hardware. Instead, we can quickly calculate the efficiency of our 27.5 kW machine, plug it into our original model, and move on to applying our hybrid controls optimization tools.
In this blog, we walked through three different GT tools that facilitate the hybrid vehicle design process. These tools assist and accelerate a highly iterative design process in which components can be evaluated, optimized, changed, and evaluated again without sacrificing large amounts of time or resources. The next blog entry showcases GT’s hybrid control optimization solutions and how they can be used in the context of the example presented here.
If you would like to learn more about GT’s integration with JMAG-Express, [CLICK HERE] to read an additional blog post on the topic.
Accurate, Concept Level Electric Motor Design Using Simulation (Part I)
How to evaluate electric vehicle performance and behavior using simulation
A typical task of a vehicle simulation engineer is to evaluate the effect of different technologies or component selections on overall vehicle performance and behavior. One of the main challenges of this task is the lack of accurate data available for components, especially for engines, batteries, and electric motors. This lack of data can lead to false assumptions or extrapolations that produce inaccurate results. In this first blog of a two-part series, we will introduce a new integration between GT-SUITE and JMAG-Express Online that provides a method for accurate concept-level electric motor design. In the context of vehicle electrification, motors are a key powertrain component. What is required of a motor is not only high performance as a component but also good integration with the rest of the system. This includes, for instance, matching the motor and battery sizes, as well as sizing the cooling system appropriately. Figure 1 below shows an example of how different losses, and therefore cooling requirements, vary throughout the motor operating range.
High-fidelity efficiency map-based modeling
At the system design phase, vehicle engineers use either a map-based approach, with maps measured from a prototype, or a lower-fidelity motor model-based approach. When using a map-based approach, the engineer commonly needs to wait for the prototype to be ready or must rely on other empirical approaches. Alternatively, using a lower-fidelity motor model-based approach causes a lot of rework as the design matures. To eliminate these errors and inefficiencies, GT and JSOL have partnered together and are excited to release new software functionality. With GT-SUITE v2020, GT users can now create a high-fidelity motor model by using the embedded JMAG-Express Online interface. JMAG is a comprehensive software suite for electromechanical design and development. It enables users to build a high-fidelity efficiency map model with less than 1% error compared to measurement, and it allows the user to see various motor characteristics within a second of changing motor types, slot combinations, dimensions, and other machine parameters.
Figure 2 shows a high-level overview of the workflow.
The above workflow is accomplished through an integrated interface, shown in Figure 3.
To the GT user, the experience of concept-level motor design is intuitive and seamless, as well as fast. In this embedded interface, the user has flexibility to change machine types, geometry, as well as requirements for torque, power, and maximum speed. It is also possible to add additional constraints on the system, such as voltage and current limitations, as well as geometry constraints such as maximum motor diameter or stack height, air gap, etc. Based upon “rules of thumb” and common motor design principles, JMAG-Express Online will create a motor which meets the requirements, subject to the constraints. The user can refine the design or proceed with the configured motor design. Because of JMAG’s history in the area of motor design, the end user does not need to be an expert in motor design to be effective in exploring different design possibilities.
Through this embedded workflow, users can quickly and efficiently analyze different motor types and create maps for each, such as in Figure 4.
Because the JMAG-Express Online interface is natively integrated with GT-SUITE, users can not only analyze motor behavior in a standalone environment but also integrate the JMAG motor models directly into a complete system-level model, as shown in Figure 5.
Such a model can be exercised through standard drive cycles. By reviewing residency plots of the motor operating points from the vehicle simulation, users can immediately revisit the motor specification and run the next set of simulations, as shown in Figure 6.
By connecting vehicle simulation engineers to the parametric, template-driven motor design capabilities of JMAG-Express Online, it is now possible to make earlier and more confident design decisions and motor selections. The push-button integration allows for design-space studies that more quickly explore the possibilities to find the most effective motor solution at the vehicle level. Check out Part 2 of this blog series, where we discuss further integration possibilities between GT and JMAG that move beyond map-based models into more predictive capabilities.
Written By: Jon Zeman and Yusaku Suzuki
How to Reduce Battery Charging Time While Maximizing Battery Life
In the age of electrification, promising technologies like battery electric vehicles and electric aircraft are coming to the forefront of societal advancement. However, one major hurdle for electrification is speeding up battery charging. Refueling conventional vehicles and aircraft generally takes minutes, whereas charging electric vehicles and aircraft can take hours. For airlines, this downtime can be very expensive, and long charging times have been shown to reduce the likelihood of consumers purchasing pure electric vehicles.
To combat this, battery engineers are exploring options to reduce the amount of time required to charge a battery. Unfortunately, fast charging Li-ion batteries can cause premature battery degradation by initiating lithium plating, so an aging cost for fast charging must also be considered. Because of this, the system manufacturer (e.g., an automotive, aircraft, power tool, or consumer electronics OEM) has to strike a balance between reducing the time required to charge a battery and preserving its expected life (which can have a great effect on brand perception).
There are various charging protocols that both improve battery life and shorten charging time compared to traditional protocols. The effects of these protocols vary from cell to cell and need to be tested individually to fully understand their impact on charging time and degradation rate.
Testing and optimizing charging protocols is extremely resource intensive because it intentionally degrades lithium-ion cells, which can be expensive and time consuming. GT-AutoLion helps reduce this cost by supplementing experimental tests with virtual tests.
What is Lithium Plating?
One of the key contributors to battery degradation under fast charging is lithium plating: the reduction of lithium ions into solid lithium metal. It occurs when the anode potential falls below zero volts and cycling lithium ions (Li⁺) react with electrons (e⁻) to form lithium metal (Li⁺ + e⁻ → Li). This lithium metal is deposited in the anode, lowering the anode’s porosity. Because lithium ions are consumed in this reaction, the capacity of the cell decreases; because the porosity of the anode is reduced, the resistance of the cell increases.
Lithium plating occurs most frequently when Li-ion cells are charged with very high currents, especially at low temperatures.
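A minimal sketch of the plating criterion described above: if the anode potential (vs. Li/Li⁺) is pulled to zero volts or below under the charging current, plating becomes favorable. The simple ohmic potential model and the example numbers are illustrative assumptions, not GT-AutoLion's electrochemistry.

```python
# Illustrative check for lithium plating risk during a charge step.
def plating_risk(anode_ocp_v, charge_current_a, anode_resistance_ohm):
    """Anode potential vs. Li/Li+ under load; plating becomes favorable at or below 0 V."""
    anode_potential = anode_ocp_v - charge_current_a * anode_resistance_ohm
    return anode_potential <= 0.0

# Example (assumed values): a graphite anode near full charge (low open-circuit potential)
# under a high charge current and a large cold-temperature resistance is flagged as a risk.
print(plating_risk(anode_ocp_v=0.08, charge_current_a=10.0, anode_resistance_ohm=0.01))  # True
```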
Figure 1, taken from a paper using GT-AutoLion, shows how GT-AutoLion can be used to match experimental data of capacity fade with its built-in model for lithium plating. With this model, GT-AutoLion allows engineers to virtually test various charging strategies and their effect on both charging time and cell degradation.
Example Charging Protocols
A charging protocol is an algorithm which defines the charging methodology of a cell. Each charging protocol has different implementation costs and unique implications on charging time and cell degradation. Figure 2 summarizes three of the most common charging protocols.
The most common charging protocol is a constant-current-constant-voltage (CCCV) charge. During a CCCV charge, the cell is charged with a constant current until a set maximum voltage is reached. The voltage is then held at that maximum while the charging current tapers off, as shown in Figure 2 (left). A CCCV protocol is considered the simplest, safest, and most widely used protocol to implement.
In boost charging (BC), the cell is first charged with a constant boost current that is significantly higher than the subsequent constant-current charge, and the charge is finished with a constant-voltage hold. The BC protocol is shown in Figure 2 (middle). Implementing a BC protocol can decrease charging time without necessarily sacrificing cycle life.
Pulse charging (PC) is another protocol that can be used. During PC, the current alternates between a high and a low value while the voltage rises until an upper cutoff voltage is reached, as shown in Figure 2 (right). Pulse charging can reduce diffusion-related resistance, which shortens charging time, slows aging, and improves the cycle life of a cell.
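To illustrate the simplest of these protocols, the sketch below steps a crude first-order cell model (a linear open-circuit voltage plus an ohmic resistance, both assumed values) through a CCCV charge: constant current until the voltage limit is reached, then a constant-voltage hold while the current tapers to a cutoff.

```python
# Illustrative CCCV charge of a simple first-order cell model (OCV + ohmic resistance).
def cccv_charge(capacity_ah=5.0, i_cc=5.0, v_max=4.2, i_cutoff=0.25,
                r_ohm=0.02, dt_s=1.0):
    soc, t = 0.1, 0.0
    def ocv(s):                       # crude linear open-circuit-voltage assumption
        return 3.5 + 0.7 * s
    # Constant-current phase: charge at i_cc until the terminal voltage reaches v_max
    while ocv(soc) + i_cc * r_ohm < v_max and soc < 1.0:
        soc += i_cc * dt_s / (capacity_ah * 3600.0)
        t += dt_s
    # Constant-voltage phase: hold v_max while the current tapers toward the cutoff
    i = i_cc
    while i > i_cutoff and soc < 1.0:
        i = (v_max - ocv(soc)) / r_ohm
        soc += max(i, 0.0) * dt_s / (capacity_ah * 3600.0)
        t += dt_s
    return t / 60.0, soc              # charge time [min] and final state of charge
```

Boost and pulse charging can be sketched in the same way by replacing the constant-current phase with a higher initial boost current or with alternating high/low current pulses.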
Fast Charge Strategy Development in Real-World Aging Simulation
While various charging patterns can be studied experimentally, these experimental tests are often not representative of the real use case a battery will see in a vehicle, aircraft, power tool, or consumer electronic device. As presented in a previous blog, GT-AutoLion and GT-SUITE can be used together to predict how a Li-ion battery will degrade over time under any use case, including various load profiles, drive cycles, and weather conditions. These analyses can also be extended to test the effect of the charging protocol on real-world charging time and battery degradation.
Conclusion
With GT-AutoLion and GT-SUITE, system manufacturers can better understand the tradeoff between reducing the time required to charge a battery and maximizing its life. Understanding this tradeoff is imperative because it has a profound effect on customer satisfaction and brand perception.
Written By: Vivek Pisharodi