At Gamma Technologies, the goal of our battery simulation solutions, GT-SUITE and GT-AutoLion, is to provide accurate, high-fidelity battery simulation capabilities for reliable prediction of real-world performance. In this blog, I investigate how battery simulation runtime can be reduced when running hundreds of designs in an optimization using distributed computing. Depending on the modeling requirements, some optimizations benefit greatly from scaling the simulation runs across a high-performance computing (HPC) cluster to accelerate turnaround time or greatly expand the design space.
With GT-AutoLion, you can use our unique, fully physical, pseudo-two-dimensional (P2D) models to predict the performance of actual cells. Several outputs can be analyzed, such as voltage, temperature rise, current, power, and other metrics.
Additionally, GT-SUITE and GT-AutoLion can create physics-based models to help predict the aging of a cell over time or over a certain number of cycles. That cell can then be placed in a system-level simulation for more meaningful aging predictions: GT battery simulations can provide insights such as the range of an electric vehicle or the number of years of operation for a power tool.
Simulating Cell Performance and Cell Aging with Electrochemical Models
In an electrochemical model, parameters such as cell dimensions, cell chemistry, and various other material properties can be varied to match the experimental behavior of the cell.
With the use of GT-SUITE’s design optimizer, these parameters can be varied to calibrate model behavior by minimizing the error between experimental data and the simulated GT-AutoLion results.
To match constant-current discharge, calendar aging, or cycle aging data, for instance, we can iterate through hundreds of different designs on an HPC setup or through cloud computing.
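As a concrete (and heavily simplified) illustration of what such a calibration loop does, the Python sketch below fits two hypothetical parameters of a toy linear discharge-voltage model to made-up "experimental" points by grid search. It does not use GT-SUITE's actual optimizer or the P2D equations; the data points, the `simulate_voltage` surrogate, and the parameter names (`ocv`, `slope`) are all invented for illustration.

```python
import math

# Invented "experimental" (discharged capacity in Ah, voltage) points
experimental = [(0.0, 4.10), (1.0, 3.90), (2.0, 3.70), (3.0, 3.50)]

def simulate_voltage(capacity_ah, ocv, slope):
    """Toy stand-in for a cell simulation: linear voltage fade with capacity."""
    return ocv - slope * capacity_ah

def rmse(ocv, slope):
    """Root-mean-square error between measured and simulated voltages."""
    errs = [(v - simulate_voltage(c, ocv, slope)) ** 2 for c, v in experimental]
    return math.sqrt(sum(errs) / len(errs))

# Crude grid search over the two free parameters, mimicking (at tiny scale)
# what a design optimizer does with hundreds of candidate designs.
best = min(
    ((ocv / 100, slope / 100) for ocv in range(400, 421) for slope in range(10, 31)),
    key=lambda p: rmse(*p),
)
print(best)
```

In a real calibration, each candidate design is a full cell simulation rather than a one-line formula, which is exactly why distributing hundreds of such evaluations pays off.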
The images below show the optimized GT-AutoLion results for constant-current discharge voltage curves, calendar aging data, and cycle aging data.
Leveraging the Cloud
With distributed computing, we can take a model that would normally be run locally on one machine and send it to a cluster that uses multiple cores. All that is required is additional solver licenses to increase the number of concurrent jobs. If a cluster is not readily available on-site, it is possible to access the cluster of a regional partner.
Likewise, a cloud server can be used to speed up simulations: you can run long simulations or models that require high computing power. Compute hours can be purchased from providers such as AWS, Google Cloud, and Microsoft Azure to enable distributed computing.
The figures below demonstrate these typical use cases.
Faster Runtimes of Complex Battery Simulation Models
One example of an intricate model is a performance calibration exercise (included in the GT software installation). In this model, 600 designs are run using the design optimizer in GT-SUITE. The designs vary factors including the heat transfer coefficient; the thicknesses of the cathode, anode, and separator; and the particle sizes of the active materials, across 4 cases of varying constant-current discharges. For more information on why these factors were selected, and to see this 600-design example model, GT’s own Ryan Dudgeon has written a blog post that explains more.
A single design, with all four cases run locally on a standard work machine (with one logical core active), takes about 10 seconds to finish. During optimization, where we run 600 of these designs, the total runtime grows to roughly an hour.
However, with the aid of a distributed cluster, we can run 5 designs in parallel on 5 solvers to cut the runtime by 14 minutes. Cloud computing lets us go even faster, since its resources are fully elastic and allow for virtually unlimited parallelization: the same 5 parallel designs can be run in about half the time, saving 29 minutes. Increasing the number of parallel designs to 25 with cloud computing decreased the runtime to just over 15 minutes, a 46-minute time savings!
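The arithmetic behind these batch runtimes can be sketched with a simple, idealized model: total wall time is the number of sequential batches times the per-design runtime. The helper below is hypothetical and deliberately ignores job submission, licensing, and queueing overhead, which is why real measured runtimes (like the ones above) come out longer than the ideal figures it produces.

```python
import math

def wall_time_minutes(n_designs, seconds_per_design, n_parallel):
    """Idealized wall time: designs run in equal-sized parallel batches.
    Ignores scheduling, licensing, and data-transfer overhead."""
    batches = math.ceil(n_designs / n_parallel)
    return batches * seconds_per_design / 60

# 600 designs at ~10 s each, run serially vs. 25-wide in parallel
print(wall_time_minutes(600, 10, 1))   # 100.0 -> "roughly an hour" serially
print(wall_time_minutes(600, 10, 25))  # 4.0 ideal; measured closer to ~15 min with overhead
```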
It’s also interesting to look at an application where the speed increase brings more value, such as aging calibration models. The aging of a cell can be modeled as either calendar aging or cycle aging. For more information on how simulation can be used to predict aging, refer to this blog by my colleague Joe Wimmer.
Calendar aging involves storing a cell and measuring the loss in capacity over time. Running an optimization to calibrate a cell’s calendar aging can take quite a bit of time if we are simulating months or years of aging, or aging at several different temperatures.
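One widely used empirical description of calendar fade (often attributed to SEI growth) is a capacity loss proportional to the square root of storage time. The sketch below fits the fade coefficient of that simple law to invented retention data in closed form; it is only an illustration of what a calendar aging calibration targets, not GT-AutoLion's physics-based aging model, and the data values are made up.

```python
import math

# Invented capacity-retention measurements over storage time (days)
data = [(90, 0.9700), (180, 0.9576), (360, 0.9400)]

def retention(t_days, k):
    """Empirical calendar-fade law: retention = 1 - k * sqrt(t)."""
    return 1.0 - k * math.sqrt(t_days)

# Least-squares estimate of k in closed form:
# minimize sum((1 - k*sqrt(t) - r)^2)  =>  k = sum(sqrt(t)*(1-r)) / sum(t)
k = sum(math.sqrt(t) * (1.0 - r) for t, r in data) / sum(t for t, _ in data)
print(round(k, 5))
```

A physics-based calibration varies many more parameters than a single coefficient, but the objective is the same: make the simulated retention curve track the measured one.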
The calendar aging calibration model I explored ran for 360 days at 2 different temperatures. As shown in the table below, distributed computing can reduce the simulation runtime of this model by nearly two and a half hours!
Another typical example is cycle aging calibration. Our cycle aging calibration example model undergoes a constant-current, constant-voltage (CCCV) charge after each constant-current discharge.
This cycling protocol is repeated 1000 times at 3 different temperatures. Of course, we are not limited to CCCV charging: several other charging profiles, such as boost charging and pulse charging, can also be implemented, as mentioned in this blog. Because a varying load is applied to the cell, we cannot run the model with the large timesteps used in the calendar aging model, which has no load applied to the cell.
Running all 3 cases (a total of 3,000 cycles across the 3 temperatures) takes roughly 16 and a half minutes per design. Because of this, running the whole 600-design optimization locally takes nearly a full week! However, with the help of distributed computing, we were able to reduce the total runtime by a whole business week!
Learn More About Our Battery Simulation Solutions
These models can all be found in the GT installation as part of our GT-AutoLion calibration tutorials.