Tolerance Study of a Diesel Fuel Injector Model using Sensitivity Analysis and Variability Analysis in GT-SUITE
This example demonstrates a tolerance study using GT-SUITE. The application is a detailed diesel injector model, but the concepts and tools described here apply to tolerance studies of any other model or engineering domain. It demonstrates ranking of influential factors, identification and removal of negligible factors, evaluation of probability distributions of output variables, and execution of what-if studies to predict improvements to the output variable distribution. These objectives are accomplished by running a couple of different Designs of Experiments (DOEs) and performing various analyses in GT-SUITE’s DOE post-processor (DOE-POST).
Problem Description
We consider a detailed diesel injector model where the model map layout and component schematic is shown here:
Although the application details are not the primary focus of this writing, more details about diesel fuel injectors are included here for interested readers. In a diesel injector, the fuel flow to the cylinder is governed by a large number of parameters. In this type of injector, a solenoid actuates the control valve, which connects the high- and low-pressure sides of the fuel system. The resulting flow passes through the control chamber, whose inlet and outlet orifice diameters significantly influence the static pressure drop. Because the injector’s needle, which is close to the cylinder, remains subject to higher pressure, an upward motion of the control rod and needle releases fuel to the cylinder. The transient mass flow, or rate shape, is an important characteristic for controlling the quality of the combustion process and depends on a large number of parameters. Aside from the actuation duration of the solenoid, mechanical parameters such as spring characteristics and clearances, as well as flow parameters such as restriction diameters at various locations, are important factors and can interact with each other. If the control piston and needle never reach their designed upper stop, the injector is said to be operated in the ballistic regime. Because of the floating nature of the needle, this regime is most sensitive to manufacturing tolerances.
For this study 11 known sources of variability are investigated. A single operating point is studied at 1300 bar rail pressure and 0.5 ms energizing time, which puts the injectors in the ballistic regime.
The 11 input variables, or factors, are listed in the table below, along with their known tolerances. It is assumed that all variations conform to normal distributions such that 95% of the values fall within the tolerances listed, and therefore the standard deviation is half of the tolerance value. For example, the standard deviation for peak current duration is 0.0025 ms, such that the real operating duration is between 0.095 and 0.105 ms 95% of the time (+/- 2 standard deviations).
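The tolerance-to-sigma conversion can be sketched as follows; the factor names and nominal values below are illustrative stand-ins, not the study's full table.

```python
# Convert +/- tolerances to normal-distribution standard deviations, assuming
# 95% of values (about +/- 2 sigma) fall within the tolerance band.
factors = {
    # name: (nominal, tolerance) -- values here are illustrative
    "PeakCurrentDuration_ms": (0.100, 0.005),
    "OutletOrificeDiameter_mm": (0.250, 0.010),  # hypothetical value
}

for name, (nominal, tol) in factors.items():
    sigma = tol / 2.0                            # 2 sigma ~ 95% coverage
    lo, hi = nominal - 2 * sigma, nominal + 2 * sigma
    print(f"{name}: sigma={sigma}, 95% range=({lo:.4f}, {hi:.4f})")
```

For the peak current duration this reproduces the 0.095 to 0.105 ms range quoted above.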
The main output, or response, of interest is the injected fuel mass. Another key output is the cumulative flow through the control valve for each injection event, which represents fuel mass for which energy is expended to push through the control valve, but which doesn’t get injected. Ideally this mass would be minimized. The objectives of this study are the following:
Use factor screening to rank the 11 factors with respect to their influence on the two responses.
In accordance with the first objective, identify negligible factors that can be omitted from further analysis.
Evaluate the expected probability distributions of the two responses caused by the variation in the factors.
Predict the improvement in the injected mass probability distribution if the two most influential factors can be adjusted.
The third objective requires evaluating the model with Monte Carlo sampling, where the factor values are sampled from their normal distributions. The second objective is important for achieving the third, because as the number of factors increases, the number of Monte Carlo model evaluations will need to increase to obtain confidence in the results. If the number of factors can be decreased, the Monte Carlo model evaluations can either decrease or provide more resolution for the results.
Identifying Important and Unimportant Factors
We approach the first two objectives by performing the Morris Method on the model. The Morris Method is a global sensitivity analysis that utilizes one-at-a-time step changes in factor values. It calculates k different elementary effects in different areas of the factor domain, where an elementary effect, EE, is the change in response, R, that occurs with a step change in a factor, F.
It is computationally efficient, requiring only k(F+1) model evaluations, where F is the number of factors and k is typically 15-30. The results of the analysis are conveniently summarized in a single plot.
For a given factor, the mean and standard deviation of the multiple elementary effects are calculated and placed on a plot of “EE standard deviation” vs. “EE mean”, with one point per factor. Factors whose points lie near the origin (0, 0) have negligible effect on the response. Factors whose points have large “EE mean” have large main effect on the response, and factors whose points have large “EE standard deviation” contain higher-order effects or interaction effects. However, the Morris Method cannot distinguish between higher-order effects and interaction effects, and it cannot identify specific interaction terms.
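As a rough sketch of these statistics (not GT-SUITE's implementation), the elementary effects for a toy model can be computed and summarized like this; the model function and factor count are hypothetical:

```python
import random
import statistics

def model(x):
    # Hypothetical stand-in response; the real study evaluates the injector model
    return 3.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

def elementary_effects(model, n_factors, k, delta=0.1, seed=0):
    """k repetitions of one-at-a-time steps: k*(n_factors+1) model evaluations."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_factors)]
    for _ in range(k):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_factors)]
        r0 = model(base)
        for i in range(n_factors):          # step factor i, holding others fixed
            stepped = list(base)
            stepped[i] += delta
            effects[i].append((model(stepped) - r0) / delta)
    return effects

ee = elementary_effects(model, n_factors=3, k=20)
for i, e in enumerate(ee):
    # Large mean -> strong main effect; large stdev -> nonlinearity/interaction
    print(f"factor {i}: EE mean={statistics.mean(e):.3f}, "
          f"EE stdev={statistics.stdev(e):.3f}")
```

Factor 0's elementary effects are constant (a purely linear effect), while factor 1's vary with location in the domain; this is exactly the separation that the mean/standard-deviation plot conveys.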
For this study, k was chosen to be 20, resulting in 20(11+1) = 240 model evaluations, and the lower and upper values in the table were used as variable bounds. The Morris sampling was configured in the DOE Setup (within Case Setup) as shown here:
The Morris sampling is a reduced version of a 4-level full-factorial scheme which includes interior points, not just the lower and upper bounds of the factor ranges. The parallel coordinates plot shown below, which is available in DOE-POST’s Select Experiments page, also illustrates the spread of the sampling for the 11 factors. In contrast, a 2-level full-factorial sampling scheme would result in 2^11 = 2048 points, but would not contain any interior points, and a 3-level full-factorial sampling scheme would result in 3^11 = 177,147 points. As a result, Morris sampling has a clear computational advantage over 2-level and 3-level full-factorial methods.
After running the DOE simulations, the resulting Morris Method plots are provided in the Analyze Experiments page of the DOE post-processor and are shown below. The analysis uses standardization and normalization such that the maximum allowed mean and standard deviation are 1.0. With the y-axis scaled to a maximum of 1.0, the first observation from these plots is that the standard deviation is very low and negligible for all factors, meaning there are no higher-order or interaction effects within the relatively small ranges over which the factors were varied.
For easier readability, the y-axis is scaled to better show the individual points.
The plots show that, according to x-axis magnitude which represents the first-order effect, the OutletOrificeDiameter has the largest effect on the injected mass, followed by InletOrificeDiameter and CtrlValvePreload. For the control valve mass loss response, the OutletOrificeDiameter has the largest effect, followed by CtrlValvePreload, CtrlValveLift, and SolenoidCrossSectionalArea.
The full set of tabulated Morris results are provided in this table.
The Morris results are most useful for determining which factors can be excluded from further analysis. One might apply a threshold of 0.05 or 0.1 to the mean and standard deviations of the elementary effects to determine which factors are worth keeping. A threshold of 0.05 will be used for this example. The goal is to exclude factors which have EE Mean less than 0.05 for both responses. Those consist of CtrlValveStiffness, NozzleDiam, and NeedleSpringStiffness. The example will proceed with the remaining 8 factors.
Predict the Probability Distributions of the Responses
To determine the probability distributions of the responses, a new Monte Carlo DOE with 2000 experiments is configured in DOE Setup in GT-ISE. This screenshot shows the configuration which applies normal distributions to the 8 factors using the syntax normdist(mean, standard deviation).
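Conceptually, the normdist(mean, standard deviation) entries amount to drawing each factor independently from its own normal distribution for every experiment; a minimal sketch with hypothetical means and sigmas:

```python
import random

# Sketch of Monte Carlo DOE sampling; the means and sigmas below are
# illustrative stand-ins, not the values from the study's tolerance table.
random.seed(1)
factor_dists = {
    "InletOrificeDiameter_mm":  (0.200, 0.0025),   # hypothetical mean, sigma
    "OutletOrificeDiameter_mm": (0.250, 0.0025),   # hypothetical mean, sigma
}

n_experiments = 2000
experiments = [
    {name: random.gauss(mu, sigma) for name, (mu, sigma) in factor_dists.items()}
    for _ in range(n_experiments)
]
print(len(experiments), experiments[0])
```

Each of the 2000 dictionaries is one DOE experiment, i.e. one full set of factor values to be run through the model.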
After running the DOE simulations, the Variability Analysis page of the DOE post-processor is used to analyze the results. The resulting probability distributions for the two outputs of interest are shown below, where they appear to conform to normal distributions. For the injected mass, 90% of the values fall between 10.85 and 14.24 mg, representing a relative spread of 3.4/12.6 = 27%, where 12.6 mg is the baseline injected mass. Relatively little variation is present in the control valve mass loss, where 90% of the values fall between 6.0 and 6.3 mg, a relative spread of 0.3/6.14 = 4.9%. As a result, the injected mass will be the focus for the remainder of this study.
The relatively large variation in injected mass is typical of an injector in the ballistic phase, where the needle is not at the upper stop. Injectors in the ballistic phase tend to be dynamic and unstable, as minimal force differences can cause substantial variation in injected mass. This behavior often causes difficulty when calibrating an injector model to measurement data, because slight variations in input variables can cause a wide range of injected mass values, misleading the modeler into thinking that something is wrong with either the model or the measurement.
The modeler or design engineer might want or need to determine some worst-case scenarios for the injector. For example, it’s clear that the injected mass profile has a bell-shaped probability distribution that makes it possible for some injectors to inject as little as 8.5 mg or as much as 16.5 mg at the operating point of interest, even though none of the 2000 experiments resulted in these two values. This injector might be installed in tens of thousands of vehicles, and each vehicle would have multiple injectors, one for each cylinder. As a result, hundreds of thousands of this injector might be produced, making it likely for these more extreme injected mass values to occur.
To explore these worst-case scenarios, it is necessary to fit the injected mass data to an ideal distribution. This utility is included in the Variability Analysis page of the DOE post-processor. By enabling this feature, a distribution line representing the best fit is overlayed with the distribution plot, and a table appears that shows the metrics of different distributions. Changing the checkbox selection updates the plot so that the user can see how each distribution appears to fit the data. A few different distribution selections are shown below. The lower the “Error” metric in the table, the better the fit. The table also provides the parameters necessary to reproduce the distribution in the right-most column. The normal and log-normal distributions clearly fit the data best, but there is very little visual distinction between the two. Because of its lower error, the normal distribution will be used.
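The fit comparison can be sketched by fitting each candidate distribution by maximum likelihood and comparing log-likelihoods; DOE-POST's "Error" metric may be defined differently, and the data below is a synthetic stand-in for the injected mass samples.

```python
import math, random, statistics

# Synthetic stand-in data for the injected mass samples
random.seed(2)
data = [random.gauss(12.6, 1.0) for _ in range(2000)]

def normal_loglik(x, mu, sd):
    return sum(-0.5 * math.log(2 * math.pi * sd**2)
               - (v - mu) ** 2 / (2 * sd**2) for v in x)

# Maximum-likelihood normal fit
mu, sd = statistics.mean(data), statistics.pstdev(data)
# Maximum-likelihood lognormal fit: normal fit on the log of the data
ln = [math.log(v) for v in data]
lmu, lsd = statistics.mean(ln), statistics.pstdev(ln)

ll_normal = normal_loglik(data, mu, sd)
# Lognormal log-likelihood = normal log-likelihood of log-data minus sum(log x)
ll_lognormal = normal_loglik(ln, lmu, lsd) - sum(ln)
print(f"log-likelihood: normal {ll_normal:.1f}, lognormal {ll_lognormal:.1f}")
```

The higher log-likelihood (equivalently, the lower error) indicates the better fit; for data like this, with a small spread relative to the mean, the two candidates are nearly indistinguishable, matching the visual observation above.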
With the desired distribution selected, a calculator utility can be opened to determine worst-case scenarios. For example, the probability of the injected mass being as little as 8.5 mg is 0.000040, meaning it is likely to occur 40 times out of a million. The probability of the injected mass being as high as 16.5 mg is 1 – 0.999939 = 0.000061, meaning it is likely to occur 61 times out of a million.
Alternatively, the calculator utility can be used to enter cumulative probability values to calculate corresponding response values. For example, it might be desirable to determine the response values for the 0.1% and 99.9% cumulative probabilities. These are 9.4 and 15.7 mg, respectively.
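These probabilities can be reproduced approximately from the fitted normal distribution using the standard normal CDF; the mean and sigma below are inferred from the reported 90% interval (10.85 to 14.24 mg), so treat them as approximations.

```python
import math

# Approximate normal parameters inferred from the reported 90% interval,
# which spans +/- 1.645 standard deviations about the mean
mu = (10.85 + 14.24) / 2                 # ~12.55 mg
sigma = (14.24 - 10.85) / (2 * 1.645)    # ~1.03 mg

def norm_cdf(x, mu, sigma):
    """Cumulative probability of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p_low = norm_cdf(8.5, mu, sigma)          # P(mass <= 8.5 mg)
p_high = 1.0 - norm_cdf(16.5, mu, sigma)  # P(mass >= 16.5 mg)
print(f"P(<=8.5 mg) ~ {p_low:.6f}, P(>=16.5 mg) ~ {p_high:.6f}")
```

With these inferred parameters the results land close to the reported values of 0.000040 and 0.000061.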
Given the wide range of injected mass values that can be expected at this operating point, it might be necessary to ensure a narrower range of values in the context of injector design. For demonstration, we’ll assume that lower injected mass values are more problematic and that 99% of the time, at least 11 mg of fuel needs to be delivered. In the Variability Analysis page, the injected mass slider bars serve as specification limits. The lower bar is positioned at 11 mg, and the upper-right corner of the page displays the percentage of experiments that are out-of-spec according to these specification limits. 6.6% of the experiments yield injected mass less than 11 mg. The right-side table also provides process capability metrics, and the common metric Cpk is 0.57.
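With only a lower specification limit active, Cpk reduces to (mean − LSL)/(3·sigma). The sketch below uses synthetic stand-in samples, so it will not exactly reproduce the reported 0.57, which comes from the real DOE data.

```python
import random
import statistics

# Synthetic stand-in for the 2000 injected mass samples
random.seed(3)
injected_mass = [random.gauss(12.55, 1.03) for _ in range(2000)]

LSL = 11.0                                # lower specification limit, mg
mu = statistics.mean(injected_mass)
sd = statistics.stdev(injected_mass)
cpk = (mu - LSL) / (3 * sd)               # one-sided: only the lower limit applies
frac_out = sum(v < LSL for v in injected_mass) / len(injected_mass)
print(f"Cpk = {cpk:.2f}, out-of-spec fraction = {frac_out:.1%}")
```

A Cpk below 1.0 signals that a non-negligible share of production falls outside the specification, consistent with the 6.6% out-of-spec figure above.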
Predicting Improvements to the Response Distribution
The Variability Analysis page within the DOE post-processor serves as a powerful tool for performing fast what-if studies to predict changes in response distributions according to changes in the factor variances. Before experimenting with the factor distributions, one should understand which factors to focus on. We have already seen from the Main Effects Ranking plot that the OutletOrificeDiameter, InletOrificeDiameter, and CtrlValvePreload have the largest first-order effects on the injected mass, so these three factors would be the most appropriate for adjusting the response distribution. These Main Effects rankings reflect all the data, or in other words the entire factor domain. However, in some situations a different factor can have a larger effect in a smaller, specific region of the response. This is worth checking, since we are particularly interested in determining which factors mainly affect injected mass at its lower values.
To accomplish that task, which is known as Monte Carlo Filtering, we view the factor CDF plots in the Variability Analysis page. These plots are provided below, where the in-spec and out-of-spec CDFs are provided for each factor. In summary, they show that higher values of InletOrificeDiameter and lower values of OutletOrificeDiameter cause the most significant discrepancy between the injected mass being in-spec and out-of-spec. SolenoidCrossSectionalArea and CtrlValvePreload also have a statistically significant effect on this discrepancy, but we will focus on the two diameters since their effects are so much larger.
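The discrepancy between the in-spec and out-of-spec CDFs of a factor can be quantified with a two-sample Kolmogorov-Smirnov statistic; this sketch uses a synthetic factor and a hypothetical linear response in place of the real DOE results.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum empirical CDF separation."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(4)
experiments = []
for _ in range(2000):
    inlet = random.gauss(0.200, 0.0025)
    # Hypothetical response: a larger inlet diameter lowers the injected mass
    mass = 12.6 - 600.0 * (inlet - 0.200) + random.gauss(0.0, 0.8)
    experiments.append((inlet, mass))

in_spec  = [inlet for inlet, mass in experiments if mass >= 11.0]
out_spec = [inlet for inlet, mass in experiments if mass < 11.0]
ks = ks_statistic(in_spec, out_spec)
print(f"CDF separation for the inlet diameter: {ks:.2f}")
```

A factor whose in-spec and out-of-spec CDFs separate strongly, as here, is one whose value largely decides whether an experiment violates the specification.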
Because higher values of InletOrificeDiameter cause the injected mass to violate the specification limit, it is necessary to adjust the mean, or nominal, value; improving its variance alone will not be sufficient. The same is observed for OutletOrificeDiameter. To test the effect of adjusting their means, we run a new Monte Carlo analysis, but we can do so directly within the DOE post-processor; it is not necessary to run additional DOE experiments from GT-ISE. To run them from the DOE post-processor, a metamodel is needed. It has already been shown that a linear metamodel without interaction effects can be used for the range of the factors being studied. The regression plot showing how well the metamodel’s predicted points align with the original data points is shown here.
After creating a linear metamodel in the Create Metamodels page, a new Monte Carlo experiment set is configured in the Variability Analysis page, where we experiment with adjusting these two diameters by 5 microns. In addition, we use 5000 Monte Carlo experiments for each of these case studies, since the metamodel evaluations are very fast and practically free. We test three modifications to the input variables:
Decreasing the InletOrificeDiameter mean by 0.005 mm.
Increasing the OutletOrificeDiameter mean by 0.005 mm.
Making modifications 1 and 2 simultaneously.
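The mechanics of these metamodel-based what-if runs can be sketched as follows; the linear coefficients are hypothetical placeholders for the fitted metamodel, and only the combined shift (modifications 1 and 2 together) is shown alongside the baseline.

```python
import random

random.seed(5)

def metamodel(inlet, outlet):
    # Hypothetical linear metamodel: injected mass in mg (not the GT-SUITE fit)
    return 12.6 - 600.0 * (inlet - 0.200) + 500.0 * (outlet - 0.250)

def prob_below_spec(inlet_mu, outlet_mu, n=5000, spec=11.0):
    """Monte Carlo estimate of P(injected mass < spec) from the metamodel."""
    count = 0
    for _ in range(n):
        inlet = random.gauss(inlet_mu, 0.0025)
        outlet = random.gauss(outlet_mu, 0.0025)
        count += metamodel(inlet, outlet) < spec
    return count / n

baseline = prob_below_spec(0.200, 0.250)
combined = prob_below_spec(0.200 - 0.005, 0.250 + 0.005)  # both mean shifts
print(f"baseline P(<11 mg) = {baseline:.3f}, combined shift = {combined:.3f}")
```

Because only means are shifted while the sigmas stay fixed, the whole response distribution slides away from the 11 mg limit, which is what each of the three tests evaluates.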
Test 1
Test 2
Test 3
The plots below compare the original injected mass distribution with those of the three tests, and the table below the plots summarizes the effects of the modifications. Making a modification to either the InletOrificeDiameter or OutletOrificeDiameter is enough to decrease the probability of having the injected mass be less than 11 mg. Using these results, the design engineer can assess making one of these changes in the product.
Conclusions
This study demonstrated using sensitivity analysis and variability analysis to analyze and improve an injector design in GT-SUITE. A sensitivity analysis utilizing the Morris method identified the most influential factors on the two responses of interest, and three factors were found to be negligibly important for both responses and were therefore omitted from further analysis. Then normal distributions were applied to the remaining eight factors using a 2000-point Monte Carlo DOE. After running the simulations, the results were analyzed in DOE-POST, where the response distributions were plotted and observed. The response data was fit to a normal distribution so that worst-case scenarios could be calculated based on cumulative probabilities. Finally, a regional sensitivity analysis was conducted to determine how best to avoid injected mass values at the lower end of the distribution, and three fast what-if studies were conducted to predict improvements in the injected mass distribution.
Robust Battery Pack Simulation by Statistical Variation Analysis
Simulating Battery Packs – Not All Cells are the Same
When simulating a large battery module, we typically assume that all the cells in the module are the same. However, that is not always the case. Factors such as the capacities and resistances of the cells can vary from cell to cell. This brings up the question: how can we model the variance of different cells within the module?
In v2021 of GT-SUITE, we added new features to help model this cell-to-cell difference. One of them is a statistical variance analysis tool.
This tool opens a wizard that walks users through selecting an object and specifying the mean and standard deviation for the attribute to be varied. It then creates unique parameters for each part associated with the object. Put simply, part overrides are used to define a different capacity and resistance for each cell in a battery module.
Another new feature is in our Design of Experiments setup. The Monte Carlo method has been added as a new DOE distribution method in DOE Setup. This allows users to vary any parameter according to a normal distribution with a specified mean and standard deviation.
Let’s take a look at this example below.
In this example, we have a battery module with 444 cylindrical cells: 74 in series and 6 in parallel. We were told that the cell capacity was 3.2 Ah. Additionally, we were given distributions of the capacity and the resistance of 200 cells of the same type. The distributions were normal, and their means and standard deviations were provided.
With the statistical variance tool, we can add unique parameters for the capacity and resistance multiplier of each of the 444 cells using the simple wizard. Once that is done, we can open DOE Setup and select how many experiments to run to see how the distributions of capacity and resistance affect the module. GT’s Monte Carlo solution enables normal distributions to be set up for the capacity and resistance multiplier parameters.
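Conceptually, the wizard's output is equivalent to assigning each of the 444 cells its own sampled multipliers; the sigma values below are illustrative assumptions, not the measured distributions from the 200 reference cells.

```python
import random

# Assign a unique capacity and resistance multiplier to every cell in the
# 74s6p module, drawn from normal distributions (illustrative sigmas).
random.seed(6)
N_SERIES, N_PARALLEL = 74, 6
n_cells = N_SERIES * N_PARALLEL            # 444 cells

cell_params = {
    f"cell_{i:03d}": {
        "capacity_mult":   random.gauss(1.0, 0.01),   # assumed 1% sigma
        "resistance_mult": random.gauss(1.0, 0.02),   # assumed 2% sigma
    }
    for i in range(n_cells)
}
print(n_cells, cell_params["cell_000"])
```

Each dictionary entry plays the role of one part-override parameter pair in the module model.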
After running the model, we can look at the responses for each individual cell in the module, including the state of charge and total energy dissipated. The boundary conditions for this model include discharging at a 2C rate and using a simple thermal model with a 1-D convective boundary condition at an ambient temperature of 25 °C and a convective heat transfer coefficient of 10 W/m²K. The total energy dissipated varies by around 3% and the SOC varies by around 1% among the cells in the module.
Though these results may seem small, depending on the ambient temperature, discharging/charging protocols, or other factors, they can have a great effect on the battery module. In some instances, one section of the battery module may heat much faster than another, meaning that some cells might need to be replaced sooner or be designed to allow for more cooling compared to other areas of the module.
With these new tools and with GT-AutoLion, users can take large battery modules like these, analyze the responses of each individual cell, and extrapolate to various C-rates and temperatures with physics-based modeling, allowing the user to gain more insight into their battery module, evaluate battery degradation, and improve their BMS in the process.
Gerotor Pump Model Optimization via Process Integration of GT-SUITE and Gerotor Design Studio
Overview
GT-SUITE gerotor pump models are powerful design tools that provide accurate predictions of flow and thermal behavior while also being computationally fast. Necessary inputs to these models are the gerotor profiles for volume, surface areas, clearance gaps, and pressure force areas. Gerotor Design Studio (GDS) is a unique CAD software solution for designing, analyzing, and manufacturing gerotor pumping elements. More specifically, GDS can take detailed gerotor pump geometry and calculate the gerotor profiles required by GT-SUITE. The partnership between GT-SUITE and Gerotor Design Studio provides sophisticated, accurate, and fast pump modeling capabilities to pump design engineers.
This post provides some background on gerotor pump models in GT-SUITE and GDS and details how the two software tools can complement each other through process integration for powerful optimization runs.
GT-SUITE Gerotor Pump Models
GT-SUITE is an ideal tool for studying the detailed flow and thermodynamic behavior of a variety of pumps and compressors, including gerotor, vane, external gear, reciprocating, swashplate, scroll, screw, and others. These models can handle either positive or variable displacement, and key output predictions include:
Performance vs. speed, pressure rise, and temperature
Flow and pressure pulsations, and cavitation behavior
NVH issues, resonance, and fluid borne noise
Dedicated friction model for vane and gerotor machines
Interaction with relief valve dynamics
Bearing loading caused by internal pressurization
More specifically, gerotor pump models can be created directly from CAD using the GEM3D application, where four basic shapes are specified to define the inner gear, outer gear, inlet volume, and outlet volume. With this basic template filled, the GEM3D model can be automatically exported to a .gtm model file in GT-ISE where all flow parts are connected and volume, port area profiles, and pressure force areas have been calculated.
Gerotor Design Studio Pump Models
The GDS software provides a user-interface for designing gerotor profiles, designing porting, specifying fluid and material properties, and visually inspecting the gerotor design. The software performs many detailed physics calculations to predict flows, powers, efficiencies, and pressure waves. It also has some powerful integration capabilities with GT-SUITE, where GDS can directly create GEM3D (.gem) models of the gerotor geometry while providing the gerotor input profiles to be used in a GT-SUITE pump model. In addition, a command line version of the software is available that does not require the graphical user interface, and this command line version makes it possible to integrate with GT-SUITE for model-based optimization runs.
Motivation for Integrating GT-SUITE and GDS
Each software tool has unique advantages that complement the other. More specifically, design changes made in GDS, such as detailed geometry inputs, will impact the angle-resolved gerotor profiles that the GT-SUITE pump model relies on. A pump design engineer may want to run optimization or Design of Experiments (DOE) studies to analyze the effects that input geometry variables entered in GDS have on important flow and thermal output variables that are predicted by GT-SUITE.
Integrating GT-SUITE and GDS for Optimization and DOE Studies
Optimization and DOE studies require an automated approach to integration which, for any combination of GDS input variables, will first run the GDS model to generate the gerotor pump profiles and then run a simulation of the GT-SUITE pump model configured to use those profiles.
The command line version of GDS becomes very beneficial for this type of automated approach. It relies on an input text file that provides all the geometry to a GDS gerotor model. Then an executable called GDS_GTS.exe can be run, which invokes the GDS solver to read the input file, perform its calculations, and generate the gerotor profiles. A section of a GDS input file is shown here.
Process integration refers to automated sequencing of a series of processes that might include running CAE simulations, executing scripts or executables, and managing input and output variables that might transfer between processes. GT-SUITE provides a process integration tool called Process Map which is part of the GT-Automation license. Process Map models are graphically constructed in GT-ISE and provide organization of parameters, configuration of multiple cases in Case Setup, activation of the GT optimizer, and creation of Designs of Experiments. The Process Map model for this integration is shown below and consists of five processes that are executed from left to right.
The first process handles transfer of Process Map parameter values to GDS, specifically by modifying the GDS input text file. This component appears as follows, where four GDS variables are designated for replacement using syntax that targets the specific locations of the text file. Four port variables are used in this example, and the parameters are used to define the replacement values in the Process Map model’s Case Setup.
The second process is a Python script that executes GDS_GTS.exe to perform the GDS calculations, generate the gerotor profiles, and generate .stp CAD files. It also performs additional tasks to check for GDS errors.
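A minimal sketch of such a script is shown below. The placeholder convention in the template file and the exact GDS_GTS.exe invocation are assumptions for illustration; the real integration substitutes values at specific locations of the GDS input text file.

```python
import subprocess
from pathlib import Path

def fill_template(text: str, params: dict) -> str:
    """Replace hypothetical <Name> placeholders with parameter values."""
    for name, value in params.items():
        text = text.replace(f"<{name}>", str(value))
    return text

def run_gds(params: dict,
            template: str = "gds_input_template.txt",
            input_file: str = "gds_input.txt",
            exe: str = "GDS_GTS.exe") -> str:
    """Write the GDS input file, run the solver, and surface any errors."""
    filled = fill_template(Path(template).read_text(), params)
    Path(input_file).write_text(filled)
    result = subprocess.run([exe, input_file], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"GDS solver failed:\n{result.stderr}")
    return result.stdout
```

Checking the return code and captured stderr is what lets the workflow detect GDS errors before the downstream GEM3D and GT-SUITE steps run on incomplete profiles.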
The third process takes a GEM3D (.gem) gerotor model, which is preconfigured to point to the .stp CAD files that GDS generates, and exports or discretizes it as a 1D GT model file. A screenshot of a GEM3D gerotor model is shown here.
The GT model exported from GEM3D happens to have a few missing inputs that need to be filled, so the fourth process is a Python script that utilizes the GT Python API to automatically fill those missing inputs. A preview of the Python script that fills some Case Setup parameter values is shown below.
Finally, the fifth process executes the simulation of the GT-SUITE gerotor model. The key results of the workflow are output variables (RLTs) from this GT simulation, such as mass flow rate, power consumption, and pressure wave amplitude. The entire workflow takes about 90 seconds to execute.
With the Process Map model configured, the optimizer can be enabled. This example focused on varying the four port variables previously mentioned. The optimization objectives were to simultaneously minimize the pressure wave amplitude and the power consumption, and a multi-objective Pareto approach was chosen to achieve these goals. The optimization search algorithm was a genetic algorithm, specifically NSGA-III. The following screenshots show a couple of aspects of the optimization setup.
The main result of the optimization is a Pareto plot which puts the two objectives on the two axes. The red points are non-optimal points, while the blue points are Pareto optimal. The Pareto points are considered to be equally optimal and represent the trade-off between achieving the two objectives; if lower pressure amplitude is desired, then an increase in power consumption must be accepted. The design engineer therefore must use additional criteria to choose a final design from among the available Pareto points.
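Pareto optimality here simply means that no other design is at least as good in both objectives; a minimal sketch with synthetic (amplitude, power) pairs:

```python
# Identify Pareto-optimal points for two objectives that are both minimized
# (e.g. pressure wave amplitude and power consumption); values are synthetic.
def pareto_front(points):
    """Return the points not dominated by any other point."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 5.0), (0.8, 6.0), (1.2, 4.5), (0.9, 5.5), (1.1, 5.2)]
print(sorted(pareto_front(designs)))
```

The dominated points correspond to the red markers on the Pareto plot, while the surviving points form the blue trade-off curve from which the final design is chosen.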
Gamma Technologies is committed to continued research, testing, and improvement of its optimization and data analysis capabilities to provide its users with increased confidence in using these tools for their own projects. For any questions, support, or further discussion, please reach out to us at [email protected].
Piston Group Dynamics: An Approach for Friction Reduction Simulation
Stricter demands on emissions norms, fuel economy, and performance require internal combustion engines to be optimized with respect to their frictional losses and wear. Various investigations show that the piston group, consisting of piston skirt and rings, cylinder liner, and conrod bearings, contributes a major share of the overall frictional losses. Although the ICE has existed for over a century and many improvements have been made, its efficiency can still be increased, and simulation tools like GT-SUITE have become a major contributor to achieving these targets. This blog illustrates in detail how a predictive friction model for the piston group assembly can be set up in GT-SUITE and how the results correlate with measurements. The built-in design optimizer and DOE (design of experiments) tool can be used to minimize frictional losses while ensuring lubrication of all involved parts.
The outlined approach enables not only solving common problems for conventional vehicles, such as improving warmup strategies to stay within emission limits and reducing fuel and oil consumption; it also captures newer effects introduced by the growing share of electrified powertrains. To name just one example, it can answer questions such as how measures like stop-start operation and pure electric driving over longer distances impact engine friction.
Model Setup
The GT-SUITE software package offers a CAD driven process to obtain a model based on the 3D-CAD geometry of the system. All geometric and mass properties will be automatically transferred to the model after assigning the corresponding material (Fig. 1).
Figure 1: Sliced crankshaft CAD model (left) and converted part of it in GT-ISE (right)
Based on this exported standard cranktrain model, which can be used for various investigations from rigid balancing up to torsional damper modeling, the next sections will focus on some friction-relevant parts and how GT-SUITE takes their properties into account:
Piston Ring definition
Piston Skirt profile and Cylinder Wall deformation
Lubricant properties and surface roughness
Cylinder Pressure (Profile): Depending on the modeled condition (fired or motored)
The piston ring objects are used to represent each of the individual rings in the ring pack. Detailed modeling involves calculation of ring radial and twist motions under the influence of ring tension and land pressures, countered by hydrodynamic and asperity forces at the ring running face. The piston rings can be incorporated into a more comprehensive blowby model with GT-POWER to evaluate sealing effectiveness and crankcase ventilation. In the context of predicting friction, these physics are simplified: the cylinder pressure is applied as the boundary condition for the compression ring groove, then scaled and applied to the scraper ring using our suggested values. This level of model fidelity can accurately predict the friction physics and its effect on the cranktrain torque. The shape of the piston ring can be represented by various geometric approaches. These are shown in Fig. 2 with symmetric profiles for the top (cylinder side) and bottom (crankcase side) sections.
Figure 2: Symmetrical parabolic, circular-arc, elliptical and linear ring profiles. Parabolic and circular-arc converge together as the profile depth decreases.
In case the actual ring shape differs from the ones shown above, GT offers full flexibility by allowing a user-defined profile. The necessary inputs are completed by specifying material and mass properties as well as the surface texture and roughness.
Another main contributor to the friction is the piston skirt to cylinder wall interface. In order to consider the piston's secondary motion (lateral motion and tilting effects), the skirt is described in terms of axial and ovality profiles (Fig. 3). Thermal deformation can also be considered.
Figure 3: Schematic drawing of piston axial and oval skirt profiles (left) and radial bore distortion plot of cylinder wall
The other side of the interface is determined by the cylinder wall. The clearances that allow ideal piston motion inside the cylinder can be specified as a function of circumferential angle and axial position. Furthermore, the deformed shape of the bore resulting from thermal expansion and mechanical bolt clamping loads can be included (Fig. 3). This information, together with the surface roughness, is used to determine the ring-face and skirt oil film viscosity and the ring-bore conformance, both of which affect the oil film and friction calculations.
This brings us to the lubricant that needs to be defined for this calculation. GT-SUITE offers a wide range of oils in the standard library that ships with the installation. If the user does not find a specific lubricant, it is easy to define it as a new reference object in the model. The tribology models in GT-SUITE consider viscosity as a function of temperature, pressure, and shear rate to properly characterize the wide variety of oil blends used in industry.
Oil film temperature is calculated by averaging the piston skirt temperature (assumed to be uniform), and a bore temperature profile as a function of axial position.
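The averaging described above can be sketched in a few lines. This is an illustrative implementation, not GT-SUITE code: the 50/50 averaging weight, the linear interpolation of the bore profile, and the sample temperatures are all assumptions.

```python
def oil_film_temperature(t_skirt, bore_profile, z):
    """Average a uniform skirt temperature with the local bore temperature.

    bore_profile: list of (axial_position_m, temperature_C) pairs; the bore
    temperature is linearly interpolated at axial position z, and clamped
    to the end values outside the profile. The 50/50 weight is an assumption.
    """
    pts = sorted(bore_profile)
    for (z0, t0), (z1, t1) in zip(pts, pts[1:]):
        if z0 <= z <= z1:
            t_bore = t0 + (t1 - t0) * (z - z0) / (z1 - z0)
            break
    else:
        t_bore = pts[0][1] if z < pts[0][0] else pts[-1][1]
    return 0.5 * (t_skirt + t_bore)

# Hypothetical bore profile, hotter near the top of the stroke
profile = [(0.00, 120.0), (0.05, 100.0), (0.10, 90.0)]
print(oil_film_temperature(95.0, profile, 0.025))   # mid-stroke -> 102.5
```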
Non-Newtonian shear thinning can be modelled by a variety of Carreau-like methods, all of which follow a power law. After the user provides viscosity data as a function of temperature and shear strain rate, GT-SUITE fits the high-shear viscosity, the characteristic time, and the power-law exponents m and p. The former two are physically based functions of temperature.
Pressure dependence can be defined explicitly via viscosity vs. pressure/temperature maps, or through Barus or Roelands pressure-viscosity exponents.
Figure 4: Characteristic Behaviors of Thinning and Pressure-Viscosity Effect
At runtime, thinning and pressure effects are applied on-the-fly using the average shear rate and film pressure. Averaging is performed using a geometric mean for each pad, and lookups are performed at each timestep.
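The thinning and pressure corrections described above can be sketched as follows. This is a hedged illustration: the Carreau-Yasuda form, the Barus exponent value, and all numeric parameters below are common textbook choices, not GT-SUITE's exact formulation or defaults.

```python
import math

def carreau_yasuda(shear_rate, eta0, eta_inf, lam, a, n):
    """One common Carreau-Yasuda shear-thinning form (an assumption here;
    the 'Carreau-like' variants in GT-SUITE may differ in detail)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate)**a) ** ((n - 1.0) / a)

def barus(eta, pressure, alpha):
    """Barus pressure-viscosity correction: eta * exp(alpha * p)."""
    return eta * math.exp(alpha * pressure)

# Illustrative parameters: low/high-shear plateaus, characteristic time, exponents
eta0, eta_inf = 0.05, 0.01          # Pa.s
lam, a, n = 1e-6, 2.0, 0.6          # s, -, -
for rate in (1e3, 1e6, 1e8):        # shear rate, 1/s
    eta = carreau_yasuda(rate, eta0, eta_inf, lam, a, n)
    print(f"shear {rate:8.0e} 1/s: eta = {eta:.4f} Pa.s, "
          f"at 50 MPa: {barus(eta, 50e6, 1.5e-8):.4f} Pa.s")
```

As the shear rate grows, viscosity falls from the low-shear plateau toward the high-shear plateau, while pressure shifts the whole curve upward, which is the qualitative behavior of Fig. 4.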
Shown below are classical Stribeck curves for the piston group for three oil weights in a typical cranking scenario (adiabatic cylinder, no firing). In general, increasing viscosity allows the surfaces to escape the asperity load more easily (transition from mixed to hydrodynamic lubrication), but carries a penalty of increased hydrodynamic shear friction.
Figure 5: Classical Stribeck Curves for 3 Oil Weights
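The viscosity trade-off behind these curves can be illustrated with a toy Stribeck model. This is purely a sketch, not GT-SUITE's lubrication formulation: the exponential boundary-friction decay, the linear shear term, and every coefficient below are assumed for illustration.

```python
import math

def stribeck_mu(viscosity, speed, load_per_area,
                mu_boundary=0.12, h_crit=1e-7, k_hydro=2.0e4):
    """Toy Stribeck curve: boundary friction decays and hydrodynamic shear
    grows with the Hersey number H = eta * U / P (illustrative constants)."""
    hersey = viscosity * speed / load_per_area
    mu_asp = mu_boundary * math.exp(-hersey / h_crit)   # mixed-lubrication decay
    mu_hyd = k_hydro * hersey                           # viscous shear term
    return mu_asp + mu_hyd

# A heavier oil escapes asperity contact sooner but pays more shear drag
for eta in (0.01, 0.03, 0.09):                          # three "oil weights", Pa.s
    mus = [stribeck_mu(eta, u, 1e6) for u in (0.05, 0.5, 5.0)]
    print(f"eta={eta}: mu at low/mid/high speed = "
          + ", ".join(f"{m:.3f}" for m in mus))
```

At moderate speeds the heavier oil shows lower friction because it leaves the mixed regime earlier; at very high speeds its hydrodynamic shear term eventually overtakes the lighter oil, which is the penalty the text describes.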
Since the surface finish is another important influencing factor on the oil film extent and therefore on the friction behavior, the user can choose from different ways to define the asperity mesh. Besides pre-defined surfaces in the GT library and the option to enter measured profilometer data, from which GT-SUITE extracts the parameters that characterize the surface, a new feature also makes it possible to investigate different patterns and their influence on the oil film extent. A demonstration of such a pattern for the piston skirt axial profile can be seen in Fig. 6.
Figure 6: Possible surface patterns and their influence on the piston skirt friction force
Last, but not least, the cylinder pressure curves should be defined for an accurate prediction. The common approach is either to use measured profiles or to generate them with an engine performance model (integration of a GT-POWER model with the predictive friction model).
Results
The standard procedure for this simulation is to run multiple steady-state simulations across the whole engine operating speed range, increasing the engine speed from idle to maximum speed in small increments. The runtime of these models is usually in the range of minutes, which makes them attractive for large DOEs and/or optimizations.
GT-SUITE’s post-processing tool GT-POST allows the user to analyze and customize the simulation results in every way. Summarizing plots, like the stacked plot below, can be visualized as well as individual component friction losses. In Fig. 7, the contributions of the individual components combine to give the overall friction for the entire engine speed range and are compared to measured strip-down test data. The contribution of the piston-cylinder interface (ring and skirt) dominates at most operating points and emphasizes the importance of investigating this interface in more detail.
Figure 7: Stacked plot showing the overall friction pressure compared to strip down test data
Fig. 8 shows the friction force of a top ring at different engine speeds and with different cylinder pressures. It can be seen that the GT solution follows the measured floating-liner forces quite accurately. Besides these results, the detailed motion of the piston ring as well as the contact behavior towards the cylinder side (hydrodynamic and asperity) can be investigated and further improved, if necessary.
Figure 8: Piston Top Ring Friction Force compared to measured data of floating liner system
The other major contributor to the overall friction, the piston skirt to cylinder interface power loss, is shown for one engine cycle at a specific steady-state condition (fixed engine speed) in Fig. 9. For this specific layout and operating condition, there is no asperity contact at the contact surface; all lateral forces are carried by the oil film, which ensures a sufficiently lubricated contact here. Furthermore, the friction power is higher during the upstrokes than during the downstrokes. A likely reason is the piston pin offset, introduced to improve the downstroke secondary motion behavior that usually causes the higher friction forces.
Figure 9: Piston Skirt to Cylinder Friction Power loss over one complete engine cycle, split into hydrodynamic and asperity as well as major and minor thrust side contribution
Finally, in addition to 2D line plots, several results can be visualized in 2D and 3D plots or even in animations. Fig. 10 shows one of these animations for the piston skirt to cylinder interface and color-contours the fluid film pressure according to its magnitude. This unwrapped view and animation gives the user the possibility to detect any uncommon behavior in the important oil film extent and can point to areas where the design might need to be improved.
Figure 10: Animation of the Piston Skirt to Cylinder Contact Oil Film Pressure during one engine cycle
Conclusion and Outlook
GT-SUITE offers a complete solution to model the frictional behavior of internal combustion engines. The distinguishing features are that the solution is:
Predictive: Underlying models (e.g. oil film models, Stribeck curve, etc.) in combination with detailed input data provide fast and accurate results for day-to-day use in design trade-off and optimization studies.
Validated: The approach and methodology, implemented by GT's team of development and application specialists, have been validated with several industrial partners.
Integrated: The integration of different sub-systems (e.g. cranktrain, valvetrain, timing drive, etc.) and different physical domains (e.g. engine performance model, lubrication circuit, etc.) makes it possible to capture interacting effects in overall friction studies.
This complete solution, within the well-known, established, and user-friendly environment of GT-SUITE, allows development engineers to meet their targets with simulation in addition to testing engines on test benches.
Preview: The model shown above can be enhanced by a compliant piston-liner model to capture flexibility effects based on the interacting forces. The same setup can serve as a starting point to investigate oil consumption mechanisms such as oil evaporation, blowby/blowback, and oil throw-off. More information will follow in one of the next blogs. Stay tuned.
Virtual Calibration of xEV Thermal Management Systems
Reducing the emissions of conventional vehicles by electrifying the powertrain brings new trends and new challenges for automotive engineers, especially in the field of thermal management and thermal component protection. Thermal management systems for xEVs show a high degree of topological complexity, integration of new functions, more interaction between the different circuits, new thermal management strategies, and highly complex controls algorithms. The aim is to validate these new technologies early and without the presence of hardware, lowering test costs and shortening development timeframes. Because of this requirement to design controls systems without prototype hardware, the focus is shifting to virtual testing (XiL) of vehicle systems, including Hardware-in-the-Loop (HiL) and Software-in-the-Loop (SiL) simulations.
XiL simulations help validate the software changes that lead to the final market product. The key to keeping pace with changing requirements is an efficient and simple workflow: minimizing the number of modeling tools that are time-intensive to interface, and a simulation model flexible enough to be used by different stakeholders in an organization. Ideally the same model would be used for detailed hydraulics, transient warm-up, component sizing, and controls development for HiL and SiL.
With GT-POWER-xRT, Gamma Technologies offers an established engine modeling tool capable of bringing physical, predictive performance and combustion simulation to a HiL system. Similar opportunities exist for thermal management system or thermal component models. Delivering fast-running thermal management models increases development efficiency and makes it possible to explore new thermal management strategies.
With the increased demand for XiL cooling simulations, the ultimate objective is to run the GT-SUITE cooling models on dedicated HiL machines.
GT-SUITE is capable of running fast, complex, multi-physics thermal management models. The example discussed here is a Through-The-Road (TTR) hybrid vehicle with its thermal management system. The TTR powertrain has an ICE that drives one axle and an electric motor that drives the other. The plant includes a detailed thermal management system with:
3D-FE thermal representation of the engine
Detailed battery pack module
Quasi 3D Underhood model
Parametric thermal representation of the electric motor
Two-Phase system for battery cooling and passenger comfort
Cabin model
Detailed piping network for low and high temperature cooling circuits
Such a model is usually assembled from components and systems from several contributors, who may have different expectations on model accuracy and runtime. If the individual components and systems are not optimized for fast runtime, the resulting model will likely run slower than real time. In the described state, the example TTR model has a real-time factor of 8, meaning that it runs 8 times slower than real time. In order to make such a model accessible for stakeholders in SiL and HiL activities, GT-SUITE offers a set of comprehensive tools that guide the user through model changes while maintaining model accuracy.
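The real-time factor used throughout this section is a simple ratio; a minimal sketch (the function name and example numbers are illustrative, not GT-SUITE terminology) makes the convention explicit:

```python
def real_time_factor(wall_clock_s, simulated_time_s):
    """RTF = wall-clock runtime / simulated time.
    RTF > 1: slower than real time; RTF <= 1: real-time capable."""
    return wall_clock_s / simulated_time_s

# The detailed TTR model: one simulated hour takes eight wall-clock hours
print(real_time_factor(8 * 3600.0, 3600.0))   # -> 8.0
```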
For circuit setup, GT-SUITE is equipped with the “Circuit Definition” wizard to help identify the different numerical circuits in a system and set optimal and therefore fast numerical settings. In the TTR example the flow circuits are divided automatically between engine cooling, engine lubrication, battery cooling, e-motor cooling, underhood air, cabin air and the refrigerant. Through analysis of the model circuits GT-SUITE will select appropriate settings for fast model execution automatically.
Complex thermal management systems also contain a large number of flow volumes. The “Combine Flow Volume Wizard” is another tool within GT-SUITE that allows the user to reduce the total number of flow volumes for faster runtime. The wizard is a guided tool which, in addition to automatically merging flow volumes, also auto-calibrates the pressure drop of the simplified flow branches. This automatic calibration maintains high result accuracy while decreasing the model runtime.
GT-SUITE also comes with a comprehensive tool for Design of Experiments. The user can perform any variation of the model inputs for the purpose of predicting the model outputs by means of a meta/surrogate model. In addition, the guided workflows as well as the visual and interactive plots facilitate the task and increase the work efficiency.
Another very useful tool for thermal management XiL modeling is GT-SUITE's Integrated Optimizer, which calibrates the models automatically, without user interaction or guidance. The tool can also optimize the model to achieve certain targets in Pareto-style optimizations, for example minimizing the warm-up time or maximizing the driving range.
When creating a thermal component model in GT-SUITE, e.g., of a battery, e-machine, or inverter, users typically capitalize on GT-SUITE's unique 1D-3D synergetic approach for fast runtimes with high accuracy. This approach usually results in a 1D flow network coupled to a full 3D-FEM thermal structure model, which can be computationally expensive. To allow SiL and HiL stakeholders to work efficiently with those models, GT-SUITE allows changing the modeling approach from 3D-FEM to a lumped thermal mass at the click of a button. The information needed to characterize a lumped thermal mass, such as the distance to the mass center and the heat transfer areas, is extracted automatically from the 3D FE model. With this approach, the user does not have to maintain different models for component, SiL, and HiL simulation.
Throughout the described stages, GT-SUITE’s FRM Converter offers a convenient way to organize files and view/compare results.
By using these GT-SUITE tools, we have derived, in a meaningful way, a fast-running simplified model from the detailed TTR example described earlier. The derived model has a real-time factor of 0.7 while still respecting the physics of the model. Throughout the process, the fast-running model shows high result accuracy compared to the detailed model, as demonstrated in the following plots of the coolant, battery, and cabin temperatures.
Because the physics of the system are maintained, the fast-running model keeps the predictability of the detailed physical model. For many late design changes during a project, such as scaling a heat exchanger or substituting the pump, fan, or valves, the derived fast-running model can predict the right behavior. To demonstrate this, the HT radiator is scaled down in both the detailed and the fast-running TTR plant models. The scaled-down heat exchanger affects not only the heat transfer rate to the coolant, but also the air distribution in the underhood model, the power consumption of the pump, and other important system parameters. The following plots show that the simplified model retains its high predictability thanks to the physics-based model optimization: both models predict the same behavior of the heat transfer rate, the coolant temperature, and the battery SOC.
Finally, GT-SUITE’s flexibility makes it possible to partition a full model into parallel sub-models by means of FMI, thereby utilizing parallel cores on a HiL machine to further speed up model execution or to run models with even higher physical modeling depth.
Utilizing GT-SUITE’s XiL tools for thermal system and component modeling enables efficient, accurate, and convenient modeling, including HiL applications. This allows physically descriptive and predictive models to be used throughout the engineering program. The following graph shows the engine, battery, and cabin temperatures of the HiL-capable TTR model over a WLTC cycle. In the end, the runtime of the detailed TTR model was reduced by 96%, leading to a HiL-capable model running with a real-time factor of 0.3.
In summary, GT-SUITE offers different ways to help users transform their cooling model into a real-time-capable model. Gamma Technologies is committed to continuing to develop new tools and improve its solution, giving users the confidence to use the tool for their current projects and to extend it to new uses in the future. For any questions or support related to runtime, please contact us at [email protected].
Simulating Battery Thermal Runaway Propagation with GT-SUITE
Learn how to use a model to evaluate battery pack safety during thermal runaway events
Designers of battery packs are tasked with a very challenging problem: to package a set of cells as tightly as possible and minimize the amount of non-cell weight in the battery while maintaining proper temperature levels of cells, protecting against premature cell degradation, and ensuring safe operation. The last item on the list is, of course, the most important: ensuring that the battery pack is safe under any circumstance.
The most common challenge to ensuring battery safety is thermal runaway. Thermal runaway is a phenomenon that occasionally occurs in Lithium-ion (Li-ion) cells when extreme temperatures are reached. During thermal runaway, undesired exothermic side reactions heat up the cell, and as the cell heats up, the rate at which the undesired reactions occur accelerates, eventually causing a catastrophic loop of events that concludes with a destroyed Li-ion cell and a lot of heat released. This loop of events is summarized in the image below.
There are many potential causes of thermal runaway. For example, if a cell is heated to extreme temperatures, thermal runaway can occur. If a cell is pierced by a nail or crushed, this can cause an internal short which eventually leads to thermal runaway. Other times, thermal runaway can occur for seemingly no reason at all – in these cases it is often manufacturing issues or even internal dendrite growth that lead to internal shorts inside the Li-ion cell.
With all these potential causes for thermal runaway, as a pack designer, how are you supposed to protect your cells from entering thermal runaway? The unfortunate answer: you can’t.
In fact, this is the wrong question for a pack designer to be asking. Because thermal runaway can occur for so many different reasons, and occasionally for no apparent reason, pack designers must assume that at some point a cell in a battery pack will enter thermal runaway. The correct question to be asking is, “Is my pack designed well enough to withstand one cell entering thermal runaway without starting a chain reaction of neighboring cells entering thermal runaway?”
Thermal Runaway Propagation
Thermal runaway propagation is the key phenomenon to consider when designing a safe battery pack. It refers to the event of a single cell entering thermal runaway, releasing a large quantity of heat, and heating neighboring cells to the point of thermal runaway, essentially starting a chain reaction in which all cells in a battery pack are eventually destroyed.
There are various levels of success for this type of thermal runaway propagation scenario. There are the intuitive “pass” or “fail” results, where a “pass” means that after a cell enters thermal runaway, it does not cause a chain reaction, and a “fail” means that it does. There is also a less intuitive middle ground. For instance, maybe a chain reaction is set off, but the time delay between the first cell entering thermal runaway and the entire battery pack being destroyed is long; this may also be a “passing” result, depending on the application. If a cell is sensed to have entered thermal runaway while a vehicle is at highway speeds, does a family have enough time to stop and safely exit the vehicle before a fast-moving chain reaction is set off? If a cell enters thermal runaway during an EVTOL flight, is the pilot able to land before the chain reaction becomes unstable?
Without Simulation
Testing the design of a battery pack against thermal runaway propagation is an expensive and dangerous endeavor. First, expensive prototype versions of battery packs must be assembled, then a single cell is selected (either at random or with engineering discretion to determine which would be most likely to cause the undesired chain reaction), and finally thermal runaway is intentionally induced on the selected cell (either with a nail or by heating it to extreme temperatures). After that, it is up to the design of the pack to determine whether or not neighboring cells enter thermal runaway, and if they do, how fast.
This experimental setup has two major downsides. First, battery packs are expensive, and prototype versions of battery packs are even more expensive. Building these and intentionally destroying them results in a high cost for battery safety testing. Second, this physical test is often done very late in the development cycle of the battery-powered product (e.g., battery electric vehicle, EVTOL, electric bicycle). If a battery pack fails this test, it can be a major setback for the release schedule of the product, which can be detrimental to businesses.
With Simulation
Using simulation to run virtual thermal runaway propagation tests for Li-ion battery packs is a great way to avoid the costs and risks associated with experimental testing. In addition, single cells do not need to be picked out at random. Instead, multiple tests can be set up to test the “what if” scenario for every cell in a battery pack.
GT-SUITE is the ideal platform to run virtual thermal runaway propagation tests.
Modeling Thermal Runaway Propagation in GT-SUITE
In a paper published with NASA, which has extensive experimental data on thermal runaway of Li-ion cells, GT-SUITE was used to model the propagation of thermal runaway in a small battery module. The thermal runaway propagation model was built by converting CAD geometry and was validated with experimental data.
Nominal Electrothermal Model of Battery Module
The study shows a number of test cases, including two in which the battery module operates normally, with no cells entering thermal runaway. The animation below shows one of these tests: a battery module discharging at a C-rate of 1C. The blue-to-red contour animates local temperatures, where blue indicates cool and red indicates hot temperatures. From the animation, we can see the battery slowly warming up while being discharged at 1C.
1C Discharge
To take this electrothermal battery model and set up the thermal runaway propagation model, a few extra steps were required.
Cell-Level Experimental Thermal Runaway Tests
NASA has created specialized bomb calorimeters that impose thermal runaway on a single cell through a variety of causes (internal short, nail penetration, excessive heating). With this type of cell-level testing, NASA was able to measure the amount of energy released during a thermal runaway event. Some example results from their testing of cylindrical cells are shown below.
Alterations to Battery Model
The nominal battery model that was set up for the previous electrothermal model was upgraded to include a model of thermal runaway. This included the following changes:
The Trigger: If any jelly roll temperature rose above 180°C, the cell would immediately enter thermal runaway
The Heat Release: Once a cell entered thermal runaway, the cell would release energy in the form of heat (in this case 70 kJ)
40% of the heat released would be absorbed by the jelly roll in 1.5 seconds
60% of the heat released would be released as ejecta in 1.5 seconds
The Electrical Disconnection: Once a cell entered thermal runaway, it would no longer participate in the module, which means the neighboring cells, which are placed in parallel, would have more current flowing through them.
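The trigger, heat-release, and disconnection rules above can be sketched as simple bookkeeping. The numbers come from the study as reported here (180°C trigger, 70 kJ, 40%/60% split over 1.5 s), but the class structure and names are illustrative, not GT-SUITE objects:

```python
TRIGGER_TEMP_C = 180.0
TOTAL_HEAT_J = 70e3
JELLY_ROLL_FRACTION = 0.40   # absorbed by the jelly roll
EJECTA_FRACTION = 0.60       # leaves the cell as ejecta
RELEASE_TIME_S = 1.5

class Cell:
    def __init__(self, temp_c):
        self.temp_c = temp_c
        self.in_runaway = False
        self.connected = True      # still carries current in the parallel group
        self.released_j = 0.0

    def step(self, dt):
        """Advance one timestep: check the trigger, then release heat."""
        if not self.in_runaway and self.temp_c >= TRIGGER_TEMP_C:
            self.in_runaway = True
            self.connected = False  # electrical disconnection
        if self.in_runaway and self.released_j < TOTAL_HEAT_J:
            dq = min(TOTAL_HEAT_J * dt / RELEASE_TIME_S,
                     TOTAL_HEAT_J - self.released_j)
            self.released_j += dq
            # dq * JELLY_ROLL_FRACTION self-heats the jelly roll;
            # dq * EJECTA_FRACTION would be applied to neighbors as ejecta
            return dq * JELLY_ROLL_FRACTION, dq * EJECTA_FRACTION
        return 0.0, 0.0

cell = Cell(temp_c=185.0)          # externally heated past the trigger
q_jr = q_ej = 0.0
for _ in range(20):                # 20 x 0.1 s covers the 1.5 s release
    dq_jr, dq_ej = cell.step(0.1)
    q_jr += dq_jr
    q_ej += dq_ej
print(cell.connected, round(q_jr), round(q_ej))   # False 28000 42000
```

In the full module model, the ejecta and conduction heat paths would raise neighboring cell temperatures, and any neighbor crossing the same 180°C threshold would trigger in turn, which is exactly the chain-reaction mechanism the propagation studies evaluate.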
Module-Level Model of Thermal Runaway Propagation
Once these alterations were made to the battery model, any cell in the module can be selected as the “trigger” cell by applying an external heat until the trigger temperature of 180°C is reached.
In the first study, a cell in the corner of the module was selected to be the trigger cell. It was artificially heated to its runaway temperature of 180°C and it immediately entered thermal runaway. The animation below shows the results of the thermal runaway simulation with the corner cell (top of the image) selected as the trigger cell. Once again, blue cells are relatively cool and red cells are hot. From viewing the animation, we can see that the corner cell does not cause a chain reaction of neighboring cells entering thermal runaway.
Corner Cell Trigger
Since no real battery modules were destroyed in this simulation, this simulation can be repeated under different conditions. The next study conducted was to test how the module behaves when a cell in the center of the module enters thermal runaway. The image below shows how the module reacts when a cell in the center of the module is the trigger cell for thermal runaway. Once again, we can see that the thermal runaway event does not propagate to neighboring cells.
Middle Cell Trigger
The virtual thermal runaway propagation tests shown above both show “passing” results. The trigger cell self-heats to extremely high temperatures; however, the neighboring cells do not pass the 180°C threshold to enter thermal runaway.
In order to illustrate a “failing” test result, some changes were made to the module to make it more likely to propagate thermal runaway to neighboring cells. The busbar in the module was included, which increased the amount of heat conducted between neighboring cells. Additionally, the ratio of self-heating to heat released as ejecta was altered to 30%/70% instead of the 40%/60% previously mentioned.
With these changes, the following results were observed. In this case, the trigger cell very quickly causes a chain reaction among neighboring cells and causes a much more catastrophic event than the previous two test cases presented.
Failing Test
Time-to-Results
Because time is one of the most important resources of battery pack designers, one of the key considerations when faced with a modeling challenge such as thermal runaway propagation is the total time that it takes to get results. The time that it takes to get results is the sum of the time it takes to build a model (“time-to-model”) and the time that it takes the model to run (“time-to-run”). With GT-SUITE, both time-to-model and time-to-run are minimized.
In the examples given above, the models run roughly 2-4 times faster than real-time on a laptop PC, resulting in a 30-minute simulation taking 7-15 minutes to run. The finite element structure in this model consisted of 6,000 nodes and 13,000 elements.
This fast time-to-run enables users to explore some of the uncertainty that comes with battery thermal runaway propagation. Which cell initiates thermal runaway? How much heat does it release? How is that heat released? How much material is ejected? All of these are sources of variability that can be explored with the help of fast-running models (look for a future blog on this specific topic!). This type of variability analysis would not be possible with extremely detailed 3D CFD models.
Conclusion
When designing a battery module or a battery pack, the battery’s response to a cell entering thermal runaway needs to be studied to determine whether or not the cell causes a chain reaction of cells entering thermal runaway, known as thermal runaway propagation. This can be done experimentally by building prototype modules and packs and imposing thermal runaway on a trigger cell; however, this can be extremely expensive and, if the pack fails the test, can be a substantial setback in the development of the battery.
With GT-SUITE, these thermal runaway propagation tests can be done virtually. This provides a number of advantages, including the large cost advantage and the ability to run any number of hypothetical thermal runaway propagation tests.
If you’d like to learn more or are interested in trying GT-SUITE to virtually test a battery pack for thermal runaway propagation, Contact us!
Predicting Lithium-ion Cell Swelling, Strain, and Stress using GT-AutoLion Simulation
Lithium-ion batteries (LIBs), used in most commercial electronics and portable devices, occupy a privileged position in the energy storage sector and have emerged as a versatile and efficient option for the electrification of automotive transportation and the integration of renewable energies. The global market for LIB technology is projected to exceed $90 billion by 2025. The reliability of LIBs is therefore crucial in such large-scale applications and has a direct economic impact.
It has become increasingly evident that the next-generation high-energy-density batteries will not be realized without understanding the degradation mechanisms from the mechanics perspective. On one side, the repetitive volumetric strain in electrodes, ranging from a few percent (graphite, layered/spinel/olivine oxides) to a few hundred percent for materials of ever-increasing energy density, disrupts the structural stability of batteries and deteriorates capacity retention over cycles. On the other side, mechanical stress influences the kinetics of electrochemical processes, such as mass transport, charge transfer, interfacial reactions, and phase transitions, thereby impacting the performance, capacity, and efficiency of batteries. Some battery pack designs also contain cooling fins, thermistors, foam separators, and repeating frame elements to hold the cells, maintain temperature, and manage this volume change.
Successful battery engineering at all levels (cell, module, and pack) requires an understanding of the complex linkage between mechanical and electrochemical phenomena in the cell. The mechano-electrochemical model in GT-AutoLion captures this linkage and allows cell, module, and pack engineers to accelerate the development of battery technology while decreasing the required testing load and cost.
How does the mechano-electrochemical model work?
Electrochemical processes such as lithiation and de-lithiation lead to significant volume change in the active material, which in turn leads to a substantial change in cell performance. To simulate this change, a flexible swelling model capable of predicting cell performance under various conditions is implemented in AutoLion. These models include various mechanical constraint mechanisms such as a casing constraint, a foam packing constraint, and an applied cell pressure constraint. The mechanical constraints applied here are analogous to the conditions that occur during battery testing or regular battery usage. Based on this analogy, the conditions can be broadly classified into three major categories: (i) rigid wall (zero strain and high stress), (ii) free expansion (high strain and zero stress), and (iii) a realistic scenario with mixed constraints (non-zero stress and strain), as shown in the figure below.
Figure 1. Electrode swelling in a de-lithiated (discharged) versus lithiated (charged) active material under different mechanical loading conditions (i) Rigid wall, (ii) Free expansion, and (iii) Realistic scenario with mixed constraints
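The three constraint cases can be illustrated with a one-dimensional spring sketch. This is not the AutoLion formulation; the effective stiffness values and the linear equilibrium below are assumptions chosen to show the limiting behaviors.

```python
def constrained_swelling(strain_free, e_cell, k_casing):
    """1-D sketch of swelling against a casing (illustrative only).
    e_cell: effective cell stiffness (Pa per unit strain);
    k_casing: casing stiffness expressed the same way.
    Equilibrium: stress = e_cell*(strain_free - strain) = k_casing*strain."""
    strain = e_cell * strain_free / (e_cell + k_casing)
    stress = k_casing * strain
    return strain, stress

eps_free, e_cell = 0.02, 1e9       # 2% free swelling, assumed cell stiffness
for label, k in (("free expansion", 0.0),
                 ("compliant casing", 1e9),
                 ("near-rigid wall", 1e12)):
    strain, stress = constrained_swelling(eps_free, e_cell, k)
    print(f"{label:16s}: strain {strain:.4%}, stress {stress/1e6:.2f} MPa")
```

With zero casing stiffness the cell takes the full free-swelling strain at zero stress; as the casing approaches rigidity, the strain vanishes and the stress approaches its maximum, reproducing the two limiting categories in Figure 1.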
How are the stresses, strains, and changes in porosity on each component captured?
To accurately simulate the cell's mechanical behavior (e.g., the thickness change, pressure generation, etc.), each cell component's mechanical response properties are incorporated. The porous electrodes' mechanical properties are defined based on linear elasticity, porous rock mechanics, or the measured stress-strain response of an electrode saturated with incompressible electrolyte. In an electrochemical cell, the overall strain response of the anode and cathode layers includes contributions from both the externally applied pressure and the electrode-layer expansion/contraction caused by the active material volume change. The magnitude of the active material's volume change is related to its state of lithiation. The total strain is captured from both these contributions, and the corresponding stresses are calculated through the constitutive relationships of mechanics.
The balance between the change in porosity and the change in dimension/strain is captured using a material balance. The sample result below shows the effect of the change in porosity and strain for casing constraints of different stiffness.
Figure 2. Effect of casing stiffness (Rigid to Compliant casing) on strain and porosity (i) Strain vs. SOC plot; (ii) Averaged Porosity vs. SOC plot
The material balance also accounts for the change in kinetics through the porosity change and, therefore, the cell’s electrochemical performance. A sample result for a graphite anode with an applied cell pressure is shown below, describing the change in strain and porosity.
Figure 3. Effect of applied pressure on component/total strain on the cell and porosity (i) Strain vs. SOC plot; (ii) Averaged Porosity vs. SOC plot
The AutoLion model allows for a non-ideal lithiation behavior for anode and cathode, which increases the model’s accuracy, showing the importance of accounting for individual active material volume change behavior on cell level predictions.
How are these results useful?
Cell and pack designers currently rely on extensive electrochemical and mechanical testing to appropriately account for the volume change and the developed stresses. This mechano-electrochemical model predicts this volume change, which may reduce the required number of electrochemical and mechanical tests.
This model can help designers estimate cell-level volume change using knowledge of particle-level lithiation-based volume change behavior. The theoretical predictions of individual component strain, capacity balance effects, porosity, and pressure change effects can also be explored using this model. Also, the performance of the cells is highly dependent on the swelling strain evolution as shown in the sample results below.
Figure 4. Effect of swelling on performance of Lithium-ion cell.
The resulting understanding from the model may aid in cell design or determining operational parameters to mitigate adverse effects from active material volume change. Additionally, this model may prove useful to consider how mechanical and electrochemical phenomena intertwine as promising new chemistries such as silicon or Li metal are considered.
How to Optimize Electric Vehicle (EV) Drivetrains in Less Than 1 Day Using Simulation
Improving Hybrid Electric Vehicle Controls:
The recent proliferation of hybrid electric vehicles has greatly complicated the world of vehicle controls engineers. Multiple energy sources and propulsion systems applied to sophisticated hybrid drivetrains necessitate a much more intricate controls strategy than conventionally powered vehicles.
Determining when to distribute power to the engine, motor(s), or both is no simple task, and the time typically taken to develop these controls strategies reflects that. Even developing controls for simple hybrid vehicle models can take precious time away from the rest of the design process, and cutting corners can lead to sub-optimal fuel economy and vehicle performance results during simulation. Fortunately, GT-SUITE’s embedded tools include two different methods to automatically generate optimized, charge-sustaining hybrid controls strategies on a per-drive-cycle basis: Equivalent Consumption Minimization Strategy (ECMS), a local optimization technique, and Dynamic Programming, a global optimization technique.
Using these tools allows for quick evaluation of a hybrid system’s peak capabilities without the hassle of developing and testing multiple controls options.
Model Generation & Evaluation In Minutes Vs. Days:
In Part 1 of this blog series, we employed GT-DRIVE+, Integrated Design Optimizer, and JMAG-Express to properly size and characterize an electric motor for a P4 hybrid system in a compact passenger car. These tools streamlined a traditionally time-consuming design process, with model generation and evaluation taking minutes rather than days. The goal was to select a motor for a P4 hybrid to meet the following requirements:
| Metric | Requirement |
|---|---|
| Acceleration (0-60 mph) | 8.5 seconds |
| Fuel Economy (City/Highway) | 50/52 mpg |
This blog will build upon our previous work, applying two of GT-SUITE’s hybrid controls optimization solutions to evaluate the previously selected motor’s impact on drive cycle fuel economy. Applying these tools within our workflow allows us to evaluate estimated fuel economy under optimized control without spending time developing complex hybrid controls.
Figure 1. Hybrid Design Tools Workflow
Previous evaluation of our example model revealed that our 27.5 kW motor selection met the acceleration and highway fuel economy requirements but could not meet the city fuel economy demand. These tests, however, were performed using a rule-based control strategy that was not necessarily optimized for city or highway driving. Applying ECMS and Dynamic Programming to the city drive cycle should provide a better idea of this configuration’s fuel economy capabilities.
ECMS in GT-SUITE assigns a “fuel consumption” rate to energy pulled from the vehicle’s battery. Calculation of this energy-equivalent rate is influenced by several user-defined parameters including:
Equivalence Factor – this represents the relationship between battery energy and fuel energy
Target State of Charge – this sets a target SOC to develop a charge-sustaining strategy
Penalty Function Exponent – this influences a penalty function that increasingly penalizes battery energy consumption as the battery deviates farther from the target state of charge
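The three parameters above combine into an equivalent "fuel" cost for battery energy. The sketch below uses one common form of this calculation; the exact formulation and the parameter values (equivalence factor, target SOC, exponent, fuel heating value) are illustrative assumptions, not GT-SUITE's internal implementation.

```python
def equivalent_fuel_rate(engine_fuel_rate_gps, battery_power_w, soc,
                         equivalence_factor=2.5, target_soc=0.6,
                         penalty_exponent=3, fuel_lhv_j_per_g=42_800.0):
    """Combined 'fuel' consumption rate [g/s]: engine fuel plus battery
    power converted through the equivalence factor, scaled by a penalty
    that grows as SOC deviates below its target (one common ECMS form;
    all parameter values here are hypothetical)."""
    # Odd exponent: penalty > 1 when SOC is below target (discharging
    # is discouraged), penalty < 1 when SOC is above target.
    penalty = 1.0 - ((soc - target_soc) / 0.5) ** penalty_exponent
    battery_fuel_rate = (equivalence_factor * penalty *
                         battery_power_w / fuel_lhv_j_per_g)
    return engine_fuel_rate_gps + battery_fuel_rate
```

With this cost in hand, the optimizer can compare scenarios that trade engine fuel against battery energy on a single axis.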
For an ECMS run, the user specifies a variety of independent control variables that are altered at every timestep with the goal of minimizing combined ‘fuel’ consumption from both the engine and the battery. For our example, the following variables were selected:
| Variable | Values |
|---|---|
| P4 Motor Torque (27.5 kW motor) | -105 Nm to 105 Nm |
| Transmission Gear Number | 1st to 6th Gear |
| Vehicle Mode | Hybrid, Electric, or Conventional |
At every timestep, all combinations of the independent control variable values are considered. Any combinations that can meet the drive cycle power demand while obeying the defined constraints are evaluated to determine total fuel consumption. This calculation is heavily influenced by the battery energy-equivalent rate parameters. For example, if the SOC deviates too far from its target, then a larger penalty will be levied on battery consumption to incentivize a charge-sustaining strategy – this means scenarios where more engine power and less motor power is used may be deemed more favorable at that timestep. The variable combination that locally optimizes fuel consumption is then selected, and the process repeats for the remaining timesteps. The process at each timestep is summarized below:
Figure 2. ECMS Process Summary
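The per-timestep selection described above amounts to an exhaustive search over the control combinations. A minimal sketch, assuming placeholder `feasible` and `cost` callables standing in for the powertrain model and equivalent-fuel calculation:

```python
from itertools import product

def ecms_step(power_demand_kw, feasible, cost):
    """One ECMS timestep: enumerate all control combinations, keep the
    ones that can meet the drive cycle power demand, and pick the
    combination with the lowest equivalent fuel cost. `feasible` and
    `cost` are hypothetical stand-ins for the vehicle model."""
    motor_torques = range(-105, 106, 15)   # Nm, coarse grid for illustration
    gears = range(1, 7)
    modes = ("hybrid", "electric", "conventional")

    best = None
    for combo in product(motor_torques, gears, modes):
        if not feasible(combo, power_demand_kw):
            continue
        c = cost(combo, power_demand_kw)
        if best is None or c < best[1]:
            best = (combo, c)
    return best  # (selected controls, local minimum cost)
```

Repeating this search at every timestep produces the locally optimal trajectory; the next section shows how Dynamic Programming lifts the same idea to the whole drive cycle.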
Applying an ECMS control strategy to our city driving cycle, we will see a significant improvement in fuel economy that meets our initial requirements:
| Control Strategy | FTP-75 (City) Minimum Fuel Economy Requirement | Reported FTP-75 (City) Fuel Economy |
|---|---|---|
| Heuristic Control | 50 mpg | 42.93 mpg |
| ECMS Local Optimization | 50 mpg | 58.30 mpg |
Figure 3. ECMS and Dynamic Programming runs vary the selected variables at every timestep to minimize fuel consumption
Figure 4. ECMS and Dynamic Programming can be tuned to deliver a charge-sustaining strategy
Despite evaluating 612 different control scenarios at every timestep, this ECMS run completed in less than 3 minutes. After completion, we can see that our motor selection will be sufficient to meet the initial fuel economy requirements – all it needed was a better control strategy. However, optimizing locally at each timestep will likely result in slightly sub-optimal performance over the entire drive cycle.
In other words: This is good, but we can do even better.
Dynamic Programming (Global Optimization)
Dynamic Programming will provide an even clearer picture of our example vehicle’s fuel economy capabilities under optimal control. Dynamic Programming uses similar strategies to minimize fuel consumption but seeks to do so in the context of an entire drive cycle. A global cost function is created and minimized using similar parameters to those defined for ECMS. The run begins at the end of the drive cycle and marches backwards in time to the initial state, where the fuel costs for all possible states and controls are calculated and saved. By referencing these saved values, a controls solution is determined by computing the ‘optimal cost-to-go’. This may not necessarily minimize fuel consumption at every timestep but will produce a solution that cumulatively has the lowest fuel consumption from start to finish.
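The backward march described above can be sketched with a discretized state grid. This is a generic dynamic-programming skeleton, not GT-SUITE's implementation; `step_cost` and `next_soc` are hypothetical stand-ins for the vehicle model, and the SOC grid is deliberately coarse.

```python
def dynamic_program(n_steps, soc_grid, controls, step_cost, next_soc):
    """Backward dynamic programming over a discretized SOC grid:
    cost_to_go[t][s] = min over controls u of
        step_cost(t, s, u) + cost_to_go[t+1][next_soc(s, u)].
    Transitions landing off the grid are treated as infeasible."""
    INF = float("inf")
    cost_to_go = {s: 0.0 for s in soc_grid}        # terminal cost
    policy = []
    for t in reversed(range(n_steps)):
        new_ctg, decisions = {}, {}
        for s in soc_grid:
            best_u, best_c = None, INF
            for u in controls:
                s2 = next_soc(s, u)
                if s2 not in cost_to_go:
                    continue                        # infeasible transition
                c = step_cost(t, s, u) + cost_to_go[s2]
                if c < best_c:
                    best_u, best_c = u, c
            new_ctg[s], decisions[s] = best_c, best_u
        cost_to_go = new_ctg
        policy.append(decisions)
    policy.reverse()
    return cost_to_go, policy
```

Because the recursion saves the cost-to-go for every state at every time, the forward pass can simply follow the stored decisions, which is why the solution is globally rather than locally optimal.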
Applying dynamic programming to our city driving cycle, we will see fuel economy further improve to 62.3 mpg:
Figure 5. City fuel economy Comparison between different controls techniques
Figure 6. Map of Optimal Cost To Go produced by Dynamic Programming Run
This blog series has demonstrated 5 different GT-SUITE tools that will significantly streamline your design process. In our motor sizing example, this increased efficiency was apparent:
GT-DRIVE+ instantly generated a P4 HEV vehicle model to use for evaluation – 5 minutes
Integrated Design Optimizer automatically selected the correct motor size to meet our acceleration requirements – 20 minutes
JMAG-Express instantly created an efficiency map from our selected motor characteristics – 10 minutes
Optimization Tools generated controls for our drive cycles to understand motor/vehicle performance under optimal control – 2 hours
One iteration of this design process could conceivably take less than one day. If we are unhappy with the results after evaluating this final design, we can easily iterate through again – tweaking our initial model and motor characteristics and applying all the tools again with relatively little time lost. If you are interested in learning more about any of these tools, feel free to contact us for additional information!
How to Optimize Hybrid Vehicle Design using Simulation
As emissions regulations tighten and an electrified future grows more and more imminent, automotive engineers have been tasked with applying decades of traditional vehicle engineering knowledge to increasingly complex xEV architectures. The good news is that automakers have become very proficient at building conventionally powered vehicles – so much so that they still comprise the basic underpinnings for most electric propulsion architectures. But even implementing hybrid-electric components into an existing conventional vehicle architecture introduces a long list of questions, such as:
Where in my driveline should I integrate my motor?
What size battery do I need to meet design requirements?
And how should I optimally distribute power between the engine and (potentially multiple) motors?
Fortunately, GT-SUITE offers a variety of embedded tools that help answer these questions.
This will be the first blog in a two-part blog series highlighting solutions for hybrid-electric component sizing, assisted electric motor design, power split controls optimization, and much more. Proper use of these tools will significantly aid development and increase efficiency throughout various stages of the design process.
Hybrid Component Integration and Optimization
To demonstrate the power of incorporating these tools into your hybrid vehicle design process, we will walk through a simple example of hybrid component integration and optimization. The goal of the exercise will be to appropriately size and optimize control of an electric motor in a standard compact passenger vehicle.
This process can traditionally be burdensome. Assessing multiple motor configurations often requires repeated manual manipulation of models. Defining motor characteristics typically relies on data from expensive testing, and developing effective hybrid controls strategies is often a time-intensive, trial-by-error process. Fortunately, GT-SUITE’s embedded tools address these problems within one easy workflow. The efficiency of these tools also allows extra time to continually iterate and fine-tune your design.
Figure 1. Hybrid Design Tools Workflow
Requirements:
For this example, we will focus on requirements that are reasonable for a compact hybrid sedan. Selecting a smaller, lower output motor will save costs, so we will seek to minimize motor size while still meeting these requirements.
| Metric | Requirement |
|---|---|
| Acceleration (0-60 mph) | 8.5 seconds |
| Fuel Economy (City/Highway) | 50/52 mpg |
Quickly generate a hybrid vehicle model
GT-DRIVE+ starts with a model generator that allows for easy generation of vehicle systems and quick evaluation between multiple architectures and components. When creating a hybrid vehicle, the user will be presented with several options for vehicle architectures as well as a large library of pre-defined engines, transmissions, electric motors, batteries, and drive types.
The components were selected in the model generator to closely match many of the hybrid vehicle offerings currently on the market:
| Component | Selection |
|---|---|
| Hybrid Configuration | P4 |
| Vehicle | Compact Car |
| Drive Type | FWD |
| Battery | Lithium-Ion 247 V 5Ah |
| Engine | Gasoline Direct Injection, 1.4 L Turbocharged I4 |
| E-machine | To be configured within model |
| Transmission | 6 Speed Automated Manual |
Now, after just a few simple selections, a full vehicle model is generated – complete with test cases for city/highway driving cycles, and a pre-configured hybrid control strategy. If necessary, the exact same components can also be generated within a P0, P2, or P0/P4 hybrid architecture to compare how motor placement impacts vehicle performance.
Evaluate motor sizing
Next, maximum and minimum motor torque curves can be modulated to determine the minimum acceptable motor output. This can be done by configuring GT’s Integrated Design Optimizer to sweep through one or more parameters to target a specific output. Here, the 8.5 second acceleration performance requirement is targeted. The design optimizer can be set up to modify torque values, run the model, evaluate the outcome, then modify torque values again, rerunning until results converge on the requirement. In this case, 151 possibilities were automatically evaluated in under 10 minutes. Upon completion, the design optimizer outputs several plots that help visualize the evaluation process, along with an updated model containing the final optimized parameter values.
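The run-evaluate-rerun loop described above can be illustrated with a toy example. The real Integrated Design Optimizer uses its own search algorithm; the bisection search and the linear acceleration model below are invented purely to show the converge-on-target concept.

```python
def size_motor(accel_time_s, target_s=8.5, lo=0.0, hi=200.0, tol=0.01):
    """Hypothetical stand-in for the optimizer loop: repeatedly pick a
    peak torque [Nm], run the model (here a toy accel_time_s function
    that must be monotone decreasing in torque), and narrow the search
    until the 0-60 mph time converges on the target."""
    iterations = 0
    while hi - lo > tol:
        torque = 0.5 * (lo + hi)
        t = accel_time_s(torque)         # "run the model"
        if t > target_s:                 # too slow: need more torque
            lo = torque
        else:                            # fast enough: try a smaller motor
            hi = torque
        iterations += 1
    return 0.5 * (lo + hi), iterations

# Toy model: 0-60 time shrinks linearly with peak torque (illustrative)
torque, iters = size_motor(lambda tq: 12.0 - 0.033 * tq)
print(f"minimum torque ≈ {torque:.1f} Nm after {iters} model runs")
```

Even this crude search needs only ~15 model runs, which is why automating the loop turns a days-long manual exercise into minutes.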
After doing so, a motor with the following characteristics was selected to meet the acceleration performance requirement:
Figure 3. Selected Motor Characteristics
Note that while two of the requirements are satisfied, city fuel economy does not meet the 50 mpg requirement. This is not cause for immediate concern. GT-DRIVE+ automatically generates a rule-based hybrid controls strategy that effectively follows standard drive cycles but is not necessarily optimized for city or highway driving. A hybrid control optimization tool should be applied to this model to better understand city/highway fuel consumption. The second blog posting in this series will demonstrate the application of these optimization tools to this example.
Generate motor efficiency maps
GT has partnered with JSOL to bring their motor design tool, JMAG-Express, directly into GT-SUITE. The approximated motor characteristics determined in our initial investigation can be input directly into this integrated tool to calculate a complete motor efficiency map, as well as estimate many other fundamental motor characteristics, in only a few minutes.
Figure 4. Integrated JMAG Express Interface
This efficiency map can be pulled into our initial model to more accurately characterize the efficiency and power demands of a motor this size.
Instantly generating efficiency and loss maps eliminates the need to choose between over-simplifying motor efficiency for convenience, or spending time and money to test motor hardware. Instead, we can quickly calculate the efficiency of our 27.5 kW machine, plug it into our original model, and move on towards applying our hybrid controls optimization tools.
In this blog, we walked through three different GT tools that facilitate the hybrid vehicle design process. These tools assist and accelerate a highly iterative design process where components can be evaluated, optimized, changed, and evaluated again without sacrificing large amounts of time or resources. The next blog entry showcases GT’s hybrid control optimization solutions and how they can be used in the context of the example presented here.
If you would like to learn more about GT’s integration with JMAG-Express, [CLICK HERE] to read an additional blog post on the topic.
Exploring and Improving the GT-SUITE Genetic Algorithm
Mathematical optimization is the search for optimal model input parameters (factors) to minimize or maximize one or more of its predicted outputs (responses). Optimization is a powerful tool to the modeler for two main tasks, both of which can enhance the modeler’s understanding of the relationship between factors and responses, and therefore the model in general. The first task is model calibration, which is the process of tuning model inputs, whose values have a reasonable degree of uncertainty, to get one or more desired model outputs to align better with measurement data, thereby achieving a more accurate model. The second task is engineering design, which is the process of finding optimal design parameters for which the design engineer is responsible, for the purpose of improving a real component, product, or system.
To avoid the rote task of simulating every possible combination of factor values within their specified bounds and with an acceptable degree of resolution, an optimizer employs a search algorithm to more efficiently find the optimal solution in as few model evaluations (referred to as design iterations) as possible.
Genetic algorithms are commonly used for optimization problems, as they include stochastic methods to more thoroughly search the design space, which is the multi-dimensional region between the lower and upper bounds of the factors. Stochastic methods attempt to balance two processes, exploitation and exploration, to further improve upon solutions that are already known while continuing to find new solutions in unexplored areas of the design space. The genetic algorithm included in GT-SUITE, NSGA-III, was developed by Kalyanmoy Deb and Himanshu Jain [1] and is well-regarded in the optimization community. A genetic algorithm mimics biological evolution by applying mathematical equivalents of selection, cross-over, and mutation to a population of agents over multiple generations. An agent is a specific combination of factor values, thereby representing one design iteration. A key genetic algorithm setting is the population size; the population, after being initialized with a set of agents, proceeds to evolve from generation to generation. The total number of model evaluations or design iterations is the population size multiplied by the number of generations.
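The selection, cross-over, and mutation mechanics described above can be shown in a minimal single-objective sketch. NSGA-III itself is a many-objective, reference-point-based algorithm and is considerably more involved; this toy version only illustrates the shared concepts.

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=20, generations=50,
                      mutation_rate=0.1, seed=0):
    """Minimal GA sketch for minimization. Each agent is one combination
    of factor values (one design iteration); total evaluations are
    roughly pop_size * generations, as described in the text."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # cross-over: take each gene from one parent or the other
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # mutation: occasionally redraw a gene within its bounds
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Minimize a simple quadratic with optimum at (3, -1)
best = genetic_algorithm(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                         bounds=[(-10, 10), (-10, 10)])
```

The `seed` argument corresponds to the random seeds mentioned below: rerunning with different seeds produces the different trials used to evaluate progress.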
The performance of an optimization run is generally assessed by viewing the change in the response variable plotted against the design iteration number. To evaluate optimization performance, we introduce the term progress, which is defined as the best response value at any given design iteration. The performance of an optimization run can then be evaluated using a single plotted line that represents the progress. Because of the stochastic nature of a genetic algorithm, multiple optimizations (referred to as trials) must be run on the same problem using different random seeds to fairly evaluate the progress. For these studies, 20 trials were run on each model, and the median and maximum progress were calculated and plotted to compare the two algorithms. The maximum progress is a measure of the worst performance that can be expected. The plot below is an example of 20 trials that were run for a particular optimization model, along with the median and maximum progress.
Again, the genetic algorithm’s population size is an important setting that influences the performance that can be achieved. Smaller population sizes generally allow the optimizer to “turn over” and learn from itself faster, given an equal number of total design iterations. In other words, with a smaller population size, more generations fit within a prescribed number of total design iterations. As a result, smaller population sizes generally allow faster progress for any given optimization run. However, if the population size is too small, it might lack sufficient diversity of agents and converge to a local optimum. This concept is illustrated in the following plot, where the median progress is shown for four different population sizes for a given model. The smaller population sizes achieve faster early progress within the first 400 design iterations, but population sizes 10 and 20 converge to local optima and fail to continue to make progress. The larger population sizes of 30 and 40 are slower to converge, but continue to make progress.
Gamma Technologies recently improved the genetic algorithm in V2021 by incorporating metamodeling during optimization runs. Also referred to as a response surface or surrogate model, a metamodel is a mathematical representation of response data that uses factors as input variables. The known data points are used to train the metamodel for the purpose of predicting response values between the known points. The improved search algorithm, which is referred to as the Accelerated GA, seamlessly uses metamodels behind-the-scenes to intelligently estimate where the optimal solution might be located. The following graphic is a basic representation of this concept, where a metamodel (represented by the surface contour) is fit through 9 datapoints. If the optimizer is trying to find the maximum response, it can quickly determine that the starred location is the estimated maximum by using the metamodel.
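A one-dimensional analogue of the metamodel concept: fit a simple response surface exactly through known points, then use it to estimate where the optimum lies without any further model runs. GT-SUITE's actual metamodels are more sophisticated; an exact parabola through three points is used here purely for illustration.

```python
def parabola_vertex(p1, p2, p3):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) points
    (a tiny 'metamodel') and return the x-location of its vertex,
    i.e. the surrogate's estimate of the optimum."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)

# Three known design iterations of a response peaking at x = 2.5
estimate = parabola_vertex((1, 7.75), (2, 9.75), (4, 7.75))
print(f"estimated optimum near x = {estimate:.2f}")
```

The key payoff is the same as in the full algorithm: the surrogate suggests a promising candidate between already-evaluated points, so the next expensive model run can be spent where it is most likely to help.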
More details about this approach are given in a white paper recently published to the Gamma Technologies website. The paper also provides a comprehensive performance comparison between the standard genetic algorithm and the Accelerated GA using 25 complex optimization test models that were used to develop the modified algorithm. Finally, the paper provides some helpful background on the genetic algorithm, including an exploration of the relationship between its two main parameters, the population size and number of generations. The paper can be accessed using this link:
A few results of the final comparison are included here. The plots below show median progress (solid lines) and maximum progress (dashed lines) for the standard genetic algorithm and Accelerated GA for four physics-based models. For all of these four models, the Accelerated GA not only finds better final solutions, but does so faster and with less variance (as indicated by the maximum progress lines). Considering the DIPulse-B model for example, the Accelerated GA is often finding better solutions multiple hundreds of design iterations sooner than the standard genetic algorithm, as measured by the horizontal distance between red and blue lines.
In summary, the GT-SUITE genetic algorithm’s exploitation process has been improved by incorporating metamodeling using the optimization data, and these metamodels provide an estimate for where the real optimum might exist. The resulting Accelerated GA provides better performance than the standard genetic algorithm for almost all 25 of the models tested. As a result, the modeler can use larger population sizes and fewer generations to increase diversity without substantially increasing the total number of design iterations. The Accelerated GA should provide the modeler more confidence in finding better optimization solutions in fewer total design iterations.
Gamma Technologies is committed to continued research, testing, and improvement of its optimization and data analysis capabilities to provide its users with increased confidence in using these tools for their own projects. For any questions, support, or further discussion, please reach out to us at [email protected].
[1] K. Deb and H. Jain, “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints,” in IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577-601, Aug. 2014.
Using Design of Experiments and Distributed Computing to Optimize Microgrid Design
Learn how to use GT’s advanced productivity tools to explore a microgrid’s design space.
As mentioned in a previous blog posting, GT-SUITE is a powerful tool for evaluating the economics of a microgrid. However, in that previous blog post, only one combination of input variables was selected to be analyzed; for instance, the wind generation system was sized to be 105 kW and the solar system was sized to be roughly 90 kW. One of the main challenges of planning a microgrid is that the design space is very large, and the sizing of every component should be optimized, including the wind turbine system, the solar array, and the battery. Changing the sizes of these subsystems results in different upfront cost and power generation metrics for microgrids; ultimately, these will develop into different payback periods, returns on investment, and maximum battery backup time for islanding mode.
To accurately quantify how the sizing of the subsystems in a microgrid affects its economics, we will use GT’s integrated Design of Experiments (DOE) and Distributed Computing capabilities to quickly explore the design space of the microgrid.
Model Setup
The first step to exploring a design space using GT’s DOE is to define the inputs and responses of the system. The tables below summarize the system inputs and the system responses.
System Inputs
| Input Name | Description |
|---|---|
| NumberOfWindTurbines | Number of 3 kW wind turbines installed in microgrid |
| SolarPanelArea | Area of solar panels (m^2) |
| Battery_NSeries | Number of series-connected cells in battery |
| Battery_NParallel | Number of parallel-connected cells in battery |
System Responses
| Response Name | Description |
|---|---|
| Initial_Cost | Initial cost of microgrid installation (USD) |
| NetReturn_##Years | Net savings of microgrid after 10, 20, and 30 years (USD) |
| Payback_Period | Payback period of microgrid investment (years) |
| ROI | Return on investment (%) after 30 years |
| Annualized_ROI | Annualized ROI (%) after 30 years |
| Islanding_Time | Amount of time (hours) microgrid can operate in islanding mode |
Next, to properly explore the design space of the microgrid, GT’s DOE tool was used to set up a full factorial of simulations that varied the four system inputs. To do this, each input requires a minimum, a maximum, and a number of levels (determining how finely or coarsely to search each system input) to be defined. The image below shows the settings used for the microgrid design of experiments; this configuration resulted in 10,368 experiments being set up.
As mentioned in the previous blog, each 30-year simulation takes roughly 90 seconds to complete. This means the 10,368 simulations would take over 10 days to complete if run on a single core. With this in mind, we utilized GT’s ability to distribute this job across a high-performance computing (HPC) cluster, running 50 simulations at a time, meaning this DOE could be run in less than 6 hours.
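A quick back-of-envelope check of the runtimes quoted above, using only the figures given in the text:

```python
# 10,368 experiments at ~90 seconds each, serial vs. 50 concurrent runs
experiments = 10_368
seconds_each = 90

serial_days = experiments * seconds_each / 86_400        # seconds per day
parallel_hours = experiments * seconds_each / 50 / 3_600  # 50 at a time

print(f"single core: {serial_days:.1f} days")
print(f"50 concurrent: {parallel_hours:.1f} hours")
```

The arithmetic confirms the claims: about 10.8 days serially, and about 5.2 hours when 50 simulations run at a time.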
Post-Processing Model
Post-processing and understanding over 10,000 cases of simulation is no easy task; however, with GT’s DOE Post-processor, we are able to set up an easy-to-use and highly visual interface to quickly explore the design space of the microgrid.
In GT’s DOE Post-processor, users create metamodels that are trained by the large number of experiments that were run. After the metamodels are created, case predictions are made in the visual interface shown below. This interface provides sliders to quickly adjust the values of the input variables (number of wind turbines, solar panel area, battery sizing) and see how these changes affect the results in plots on the right, which are dynamically updated based upon the input values selected. The response that is plotted is determined by which response is selected in the “Metamodel Tree” on the left side of the screen.
The image above shows a series of “single factor response” plots where the responses are plotted as functions of only changes in a single input variable. Two factor response plots are also automatically created by GT’s DOE Post-processor. Examples of these are shown in the image below, which show how the responses vary across two different input variables.
With the plots from the previous two images, we can conclude that as we increase the size of the battery, the annualized ROI of the system decreases. This lines up with our expectations: the batteries are expensive, and their primary purpose is to provide sufficient backup power; they are not installed for profitability. We can also see that increasing the number of wind turbines and solar panels increases the annualized ROI, which also matches expectations, since the wind turbines and solar panels are what allow a microgrid to save money on the cost of electricity.
In addition to annualized ROI, these intuitive interfaces can be constructed to explore the design space of a microgrid and understand how changes in the design affect other responses like the payback period, net return, initial cost, and islanding time, among many other possible outputs. For a more interactive demonstration, we’ve created the video below to show how this can be done.
Conclusion
With the microgrid model introduced in a previous blog, GT’s integrated Design of Experiments (DOE), and GT’s Distributed Computing capabilities, GT-SUITE is a powerful tool to explore the design space and optimize the design of any microgrid. Want to evaluate GT-SUITE to study your microgrid? Contact us!
Learn how to use a model to evaluate energy usage and cost of a microgrid.
According to the United States’ Environmental Protection Agency, the generation of electricity accounts for over a quarter of the greenhouse gas emissions in the United States and is the second leading source of such gases, behind only transportation. It is well known that GT-SUITE from Gamma Technologies plays a leading role supporting automotive and commercial vehicle OEMs to decrease the greenhouse gases that their cars and trucks release, but what is less known are the capabilities that GT-SUITE has to model and help develop cleaner and more efficient electricity infrastructure.
Microgrids are a relatively new technology that communities, campuses, and governments are investigating to modernize and decentralize their electrical infrastructure. These microgrids combine renewable energy, energy storage, and advanced controls to give communities and campuses the ability to restructure how they generate, store, transmit, and consume electrical energy.
What is a microgrid?
The U.S. Department of Energy Microgrid Exchange Group defines a microgrid as “a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect and disconnect from the grid to enable it to operate in both grid-connected or island-mode.”
In short, microgrids enable communities to integrate decentralized generation (i.e., renewable energy and backup generators) and battery energy storage into the larger electrical grid. Additionally, microgrids integrate advanced controls to enable precise energy management within the community and to disconnect from the larger grid and operate in an islanding mode.
Why install a microgrid?
A microgrid’s battery backup and islanding feature enable it to continue operating during power outages, which has the potential to save lives during natural disasters.
Microgrids can also be a good financial investment. Microgrids generally have a high upfront cost to install renewable energy, controls, and energy storage, but over time they have the potential to pay back that investment through the money saved by purchasing less electrical energy.
Why model a microgrid?
Because of the steep upfront costs of microgrids, it is important to understand the expected return on investment and payback period before committing to installing a microgrid. Because the upfront costs are so high, it is likely that zero prototypes will be built.
Additionally, it is beneficial to study the trade-offs between different design decisions in order to optimize the system (How large should the solar array be? How many wind turbines should be installed? How large of a battery should be used?). The optimal design will change for every project, based on the load of the system and the weather. For example, perhaps a business park in Arizona can expect a high load due to more air conditioning usage and may invest more into solar generation than wind because of the lack of cloud coverage.
All these factors mean that simulation is paramount for designers to optimize the various trade-offs associated with their microgrid project.
Modeling a Microgrid:
To demonstrate the capabilities and to evaluate the effectiveness of a microgrid, we built a comprehensive model in GT-SUITE. This model reflects the size and topology of a residential microgrid for a neighborhood of 20 homes. The model includes renewable energy, a battery, the electrical consumption from the homes, and simplified controls to optimize energy usage. This model is parameterized so weather, load, and cost of electricity data can be customized to fit any application.
Each source of generation and the loads were converted to a simple power source/sink and a simple circuit that obeys Kirchhoff’s current law was created. This provides a full picture of the power flowing through the system and allows the cost of ownership of a microgrid to be easily calculated.
The power supplied by the renewable energy sources is based upon the weather data and the specifications of specific wind turbine and solar panel models. The model was set up so that new weather data can easily be brought in to calculate how the microgrid would behave in any climate.
In addition to differing weather, each microgrid’s electrical load will be unique. The load in the demonstration model is meant to represent a series of residential homes, so we have included a dependency on ambient temperature to increase power usage during times when heating or air conditioning is required. The daily energy usage of a home in the microgrid is defined using a piecewise function with three zones: heating, comfort, and cooling. The comfort zone (when neither heating nor cooling is required) is defined as 55-65 degrees F. Additionally, we have defined a baseline electrical energy usage for a household of 20 kWh/day. The piecewise function for daily energy usage as a function of ambient temperature is shown below. This is of course flexible and can be altered to represent the electrical load from any system, like a hospital or college campus.
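The temperature-dependent load described above can be sketched as a simple piecewise function. Only the 55-65 °F comfort band and the 20 kWh/day baseline come from the model description; the heating and cooling slopes below are illustrative assumptions, not the values used in the demonstration model.

```python
def daily_energy_kwh(temp_f, baseline_kwh=20.0,
                     heating_slope=0.4, cooling_slope=0.5):
    """Daily energy use of one home vs. ambient temperature (deg F).

    Three zones: heating below 55 F, comfort between 55 and 65 F
    (baseline usage only), and cooling above 65 F. The slopes are
    hypothetical kWh/day per degree outside the comfort band.
    """
    if temp_f < 55.0:                                       # heating zone
        return baseline_kwh + heating_slope * (55.0 - temp_f)
    if temp_f <= 65.0:                                      # comfort zone
        return baseline_kwh
    return baseline_kwh + cooling_slope * (temp_f - 65.0)   # cooling zone
```

Any other load shape (a hospital, a campus) could replace this function without changing the rest of the model.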
As mentioned earlier, another large benefit of microgrids is the increased control over the loads, generation, and connection to the grid. In the demonstration model, we have included some controls to take advantage of fluctuations in the price of electricity by charging the battery when the hourly price is low and discharging the battery when the price is high. Other control schemes can be tried and tested, allowing users to find new ways to save money and energy within their system.
Because the model is simple, we were able to run it with a timestep size of one hour, enabling simulations spanning years to run in seconds. We set up the model to simulate 30 years of operation, which took less than 90 seconds to complete.
To understand the financial viability of the microgrid over the course of time, this demonstration model calculates financial results, including:
Upfront cost of the microgrid
Cost of electricity with and without the microgrid
Net return of the microgrid
Return on investment (ROI) of the microgrid
Annualized ROI of the microgrid
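The financial outputs listed above reduce to a few simple formulas. Here is a sketch of how they relate to one another; the numbers fed in are placeholders, not results from the demonstration model.

```python
def microgrid_financials(upfront_cost, cost_without, cost_with, years):
    """Compute the financial metrics listed above.

    cost_without / cost_with: cumulative electricity spend over the
    period without and with the microgrid (excluding the upfront cost).
    Returns (net_return, total_roi, annualized_roi) with ROIs as fractions.
    """
    savings = cost_without - cost_with
    net_return = savings - upfront_cost
    roi = net_return / upfront_cost                       # total ROI
    annualized_roi = (1.0 + roi) ** (1.0 / years) - 1.0   # compound annual rate
    return net_return, roi, annualized_roi
```

Note that until cumulative savings exceed the upfront cost (the payback point), both the net return and the ROI are negative.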
Data Population
For this demonstration model, weather data from this website was used. This is a paid service that provides historical weather data for locations around the world. As an evaluation option, the web service provides historical weather data for Basel, Switzerland for free, which was used to build this demonstration model. The data includes historical temperature, solar radiation, and wind speed (all shown in the image below), but much more weather data is available on the website. Please note that the simulation was set up such that Day 0 is January 1st and Day 365 is December 31st.
In this demonstration model, we decided to use hourly pricing based upon historical data found on the website of the Commonwealth Edison (“ComEd”) electric utility. ComEd is the local supplier of electricity in the Chicago area, home of Gamma Technologies.
ComEd has given the public the ability to download historical prices of electricity here on their website. For this demonstration model, the price of electricity during calendar year 2019 was used. Additionally, to account for taxes and other fees, GT has applied multipliers and shifts onto this data.
Model Results:
In the image below, we have included the power generated by both solar and wind and the power consumed by the 20 homes in the microgrid over the course of one year. The model uses a wind system rated at 105 kW and a solar system rated at 90 kW. Clearly, the renewable energy sources rarely operate at their rated power, due to the highly transient weather data used in this simulation.
In addition to the amount of electrical power generated, stored, and consumed, the financial calculations were done over a 30 year period. Below is a plot showing the cost comparison of the microgrid that was modeled (red line with a high initial investment) and the cost of electricity for the 20 homes if no microgrid was built (blue line).
In addition to the cost calculation, the return on investment (ROI) was calculated as a percentage, representing the rate of return on the initial investment in the microgrid. The payback period in this model was roughly 8.7 years; before that point, the ROI was negative. Additionally, we calculated the annualized ROI, which asymptotes to roughly a 4% return on investment after 30 years.
Conclusion:
Using the simulation capability of GT-SUITE, we are able to evaluate the long-term power and financial aspects of installing a microgrid well before the initial investment for such a project would be made. The results shown above reflect only a single simulation, but many more simulations can be run to experiment with and optimize the sizing of the wind turbines, solar arrays, and batteries. Additionally, every microgrid is different, so the electricity usage patterns, electricity price patterns, and weather patterns need to be considered for each individual microgrid. GT-SUITE is a powerful tool for evaluating the economic viability of any microgrid. Want to evaluate GT-SUITE to study your microgrid? Contact us!
For those who already have GT-SUITE installed, we’ve uploaded the demonstration model to our secure downloads page. To download the file, the username and credentials used to log onto our website are required.
Parametric Battery Pack Modeling in GT-SUITE for All Existing Cooling Concepts
How would it benefit your workflow if there was a multiphysics CAE tool that performs the following tasks within a single software?
1D system analysis of battery and vehicle cooling including all the detailed components, DOE matrices and optimization.
Fully parametric 3D finite element (FE) analysis of battery packs including all the cooling concepts, namely direct (air, dielectric) or indirect (plates, fins).
Both 1D and 3D multidomain (thermal, electrical, chemical) modeling of battery cells for all shapes and chemistries, including all the cell-level subcomponents.
3D computational fluid dynamics and 3D conjugate heat transfer analyses for complex flows.
Flexible CAD-based custom 3D finite element analysis of battery packs for complex cooling concepts.
Fast-running 1D-3D flow thermal models for transient drive cycle simulations and complex phenomena such as thermal runaway.
Switch between a detailed 3D finite element (both parametric and CAD shapes) and a 1D lumped mass thermal domain in a single model for detailed component and real time system analyses, respectively.
Figure 1. GT-SUITE at all stages of product development V Cycle
After years of innovative work at Gamma Technologies, the latest version of GT-SUITE, v2021, performs all the above-mentioned tasks, as shown in Figure 1.
Here in this blog I would like to give you a sneak peek into some of the above features to be released in the coming version. The topics I chose for this blog are the following:
Fully parametric 3D finite element (FE) analyses of battery packs including all the cooling concepts, namely, direct (air, dielectric) or indirect (plates, fins)
Switch between a detailed 3D finite element and a 1D lumped mass domain in a single model for component and system analysis, respectively
A further demonstration of these features will be available during the “Thermal Management Systems and Component Modeling” seminar at the Global GT Virtual Conference 2020.
At the early stages of the development cycle, engineers need to define battery packs in terms of shape, size, and cooling concept. The design goals include:
Uniform temperature distribution at cell, module, and pack level
Effective temperature control during charging/discharging and cold start
Containing uncontrolled battery failures such as thermal runaway
Because multiple cooling concepts could meet the above goals, an easy transition from one cooling concept to another enables a full evaluation of the options. A parametric 3D finite element approach allows a quick comparison between cooling concepts to decide which one roughly meets the system requirements. Through model scaling, engineers can also test the compactness and overall pack size while staying within safe working temperatures.
GT-SUITE offers a fully parametric 3D finite element meshing capability to cover a wide range of geometrical shapes. For this blog I took the example of a battery module equipped with prismatic cells and I tested four cooling concepts (see Figure 2):
Direct dielectric immersion cooling
Direct active air flow cooling
Indirect vertically embedded plate cooling
Indirect horizontal plate with extended fin cooling
Figure 2. Types of cooling concepts
For each cooling concept, I identified a repetitive building block that I parametrically modeled in GT-ISE (the modeling environment of GT-SUITE) and built multiple copies to create the battery module. Within the repetitive building block, parametric 2D FE meshes were extruded to create parametric 3D FE meshes with all the parameters available in the case-setup of the model for further DOE analyses.
I ran a brief analysis of the four cooling concepts above for two given cell sizes. In the analysis, all four battery modules were discharged at a constant C-rate of 2C. At this discharge rate, active air cooling cannot keep the module under the safe temperature limit (see Figure 3). The extended-fin cooling approach is effective at least for the smaller cells, but it reacts slowly to both heating and cooling; for the larger cells it is not suitable, at least for this extreme case of a 2C rate at the given pump flow rate. The embedded plate and immersion cooling concepts are equally effective for both the smaller and larger cell sizes.
Figure 3. 2C discharge rate of four different modules for two different cell sizes (large and small), a) at ambient temperature of +40°C and b) at -10°C.
After this initial analysis, one can ask whether extended-fin cooling is good enough in terms of cost and performance, or whether the embedded plate and immersion cooling options should be pursued. To help answer this question, the temperature distribution between cells and modules can be evaluated. On this metric, the immersion cooling concept outperforms the other three, with the lowest maximum module temperature and a highly uniform temperature distribution (see Figure 4).
Figure 4. 3D temperature contours of four different modules for smaller cells.
A very compact case setup of two cases is presented here, but for project work I would recommend running a multi-case DOE analysis to optimize the cell dimensions and extract maximum performance from the module within the safe limit of maximum allowed temperature.
For a chosen cooling concept, cell shape, and size, the goal as we move forward in the system development cycle is an accurate system model for designing and testing the subsystem and system controls. Later in the development stages, such analyses are done either virtually (software-in-the-loop) or on an ECU test bench (hardware-in-the-loop). The parametric approach presented here makes it possible to move toward the right side of the V-cycle quite early in development, without having to create the first CAD models and prototypes.
To integrate the controls model or hardware, real-time-capable GT-SUITE models are required. In general practice this is achieved by simplifying the model, i.e. replacing the detailed finite element structures with lumped masses. A well-calibrated lumped mass model, characterized with the correct mass, heat transfer areas, and material thermal properties, can provide the accurate component thermal inertia and temperatures required for transient system analyses.
In GT-SUITE v2021, it is possible to maintain two different levels of detail, i.e. a 3D finite element domain and a 1D lumped mass thermal domain, within one model. With this capability, a single model can be switched between the two domains depending on whether a component or a system analysis is desired. Unlike in the past, one no longer has to maintain two different models, and model building time is significantly reduced (see the new workflow in Figure 5).
Figure 5. Switch between 3D thermal finite element and 1D lumped mass within one model.
In the new workflow the solid shapes are converted into 3D mesh in GEM3D and the geometrical information (distance to mass center and heat transfer areas) needed to characterize a lumped thermal mass is also extracted. As a result, the exported part in GT-ISE can be switched between the 1D thermal mass and 3D FE solution. For components that use a 3D FE mesh generated parametrically directly in GT-ISE or imported into GT-ISE from a third party tool, the solver is programmed to calculate the geometrical parameters needed to characterize the lumped thermal mass from the 3D FE mesh. These capabilities enable a real-time capable model to be developed from the detailed finite element model regardless of the source of the mesh.
Because the fidelity of the model has been reduced, the 1D thermal model requires calibration so that it produces the same results as its 3D counterpart. For this task, “Distance Multipliers” can be used to scale the “Distance to Mass Center,” in other words the heat transfer distance in Fourier’s law. In a future version of GT-SUITE these values will be calibrated automatically from the 3D FE model to reduce the effort needed to build 1D thermal models.
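Conceptually, the “distance to mass center” sets the conduction length in the lumped model, and the distance multiplier simply scales that length in Fourier’s law. A hand-calculation sketch (all numerical values below are hypothetical, chosen only to show the effect of the multiplier):

```python
def conduction_heat_w(k, area_m2, distance_m, t_surface_c, t_center_c,
                      distance_multiplier=1.0):
    """Steady conduction from a surface to the lumped mass center.

    Fourier's law in resistance form: Q = k * A * dT / (m * d),
    where d is the distance to mass center and m is the distance
    multiplier used as the calibration knob described above.
    """
    return (k * area_m2 * (t_surface_c - t_center_c)
            / (distance_multiplier * distance_m))

# Doubling the multiplier halves the conducted heat for the same dT:
q_base = conduction_heat_w(0.5, 0.01, 0.005, 40.0, 30.0)                      # 10 W
q_cal = conduction_heat_w(0.5, 0.01, 0.005, 40.0, 30.0, distance_multiplier=2.0)  # 5 W
```

Tuning the multiplier therefore adjusts the effective thermal resistance of the lumped mass until its transient response matches the 3D FE result.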
For the case presented here, the temperature difference between the 3D FE and 1D thermal mass models was negligible due to the symmetrical nature of the geometry. As a result, I decided not to calibrate the modules and simply added them together to create a 1D battery pack model consisting of 15 battery modules (see Figure 6). This pack was integrated into a 1D electric vehicle (EV) cooling system model, and I then ran some drive-cycle simulations. (The EV cooling system model used is from the GT-SUITE example library, under Cooling_Vehicle_Thermal_Management.)
Figure 6. Battery pack model consisting of fin extended modules.
Figure 7 below shows some results of the drive cycle analysis for the different cooling concepts. Although the cumulative energy production over the 30 minutes of the WLTC drive cycle was not as high as in the 2C constant-discharge case, the observations about the suitability of the different cooling concepts still hold in Figure 7. The air cooling approach is not suitable for this pack design in either cold or warm conditions.
Figure 7. WLTC drive cycle simulations of four different modules for two different cell sizes (large and small), a) at ambient temperature of +40°C and b) at -10°C.
The main takeaway from the example above is that this integrated model makes it possible to design system controls not just for some average lumped battery pack temperature, but for the specific cells that are prone to the highest temperatures due to their location in the pack. Moreover, the well-calibrated 1D pack model provides accurate heat rejection to the cooling system for sizing the heat exchangers, and accurate pressure drop in the cooling system for sizing the coolant pumps. Finally, this 1D pack model was not built separately: it is the same detailed 3D FE model built for the component analysis, simply switched over for use as a 1D model.
With this introduction I would like to invite you to join us in our “Thermal Management Systems and Component Modeling” seminar at this year’s Global GT conference to learn more about the workflow improvements and new features available in GT-SUITE v2021.
Written by Dr. Dig Vijay
Comprehensive System to Component xEV Simulation Using GT-SUITE and JMAG (Part II)
This second blog of our two-part series on system to component xEV simulation looks at a solution approach which goes beyond the scope of basic energy consumption evaluation.
Once the electric machine has been preselected based on the map-based approach – taking into account the energy consumption and packaging aspects, as shown in part 1 of this blog series – the peripherals (inverter, dc-link filter) and the associated control strategy must be designed and fine-tuned. This helps to ensure that the operational machine efficiency is high during transient operation and the current profile discharged from the battery is smooth, which are important factors for both achievable total energy consumption and component health. These development tasks are mastered with the help of electrical-equivalent system models.
Options for integrating JMAG-Express Online models with GT-SUITE system models
The integration of JMAG-Express Online with GT-SUITE allows two options for creating such an electrical-equivalent model of the electric machine, depending on data availability, on simulation models already in place, and on your preferred modeling features.
The first option is preferable for a simulation driven design evaluation chain and conducts the transformation of the FE based JMAG-Express Online model into a JMAG-RT plant model. The JMAG-RT plant model integrates directly into the system model of the electrical, mechanical and thermal domain within GT-SUITE and includes detailed specifications regarding the electrical equivalent machine attributes and the associated thermal losses throughout the entire operation range of the machine.
The second option is available when you would like to customize the level of model fidelity or include machine parameter values obtained from measurements. This approach is based on the export of the electrical-equivalent machine parameters from JMAG-Express Online into the appropriate electric machine template in GT-SUITE.
Both electrical-equivalent machine modeling options require an electrical excitation, in the simplest case an AC voltage source, to estimate the resulting mechanical performance of the machine. This idealized setup of the electrical domain allows engineers to design and optimize the control strategy required to drive the electric machine.
Control Strategy Design in GT-SUITE
The signal flow diagram in the figure below represents a field-oriented control strategy with PI based current controllers. It contains two blocks for the Clarke-Park Transformation which transform the sensed 3-phase current signals to the rotor-oriented d-q reference frame and reversely transform the controller d-q voltage outputs to the 3-phase system. The target value generator includes the core strategy to satisfy a high electric machine efficiency in both the base speed and the field-weakening region.
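For reference, the Clarke and Park transforms mentioned above map the sensed three-phase currents into the rotor-oriented d-q frame. A minimal sketch follows, using the amplitude-invariant form; sign and scaling conventions vary between tools, so treat this as one common convention rather than the exact formulation in any particular library.

```python
import math

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: 3-phase -> stationary alpha-beta frame."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (ib - ic)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: rotate alpha-beta into the rotor d-q frame.
    theta is the electrical rotor angle in radians."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q
```

The controller outputs are taken back to the 3-phase system with the corresponding inverse transforms, closing the loop shown in the signal flow diagram.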
In a further detailing step, the system model can be expanded to include a switching inverter model and the associated control strategy (e.g. pulse width modulation). This level of model fidelity allows users to explore the full detail of system dynamics in both the electrical and mechanical domains. All relevant control blocks are available ready-to-use in the GT-SUITE Template Library, so you can start from the exemplary control circuits contained in the numerous example models and refine them for your individual needs.
The electrical-equivalent approach clearly adds deep insight into the interplay of the electrical components and their control strategy. It complements the map-based approach, depending on data availability and the requirements of the particular development stage. Both approaches are conducted through the integration of JMAG-Express Online with GT-SUITE, taking advantage of a good degree of automation.
Written By: Michael Zagun and Yusaku Suzuki
Accurate, Concept Level Electric Motor Design Using Simulation (Part I)
How to evaluate electric vehicle performance and behavior using simulation
A typical task of a vehicle simulation engineer is to evaluate the effect of different technologies or component selections on overall vehicle performance and behavior. One of the main challenges of this task is the lack of accurate data available for components, especially for engines, batteries, and electric motors. This lack of data can lead to false assumptions or extrapolations that produce inaccurate results. In this first blog of a two-part series, we will introduce a new integration between GT-SUITE and JMAG-Express Online that provides a method for accurate concept-level electric motor design. In the context of vehicle electrification, motors are a key powertrain component. What is required of a motor is not only high performance as a component but also strong consistency with the system: for instance, matching the motor and battery sizes, as well as the cooling system size and performance. Figure 1 below shows an example of how different losses, and therefore cooling requirements, vary throughout the motor operating range.
Figure 1. Motor Performance and Losses
High-fidelity efficiency map-based modeling
At the system design phase, vehicle engineers use either a map-based approach, with maps measured on a prototype, or a lower-fidelity motor model-based approach. When using a map-based approach, the engineer commonly needs to wait for the prototype to be ready, or must rely on other empirical approaches. Alternatively, a lower-fidelity motor model-based approach causes a lot of rework as the design matures. To eliminate these errors and inefficiencies, GT and JSOL have partnered together and are excited to release new software functionality. With GT-SUITE v2020, GT users can now create a high-fidelity motor model using the embedded JMAG-Express Online interface. JMAG is a comprehensive software suite for electromechanical design and development. It enables users to build a high-fidelity efficiency map model with less than 1% error compared to measurement, and it allows the user to see various motor characteristics within 1 second while changing motor types, slot combinations, dimensions, and other machine parameters.
Figure 2. JMAG-Express Online Workflow
Figure 2 shows a high-level overview of the workflow.
Figure 3. JMAG-Express Online Integration
The above workflow is accomplished through an integrated interface, shown in Figure 3.
To the GT user, the experience of concept-level motor design is intuitive and seamless, as well as fast. In this embedded interface, the user has flexibility to change machine types, geometry, as well as requirements for torque, power, and maximum speed. It is also possible to add additional constraints on the system, such as voltage and current limitations, as well as geometry constraints such as maximum motor diameter or stack height, air gap, etc. Based upon “rules of thumb” and common motor design principles, JMAG-Express Online will create a motor which meets the requirements, subject to the constraints. The user can refine the design or proceed with the configured motor design. Because of JMAG’s history in the area of motor design, the end user does not need to be an expert in motor design to be effective in exploring different design possibilities.
Figure 4. JMAG-Express Online Motor Efficiency Predictions
Through this embedded workflow, users can quickly and efficiently analyze different motor types and create maps for each, such as in Figure 4.
Figure 5. JMAG-Express Online Motor in a GT-SUITE BEV Model
Because the JMAG-Express Online interface is natively integrated with GT-SUITE, users can not only analyze the motor behavior in a standalone environment but also integrate the JMAG motor models directly into a complete system-level model, as shown in Figure 5.
Figure 6. JMAG-Express Online Motor Efficiency and Operating Points
Such a model can be exercised through standard drive cycles; by reviewing the residency plots of motor operating points from the vehicle simulation, users can immediately reassess the motor specification and run the next simulations, as shown in Figure 6.
By connecting vehicle simulation engineers with parametric and template-driven motor design solutions with JMAG-Express Online, it is now possible to make earlier, and more confident design decisions, or motor selections. The push-button integration allows for design-space studies which more quickly explore all possibilities for the most effective motor solution at the vehicle level. Check out Part 2 of the blog series, where we discuss further integration possibilities between GT and JMAG, which move beyond map-based models into more predictive capabilities.
Written By: Jon Zeman and Yusaku Suzuki
Virtual Calibration of Fast Charging Strategies in GT-AutoLion
As mentioned in a previous blog, charging strategies have a significant impact on battery longevity, which in turn heavily affects the market’s overall perception of a brand.
Depending on the complexity of a charging strategy, it may require optimization, or calibration, which is extremely difficult, expensive, and time consuming. Intuitively, this is true because of the time and cost associated with physical testing. Less intuitively, however, this is also true because experimental testing of Li-ion cells can only measure the voltage, current, and temperature of the cell or battery, whereas understanding the true impact of a charging strategy requires much more detailed information about the state of the battery.
Luckily, physics-based modeling of Li-ion cells using GT-AutoLion enables engineers to have insight into the electrochemistry inside Li-ion cells well beyond voltage, current, and temperature measurements. This ultimately enables charging strategies to be virtually calibrated.
GT-AutoLion not only calculates the high-level quantities of Li-ion cells like voltage, current, and temperature, but also hundreds of other physical quantities within the cell, giving engineers valuable insight into the electrochemistry of the cell. With this ability, GT-AutoLion enables engineers to explore, calibrate, and make robust decisions about fast charging algorithms early in a development cycle, reducing the amount of physical testing required; when physical testing is needed, the tests are better informed and more focused.
Example Quantities
Thanks to a one-dimensional discretization of the anode, separator, and cathode, GT-AutoLion solves for many quantities through the thickness of the cell. For instance, the electrical potential of the electrolyte and the concentration of lithium ions through the thickness of the cell are solved in a location- and time-dependent manner, as shown in the image below, which summarizes the results of an example 1C discharge event (left) and a 1C charge event (right). These quantities give insight into the electrochemical performance within the cell. The potential in the electrolyte at various times is shown in the top plots, and the lithium-ion concentration at various times in the bottom plots.
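To illustrate what “solving through the thickness” means numerically, here is a toy explicit finite-difference step for a diffusing concentration across a 1D anode/separator/cathode stack. This is a schematic of the discretization idea only, not the actual pseudo-2D electrochemical equations GT-AutoLion solves, and all values are hypothetical.

```python
def diffuse_step(c, dx, dt, d_coeff):
    """One explicit finite-difference step of 1D Fickian diffusion,
    dc/dt = D * d2c/dx2, with zero-flux boundaries at both current
    collectors. c is a list of concentrations at nodes across the
    cell thickness (anode | separator | cathode)."""
    r = d_coeff * dt / dx**2          # must stay <= 0.5 for stability
    new_c = c[:]
    for i in range(1, len(c) - 1):
        new_c[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    new_c[0] = new_c[1]               # zero-flux boundary (left collector)
    new_c[-1] = new_c[-2]             # zero-flux boundary (right collector)
    return new_c
```

Repeating such steps in time, with source terms from the electrode reactions added at each node, is what produces the location- and time-dependent profiles plotted above.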
1C Discharge (Left) and Charge (Right) Results – The space to the left of the first dashed vertical line represents the anode, the space between the two dashed vertical lines represents the separator, and the space to the right of the dashed vertical lines represents the cathode
Electrolyte Potential & Fast Charging Strategies
As mentioned in a previous blog, charging Li-ion cells too fast can lead to premature degradation of the cell primarily due to lithium plating in the anode. Lithium plating occurs in the anode when the electrolyte potential is above zero in that electrode. This is illustrated in the image below which plots the electrolyte potential across the anode, separator, and cathode at various times.
Plot showing electrolyte potential during a 1C charge event and the area that would cause lithium plating to occur
This simulation and plot can be repeated for higher charging rates, including 1C (charging the cell in one hour), 2C (30 minutes), and 3C (20 minutes). The results are summarized in the image below. Clearly, there is a high risk of lithium plating when charging this particular Li-ion cell at a constant current of 3C.
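Once a potential profile like the ones above is available, the plating criterion described earlier (electrolyte potential above zero anywhere in the anode, per the sign convention used in this post) can be checked programmatically. A sketch with hypothetical node data:

```python
def plating_risk(node_positions, electrolyte_potential, anode_end):
    """Flag lithium-plating risk: True if the electrolyte potential
    exceeds 0 V at any node located in the anode (x < anode_end),
    following the sign convention used in this post."""
    return any(phi > 0.0
               for x, phi in zip(node_positions, electrolyte_potential)
               if x < anode_end)

# Hypothetical fast-charge profile: potential crosses zero inside the anode
x_m = [0.0, 20e-6, 40e-6, 60e-6, 80e-6]       # node positions across the stack, m
phi_v = [0.05, 0.01, -0.02, -0.06, -0.10]     # electrolyte potential at each node, V
at_risk = plating_risk(x_m, phi_v, anode_end=50e-6)  # True for this profile
```

Evaluating this flag over a sweep of charge rates is one way to turn the plots above into a quantitative limit on the charging current.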
Electrolyte Potential plot for 3 different charge rates: 1C, 2C, and 3C
Virtual Sensors
These quantities calculated by GT-AutoLion are not only available to be plotted after a simulation, but can also be dynamically sensed and even used in controls algorithms. GT-AutoLion provides an easy-to-use framework for sensing any quantity at any location, which can then be sent to GT’s controls domain or even 3rd party tools specialized in controls for use in controls development (for Model in the Loop or MiL testing). With this framework, more complex charging strategies can be explored and calibrated.
Benefits
Virtual calibration of batteries enables real-time or faster than real-time virtual battery sensors or observers that can be used to optimize charging strategies. Additionally, this framework enables engineers to understand the root cause for battery degradation and even anticipate early failures.
The primary and most immediate business benefit early in the development cycle is a reduction in testing costs by simply minimizing cell aging tests. This means product development will be faster and more cost-effective.
From a marketing and product desirability point of view, performance metrics can be met and potential critical lifetime scenarios averted. This, over time, translates to improved brand perception and greater long-term brand loyalty.
Written By: Joe Wimmer
How to Reduce Battery Charging Time While Maximizing Battery Life
In the age of electrification, promising technologies like battery electric vehicles and electric aircraft are coming to the forefront of societal advancement. However, one major hurdle in electrification is speeding up the vehicle battery charging time. Re-fueling conventional vehicles and aircraft generally takes minutes, whereas charging electric vehicles and aircraft can take hours. For airlines, this down time can be very expensive, and a long charging period has been shown to reduce the likelihood of consumers purchasing pure electric vehicles.
To combat this, battery engineers are exploring options to reduce the amount of time required to charge a battery. Unfortunately, fast charging Li-ion batteries can cause premature battery degradation by initiating lithium plating, so an aging cost for fast charging must also be considered. Because of this, the system manufacturer (e.g., an automotive, aircraft, power tool, or consumer electronics OEM) has to strike a balance between decreasing the time required to charge a battery and the expected life of the battery (which can have a great effect on brand perception).
There are various charging protocols that both improve the battery life and shorten the charging time, when compared to traditional charging protocols. The effects of these charging protocols vary from cell to cell and need to be tested individually to fully understand their effects on cell charging time and degradation rate.
Testing and optimizing charging protocols is extremely resource intensive because it intentionally degrades lithium-ion cells, which can be expensive and time consuming. GT-AutoLion helps reduce this cost by supplementing experimental tests with virtual tests.
What is Lithium Plating?
One of the key contributors to battery degradation under fast charging is lithium plating: the reduction of lithium ions into solid lithium metal. It occurs when the potential in the anode falls below zero volts and cycling lithium ions (Li+) react with electrons (e–) to form lithium metal (Li+ + e– -> Li). This lithium metal is deposited in the anode, lowering its porosity. Because lithium ions are consumed in this reaction, it decreases the capacity of the cell; because it lowers the porosity of the anode, it also increases the resistance of the cell.
Lithium plating occurs most frequently when Li-ion cells are charged with very high currents, especially at low temperatures.
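As a rough illustration of the condition just described (not GT-AutoLion's internal electrochemical model), the plating trigger and its aggravating factors can be sketched as a simple check on anode potential, charge current, and temperature; the current and temperature thresholds below are hypothetical:

```python
def plating_risk(anode_potential_v, current_a, temperature_c,
                 high_current_a=50.0, low_temp_c=10.0):
    """Flag conditions favorable to lithium plating.

    Plating is driven by the anode potential dropping below 0 V vs. Li/Li+,
    and is aggravated by high charge currents and low temperatures.
    The numeric thresholds here are illustrative, not physical constants.
    """
    plating = anode_potential_v < 0.0
    aggravated = plating and (current_a > high_current_a or temperature_c < low_temp_c)
    return plating, aggravated

# A cell fast-charged at 0 degrees C with the anode pushed below 0 V:
risk, severe = plating_risk(-0.02, 60.0, 0.0)
```

A physics-based model like GT-AutoLion resolves the anode potential internally, which is what makes this kind of check possible in simulation.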
Figure 1, taken from a paper using GT-AutoLion, shows how GT-AutoLion can be used to match experimental data of capacity fade with its built-in model for lithium plating. With this model, GT-AutoLion allows engineers to virtually test various charging strategies and their effect on both charging time and cell degradation.
Figure 1. Overview of lithium plating in a cell
Example Charging Protocols
A charging protocol is an algorithm which defines the charging methodology of a cell. Each charging protocol has different implementation costs and unique implications on charging time and cell degradation. Figure 2 summarizes three of the most common charging protocols.
The most common charging protocol is a constant-current-constant-voltage (CCCV) charge. During a CCCV charge, the cell is charged with constant current until a specified maximum voltage is reached. The cell then continues charging at that constant voltage while the current gradually tapers off, as shown in Figure 2 (left). The CCCV protocol is considered the simplest, safest, and most widely used protocol to implement.
In boost charging (BC), the cell is first charged with a constant boost current that is significantly higher than the subsequent constant-current charge. The charge then finishes at constant voltage. The BC protocol is shown in Figure 2 (middle). Implementing a BC protocol can decrease charging time without necessarily sacrificing cycle life.
Pulse charging (PC) is another charging protocol that can also be used. During PC, the current alternates between a high current and a low current and the voltage increases until an upper cutoff voltage is reached, as shown in Figure 2 (right). Pulse charging can reduce resistance due to diffusion, which reduces charging time and aging and improves the cycle life of a cell.
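To make the CCCV timing concrete, here is a minimal sketch estimating total charge time with a toy cell model; the state of charge at which the voltage limit is hit and the CV taper constants are illustrative assumptions, not measured cell data:

```python
import math

def cccv_charge_time(cc_rate, soc_at_v_max=0.8, cv_tau_h=0.5, cv_cutoff_frac=0.05):
    """Estimate total CCCV charge time in hours (toy model).

    Assumptions (illustrative only): the max-voltage limit is reached at
    soc_at_v_max state of charge; during the CV phase the current decays
    exponentially with time constant cv_tau_h until it falls to
    cv_cutoff_frac of the CC current.
    """
    # Constant-current phase: SOC rises linearly at cc_rate (1/h).
    t_cc = soc_at_v_max / cc_rate
    # Constant-voltage phase: I(t) = I_cc * exp(-t / cv_tau_h).
    t_cv = cv_tau_h * math.log(1.0 / cv_cutoff_frac)
    return t_cc + t_cv

t_1c = cccv_charge_time(cc_rate=1.0)   # ~2.3 h with these toy constants
t_2c = cccv_charge_time(cc_rate=2.0)   # faster CC phase, same CV taper
```

Note how raising the CC rate only shortens the constant-current phase; the CV taper is unchanged, which is one reason boost and pulse protocols are explored.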
Fast Charge Strategy Development in Real-World Aging Simulation
While various charging patterns can be studied experimentally, these experimental tests often are not reflective of the real use case a battery may see in a vehicle, aircraft, power tool, or consumer electronic. As presented in a previous blog, GT-AutoLion and GT-SUITE can be used together to predict how a Li-ion battery will degrade over time while considering any use case such as various load profiles, drive cycles, and weather conditions. These analyses can also be upgraded to test the effect of the charging protocol on real-world charging time and battery degradation.
Conclusion
With GT-AutoLion and GT-SUITE, system manufacturers can better understand the tradeoff between reducing the time required to charge a battery and maximizing the life of the battery. This tradeoff is imperative to understand because it has a profound effect on customer satisfaction and brand perception.
Written By: Vivek Pisharodi
Predicting System Performance with Aged Li-ion Batteries Using GT-AutoLion and GT-SUITE
As demonstrated in a previous blog, GT-AutoLion and GT-SUITE can be used together to predict how a Li-ion battery will degrade over time while considering any use case, weather condition, and even charging patterns.
Up to this point, battery degradation has primarily been presented as the change in the capacity of a battery over time. This tells only a small portion of the full story, for two reasons. First, batteries also see an increase in resistance over time (which GT-AutoLion is able to capture). Second, consumers are not interested in how the capacity and resistance of their battery change over time; they are interested in how fast their vehicle can accelerate to 60 mph, how many logs their electric chain saw can cut between charges, how many pictures can be taken and posted between charges on their cell phone, etc.
Luckily, with GT-AutoLion and GT-SUITE, understanding not only how Li-ion batteries degrade over time, but how that affects system-level performance is very straightforward.
Inserting an Aged Battery into A System-Level Model
When running an aging simulation of any kind, GT-AutoLion stores an external file (.state) that details the state of the Li-ion cell at every cycle of the aging scenario (here a “cycle” can be a charge-discharge cycle, a vehicle’s drive cycle, an aircraft’s flight cycle, or a metric of time like days or weeks). This external file generated by the AutoLion aging model can then be inserted into a system-level model to predict how the aged battery will influence product performance. With this, system-level models have a straightforward workflow to predict system performance after 100, 200, or 300 cycles; 1, 2, or 3 years of realistic operation; or even, in the case of automotive applications, after 12,000, 24,000, or 36,000 miles. That is, engineers can utilize physics-based models to gain insight into the performance of their products once the battery is aged.
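As an illustrative stand-in for reading such a per-cycle state history (the dict below mimics, but is not, the .state file format), selecting the stored cell state nearest a requested cycle count might look like:

```python
def aged_cell_state(state_history, target_cycle):
    """Pick the stored cell state closest to a requested cycle count.

    state_history stands in for the per-cycle records a .state file
    carries; this dict layout is an assumption for illustration only.
    """
    cycle = min(state_history, key=lambda c: abs(c - target_cycle))
    return state_history[cycle]

history = {
    0:   {"capacity_ah": 50.0, "resistance_mohm": 1.50},
    100: {"capacity_ah": 47.5, "resistance_mohm": 1.65},
    200: {"capacity_ah": 45.8, "resistance_mohm": 1.80},
    300: {"capacity_ah": 44.4, "resistance_mohm": 1.97},
}
state = aged_cell_state(history, 190)   # nearest stored cycle is 200
```

The system-level model then runs with the selected capacity and resistance, which is the essence of the "insert an aged battery" workflow.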
To demonstrate this workflow, we built a model of a battery electric vehicle (BEV) in GT-SUITE. This BEV model contains an accurate model of the powertrain, the heating, ventilation, and air conditioning (HVAC) system, and the cabin, allowing tradeoffs between vehicle range and passenger comfort to be understood. The model can accurately capture the power draw from the battery and the temperature of the battery during any load condition (for example, a drive cycle or commute pattern from GT-RealDrive).
Summary of process presented for a Battery Electric Vehicle application
With this workflow, automotive OEMs have the ability to take standard cell-level laboratory tests, such as calendar and cycle aging tests, and predict more meaningful system-level performance metrics over the lifetime of a battery electric vehicle, such as the BEV’s range and acceleration performance (here measured in 0 to 60 mph time).
With the seamless workflows available between GT-SUITE’s advanced system-level modeling and GT-AutoLion’s accurate Li-ion battery modeling, engineers understand how system-level decisions (such as vehicle topology decisions, thermal topology decisions, and controls decisions) can affect the long-term degradation of not only the battery but the performance of the entire system.
Written By: Joe Wimmer
Using Simulation to Reduce Battery Testing Time and Cost
To predict the lifetime of a battery-powered product, engineers must understand how a battery will degrade over time. Popular methods of understanding battery aging usually rely heavily on physical testing, which is expensive or prohibitively time-consuming.
Simulation tools that use a physics-based approach to modeling lithium-ion cells, such as GT-AutoLion, enable engineers to decrease the amount of physical testing required to fully understand how Li-ion cells degrade over time.
Calendar and Cycle Aging
Standard experimental testing procedures for quantifying battery degradation include calendar aging and cycle aging formats. Calendar aging experiments store a Li-ion cell at various temperatures and states of charge for extended periods of time while the capacity of the cell is periodically checked. This data is generally visualized by showing the capacity retention (as a percentage of the beginning of life capacity) vs. the amount of days in storage. Cycle aging experiments cycle the Li-ion cell between 100% and 0% states of charge (or other SOC windows) at various temperatures and currents repeatedly. This data is generally visualized by showing the capacity retention vs. number of cycles the cell has run.
Calendar and Cycle aging data of Li-ion cell
This data often comes from cell manufacturers, but sometimes cell buyers execute this testing themselves.
Depending on the number of cycles and the current, cycle aging tests can take weeks or months to complete. For example, if a cell is charged and discharged at a C-rate of 1C, 500 cycles are completed in about 6 weeks.
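The 6-week figure follows directly from the cycle arithmetic:

```python
# Back-of-envelope duration of a cycle-aging test: at 1C, a full charge
# (or discharge) takes 1 hour, so one complete cycle takes ~2 hours.
c_rate = 1.0                          # 1C
hours_per_cycle = 2.0 / c_rate        # one charge + one discharge
total_hours = 500 * hours_per_cycle   # 500 cycles
weeks = total_hours / (24 * 7)        # just under 6 weeks
```

This ignores rest periods and periodic capacity checks, so real test programs run even longer.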
Calendar aging, however, can take quite a bit longer. Depending on the projected life cycle of the product, different amounts of calendar aging may be required to properly test the degradation during the product’s full life cycle. For instance, cell phones may only be designed to last 2 years, whereas a battery electric vehicle may be designed to last 15 years. Unfortunately for the automotive industry, testing a Li-ion cell for 15 years is infeasible because the standard development cycle is roughly 2-3 years. Additionally, battery technology changes very quickly: if 15 years of testing were done before the next-generation BEV came out, the battery technology would be outdated by the time it was released.
Because of the great disparity between projected product lifetime and the product development cycle time, it’s not always feasible to rely solely on calendar aging or cycle aging data.
To help address this issue, physics-based aging models can be calibrated to available data and then used to project, or extrapolate, the degradation of cells beyond the available data.
Using GT-AutoLion to Predict Aging Beyond Measured Data
The calendar aging data presented earlier can be used to calibrate physics-based aging models in GT-AutoLion consisting of 4 parameters. In the image below, the 4 parameters were calibrated using the Integrated Design Optimizer that comes with GT-AutoLion and GT-SUITE, which automatically varied the parameters to minimize the error between simulation and experimental data. The results of such a calibration show good correlation between simulation and experimental data.
GT-AutoLion aging models Calibrated to experimental data
However, as you can see, this set of experimental calendar aging data was collected over an 870 day period, which is nearly 2 ½ years. What if you don’t have 2 ½ years to test the degradation of a Li-ion cell? The following images and discussion try to answer that question.
The images below show the power of physics-based aging models in GT-AutoLion by demonstrating how well they extrapolate after being calibrated to experimental data. Each plot has a portion with a white background, which is the data used to calibrate the model and a portion with a grey background, which is testing how well the calibrated model extrapolates into the future. For example, the image below assumes that only 450 days were available for testing the calendar degradation. Only the data in the white section (before the 450-day cutoff) was used to calibrate the model using GT’s Integrated Design Optimizer. The grey section illustrates how well the model extrapolates into the future for the other 420 days of available data.
Description of White and Grey Backgrounds in Model Extrapolation Plots
This process was done for 750, 600, 450, 300, 100, and 50 days of data, and the results are shown in the image below.
As expected, the more data that is available for model calibration, the better the results will be. However, the results also show that even with a significant reduction in testing time, reliable physics-based models can be calibrated using GT-AutoLion. These calibrated aging models can then be used to predict how Li-ion cells will degrade in any system, which provides insight into how long a product will last and how well it will perform once it is aged. In my next blog, I’ll share more information on how physics-based simulations help engineers predict how batteries will degrade in real-world situations.
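The calibrate-then-extrapolate approach can be sketched with a deliberately simplified one-parameter fade model (GT-AutoLion's actual aging model has 4 parameters and is fit by the Integrated Design Optimizer); the square-root-of-time form and all numbers below are illustrative:

```python
import math

def fit_sqrt_fade(days, fade_pct):
    """Least-squares fit of fade(t) = k * sqrt(t), a common calendar-aging
    functional form. Closed form: k = sum(y * sqrt(t)) / sum(t).
    """
    num = sum(y * math.sqrt(t) for t, y in zip(days, fade_pct))
    den = sum(days)
    return num / den

# Synthetic "measured" data generated with k = 0.3 %/sqrt(day),
# truncated at a 450-day calibration cutoff:
days = [50, 100, 200, 300, 450]
fade = [0.3 * math.sqrt(t) for t in days]

k = fit_sqrt_fade(days, fade)       # recovers the generating k
fade_870 = k * math.sqrt(870)       # extrapolate to the full 870 days
```

With noisy real data the recovered parameter (and hence the extrapolation) degrades as the calibration window shrinks, which is exactly the trend the grey-background plots show.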
Written By: Joe Wimmer
Virtual Calibration Case Study – OBD Calibration (Diagnostic)
In the previous blog, I discussed closed-loop virtual calibration that incorporates controls into the model. I also discussed the various types of work and analysis that are performed with closed-loop virtual calibration. In this blog I will go deeper into a case study of how virtual calibration is used to save money on OBD calibration.
OBD is a diagnostic system that is used to protect the powertrain or maintain compliance of various emissions systems. Regulatory agencies have direct visibility of a company’s diagnostic systems, so compliance is crucial. The testing required to develop and calibrate the diagnostics can cost millions of dollars per engine program, and there are very strict deadlines in place for these tasks to be completed.
The consequence of getting the diagnostic calibration wrong is severe and impacts reliability, warranty, and product perception; all very critical areas downstream of powertrain development. It can even delay production which is an expensive outcome for any engineering organization.
Overboost Diagnostic
An example of a high-risk diagnostic is the overboost diagnostic, which is high-risk because of the chance of encountering mechanical limits (cylinder, turbo, etc.) that cause prototype damage or unintended performance degradation during calibration. Performing the test itself is not difficult; the challenge lies in the risk involved and in ensuring the diagnostic is robust.
This makes the overboost diagnostic a great example to demonstrate the power of virtual calibration.
There is a significant cost associated with attempting to re-create the failed condition, and there are a few questions that incur a large cost to answer:
How much should the component be failed? – Repeated machining then testing
Is the diagnostic robust (to ambient/component variability)? – Iterative testing
How will the rest of the control system react? – Iterative testing
Using Simulation for Overboost Diagnostics
Simulation helps answer the questions above and allows engineers to front-load overboost diagnostics to decrease the likelihood of repeated prototype damage.
With simulation, the powertrain model can be modified to represent the overboost condition. The variable geometry rack position signal sent to the turbocharger can have a reduced upper limit threshold or the signal can have a reduced gain/offset applied to it. It is up to the model developer to determine which strategy adequately represents their failure mode in conjunction with their OBD demo agreement/requirements. Below are pictures showing how easy it is to implement this in the GT-SUITE model map.
With open-loop virtual calibration, the calibrator runs the model over a nominal (non-failed) powertrain cycle and varies the offset/gain/limiter to influence the boost pressure. This lets them determine how close to the mechanical limit the physical powertrain can safely be taken. They can also feed the powertrain output signals into the diagnostic controls to see the impact (though this does not account for controller interaction). This provides great cost and time savings.
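As a hedged sketch of the signal modification described above (the real change is made on the GT-SUITE model map, not in code like this), the fault injection amounts to a gain/offset applied to the rack-position command plus a clamped upper limit:

```python
def inject_overboost_fault(rack_cmd, gain=1.0, offset=0.0, upper_limit=1.0):
    """Emulate an overboost failure mode on a normalized VGT rack-position
    command (0..1) by applying a gain/offset and a reduced upper limit.
    Illustrative stand-in only; parameter names are hypothetical.
    """
    faulted = rack_cmd * gain + offset
    return min(max(faulted, 0.0), upper_limit)

# Nominal command of 0.7 with a +0.2 offset, clipped at a reduced limit:
cmd = inject_overboost_fault(0.7, offset=0.2, upper_limit=0.85)
```

Sweeping the offset/gain in the model is what lets the calibrator find how far the failure can be pushed before a mechanical limit is reached, without risking hardware.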
The diagnostic strategy and threshold are tuned in this virtual environment and refined/validated in a physical test. In a closed-loop simulation the calibrator will be able to treat the powertrain model as a close representation of a physical powertrain. Other portions of the powertrain controls (de-rate, exhaust gas recirculation, etc.) can be observed to see if there is interaction (positive and negative).
The benefit in using virtual calibration in a diagnostic development program is strong. The existing process is quite iterative and risky, and performing most of this work in a virtual environment decreases the time and risk associated with diagnostic development. A few of the benefits of incorporating virtual calibration into the existing process are:
Parallelization of the tuning in virtual which allows engineers to receive quick results
Flexibility of powertrain model which allows the incorporation of failure modes at will and allows operations at “strange” conditions
Zero risk of damage or degradation
Conclusion
Diagnostic development is a very challenging and critical area of powertrain development. The process is made more efficient by incorporating virtual calibration into the current process and by finalizing/refining the work with testing.
This finishes up the series of blogs on virtual calibration for powertrain development for now.
If you are interested in discussing virtual calibration or would like more details please reach out and contact us.
Written By: GT Staff
Closed-Loop Virtual Calibration – Controls and Diagnostic Development
In a prior blog, I discussed a few methods and benefits of open-loop virtual calibration. Now I will explain what closed-loop virtual calibration is and the types of analysis that can be performed.
Closed-loop virtual calibration is something that almost all powertrain manufacturers can perform by having 3 traditionally separate groups (controls, simulation/CAE, and calibration) work together. Every OEM will have the resources they need to complete this type of analysis: a simulation model of their powertrain (simulation group), engineers who develop calibration (calibration group), and a model of the controls on the ECM or physical ECM (controls group).
What is Closed-Loop Virtual Calibration?
Closed-loop virtual calibration is when the simulation model is used as a plant model for the controls system. This means having a simulation model of the powertrain with supplemental controls, in software or hardware, that control the simulated powertrain. The end goal is an entire virtual system that behaves the same as a physical powertrain controlled by a computer. This enables powertrain calibration to be performed upfront and provides a system-level understanding.
XiL
There are two common ways that the industry performs closed-loop virtual calibration. Those two methods are HiL (Hardware in the loop) and MiL (model in the loop).
HiL – Hardware-in-the-loop connects the virtual powertrain to a physical control unit and other physical actuators/sensors (Hardware).
MiL – Model-in-the-loop connects the virtual powertrain to a model of the controls and actuators/sensors (Model).
Regardless of what type of closed-loop virtual calibration is performed, each will improve the existing iterative process of physical powertrain calibration when used upfront.
Example of HiL with powertrain control module (Credit – Unique)
Basic example of MiL with Fueling PI Controller
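A minimal model-in-the-loop sketch in the spirit of the fueling PI controller example above: a PI controller closed around a first-order plant standing in for the engine's fueling response. The gains, time constant, and step counts are illustrative assumptions, not values from any real calibration:

```python
def simulate_mil_loop(setpoint, steps=200, dt=0.01, kp=2.0, ki=8.0):
    """Toy closed loop: PI fueling controller around a first-order plant.

    The 'plant' dy/dt = (u - y)/tau is a crude stand-in for the simulated
    powertrain; in a real MiL setup it would be the full GT-SUITE model.
    """
    tau = 0.05            # plant time constant, seconds (assumed)
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (u - y) / tau          # forward-Euler plant update
    return y

# The closed loop settles near the commanded fueling setpoint:
final = simulate_mil_loop(setpoint=1.0)
```

The point of the sketch is structural: the controller only ever sees signals from the plant model, so swapping the toy plant for a detailed powertrain model changes nothing about the controls side.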
Why Closed-Loop Virtual Calibration Is So Powerful
One of the most important benefits of closed-loop virtual calibration is that there is zero risk of damage or unintentional aging to any expensive hardware. It allows existing calibration tasks to be performed more efficiently upfront by a virtual model with controls.
Closed-loop virtual calibration also enables new insights and tasks to be performed due to the virtual nature of the powertrain. For example, the model can have failure modes induced that would be difficult or risky to recreate as desired on a physical powertrain. That means engineers are able to detect and respond to key failures that would otherwise be challenging to catch.
With enough parallelization, robustness studies are performed to understand powertrain performance in various scenarios. This is critical in understanding product performance with respect to defining warranty and determining regulatory compliance as well as customer satisfaction.
Conclusion
The value proposition to pursue a virtual calibration workflow is significant. Some tasks that can be performed upfront are:
Test Trips/Cycles
Diagnostic Tuning
Off-Nominal Calibration
Robust Controls Development (to predict aging, component quality, etc.)
Software Verification
Any single one of these tasks incurs significant costs in the tens of thousands to millions of dollars. Performing work upfront with a model that has controls included will increase efficiency of these tasks, reducing time and saving money while also supporting physical tests.
In the next blog I will go in depth on one of the powertrain development tasks mentioned (diagnostic tuning) and how that process is improved with virtual calibration.
If you are interested in discussing virtual calibration or would like more details please reach out and contact us.
Open-Loop Virtual Calibration: Methods and Benefits
In the previous virtual calibration blog, it was established that there are many benefits to incorporating virtual calibration in the development cycle. In this blog, I’ll go into detail about open-loop virtual calibration and explain how it is easily incorporated into existing development processes.
Open-loop virtual calibration is a method of using virtual models to populate calibration tables/maps, serving as a baseline for calibration development. In this method, the virtual model has no external controls. It also serves as a sensitivity analysis for the inputs that are being varied.
The unique benefits of this are:
Calibration with the lowest barrier to entry
Enables simulation assisted testing
Provides insight into operating areas where controls won’t venture
Understand the sensitivity of output variables to input variables
Comparing Old and New Processes
Because open-loop calibration does not include external controls, it requires the user to vary many input variables in order to determine the output variables of interest from the model.
Imagine that a company develops a new diesel engine that will operate at 10,000 ft above sea level in freezing conditions. A calibration must also be developed to ensure that this engine will operate efficiently in the extreme environment.
The current method to develop this off-nominal calibration takes a significant amount of time in a very expensive test cell because of the extreme altitude and cold. This calibration also has to be developed sequentially at each operating point with risk of damage growing as the points approach full load.
Open-loop virtual calibration addresses these pain points and cuts down the time required for the performance team to develop a base calibration.Through parallelization of the virtual engine and aftertreatment model, multiple operating points are run at the same time. The results of the simulation are then used to develop a base calibration and to understand the limits of the powertrain.
The image below explains this alternative workflow, where an operating space is defined through an initial (or final) DOE space for the input variables. The engine and aftertreatment model is run locally on 8 or 16 cores at once (most standard desktop computers) or sent to a computer cluster for massively parallel jobs. The outputs of the simulation are then used to create a response surface model (meta-model), which in turn is used to better understand engine and aftertreatment behavior. Calibration teams are thus able to develop a base calibration using the model results and reduce time in the test cell.
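The DOE-to-meta-model step can be illustrated with a toy one-input response surface; a real study would least-squares fit many parallel simulation results over several inputs, and the operating points and BSFC values below are made up for illustration:

```python
def quadratic_metamodel(x, y):
    """Fit an exact quadratic through three DOE points as a stand-in for
    the response-surface (meta-model) step. Uses Newton divided differences
    expanded to y = a*x^2 + b*x + c.
    """
    (x0, x1, x2), (y0, y1, y2) = x, y
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    a = d2
    b = d1 - d2 * (x0 + x1)
    c = y0 - d1 * x0 + d2 * x0 * x1
    return lambda xq: a * xq * xq + b * xq + c

# Three simulated operating points: injection timing (deg) vs. BSFC (g/kWh),
# values invented for the sketch:
bsfc = quadratic_metamodel([-10.0, -5.0, 0.0], [215.0, 205.0, 210.0])
# Interrogate the meta-model between the simulated points:
est = bsfc(-6.0)
```

Once fitted, the meta-model is cheap to evaluate everywhere in the DOE space, which is what lets the calibration team populate base tables without returning to the test cell.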
Methods of Open-Loop Calibration:
There are many types of analysis performed in open-loop. A common workflow is to:
Create a coarse input variable space algorithmically
Start simulation in parallel (distributed)
Set objective of interest
At the end of these analyses the calibration engineer has a stronger understanding of the engine behavior and uses the results from the simulation as a starting point to assist in developing the calibration. This provides a much better starting point than going through the current sequential, iterative process in the expensive test cell.
Conclusion
All powertrain companies have these models in simulation groups. They also have teams of dedicated calibration engineers. By having calibration engineers use these models to simulate the test space up front, powertrain companies are able to save tremendous time and money.
In the next blog I will talk about closed-loop virtual calibration and some of its benefits and methods. If you are interested in discussing virtual calibration or would like more details please contact us.
Virtual Calibration: What It Is and How It Helps Your Development Process
Common Calibration Challenges
Powertrain testing and calibration is an essential part of the product development process, but it often takes years and costs millions of dollars to complete. The list of internal and external requirements for these powertrains has grown significantly over time due to regulatory (OBD/emissions), customer (efficiency), and manufacturer (reliability) demands. To meet these requirements, testing and calibration is performed via in-house tests, on-vehicle tests, trips to locations with extreme weather conditions, or specialized test benches.
Many unanticipated events happen during these tests, such as destroyed prototype parts and delayed test trips that cause teams to miss the target weather conditions when a winter trip turns into a spring trip. It is very clear that doing 100% of powertrain testing physically is inefficient and comes with risks.
There are many iterative engineering tasks (calibration/controls development) that are well-suited to a virtual, or simulation, environment. Performing them there greatly reduces the costs and time associated with traditional physical tests, while still incorporating those physical tests into the process.
Virtual Calibration
Virtual calibration is defined as the act of performing traditional calibration/controls development tasks upfront in a simulation environment. It offers a solution to perform the same tasks quickly and at a lower cost.
Virtual powertrain calibration is a method that is actively being performed at many automotive and commercial vehicle (on/off-highway) companies. The value in virtual calibration is to let simulation perform most of the upfront work (baseline calibration, controls development, etc.) and let expensive and time-consuming testing perform the validation and refinement. Basically, sparingly use the expensive resource (testing) while taking care of the brunt of the work with the low-cost resource (simulation).
There are several benefits that come immediately to mind in Virtual Calibration (I am sure you are already thinking of some):
Simulation has a lower cost to enter than test equipment.
When the virtual calibration is performed it can be automated and does not require as much supervision.
Enables full design space exploration at low cost/time.
Virtual environment allows for any ambient boundary conditions to be created at minimal extra cost.
Avoid expensive damage to prototype hardware during calibration.
Many OEMs/Suppliers already have the framework in place to perform virtual calibration.
In my experience all OEMs and Manufacturers perform some form of powertrain simulation and could easily implement a virtual calibration workflow. This offers a great benefit to the engineering process and enables more efficient use of engineering time.
Virtual calibration is a way for the industry to respond to quicker development cycles with tighter regulatory requirements and increased customer expectations. As I mentioned earlier, virtual calibration is not meant to make testing obsolete, but to supplement testing and create a more efficient engineering process.
This blog is just the first in a series of blogs on virtual calibration that will be published in the coming weeks. Check back for future blogs that explain the different types of virtual calibration (open-loop and closed-loop). If you are interested in discussing virtual calibration or would like more details please reach out and contact us.
Studying Vehicle Behavior in Real Driving Scenarios with GT-RealDrive
The continual tightening of vehicle emission standards has led to decreases in pollution associated with vehicles but not to the levels that the standards target. Research into why has highlighted a disconnect between the emissions emitted by vehicles during real-world driving versus under laboratory conditions. This disconnect is mostly caused by the fact that laboratory drive cycles don’t properly represent the range and variety of conditions that occur in real driving scenarios (ex. cold ambient temperatures, driver behavior, road grade).
As a result, many regulatory bodies around the world have started to adopt regulations that attempt to establish vehicle emission standards that apply to real-world driving. Such standards, commonly referred to as Real Driving Emissions (RDE), usually mean that the vehicle is driven on public roads while recording emissions rather than doing so in a controlled laboratory setting. This is also occasionally called off-cycle driving, as the vehicles are not being driven using standard regulatory cycles such as the WLTC, FTP-75 or JC08.
This presents a large challenge to the automotive industry which needs to ensure that vehicles are compliant with emission regulations under real driving conditions that vary greatly and are only defined as a set of boundary conditions. Testing can be conducted to try and ensure compliance, but it is both costly and can only be carried out once a prototype vehicle is built. Relying on testing alone is a huge risk. If a vehicle is discovered to be non-compliant late in the development process, it will likely result in delays and it will be extremely costly to re-design the powertrain and aftertreatment system.
To help engineers mitigate the risk that a vehicle won’t meet the applicable real driving emission standards, we developed simulation solutions such as vRDE. The latest development is GT-RealDrive, a route generation tool that helps engineers study how vehicle and system models perform under real-driving scenarios.
GT-RealDrive models real-world driving by creating off-cycle routes based on user-defined start and end addresses. The cycles take into account traffic conditions, elevation, and traffic signals, effectively generating a cycle representative of real-world driving. Since GT-RealDrive is self-contained within GT-SUITE, the drive cycles created can be directly used in any GT-SUITE vehicle model to simulate off-cycle driving in the same manner as the standard regulatory cycles.
Route Creation
Let us now explore an example of how easily engineers can use GT-RealDrive to generate real-world routes to aid in their vehicle development process.
In this example, a user is tasked with comparing the estimated real-world and on-cycle fuel economy of a new passenger car under development by their company. Since this is relatively early in the development stage of the car, no hardware is available yet. However, the current design is known and was previously modeled in GT-SUITE. Thus, the user has a full vehicle model to generate an estimated on-cycle fuel economy number.
To accomplish this task, the user decides to generate numerous off-cycle routes, simulate the vehicle driving the routes, record the resulting fuel economy, and average the results.
The user starts by generating a route for the daily commute of someone living in Downtown Chicago who works in the suburbs. This route is created by accessing the GT-RealDrive tool directly within GT-SUITE and inputting the start and end locations. These locations can be entered as an address or as the name of a place (for example, The United Center). The tool includes suggestions/autocomplete as one typically experiences when using an online map. Several route options can then be specified (ex. Avoid Toll Roads) in addition to adding a Via-Point if desired.
For our example, this fictional commute is from the Gamma office in Westmont, IL to the Aqua Condo Building in Chicago, IL. The image below illustrates both the autocomplete and name to address conversion feature included in GT-RealDrive.
Once the data is input and the desired options are selected, the “Calculate Route” button is pushed. This is when the behind-the-scenes magic occurs; the route is created based on the directions between the start and end locations and current traffic conditions, which are supplied by a service called Mapbox. The route is plotted on the map and the resulting data is populated in the table.
This data represents the route and includes a list of coordinates, target vehicle speeds, traffic congestion and speed limits. Users can also choose to include other important factors such as elevation data and traffic signals in this output data. By outputting the data into this editable table, users are able to modify the route to their liking and make minor changes such as adding or removing Traffic Signals, setting unique Stop Duration times at specific intersections or modifying the Vehicle Target Speed.
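As a rough illustration of what this kind of per-point route data might look like (this is a sketch, not GT-SUITE's actual data format; all field names below are hypothetical):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RoutePoint:
    """One row of an editable route table (hypothetical field names)."""
    lat: float
    lon: float
    target_speed_kph: float
    speed_limit_kph: float
    is_traffic_signal: bool = False
    stop_duration_s: float = 0.0

def add_traffic_signal(route: List[RoutePoint], index: int,
                       stop_duration_s: float) -> None:
    """Mark a point as a signalized intersection with a unique stop duration,
    mimicking the kind of manual edit described above."""
    route[index].is_traffic_signal = True
    route[index].stop_duration_s = stop_duration_s

# A two-point toy route; coordinates are arbitrary Chicago-area values.
route = [
    RoutePoint(41.885, -87.620, target_speed_kph=45.0, speed_limit_kph=50.0),
    RoutePoint(41.752, -87.977, target_speed_kph=60.0, speed_limit_kph=60.0),
]
add_traffic_signal(route, 0, stop_duration_s=20.0)
```

The point is only that the route reduces to a plain table of records, which is what makes it easy to edit before handing it to the vehicle model.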
This route is now available to be used in GT-SUITE to test how the vehicle model will perform on the route. All that the user has to do is open the full vehicle model and swap out the regulatory cycle with the off-cycle route.
In this scenario, the engineer would continue to generate a wide variety of additional routes to simulate in the vehicle model, then record the fuel economy and compile the results. Once a sufficient number of routes have been tested, the user has the data needed for an accurate comparison of the real-world and on-cycle fuel economy of the vehicle in development.
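One subtlety when compiling the fuel economy results: averaging per-route MPG figures directly over-weights short, efficient routes. A small Python sketch of the distance-weighted aggregate:

```python
def fleet_average_mpg(results):
    """Distance-weighted average fuel economy across simulated routes.
    results: list of (distance_miles, fuel_gallons) tuples, one per route.
    Total distance over total fuel gives the true aggregate figure,
    unlike a simple mean of the per-route MPG values."""
    total_miles = sum(d for d, _ in results)
    total_gallons = sum(g for _, g in results)
    return total_miles / total_gallons

# Two routes: 30 mi on 1 gal (30 mpg) and 10 mi on 0.5 gal (20 mpg).
# The simple mean of 30 and 20 is 25 mpg, but the aggregate is
# 40 mi / 1.5 gal ~ 26.7 mpg.
avg = fleet_average_mpg([(30.0, 1.0), (10.0, 0.5)])
```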
Beyond RDE
The application of such real-world driving cycles extends far beyond just simulating light and heavy-duty vehicle emissions for regulatory compliance purposes (i.e. RDE, In-Service Conformity, and Heavy-Duty In-Use Testing). It can also be used to investigate topics such as real-world EV range & energy management, effects of driver behavior, thermal management, component mechanical durability loading conditions, and control strategies/algorithms; the only limit is your imagination.
One key use is to study the ageing of batteries in EVs when they are driven with real-world drive cycles. A DOE could be created to investigate the effect of driver behavior, ambient conditions and traffic to better understand the variation that these factors have on EV range. This could then be taken a step further to investigate range prediction given some, or even all, of the above factors.
Summary
GT-RealDrive presents an integrated solution to discover how vehicle models will perform on real-world routes. There is no need to collect and process expensive GPS driving data or to format data for importing into GT-SUITE. With GT-RealDrive and an internet connection, users generate real-world routes to aid in their vehicle development from RDE compliance tests to EV range prediction and beyond.
If you are interested in trying out GT-RealDrive or would like more details please reach out and contact us.
Written By: Phil Mireault
eVTOLs, an Exciting Frontier in Aviation and System Simulation
Oshkosh, Wisconsin Control Tower (Credit – http://www.boldmethod.com)
Some of my colleagues and I recently went to Oshkosh, Wisconsin for the exciting EAA AirVenture show, which boasted the “World’s Busiest Control Tower” for the week, as adventurers and aviators alike came from all over the world to congregate around the exciting spectacle of aviation. With big air stunts from some of the most powerful military jets to acrobatic maneuvers of biplanes of yore, this show offered excitement from all types of aircraft.
One aircraft that was conspicuously sitting at the exhibition was the Vahana vehicle demonstrator from A3 by Airbus. This demonstrator is one of many electric vertical take-off and landing (eVTOL) craft being designed by aircraft companies from startups to large airframers in an attempt to make urban transport more efficient – by air.
As the name suggests, eVTOLs are electric aircraft that take off and land by moving vertically. Traditional VTOLs, such as helicopters, are widely used, but more and more companies are researching eVTOLs as an alternative.
There are many potential advantages with eVTOL aircraft, including the potential to bring the cost down over traditional helicopters, reducing maintenance costs, reducing noise, and having no exhaust pollution. Another interesting motivation for eVTOL research is the desire to offer cheaper air mobility in urban environments.
Despite the positive attributes of an eVTOL, the design of these aircraft is no easy feat. Battery pack sizing and safety play a major role in the overall weight of the craft, and engineers must first optimize these considerations before the use of eVTOLs becomes widespread.
To assist in these studies, GT-SUITE offers a solution to study many aspects of eVTOL design from the overall vehicle system to battery pack electrical and thermal performance over flight missions. Engineers can even study down to the battery cell level and use GT’s electrochemistry modeling capabilities to predict cycle life, which is crucial in understanding the economic feasibility of eVTOLs. Since safety is paramount for aircraft, thermal runaway of battery cells and packs can also be predicted using GT-SUITE.
At the upcoming AIAA/IEEE Electric Aircraft Technologies Symposium, in conjunction with the AIAA Propulsion and Energy conference in August, GT-SUITE will be presenting a paper in collaboration with A3 by Airbus on “Using Multi-physics System Simulation to Predict Battery Pack Thermal Performance and Risk of Thermal Runaway During eVTOL Aircraft Operations.” This paper shows a real-life case study of the ability of system simulation to predict cell, pack, and vehicle performance over a wide range of modeling scenarios.
GT-SUITE Model Schematics of Vahana Vehicle a) Vehicle Level, b) Pack Level, c) Cell Level
Conclusion
As we stood in front of the Vahana demonstrator at Oshkosh, we were amazed by the technological breakthroughs that have enabled electric flight, and are pleased to be a part of this exciting new frontier in aviation and system simulation.
If you’d like to learn more about how GT-SUITE is used to solve eVTOL simulation challenges, contact us on our website!
Written By: Jon Harrison
Fuel Cell System Modeling: Key Considerations and GT-SUITE Solutions
Over twenty years ago, I heard a joke regarding fuel cells that still makes me chuckle: the fuel cell is a technology of the future – and it always will be. I think I laughed at the time because there were so many obstacles that appeared insurmountable that it seemed like fuel cells would never become a mainstream power source. I think I am amused now because the person who wrote the joke did not foresee that the hard work and development that would go into fuel cells over the coming years would eventually make the joke invalid. Four major automakers (Toyota, Honda, Hyundai, and Daimler) have introduced fuel cell powered vehicles in the past few years, and all signs point to more carmakers doing the same.
The fuel cell appears to be on track to becoming a mainstream power generation device for automotive applications. It is an attractive option because it has a relatively high efficiency and because the overall emissions from the source to the wheel can be zero when fueled by hydrogen created from a renewable resource such as wind or solar power.
Foreseeing that this power source has the potential to become a significant option for the automotive industry, Gamma Technologies (GT) added the ability to model a fuel cell in GT-SUITE in 2004, and we’ve continued developing our fuel cell offering in light of the industry’s recent increase in usage.
Still, the challenges of fuel cell modeling are new to many engineers, so I want to explain some of the considerations taken in fuel cell design and demonstrate how GT-SUITE is the ideal tool for addressing these challenges.
Fuel and Air Delivery System Modeling
When simulating fuel cells, one challenge is modeling the gas and delivery systems to select the optimal size for compressors, heat exchangers, and evaporators while minimizing pressure losses and keeping the gases entering the stack at the right temperature (around 100 degrees Celsius) and humidity (near 100%).
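To get a feel for why humidifying the stack inlet near 100 degrees Celsius is demanding, consider the saturation pressure of water. A quick sketch using the Antoine equation (the constants below are the commonly tabulated ones for roughly 1-100 degC, with pressure in mmHg; this is an illustrative calculation, not part of GT-SUITE):

```python
import math

def water_psat_mmHg(t_celsius: float) -> float:
    """Saturation vapor pressure of water via the Antoine equation.
    Constants valid for roughly 1-100 degC; result in mmHg."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10.0 ** (A - B / (C + t_celsius))

def vapor_partial_pressure(t_celsius: float, relative_humidity: float) -> float:
    """Partial pressure of water vapor needed to reach a target RH."""
    return relative_humidity * water_psat_mmHg(t_celsius)

# Near 100 degC the saturation pressure approaches 1 atm (~760 mmHg),
# so keeping the inlet gas near 100% RH requires a large water vapor
# fraction -- hence the emphasis on evaporators and humidification.
p100 = water_psat_mmHg(100.0)
p60_target = vapor_partial_pressure(60.0, 0.95)
```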
Many modelers of fuel cells are turning to GT-SUITE to simulate and design the fuel and gas delivery systems that are needed to provide the gas exchange to and from the fuel cell stack. This choice has been strongly motivated by GT-SUITE’s position as the premier software for 1-D flow solutions in the automotive industry. As a result of this interest, GT has invested into improving the capabilities even further.
In v2018, GT included with each installation a compound template that combined the existing fuel cell template with control components, injectors, and ejectors to model the transfer of hydrogen fuel from the anode side (the fuel side in a PEM) of the stack to the cathode side (the air side) and the resulting formation of water. This development gives the modeler an opportunity to more accurately simulate the overall fuel and air system by including the effects of the reduced mass of hydrogen on the fuel side and the additional mass and different properties of the gas on the air side.
To continue on this path in v2019, a new template for the fuel cell stack was developed that not only handles the mass transfer and transformation of reactants to products, but also connects directly to an electrical circuit in a GT model, enabling one to simulate the interactions between the fuel cell and the rest of the propulsion system.
This template gives controls engineers a complete plant model that can be used to verify and validate their algorithms to control humidification, as well as fuel rate and air flow rates into the stack. This helps to maintain optimal operation of the entire system including state-of-charge of the battery during a drive cycle simulation.
Defining the Polarization Curve of the Fuel Cell Stack
As part of creating a good model of a fuel cell system, it’s also important to properly represent the polarization curve of the fuel cell stack.
For those who don’t know, a polarization curve defines the relationship between voltage and current density of the fuel cell stack. This relationship impacts the performance of the whole system, as it determines how much voltage, and therefore power, the stack delivers at a given load.
The polarization curve is often entered into a fuel cell stack component as part of a larger system model. Because this integration takes into account key inputs from other systems, it allows engineers to make an accurate evaluation of fuel cell performance.
Another development made in v2019 was the ability to enter the polarization curve of a fuel cell stack into the fuel cell part on the map of GT-SUITE. In the original fuel cell template, the polarization curve is defined by the composition of the reacting and product gases in the anode and cathode sides of the fuel cell stack, as well as other coefficients that can vary with temperature and/or other factors. This design offered great flexibility in defining the performance of the stack and in adapting to changing conditions of the incoming gases.
In order to maintain this flexibility, GT decided not only to enable entry of the polarization curve, but to fit this curve to the same equation used in the original template. This design provides an easy way to enter a measured polarization curve and its corresponding reference conditions, and it allows the resulting model to increase or decrease the voltage accordingly depending on changes in the inlet conditions, such as pressure and temperature. An example of the fit that results from a sampled curve is shown below.
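To illustrate the general idea of fitting a measured polarization curve to an equation (GT-SUITE's actual template equation and its parameters differ; this is a simplified sketch), here is a fit of an activation-plus-ohmic form V = E0 - b*log10(i) - R*i using linear least squares:

```python
import numpy as np

def fit_polarization(i, v):
    """Least-squares fit of E0, b, R in V = E0 - b*log10(i) - R*i,
    where i is current density in A/cm^2. The model is linear in the
    parameters, so an ordinary lstsq solve suffices."""
    X = np.column_stack([np.ones_like(i), -np.log10(i), -i])
    coeffs, *_ = np.linalg.lstsq(X, v, rcond=None)
    return coeffs  # [E0, b, R]

# Synthetic "measured" curve generated from known parameters
# (E0 = 1.0 V, Tafel slope b = 0.06 V/decade, R = 0.25 ohm*cm^2).
i = np.linspace(0.05, 1.2, 30)
v = 1.0 - 0.06 * np.log10(i) - 0.25 * i
E0, b, R = fit_polarization(i, v)
```

Once the parameters are fitted at reference conditions, the same equation can be re-evaluated as inlet conditions change, which mirrors the flexibility described above.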
Reformers
Hydrogen has a number of disadvantages as a fuel for automotive transportation: it has a relatively low volumetric energy density compared to hydrocarbons, it has an amazing ability to leak through the smallest of gaps between parts, and it has the perception of being dangerously explosive (though this point is certainly open to debate). One option to combat these challenges is to use a reformer to generate hydrogen fuel from a hydrocarbon on board the vehicle itself.
Reformers are devices that use catalytic chemistry to convert one or more substances into other substances. It just so happens that GT-SUITE developed this fundamental capability many years ago for modeling aftertreatment devices, such as three-way catalysts (TWC), diesel oxidation catalysts (DOC), lean NOx traps (LNT), and other components with three-letter acronyms.
GT chose to provide a flexible method of defining the chemical mechanisms that allows the user to enter the reactions and rate equations as one would in a spreadsheet. This flexibility enables GT-SUITE users to add a reformer without having to wait for GT to add new capabilities to the code. One could say the feature is already there, but it is hiding in plain sight.
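As a loose analogy for this spreadsheet-style reaction entry, a reaction table with Arrhenius rate constants might look like the following. The pre-exponential factors and activation energies below are placeholders for illustration, not calibrated reforming kinetics:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

# Spreadsheet-style reaction table: each row names a reaction and gives
# Arrhenius parameters (A = pre-exponential factor, Ea in J/mol).
reactions = [
    {"name": "steam reforming: CH4 + H2O -> CO + 3 H2", "A": 1.0e6, "Ea": 2.0e5},
    {"name": "water-gas shift: CO + H2O -> CO2 + H2",   "A": 5.0e3, "Ea": 7.0e4},
]

def rate_constant(rxn, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)) at temperature T (K)."""
    return rxn["A"] * math.exp(-rxn["Ea"] / (R_GAS * T))

# Rate constants rise steeply with temperature, one reason reformers run hot.
k_800 = rate_constant(reactions[0], 800.0)
k_1000 = rate_constant(reactions[0], 1000.0)
```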
GT developed an example that demonstrates a reforming process including equations for the typical mechanisms used in this process. Please feel free to contact us to request a copy of the reformer example or to discuss how you would like to use GT-SUITE for your project.
Conclusion
The attractive features of a fuel cell, such as quick ‘recharge’ times, zero emissions and more certain ranges in extreme temperatures make it a viable option as a primary power source or a range extender for electric vehicles. Of course, as engineers, we need to analyze the systems we are designing or even just considering during the advanced development stage in order to make good decisions and recommendations. At GT, we’re committed to making GT-SUITE the ideal tool to empower engineers to make those good decisions.
Please contact us if you are interested in using GT-SUITE for fuel cell system or other modeling needs.
Written By: Tom Wanat
Building a Detailed, Highly-Predictive Battery Model from a Cell Spec Sheet
When it comes to predicting battery performance and degradation, more and more engineers are seeing the benefit of utilizing simulation tools like GT-AutoLion, the leading Li-ion battery simulation software. With simulation, engineers can predict battery performance and aging outside the standard capabilities of physical testing, and because models run much faster than real time, simulation expedites the process of testing lithium-ion batteries.
However, some engineers face a challenge that prevents them from utilizing simulation to its full potential, and at times from even attempting simulation. When engineers start with limited knowledge of the Li-ion cell, for example with nothing but a cell specifications sheet that contains only the very basics of the cell (i.e. chemistry and geometry), they struggle to build models that are an accurate reflection of the actual cell.
Because we noticed this struggle, GT created a process to calibrate cell models and developed a “cookbook” that walks through going from a cell specification sheet to a predictive model. While I won’t cover the entire calibration process in this blog, I’ll introduce some of the early steps using the GT “cookbook”, also known as the GT-AutoLion Application Manual and Calibration Process.
What type of cell are we working with?
The first step in calibrating a model is to determine the type of cell you have. Cells can range from having low-energy density and high-power density to having high-energy density and low-power density. Power-dense cells are used in applications such as power tools and mild and full hybrid electric vehicles (HEVs), where the battery is able to take high-power discharges and charges for starting the engine and regenerative braking. On the other hand, energy-dense cells are used in consumer electronics and battery-electric vehicles (BEV) in order to maximize the duration of the product.
If you don’t know which type of cell you are working with, start by looking at the voltage discharge curve, which you will find on your cell specification sheets. The figure below shows example discharge curves for batteries that use the same active material and electrolyte, but are designed for power-density or energy-density. If you have a high-power density cell, the voltage discharge curve on your cell specification sheet will look like the curve shown below on the left. These power-dense cells are capable of providing high C-rates continuously. If you have an energy-dense cell, the curve will look closer to the one shown below on the right. The deliverable capacity of these energy-dense cells goes down with increasing C-rates much faster than that of high-power cells.
Terminal voltage discharge curves of a power-dense cell (left) and an energy-dense cell (right) at 1,2,3 and 4 C-rates
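The rate dependence of deliverable capacity can be sketched with a Peukert-style relation, C(I) = C_n * (I_n / I)^(k-1) with the nominal rating taken at 1C. The exponents below are illustrative, not values from the cookbook:

```python
def deliverable_capacity_ah(c_nominal_ah, c_rate, k):
    """Peukert-style capacity estimate at a given C-rate, nominal at 1C.
    k near 1.0 -> rate-insensitive, power-dense behavior;
    larger k -> capacity drops quickly with rate, energy-dense behavior."""
    return c_nominal_ah * (1.0 / c_rate) ** (k - 1.0)

# A 2 Ah cell discharged at 4C, with illustrative exponents:
power_dense_4c  = deliverable_capacity_ah(2.0, 4.0, k=1.02)  # small loss
energy_dense_4c = deliverable_capacity_ah(2.0, 4.0, k=1.15)  # larger loss
```

This is the qualitative difference the two discharge-curve families above express: the energy-dense cell loses deliverable capacity much faster as the C-rate increases.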
Once you’ve identified the cell type, the “cookbook” offers suggestions on many parameters and attribute values. For example, if you have a power-dense cell, the “cookbook” suggests using thin electrodes, usually in the range of 40 microns with electrode porosity values greater than 30%. There are many parameters in GT-AutoLion that require a good starting point, or guess, by the user, but these parameters can be tuned by using the GT optimizer to calibrate the model.
What is the integrated design optimizer and what can it do?
For a model to be accurate, it must be calibrated. We recommend using the data in the plot below (voltage drop during constant-current discharges) to calibrate GT-AutoLion models. We can use the Integrated Design Optimizer – a built-in tool that calibrates models so that simulation results and experimental results are closely matched.
By using a combination of attributes found on the cell specification sheet and guidelines in the cookbook, you can enter the geometric attributes into GT-AutoLion for calibration. The design optimizer will then vary unknown material parameters so that simulation and test data match. The optimizer runs the simulation iteratively, each time changing the values of the parameter until a stopping criterion is met. It uses a search method or algorithm to intelligently determine the next set of input values in as few model evaluations as possible, since each execution of the model may be computationally expensive and time consuming.
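Conceptually, this iterative search amounts to repeatedly adjusting an unknown parameter until a simulated output matches a measurement. A one-parameter toy version is sketched below, using bisection on a trivial plant model; this is only an analogy for the idea, not the Integrated Design Optimizer's actual algorithm:

```python
def simulate_voltage(ocv, resistance, current):
    """Toy plant model: terminal voltage under constant-current discharge."""
    return ocv - resistance * current

def calibrate_resistance(measured_v, ocv, current, lo=0.0, hi=1.0, iters=60):
    """Bisection search for the internal resistance that reproduces a
    measured terminal voltage -- a scalar stand-in for the optimizer's
    iterative loop: evaluate the model, compare to data, pick the next guess."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if simulate_voltage(ocv, mid, current) > measured_v:
            lo = mid  # simulated voltage too high -> resistance too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# "Measured" 3.5 V at 10 A with a 3.7 V open-circuit voltage
# implies R = 0.02 ohm.
r_fit = calibrate_resistance(measured_v=3.5, ocv=3.7, current=10.0)
```

The real optimizer searches many parameters at once and uses smarter algorithms to minimize the number of expensive model evaluations, but the evaluate-compare-update loop is the same idea.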
Voltage discharge curves of experimental, pre-optimizer, and post-optimizer data
The figure above shows a simulated voltage curve before and after calibration with GT-SUITE’s design optimizer. The simulated and experimental data now match even better than before optimization! Once the calibration process is complete, you have an accurate model ready for simulation and experimentation such as battery aging tests, voltage and power studies, and more.
Conclusion
As you can see, GT-AutoLion provides an easy method to build an accurate electrochemical cell model, even when an engineer has very limited data from a cell specification sheet. The cookbook also contains further details that will help you calibrate your battery models but are not covered in this blog.
If you want to learn more about the workflow and capabilities of this powerful tool or if you would like to access the GT-AutoLion Application Manual and Calibration Process, please contact us through our website.
Written By: Vivek Pisharodi
Optimized Vehicle Fleet Modeling with GT-DRIVE+
Full vehicle modeling has been a capability since the very beginning of GT-SUITE, and with GT-SUITE, Gamma Technologies has always been at the forefront of integrated simulations. In this blog, I’m excited to announce that Gamma Technologies has advanced its vehicle simulations even further with the creation of GT-DRIVE+, the next generation vehicle modeling framework contained within GT-SUITE.
In recent years, we’ve noticed emerging challenges which directly affect the vehicle modeling user base. From a technical standpoint, these challenges include the proliferation of electrification and electrified vehicles, as well as issues in real driving emissions, and overall system integration. These technical challenges have also created further organizational challenges as new groups are needing to work together to create solutions.
To help address these challenges, we created GT-DRIVE+ and built it directly into GT-SUITE. We saw the need for an interface that can quickly and easily build vehicle models and that provides strong modeling practice standardization, traceability, and data management features.
This dynamic new framework empowers engineers to quickly and easily build standardized vehicle models to predict the types of results they need to study, like fuel consumption, emissions, and vehicle performance.
With these capabilities, GT-DRIVE+ is able to adapt to changing trends and requirements, allowing users to predict the behavior of the vehicles of today and the vehicles of tomorrow. Let’s walk through some user-friendly features that improve the vehicle modeling process.
GT-DRIVE+ starts with a flexible and expandable model generator, which means users are able to model not only a single vehicle, but a whole vehicle fleet. And because the model generator is expandable, expert users at a company can tailor the types of vehicles and powertrain options available. Figure 1 demonstrates the types of vehicles constructed within GT-DRIVE+ by using our step-by-step vehicle model builder.
Figure 1. GT-DRIVE+ Vehicle Model Builder
The Vehicle Builder includes a vast array of vehicle configurations like battery electric vehicles, fuel cell electric vehicles, conventional commercial vehicles, and many more. With all these configurations available, virtually any vehicle can be created and examined.
Along with building standardized models quickly, it’s also important for our customers to be able to easily manage model data and structure. For this, we enhanced Case Setup to be full screen on top of the model map. This means that for GT-DRIVE+ models, the main modeling interface is Case Setup itself, which manages all the tasks and topologies of the vehicle model.
As shown in Figure 2, this allows end users to easily manage vehicle components and subsystem data to quickly characterize and optimize common vehicle outputs, like fuel consumption, performance, and emissions, through a standard set of pre-defined maneuvers. Because the main modeling interface is tabular, end users can change components or topologies effortlessly, without needing to modify an object-oriented model.
Figure 2. GT-DRIVE+ Main Modeling Interface
Together with a standardized model setup, the results format can also be standardized. Figure 3 shows some selected GT-DRIVE+ results from a model.
Figure 3. Selected GT-DRIVE+ Results
The results for six types of maneuvers are shown, including driving cycles, wide open throttle accelerations, and tip-ins. These results can be output for all the types of vehicles in a single model, allowing a large amount of flexibility in modeling and optimizing the vehicle fleet.
Perhaps the most powerful feature of GT-DRIVE+ is that it is an open framework. This means it is possible for modeling experts to configure customized layouts and workflows, which can then be exercised by anyone in the organization who needs a vehicle model. This open framework encourages smooth and effective collaboration within vehicle teams.
Together with the ability to add advanced physics submodels, GT-DRIVE+ represents the most state-of-the-art framework for modeling vehicle fleets of tomorrow.
Want to try out GT-DRIVE+ for yourself? Contact us to ask for a demo!
Real Life Scenarios and Multi-Objective Optimization with xLINK
xLINK’s mature simulation environment includes all of GT-SUITE’s powerful features for parametric analysis and optimization. In the previous xLINK blog, we created a full vehicle model by assembling three external subsystem models (Drivetrain and Engine as two FMUs and a soft ECU as a Simulink DLL). xLINK identified the expected inputs, outputs, and all exposed parameters of each module, and we saw how you can easily connect the exchanged signals. Now let’s test xLINK’s capabilities by promoting the exposed parameters to Case Setup, then grouping them to replicate real-life scenarios and perform a challenging optimization.
Alaska or Death Valley, how will your vehicle model behave?
How sure can I be that my xLINK vehicle model will not die in Death Valley or freeze in Alaska? These are two extreme real-life climate scenarios that can be easily modeled with xLINK’s Case Setup feature. Altitude, humidity, and ambient temperature and pressure are exposed as FMU parameters of the Engine FMU. Through the right click option, I promote them to Case Setup and set them accordingly:
Keep in mind that the drivetrain model will also operate in the same climate and as a result I replace the default FMU parameter values with the Case Setup parameters I defined previously:
Revisiting the drivetrain FMU, I notice that vehicle mass, tire radius, frontal area and drag coefficient are also exposed as FMU parameters and I will use them to create a group of parameters that will stand for the vehicle type, e.g. an SUV and a Hatchback. In xLINK, several parameters can be grouped together in the Case Setup dialog box to form a Super Parameter. Basically, a Super Parameter is a set of parameters in the form of a drop-down list.
First, it is created through the dedicated toolbar button in Case Setup and then it is populated by moving defined parameters from Main Tab to the Super Parameter Tab. This way Environment Conditions and Vehicle Type groups are defined as Super Parameters and all options are named and populated respectively in order to create the extreme real-life scenarios, as shown below:
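Conceptually, expanding Super Parameter groups into simulation cases is a Cartesian product of the named options. A sketch with hypothetical parameter names (the groups and values below are illustrative, not xLINK's internal representation):

```python
from itertools import product

# Super Parameters as drop-down groups: each named option expands
# to a set of underlying parameter values.
environment = {
    "Death Valley": {"ambient_temp_C": 45.0, "altitude_m": -86.0},
    "Alaska":       {"ambient_temp_C": -30.0, "altitude_m": 100.0},
}
vehicle = {
    "SUV":       {"mass_kg": 2200.0, "drag_coeff": 0.38},
    "Hatchback": {"mass_kg": 1300.0, "drag_coeff": 0.30},
}

def build_cases(*groups):
    """Cartesian product of super-parameter options -> one dict per case."""
    cases = []
    for combo in product(*(g.items() for g in groups)):
        case = {"label": " / ".join(name for name, _ in combo)}
        for _, params in combo:
            case.update(params)
        cases.append(case)
    return cases

cases = build_cases(environment, vehicle)  # 2 environments x 2 vehicles
```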
The above parameter settings define 4 different simulation cases. Now, let’s see how the two vehicles behave in the two extreme climate scenarios in terms of engine power output, while the soft ECU drives them through the NEDC. The integrated vehicle model’s behavior is monitored throughout the 4 different scenarios by using xLINK Monitor templates, and at the end of the simulation the results are available for further analysis in GT-POST. By using the Combine Cases option, I create comparative analysis plots and assess the behavior of the model in each simulation case:
From the very first 25s of NEDC, I notice that the SUV demands more Brake Power from the Engine than the Hatchback, while the latter obviously responds faster to fluctuations in desired vehicle speed and engine load. If we zoom into the red box above, the aforementioned observations are straightforward; what is still not easy to understand is the model’s behavior difference between the two different Environment Conditions. However, GT-POST offers the ability to efficiently compare the two different environmental conditions for each Vehicle type by applying the percentage difference function:
So far, we have seen that xLINK empowers users with the ability to efficiently create and evaluate multiple scenarios and variants using advanced parameterization features. At the same time, xLINK simulation results are available in GT-POST, which offers the necessary post-processing functionality to highlight the interactions between the different subsystem models and assess their behavior in different real-life scenarios. It seems that GT’s multi-year experience in system simulation has made xLINK a powerful, all-inclusive solution for system integration.
ECO-SPORT mode: Gear shifting strategy multi-objective optimization with xLINK
Earlier we studied how two vehicle types behave when driving through the NEDC in extreme climate conditions. At this point, I would like to test xLINK using the Integrated Design Optimizer (IDO) and perform a multi-objective optimization varying some of the external models’ exposed parameters. A flat 0-60 mph (0-100 km/h) acceleration is by far one of the most exciting situations while driving a vehicle. Some drivers prefer vehicles that need the shortest amount of time for such an acceleration, while others opt for minimum fuel consumption. But why not go for both? xLINK can perform such an optimization by varying the target vehicle speed before shifting up to the next gear. In other words, I will use the exposed Vehicle parameters of the Drivetrain FMU in order to optimize the gearbox’s up-shifting strategy, minimizing both the 0-60 time and the cumulative fuel consumption, and consequently search for a 2-D Pareto front of optimal solutions.
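For readers unfamiliar with multi-objective optimization: the Pareto front is simply the set of non-dominated designs, i.e. those for which no other design is at least as good in both objectives and strictly better in one. A minimal sketch for two minimization objectives, with illustrative numbers (not results from this model):

```python
def pareto_front(points):
    """Non-dominated set for two minimization objectives,
    e.g. (0-60 time in s, cumulative fuel in some unit).
    A point is dominated if another point is at least as good in
    both objectives and is not the same point."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

# Five candidate gear-shift designs (time, fuel); (9.0, 6.4) is dominated
# by (8.5, 6.0), which is better in both objectives.
designs = [(8.0, 6.5), (9.5, 5.2), (8.5, 6.0), (9.0, 6.4), (11.0, 5.0)]
front = pareto_front(designs)
```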
Reading the Optimizer’s manual, I understand that the most suitable available algorithm for such an optimization is the genetic algorithm, which I set appropriately inside the Design Optimizer dialog as shown below:
The genetic algorithm population size is set to 34 and the number of generations to 33, generating a total of 1122 iterations. In the lower right corner of the above screenshot, you’ll find the optimization objectives, and above that are the optimization factors. Specifically, the chosen factors are the target vehicle speed to up-shift from 1st to 2nd gear and the vehicle speed differences for shifting from 2nd to 3rd, and so on. In addition, I set the minimum speed difference before shifting from one gear to another to 10 km/h and the maximum to 50 km/h by limiting the optimization factors between the Lower and Upper Limits shown above. When the IDO is enabled, the optimization run starts by simply hitting the Run button, and the IDO user interface opens automatically:
The IDO interface is preloaded with interactive plots that display the values of Factors and Objectives during the simulation. Users can multi-select specific designs on the in-progress 2-D Pareto plot and all related values will be highlighted in the Solutions Table while in all other plots the related plot points will be marked inside red boxes. Furthermore, users can export one or more selected designs from the Solution table and perform further standalone tests in a new xLINK model, which is created automatically. Convergence progress is also shown in a separate plot. Once xLINK IDO finds all optimal solutions that satisfy both objectives, it generates the following 2-D Pareto Front:
Further analysis of the optimal solutions can be performed by exporting all Pareto front solutions to new xLINK models in order to choose the final one, the Eco-Sport mode. It will not be a problem for xLINK to work further with the IDO and search for other driving modes, e.g. Sport or Eco, by performing two new single objective optimizations, but this is where I close the second xLINK blog. From where I stand, xLINK has been proven to be a complete solution for system integration, testing and optimization. Can’t wait to try xLINK? Would you like to test any application with xLINK or the Integrated Design Optimizer? Contact us!
Written by Pantelis Dimitrakopoulos
5 Reasons to Attend The North American GT Conference 2017
GT is excited to be hosting the North American GT Conference from November 6-7 at the Inn at St. John’s in Plymouth, MI. As we prepare for this year’s conference, I’d like to share 5 reasons you won’t want to miss this free event!
1. Learn How Your Peers Tackle Their Challenges Using GT-SUITE
Over the course of this 2-day event, you have the opportunity to learn how your colleagues utilize GT-SUITE in their work. Monday, November 6 will feature over 30 technical presentations from leading OEMs, suppliers, and consultants including Borg Warner, Bosch, Caterpillar, Cummins, Echogen, EC-Power, FCA, Ford, GM, Honda, Integral Powertrain Technology, Isuzu, John Deere, Roush, Tecumseh, Tenneco, Tula Technology, Volvo, and Woodward.
2. Grow Your Network
There are plenty of opportunities to get to know your peers at this year’s GT Conference. Participate in our networking lunches on November 6 and 7 to meet other GT users, and join us during the evening on Monday, November 6 for a beer tasting and hors d’oeuvres.
3. Preview GT-SUITE v2018
Be the first to learn the latest highlights in GT-SUITE v2018, including the launch of an exciting new simulation platform. Discover what new features have been added that will save you time and improve your simulations.
4. Ask the Experts
Have a simulation question? Stop by our 12 Demo Booth Stations to ask the GT experts! Demo spots include VTM, Cabin, xEV, Vehicle Platform, Combustion, xLINK, Lubrication/Hydraulics, Acoustics, Aftertreatment, Engine Mechanics & Friction, and Aerospace Applications.
5. Expand Your Skill-Set
On Tuesday, November 7, we will offer 15 free seminars. Attend two seminars in one day to expand your knowledge and skills in multiple technical areas. We will also host a free GT-SUITE Introduction training and a paid GT-POWER training in Livonia, MI starting on November 7.
Last year, over 300 people registered and we expect another record-setting event. Sign up today before space runs out!
Virtual Real Driving Emissions (vRDE) Part 2: Simulation Solutions
In Part 1 of our discussion about Real Driving Emissions, we covered the shortcomings of traditional fuel economy and emissions testing methods using chassis dynamometers and fixed driving cycles. In Part 2, we will talk about ways in which simulation can replace the need for costly on-road testing, especially upstream in the product line development cycle.
The Traditional Approach
The way in which a driving cycle is performed on a chassis dynamometer already aims to simulate on-road behavior by imposing a load onto driven axles using rollers, typically connected to electric machines. Road loads (aerodynamics, inertia, etc.) have to be simulated by the dynamometer since the vehicle is actually stationary during the entire test. The test therefore only approximates on-road operation of these vehicles.
Numerical simulation offers a cost- and time-effective alternative to chassis dyno testing given that we have access to highly predictive and fast running engine and vehicle models. The preferred approach for this type of simulation is to dynamically target a chosen driving cycle via a controller or driver template. This method limits simulation vehicle performance to the maximum engine output and will reveal shortcomings of the powertrain or control strategies, in contrast to kinematically imposing the vehicle speed and back-calculating the engine load required.
Figure 8: Typical Dynamic Driving Cycle Vehicle Model
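The difference between dynamic targeting and kinematic imposition can be made concrete with a toy example. Below is a minimal sketch (my own illustrative numbers, not GT-SUITE defaults or templates) of a PI-style driver that converts speed error into a saturated pedal position; because the pedal saturates at full throttle, the vehicle can fall short of the target when the powertrain runs out of torque, which is exactly the shortcoming-revealing behavior described above:

```python
# Minimal sketch of "dynamic targeting": a PI driver turns speed error into
# a (saturated) pedal position, so the vehicle can fall short of the target
# when the powertrain runs out of torque -- unlike kinematically imposing
# the speed, which always "succeeds". All numbers are illustrative.

def pi_driver(target, actual, integral, kp=0.3, ki=0.1, dt=0.1):
    error = target - actual
    pedal_raw = kp * error + ki * integral
    pedal = min(max(pedal_raw, 0.0), 1.0)     # full throttle is the limit
    if pedal == pedal_raw:                    # anti-windup: freeze integral
        integral += error * dt                # while the pedal is saturated
    return pedal, integral

# Toy plant: acceleration proportional to pedal, minus a drag term.
speed, integral = 0.0, 0.0
for _ in range(600):                          # 60 s at dt = 0.1 s
    pedal, integral = pi_driver(30.0, speed, integral)
    accel = 3.0 * pedal - 0.02 * speed        # m/s^2, toy vehicle model
    speed += accel * 0.1

print(round(speed, 1))                        # settles near the 30 m/s target
```

A targeting controller in a real model plays the same role, only with a full powertrain between pedal and wheels.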
The unique advantage offered for these types of simulations by GT-SUITE is the ability to vary the fidelity level of each of the subassemblies representing the various subsystems. For example, we may want to consider fully-predictive GT-POWER engine models with turbocharging and predictive combustion, or use a fast-running map-based lookup resulting from dyno data. For the transmission, we may use a simple kinematic gear ratio or a discretized dual clutch transmission with individual gear meshes, synchronizers, and internal clutches. The driver can be one of our model-based controllers or a custom-built Simulink model with advanced features and production-level controls. Additionally, the model can easily be expanded to include other physical domains that might include thermal management, lubrication, cabin comfort, or, most importantly for Real Driving Emissions, aftertreatment.
The Case for an Integrated Approach
Since RDE driving cycles have inherent variability by design, it’s much less practical to calculate boundary conditions of independent subsystems and impose them in standalone design simulations. A modular modeling approach allows users to create subsystem models that can be run by themselves or coupled with one another in an integrated effort.
Figure 9: Example of Integrated Vehicle Model
This is especially true for thermal management and emissions aftertreatment systems, which strongly depend on the dynamic loading of the vehicle and transient control strategies. Options such as active grille shutters and electrically-heated catalysts can only be accurately studied during transient operation. A system architect may begin by making technology choices using very coarse models and steady-state simulations, but the task of validation and certification can only be considered through integration where the engine, vehicle, thermal management, aftertreatment, and all relevant controls are simultaneously represented.
The GT-SUITE aftertreatment library features a fast and accurate advanced adaptive chemistry solver with a flexible interface that can model all catalysts and diesel particulate filters on the market today. Additionally, the library features state-of-the-art mechanisms and validation results in the form of example models. Even when coupled to a vehicle model, the aftertreatment system can be simulated tens to hundreds of times faster than real time.
Figure 10: Aftertreatment Model Example in GT-SUITE
With accurate measurement of engine out emissions, we can then correctly predict the tailpipe out emissions that a PEMS system will read during a test without the need for any of the components to be on hand. Adding a vehicle and driver to this system gives us everything we need for a proper RDE simulation.
RDE Driving Cycle Generation and Simulation
One of the more unusual features of the transition to real driving for emissions measurements is that the cycle is random by design. This means that adding it to a simulation is a lot more complicated than simply subjecting a virtual vehicle to another WLTP or FTP-75 cycle, which is a prescribed speed function of time. The RDE cycle requires adherence to a set of rules, but to be certain of meeting certification levels, we must cover a broad range of permutations that might be experienced during the actual test.
To that end, Gamma Technologies has developed an RDE cycle generator that draws on a database of real driving segments. The ‘ProfileRealDriving’ template gives the user control over the most important shaping parameters of the driving cycle and randomly generates one at the start of each simulation. For example, the user can set the driving cycle distribution between urban, rural, and motorway portions, or include elevation variability for additional load variation. Additionally, the cycle generator can be populated with a user’s own measurement data, enabling generation of RDE-compliant cycles from an OEM’s own sources.
Figure 11: Example of Random RDE Cycle from 'ProfileRealDriving'
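To illustrate the segment-database idea behind such a generator, here is a hedged sketch (fabricated segment data and function names, nothing from the ‘ProfileRealDriving’ implementation): speed-trace snippets are drawn at random from urban, rural, and motorway pools until each phase reaches its target share of the cycle:

```python
import random

# Hypothetical sketch of composing a random driving cycle from a segment
# database: draw speed-trace segments (km/h, 1 Hz) from per-phase pools
# until each phase meets its target share. The segment data is fabricated;
# a real database would hold measured drive snippets.

SEGMENTS = {
    "urban":    [[0, 10, 25, 15, 0], [5, 20, 30, 10]],
    "rural":    [[40, 60, 75, 55], [50, 70, 80, 60, 45]],
    "motorway": [[90, 110, 120, 115], [100, 125, 130, 110]],
}

def generate_cycle(shares, total_points, seed=None):
    """Concatenate random segments phase by phase in RDE order."""
    rng = random.Random(seed)
    cycle = []
    for phase in ("urban", "rural", "motorway"):
        target = round(shares[phase] * total_points)
        trace = []
        while len(trace) < target:
            trace.extend(rng.choice(SEGMENTS[phase]))
        cycle.extend(trace[:target])      # trim to the exact phase share
    return cycle

cycle = generate_cycle({"urban": 0.34, "rural": 0.33, "motorway": 0.33},
                       total_points=300, seed=42)
print(len(cycle))  # 300: phase shares are met exactly by construction
```

Each run with a different seed yields a different but statistically similar cycle, which is precisely the property the RDE rules call for.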
This new driving cycle template can be referenced directly by any of the GT-SUITE driver templates and targeting controllers. In addition to the new ability to randomly generate a driving cycle, GT-SUITE’s recently revised ‘ProfileGPS’ template also simplifies replication of testing routes in the simulation environment. Users can reconstruct a recorded drive by either extracting latitude, longitude, and elevation data, or by linking to a GPX file directly. GT-SUITE will then automatically extract the relevant information from the file and reconstruct that driving cycle, along with powerful, built-in filters for smoothing any noisy data.
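The GPX-extraction step mentioned above is straightforward to illustrate, since a GPX file is plain XML. The sketch below (my own code, not the ‘ProfileGPS’ implementation) pulls latitude, longitude, and elevation out of a GPX 1.1 track with only the standard library:

```python
import xml.etree.ElementTree as ET

# Sketch of GPX track extraction: GPX 1.1 stores track points as <trkpt>
# elements with lat/lon attributes and an optional <ele> child, all under
# the topografix namespace.

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_track_points(gpx_text):
    """Return (lat, lon, ele) tuples from a GPX 1.1 track."""
    root = ET.fromstring(gpx_text)
    points = []
    for trkpt in root.iterfind(".//gpx:trkpt", GPX_NS):
        ele = trkpt.find("gpx:ele", GPX_NS)
        points.append((float(trkpt.get("lat")),
                       float(trkpt.get("lon")),
                       float(ele.text) if ele is not None else 0.0))
    return points

sample = """<gpx xmlns="http://www.topografix.com/GPX/1/1"><trk><trkseg>
<trkpt lat="48.1" lon="11.5"><ele>520.0</ele></trkpt>
<trkpt lat="48.2" lon="11.6"><ele>540.5</ele></trkpt>
</trkseg></trk></gpx>"""

print(read_track_points(sample))
# -> [(48.1, 11.5, 520.0), (48.2, 11.6, 540.5)]
```

Turning these points into a drivable speed and grade profile is where the built-in filtering and smoothing earn their keep.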
GT-SUITE – A Comprehensive vRDE Tool
In addition to being able to simulate each of the relevant systems with a high level of detail, GT-SUITE offers a mature platform in which to integrate all vehicle systems as a replacement for costly physical testing of prototypes. The new methods called for by RDE legislation are supported in the current implementation, and any revisions to the legislation will be continuously monitored and reflected in the software.
Beyond the core capability, GT-SUITE also includes a fully featured post processor in GT-POST, which allows for assembly of reports and compliance validation directly within the native environment. A suite of cosimulation harnesses for most leading commercial tools allows for direct linking with existing models, regardless of the platform they were developed on. All this, coupled with the ability to reuse the model in Model in the Loop (MiL), Software in the Loop (SiL), and Hardware in the Loop (HiL) tests, means that the effort put forth for vRDE can be reused throughout the entire development process, up until hardware becomes available.
Virtual Real Driving Emissions are an area that will dramatically challenge status quo approaches to emissions and fuel economy compliance. An integrated simulation approach with GT-SUITE provides the necessary support to process changes and help OEMs meet the new standards at every step of the way, from concept to production.
For more information about GT-SUITE’s vRDE capabilities, contact Damian Sokolowski at [email protected]
Written By: Damian Sokolowski
How to Create Your First Full Vehicle Co-simulation Model with xLINK
Wouldn’t it be nice to have a platform to simulate various subsystems from different tools in a single model and analyze their interactions? That is what I used to think every time I came across groups that develop and optimize individual components or subsystems, but have to wait until everything else is built to test the whole system. All this testing can now be frontloaded: teams can integrate models from a variety of tools in GT’s new tool-neutral platform for co-simulation, xLINK, which empowers users to perform integrated testing and whole-system optimization.
I belong to the team that develops and supports co-simulation solutions through popular interface standards such as FMI for complete system design and testing. xLINK is our latest product: we gathered all of our advanced co-simulation capabilities, packaged them along with GT-SUITE’s strongest productivity features, and, most importantly, tailored the user experience to the needs of this type of simulation.
As soon as xLINK’s development finished, I started preparing the first xLINK example model and in only a couple of hours I had a complete vehicle system in a single model up and running a full driving cycle.
GT’s Engine group provided the engine model as an FMU and our Vehicle group delivered a drivetrain model as an FMU too. At this point, I can only guess that the engine is a GT-POWER model and I have no idea about the origin of the drivetrain FMU, but when it comes to co-simulation with xLINK the origin of each incoming subsystem is nonessential.
My vehicle model now needed a soft ECU and a Driver, in order to control it through a complete driving cycle. In a lot of projects, I have worked closely with the Controls team and they were kind enough to prepare such a model in Simulink. Then, they simply built the Simulink model as a DLL for the xLINK target.
We are all set with acquiring our subsystems, so let’s open a new xLINK model from scratch and start building the vehicle model:
Welcome to xLINK!
My first thought is that the look and feel is similar to GT-SUITE’s. However, users will notice that the UI is simple, lean, and easy to use, and that all important capabilities, such as design-of-experiments and optimization setup, are ergonomically placed on the ribbon toolbar. In xLINK, subsystems exchange inputs and outputs through signals. As a result, the model library (on the left, as usual) is preloaded with all External Model Links to accelerate importing modules from popular tools and interfaces, as well as a complete library of signal manipulation components, the Controls Library. Some of the supported tools and interfaces are:
FMUImport v1.0 and v2.0 for both Model Exchange and Co-simulation
Hand-written C source code as long as it conforms to our flexible API
It is clear that in xLINK, with only a few components, users can create system models extremely quickly by importing different subsystems.
In my stash, there is an engine FMU, a drivetrain FMU and a Driver/ECU DLL created from Simulink. These three subsystems can be imported in xLINK by using two FMUImport components and a SimulinkHarness component linked together to compose a full vehicle model. At first, I import the two FMUs using FMUImport. xLINK is intelligent enough to read all inputs, outputs and parameters and append them automatically to the respective lists, minimizing manual steps and accelerating every team’s work:
The Engine FMU is an FMI 2.0 FMU for co-simulation. It expects 19 inputs, it has 10 outputs and our engine guys left a little room for me to play with it by exposing two parameters, ambient pressure and temperature. Following the same steps with the Drivetrain FMU, another FMI 2.0 FMU for co-simulation is added. xLINK understands that it expects two inputs, it has three outputs, and there is one exposed parameter:
The Simulink ECU/Driver model, which has been built as a DLL from Simulink, is imported into xLINK through a SimulinkHarness template and this completes the process of importing the subsystems for this example. For better organization, the three models are included inside subassemblies and the current state of the xLINK model is the following:
Next, I go ahead and link the three models together. Simulink ECU/Driver DLL controls both the Engine FMU and the Drivetrain FMU and it expects feedback, while Drivetrain and Engine exchange speed and torque. Valve timing, injection timing, injection mass and wastegate diameter signals are outputs of the ECU/Driver and inputs to the engine, while important Engine values are sensed and passed to the ECU/Driver. Brake pedal position and gear number signals are exchanged between the ECU/Driver and the drivetrain. After I finish linking everything together, I add a few monitors before I run the xLINK model in order to analyze the ECU/Driver behavior, the engine speed, and the average fuel flow of the Engine:
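The signal exchange just described follows a standard fixed-step co-simulation pattern: at every macro step, each subsystem advances using the other’s most recent outputs. A toy illustration of that master loop (the one-line “engine” and “drivetrain” below are stand-ins I made up, not the FMUs from this model):

```python
# Toy illustration of a fixed-step co-simulation loop (the master role a
# coupling tool plays): at each macro step the engine reads the last known
# speed and the drivetrain reads the last known torque.

def engine(throttle, speed_rpm):
    """Torque falls off with speed; throttle scales it (Nm)."""
    return throttle * max(0.0, 200.0 - 0.03 * speed_rpm)

def drivetrain(torque_nm, speed_rpm, dt, inertia=0.5):
    """Integrate speed from net torque minus a simple speed-dependent load."""
    load = 0.02 * speed_rpm
    return speed_rpm + (torque_nm - load) / inertia * dt * 9.5493  # rad/s -> rpm

speed = 800.0                     # start at idle
for _ in range(1000):             # 10 s at a 10 ms macro step
    torque = engine(0.3, speed)               # engine sees last speed
    speed = drivetrain(torque, speed, 0.01)   # drivetrain sees last torque

print(round(speed))  # approaches the torque-balance speed near 2070 rpm
```

In the real model the ECU/Driver closes a third loop around both of these, adjusting throttle, injection, and gear at every step.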
I am now ready to press the “Run” button. I set a few more options, such as the simulation duration and whether to run on my machine or on a distributed server, and… I am up and running!!
xLINK launches a powerful UI during run time, which displays the progress of the simulation along with all important information for each subsystem. The monitors that I set up in my model also show up. I can fully monitor and control the simulation from here!
When the simulation ends, xLINK automatically creates a GT-POST file and saves the results and plots from each case. In GT-POST, users can quickly generate 2D and 3D plots, combine data from different cases and tests in a single plot for comparative analysis, and even perform math operations on data in all plots.
It was when I finished the first xLINK example model that I confirmed that xLINK is a really powerful and mature product for building integrated co-simulation system models. Imagine what you could do when harnessing all of xLINK’s capabilities and powerful features!
I am now ready for the next step: run multiple scenarios, perform parametric analysis, and optimize for fuel economy. Or better yet, optimize the tradeoff between fuel economy and performance by utilizing xLINK’s built-in Optimization tools. Stay tuned for the next blog post about xLINK, where I will share the results of this next step!
Written by Pantelis Dimitrakopoulos who worked with GT’s Engine, Powertrain and Controls support teams to replicate the integration process within an organization. If you are ready to try xLINK, please contact [email protected].
Virtual Real Driving Emissions (vRDE) Part 1: New Technologies Inspire New Emissions Tests
The goal of automotive regulation has always been to walk the line between offering OEMs an implementable procedure for certification and representing real-world driving conditions that drivers will experience with a vehicle. The inherent variance of operating conditions and drivers makes it difficult to predict the actual performance, fuel economy, and emissions levels of vehicles when they are sold to customers. On the other hand, OEMs must be able to ensure that multi-billion-dollar vehicle programs will be able to meet the regulatory burden before reaching production and therefore they need clear testing procedures and guidelines to use during development.
The Need for a Better Way
Historically, vehicle certification for fuel economy and emissions has been done on chassis dynamometers with fixed-profile driving cycles. European regulation was based around the New European Driving Cycle (NEDC) while the US used the Federal Test Procedure 75 (FTP-75). The cycles were designed to mimic the vehicle speed and load range that a typical customer might expect to impose on the vehicle during normal operation.
Figure 1: New European Driving Cycle (NEDC)
Figure 2: Federal Test Procedure 75 (FTP-75)
The NEDC driving cycle is characterized by regions of constant acceleration, deceleration, and cruising at constant speed. These qualities render it a poor representation of real world vehicle operation, which is subject to variable acceleration and speed ranges. The FTP-75, on the other hand, includes more diverse transients, with distinct regions aimed to mimic both city and highway driving, but features a relatively low average speed and moderate acceleration requirements.
Figure 3: Chassis Dynamometer for Vehicle Testing (credit - Argonne National Lab)
In addition to the inherent deficiencies of the driving cycles themselves, the testing procedures offer opportunities for OEMs to optimize vehicle and engine operation around the driving cycle requirements, even if the performance isn’t representative in real world settings. This can range from well-trained drivers who minimize vehicle acceleration throughout the driving cycle within the allowable error window all the way to electronic defeat devices that use different calibration maps when the vehicle detects it is being tested on a specific cycle.
On-Road Certification of On-Road Performance
One of the main motivating factors for chassis dynamometer testing has historically been the bulky nature of emissions measurement equipment. However, in recent years, improvements in the accuracy and downsizing of Portable Emissions Measurement Systems (PEMS) have enabled affordable emissions testing and measurement outside of the laboratory test cell and directly on the road by carrying the measurement equipment in the vehicle.
These advances have prompted rethinking of current vehicle testing procedures with the objective of more accurately capturing actual operation, performance, and pollution. PEMS have also been used in academia to independently verify claims about emissions levels produced by vehicles. With on-road measurement now becoming affordable, Euro 6c regulations include an updated chassis dynamometer driving cycle coupled to an on-road verification test called “Real Driving Emissions.”
The first phase of the overhaul involves replacing the synthetic NEDC driving cycle with the all-new Worldwide Harmonized Light Vehicles Test Procedure (WLTP), which includes three variants of a new driving cycle. The appropriate cycle is chosen based on the tested vehicle’s power-to-weight ratio and scales the maximum speed and acceleration demand accordingly. These new cycles aim to cover a broader region of the engine’s operating domain, both in terms of speed and load.
Figure 5: WLTP Driving Cycle Variants
For the purposes of the new emissions regulations, these driving cycles serve as a baseline over which the measured emissions must adhere to increasingly stringent limits on various pollutant constituents. However, the vehicle cannot be certified until a “Real Driving Emissions” test also demonstrates that the vehicle is in compliance.
“Real Driving” – Procedural Unpredictability
Rather than simply certifying a vehicle based on its WLTP results, recently-enacted Euro 6c emissions regulations now also require that the vehicle be instrumented with a PEMS and conduct a long, “realistic” drive on the road. The main criteria that must be met during this “real driving test” include:
Duration of 90-120 minutes
Three distinct phases (urban, rural, motorway)
Average and maximum speed conditions
Minimum stop duration in the urban portion
Ambient conditions approximately nominal
Minimal change in elevation between start and end point
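A pre-screening check against rules of this kind is easy to sketch. The code below paraphrases the list above with illustrative thresholds of my own (the actual regulation, EU 2016/427 and its successors, defines the criteria far more precisely):

```python
# Hedged sketch of pre-screening a recorded trip against RDE-style rules.
# Thresholds are paraphrased from the criteria listed above and are
# illustrative only; the regulation defines them far more precisely.

def check_trip(duration_min, phase_shares, elevation_start_m, elevation_end_m):
    """Return a list of rule violations; an empty list means pre-screen OK."""
    problems = []
    if not 90 <= duration_min <= 120:
        problems.append("duration outside 90-120 min")
    for phase in ("urban", "rural", "motorway"):
        if phase_shares.get(phase, 0.0) <= 0.0:
            problems.append(f"missing {phase} phase")
    if abs(elevation_end_m - elevation_start_m) > 100:  # illustrative limit
        problems.append("start/end elevation differs too much")
    return problems

print(check_trip(105, {"urban": 0.4, "rural": 0.3, "motorway": 0.3}, 210, 225))
# -> [] (this trip passes the pre-screen)
```

Running a generated cycle through such a checker before simulation is exactly the kind of automation that keeps randomly generated cycles compliant.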
Because no specific vehicle speed profile is mandated as long as all regulated conditions are met, no two “RDE” driving cycles will look the same, but their general characteristics will be similar.
Figure 6: Example of a Real Driving Emissions Cycle
Because violating the mandated RDE criteria might result in a void test, it is important to choose a route that is readily available and bears the right characteristics. Yet all the planning might be rendered useless due to events such as road construction, accidents, or traffic that prevent the driver from meeting the cycle requirements.
Figure 7: Sample of RDE-Compliant Route
The challenges in meeting RDE requirements are most pressing in early development stages, when designs are still in flux, prototype equipment may not yet be available, or testing vehicles are in high demand. Not only does each iteration of the test take a substantial amount of time, the work is also conducted away from testing facilities, which becomes a real liability in the event a vehicle suffers a mechanical problem. This has created demand for alternative means of ensuring product compliance early in the development process without the need for costly on-vehicle testing, and computer simulation is emerging as the primary solution.
Join us for Part 2 – Virtual Real Driving Emissions Solutions
A growing trend in the automotive market is the addition of advanced driver assistance systems (ADAS) to new vehicles. These ADAS features could be technologies such as lane departure warnings, emergency braking, or adaptive cruise control, all aimed at making vehicles safer and more comfortable to drive.
In addition to making vehicles safer and more comfortable, vehicle manufacturers are striving to make their vehicles more fuel efficient. We have seen that these two trends are causing a shift in the typical design process, where powertrain and vehicle departments are working more closely together than ever.
At the system level, even forecasting fuel economy demands an integrated, collaborative approach between powertrain and vehicle teams. To explore how this approach might work, we teamed up with Mechanical Simulation Corporation, the developers of CarSim.
Through open-source co-simulation provided by the Functional Mockup Interface (FMI), we were able to implement a workflow to couple powertrain and vehicle models in GT-SUITE and CarSim, respectively.
This allows engineers to predict complex phenomena, such as how a calibration of adaptive cruise control or traffic conditions might affect fuel economy, in a repeatable fashion, all virtually.
Varying the Adaptive Cruise Control (ACC) following distance results in changes in real-world fuel economy.
To demonstrate this, we set up a co-simulation study between GT-SUITE and CarSim, where the powertrain model in GT-SUITE provided torque to the vehicle model’s wheels in CarSim, and CarSim fully represented the chassis dynamics and virtual sensors that monitored the traffic environment.
Two main vehicles were considered: the lead vehicle, with an imposed vehicle speed that followed a standard driving cycle, and the main following vehicle, which used adaptive cruise control to target the lead vehicle at different following distances.
The plot above shows the differences in fuel consumption rates observed when varying the Adaptive Cruise Control (ACC) following distance.
At the shorter following distances, the following vehicle trailed closely in each acceleration and deceleration phase of the driving cycle. At a longer following distance, the following vehicle still followed the lead vehicle, but was not as aggressive, resulting in smoother pedal position inputs and therefore smoother throttle control. We were surprised to see a large 3% fuel economy difference between the different following distances over the driving cycle.
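The mechanism behind the smoother behavior at longer distances can be seen in a simple constant-time-gap ACC law. The sketch below is my own illustration, not CarSim’s controller, and every gain and limit in it is a made-up number: with a longer time gap, the same traffic situation produces a much gentler acceleration command.

```python
# Illustrative constant-time-gap ACC law (not CarSim's controller): the
# commanded acceleration blends the gap error and the relative speed, then
# is clipped to comfort limits. All gains and limits are made up.

def acc_command(gap_m, ego_speed, lead_speed, time_gap_s,
                standstill_m=5.0, kg=0.2, kv=0.6, a_max=2.0):
    desired_gap = standstill_m + time_gap_s * ego_speed
    accel = kg * (gap_m - desired_gap) + kv * (lead_speed - ego_speed)
    return min(max(accel, -a_max), a_max)   # clip to comfort limits (m/s^2)

# Same traffic situation (lead car 2 m/s slower), two headway settings:
tight = acc_command(gap_m=20.0, ego_speed=25.0, lead_speed=23.0, time_gap_s=1.0)
loose = acc_command(gap_m=60.0, ego_speed=25.0, lead_speed=23.0, time_gap_s=2.0)
print(tight, loose)   # tight setting saturates the brake command;
                      # loose setting barely decelerates
```

Fed through a powertrain model, gentler commands like these translate directly into the smoother pedal and throttle traces observed in the study.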
CarSim vehicle dynamics software captures the missing pieces in fuel economy calculations: road grade, route, traffic, driver variability, and the overall contextual environment.
By using this new approach to vehicle simulation, the coupled vehicle and powertrain models were able to find fuel economy effects that traditionally would be neglected by the vehicle dynamics engineer and missed by the powertrain engineer.