Peter Reithofer, Martin Fritz, Reinhard Hafellner – 4a engineering GmbH
LS-DYNA® includes a wide range of material cards, each offering a different degree of scalability and complexity for describing the behavior of non-reinforced thermoplastics. Strain-rate behavior is accounted for in many material cards, e.g. in the well-known MAT_PIECEWISE_LINEAR_PLASTICITY. More complex material models can also handle differing compression and tension behavior as well as unloading by using damage functions. One of the recent development results is MAT_SAMP-1 by Du Bois, Kolling, Feucht and Haufe. This material model, developed specifically for polymers, builds its yield surface from different loading cases and includes a damage function for a better description of unloading. To make full use of the above-mentioned models, a large number of tests has to be carried out to determine the material parameters and to represent the thermoplastic characteristics in crashworthiness simulations. 4a impetus provides an efficient and reliable process, starting with realistic tests and ending with a validated material card. Recent developments of new test methods for 4a impetus are presented that satisfy the needs of complex material models as well as the expectations regarding simple and affordable testing. Limits and opportunities of different test methods and material card implementations are shown and compared with each other, with a particular focus on typical polymer behavior. Finally, the influence of fiber reinforcement is discussed and solutions for determining material parameters using micromechanical models (4a MicroMec) are shown.
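As a minimal illustration of the strain-rate dependence handled by cards such as MAT_PIECEWISE_LINear_PLASTICITY, the well-known Cowper-Symonds relation scales the static yield stress by a rate-dependent factor. The sketch below uses illustrative parameter values, not data from the paper:

```python
# Cowper-Symonds strain-rate scaling of the yield stress, as used by
# MAT_PIECEWISE_LINEAR_PLASTICITY (parameter values below are illustrative).
def cowper_symonds_yield(sigma_static, strain_rate, C, p):
    """Return the dynamic yield stress sigma_y * (1 + (eps_dot / C)**(1/p))."""
    return sigma_static * (1.0 + (strain_rate / C) ** (1.0 / p))

# Example: an assumed thermoplastic with static yield 30 MPa, C = 100 1/s, p = 4.
sigma_dyn = cowper_symonds_yield(30.0, strain_rate=100.0, C=100.0, p=4.0)
# At eps_dot == C the scale factor is exactly 2, so sigma_dyn == 60 MPa.
```

The rate sensitivity is controlled by the two fitted constants C and p; the more tests are available at different rates, the better constrained this fit becomes, which is the motivation for the systematic testing process described above.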
Mario Polanco-Loria, Torodd Berstad – SINTEF Materials and Chemistry / Structural Impact Laboratory (SIMLab), Arild Holm Clausen, Odd Sture Hopperstad – Department of Structural Engineering, NTNU/Structural Impact Laboratory (SIMLab)
This paper presents a hyperelastic-viscoplastic constitutive model for thermoplastics. It is partly based on a model proposed by Boyce et al. The model involves a hyperelastic-viscoplastic response due to intermolecular resistance and an entropic hyperelastic response due to re-orientation of molecular chains. A Neo-Hookean material model is selected for describing large elastic deformations. Moreover, the Raghava plastic yield surface is introduced to capture the pressure-sensitivity behaviour, and a non-associative viscoplastic flow potential is assumed for volumetric plastic strain control. The strain-rate effects are formulated in a format well suited for structural applications. Finally, the intramolecular stiffness is represented with Anand's stress-stretch relation. The model is developed within a framework for finite elastic and plastic strains, using a multiplicative decomposition of the deformation gradient. It is implemented as a user-defined model in LS-DYNA. The material model requires 10 parameters which are easy to identify from true stress-strain curves obtained from uniaxial tension and compression tests. In this paper, the parameters are determined from experimental tests on a high-density polyethylene (PEHD) material. Subsequently, the model is employed in numerical simulations of the uniaxial tension test and a quasi-static test on a centrally loaded plate. The numerical model gives satisfactory predictions when compared to the observed experimental behaviour.
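In the standard notation for such models (a sketch of the commonly cited relations, not equations reproduced from the paper), the kinematic split and the Raghava criterion can be written as:

```latex
% Multiplicative decomposition of the deformation gradient into
% elastic and plastic parts:
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}
% Raghava pressure-sensitive yield criterion, with I_1 the first stress
% invariant, J_2 the second deviatoric stress invariant, and \sigma_t,
% \sigma_c the tensile and compressive yield stresses:
(\sigma_{c}-\sigma_{t})\,I_{1} + 3 J_{2} = \sigma_{t}\,\sigma_{c}
```

For equal tensile and compressive yield stresses the pressure term vanishes and the criterion reduces to the von Mises surface, which is why this form is attractive for polymers, whose compressive yield stress typically exceeds the tensile one.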
Todd P. Slavik – Livermore Software Technology Corporation
A coupling method recently implemented in LS-DYNA® allows empirical explosive blast loads to be applied to air domains treated with the multi-material arbitrary Lagrangian-Eulerian (ALE) formulation. Previously, when simulating structures subjected to blast loads, two methods of analysis were available: a purely Lagrangian approach or one involving the ALE and Lagrangian formulations coupled with a fluid-structure interaction (FSI) algorithm. In the former, air blast pressure is computed with empirical equations and directly applied to Lagrangian elements of the structure. In the latter approach, the explosive as well as the air are explicitly modeled, and the blast wave propagating through the ALE air domain impinges on the Lagrangian structure through FSI. Since the purely Lagrangian approach avoids modeling the air between the explosive and the structure, a significant computational cost saving can be realized, especially when large stand-off distances are considered. The shortcoming of the empirical blast equations is their inability to account for focusing or shadowing of the blast waves due to their interaction with structures which may intervene between the explosive and the primary structure of interest. The new method presented here obviates modeling the explosive and the air leading up to the structure. Instead, only the air immediately surrounding the Lagrangian structures need be modeled with ALE, while effects of the far-field blast are applied to the outer face of that ALE air domain with the empirical blast equations; thus, focusing and shadowing effects can be accommodated while computational costs are kept to a minimum. Comparison of the efficiency and accuracy of this new method with other approaches shows that the ability of LS-DYNA® to model a variety of new blast scenarios has been greatly extended.
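Empirical air-blast loads of this kind are commonly built on the modified Friedlander waveform for the positive phase of the overpressure history. The Python sketch below shows the generic relation with illustrative parameters; it is not LS-DYNA's internal implementation:

```python
import math

def friedlander_overpressure(t, p_peak, t_d, b):
    """Modified Friedlander waveform: incident overpressure at time t after
    the blast-wave arrival.

    p(t) = p_peak * (1 - t/t_d) * exp(-b * t / t_d)
    p_peak : peak incident overpressure
    t_d    : positive-phase duration
    b      : decay coefficient (all values here are illustrative, not fits)."""
    if t < 0.0 or t > t_d:
        return 0.0  # outside the positive phase (negative phase neglected)
    return p_peak * (1.0 - t / t_d) * math.exp(-b * t / t_d)

# At t = 0 the full peak acts; by t = t_d the positive phase has decayed to zero.
p0 = friedlander_overpressure(0.0, p_peak=500.0, t_d=2.0e-3, b=1.5)
p1 = friedlander_overpressure(2.0e-3, p_peak=500.0, t_d=2.0e-3, b=1.5)
```

In the coupled method described above, a pressure history of this form is applied to the outer face of the ALE air domain rather than directly to the structure, so that the subsequent focusing and shadowing develop inside the modeled air.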
Dr. Thomas Borrvall – Engineering Research Nordic AB
In an attempt to alleviate transverse shear locking in fully integrated hexahedral elements with poor aspect ratios, two new variants of solid element type 2 in LS-DYNA have been developed, implemented and tested on some critical problems. The approach is based on modifying the Jacobian matrix in such a way that the spurious stiffness is reduced without affecting the true physical behavior of the element. The method is to some extent justified by a theoretical motivation, but above all it is shown to be of practical use through several illustrative examples. The two new solid elements are termed types -1 and -2; the latter is more rigorous but incurs a higher computational expense, whereas solid element type -1 has an efficiency only slightly worse than that of solid element type 2 in explicit analyses. However, all three element types are in that sense comparable for large-scale implicit analyses.
H. Ouyang, Tim Palmer, Q. He – Engineering Technology Associates, Inc.
Finite element modeling tools have undergone a transformation in recent years. These tools have been made easier to use, configurable for vertical applications and able to interact with outside applications. Such software must not only create models efficiently, but is also expected to interact with other software tools, be configurable and anticipate the future needs of users by extending the user's ability to assimilate models and other data efficiently into an LS-DYNA environment. Building on the heritage of a stable, well-featured finite element modeling software, ETA has developed a new software platform for finite element modeling that meets the challenges of today's users, both from an infrastructure and a data management standpoint. It offers users an opportunity to create, share and manage models using standardized interfaces, scripting tools and both standardized and user-defined processes. This paper presents the opportunities that this new platform offers users and the future of modeling tools that empower users of optimization and synchronous design tools.
Abed Alaswad, Abdul Ghani Olabi – Dublin City University
Tube hydroforming is an unconventional metal forming process in which high fluid pressure and axial feed are used to deform a tube blank into the desired shape. Bi-layered tube hydroforming is suitable for producing bi-layered joints which can be used in special applications such as aerospace, oil production, and nuclear power plants. In this work a finite element study was performed using ANSYS LS-DYNA to investigate the effect of geometrical factors (tube length, initial thickness, and corner radius) on bi-layered tube hydroforming of an X-branch. Bulge height, von Mises stresses, and thickness distribution were studied for the hydroformed part. Suggestions were made to increase the formability of the process by adjusting the geometrical factors. Optimization was performed to obtain the minimum thickness reduction for a specific design.
Eng. Edoardo Francesconi, Prof. Marco Anghileri – Politecnico di Milano
Emergency water landings are likely to have tragic consequences for helicopters. Most of the safety devices developed to enhance helicopter crashworthiness have been designed with reference to ground impacts, and they might not be effective in the case of water landings. At LAST, the crash laboratory of Politecnico di Milano, water impact drop tests were carried out to deepen the knowledge of the event dynamics and to collect reliable data to validate numerical models. The water impact behaviour of skin panels made of aluminium alloy was investigated, in particular the panel failure due to water impact pressure. Drop tests with several drop heights and different impacting masses were performed, measuring the accelerations of the test article. In the second part of the research, the tests were numerically reproduced adopting ALE and SPH approaches to model the fluid region. The numerical results were compared both with one another and with the experimental tests in terms of impact dynamics and acquired data. As a result, a satisfactory correlation was achieved and guidelines for modelling fluid regions with ALE and SPH approaches were drawn.
Dr. Victor Apanovitch, Stefan Huhn – Forming Technologies Inc.
With a steady increase in computational power and decrease in hardware costs during the last few years, incremental process simulation in the area of sheet metal forming is considered a state-of-the-art tool during the development process at OEMs and many, mainly larger, suppliers. However, the existing pre-processing tools do not support a template-based analysis setup that removes the need for highly skilled personnel to set up an analysis and would thereby open the door to LS-DYNA based stamping simulation for smaller companies that do not have the skill set and manpower available to do the analysis setup using existing tools. The presented solution, FormingSuite FastIncremental, is an analysis environment that provides the end user with a dramatically simplified setup process for incremental stamping analysis using the LS-DYNA solver. This is achieved through the automation of tool definition/extraction and an automated setup for the blank on a curved binder, combined with other techniques to enable fast solving times. To ensure an optimized solution time, a combination of inverse and incremental finite element tools is used. This combination can lead to enormous time savings in blank development and positioning, tool definition and binder wrap calculation, as well as in simulation run-time, while reliable results are produced. The user interface is CAD-like and geared towards an average industry practitioner; the definition of complete forming processes as well as the application of boundary conditions is mapped onto the first form geometry, as is commonly done on A-layouts in the industry. Associative and regenerative mechanisms ensure that changes made to the first form geometry or to forming conditions are automatically propagated throughout the complete analysis process, ensuring that geometry, forming conditions and results are always in sync.
This also means that in conjunction with an automated tool generation, the presented tool enables an automated batch mode processing for pre-defined part groups without any user interaction other than evaluating the calculated results.
Stephan Marzi, Olaf Hesebeck, Markus Brede – Fraunhofer IFAM, Felix Kleiner – Henkel AG & Co. KGaA
Presently, there are various cohesive zone models implemented in LS-DYNA. The simplest one consists of a bi-linear traction-separation law in both modes I and II. Further models allow more complicated shapes of the traction-separation law, such as the material model of Tvergaard and Hutchinson or the General Cohesive Zone Model. However, none of these implemented models considers rate-dependency or effects of plasticity. Crash-optimized structural adhesives used in automotive structures, such as Henkel Terokal 5077, often show a rate-dependent elastic-plastic material behaviour. An extended mixed-mode cohesive zone model is proposed in this paper. The model considers the effects of rate-dependency and plasticity, and is therefore able to predict the failure of adhesively bonded joints more precisely than the common models. The material parameters describing the rate-dependency of yield strengths or critical energy release rates can be identified directly by (fracture) mechanical tests. The new model is validated by simulations of single lap-shear, T-peel, End-Loaded Shear Joint (ELSJ) and Tapered Double Cantilever Beam (TDCB) tests. A comparison of numerical and experimental results shows the benefits and the limitations of the new model, which will be available in one of the next versions of LS-DYNA. Its official name will be MAT_COHESIVE_MIXED_MODE_ELASTOPLASTIC_RATEDEPENDENT, or in short MAT_240. The tests were performed at velocities ranging over several orders of magnitude. The results, which depend strongly on the test velocity, are predicted well by the new model. Further advantages are seen when simulating specimen unloading during a TDCB test. The irreversible displacement after unloading, which is caused by the adhesive's plasticity, is also obtained in simulations using the new model.
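The simplest implemented model mentioned above, the bi-linear traction-separation law, can be sketched as follows. This is a generic single-mode version with made-up parameters, not the new rate-dependent MAT_240:

```python
def bilinear_traction(delta, delta_0, delta_f, T_max):
    """Bilinear traction-separation law (mode I or II): linear elastic up to
    the point (delta_0, T_max), then linear softening down to zero traction
    at the failure separation delta_f. Parameters are illustrative."""
    if delta <= 0.0:
        return 0.0
    if delta < delta_0:
        return T_max * delta / delta_0                          # elastic branch
    if delta < delta_f:
        return T_max * (delta_f - delta) / (delta_f - delta_0)  # softening
    return 0.0                                                  # fully failed

# The area under the full triangle equals the critical energy release rate:
# G_c = 0.5 * T_max * delta_f.
```

The extended model of this paper goes beyond this shape by making the yield strengths and critical energy release rates rate-dependent and by adding a plastic (irreversible) contribution to the separation, which is what allows it to reproduce the unloading behaviour observed in the TDCB tests.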
Finally, a side-impact test on a floor pan is simulated, using the new model to predict the failure of the adhesive bond lines connecting a cross beam to the structure. The crash tests were performed by Adam Opel GmbH. First simulations of such impact tests, using MAT_138 to model the adhesive layer, were already presented at the recent German LS-DYNA Forum in Bamberg. The new results obtained with the elastic-plastic, rate-dependent MAT_240 show good agreement with the experimentally observed behaviour. Thus, the model has been successfully employed in the crash simulation of a large, bonded vehicle structure.
M. Clarke – Continental Tool and Die, J. He – Engineering Technology Associates, Inc., X. Zhu – LSTC
Springback is every tool designer's nightmare. A tool designer can make a die adjustable for some areas that need to be over-hit, but most parts spring back over their entire surface, in which case the entire part needs adjustment. Almost every time, the die requires a re-cut/re-work to bring the part into spec. This re-work and re-cut is a time-consuming, trial-and-error process that takes shop resources which are typically not available near the end of a build. This presentation illustrates a method which uses DYNAFORM/LS-DYNA as the analysis tool and, combined with the measurement procedure, provides a complete and effective process for springback compensation. This procedure saves hours of measurement and modelling time. In many cases, a single re-cut of the die is all that is required to achieve the correct compensation, even on the most difficult parts.
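Compensation workflows of this kind are typically built on a displacement-adjustment iteration: the predicted springback deviation is subtracted from the die surface and the simulation repeats. The sketch below is a generic scheme with a toy linear springback model, not the specific DYNAFORM/LS-DYNA algorithm:

```python
def compensate_die(nominal, springback_sim, alpha=1.0, iterations=5):
    """Iterative springback compensation by the displacement-adjustment method.

    nominal        : target part surface (a list of coordinates; 1-D for brevity)
    springback_sim : function mapping a die surface to the sprung-back part
    alpha          : compensation factor (1.0 = full deviation subtracted)
    The die is morphed opposite to the predicted springback deviation."""
    die = list(nominal)
    for _ in range(iterations):
        part = springback_sim(die)
        # subtract the deviation (part - nominal) from the die, scaled by alpha
        die = [d - alpha * (p - n) for d, p, n in zip(die, part, nominal)]
    return die

# Toy check with an assumed linear springback model: the part opens up by 20 %.
sprung = lambda die: [x * 1.2 for x in die]
die = compensate_die([10.0, 20.0], sprung)
# After a few iterations, forming with the compensated die yields a part
# that is very close to the nominal shape.
```

In practice the "simulation" step is an LS-DYNA forming-plus-springback run (or a measurement of a tryout part), which is why a procedure that converges in a single re-cut saves so much shop time.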
Prof. Marco Anghileri, Dr. Luigi-Maria L. Castelletti, Ing. Dario Molinelli, Ing. Federico Motta – Politecnico di Milano
Bird strike is a serious threat to flight safety which causes remarkable losses every year. Even though modern aircraft are certified to a given level of bird impact resistance, structures designed to carry aerodynamic loads, such as a propeller spinner, may still collapse after a bird strike. In general, the collapse of the spinner is not a concern as long as the fly-home capability is not compromised. In this paper, a strategy to design a bird-proof spinner is introduced and its effectiveness evaluated by means of LS-DYNA. An SPH model of the bird was initially developed and validated. Then the impact of the bird onto a composite spinner was investigated. In particular, to capture its complex failure mechanism, the dynamic behaviour of the composite material used in aircraft construction was validated against specific dynamic tests. The influence of the spinner motion was also investigated and the differences between motionless and revolving spinners were pointed out. Improvements to the design of the reference spinner, based on the idea of deflecting the bird instead of bagging the bird, were developed and their performance numerically evaluated. In view of the results obtained, it was concluded that composite materials and rotational motion can be exploited to design a bird-proof spinner. Furthermore, it was observed that increasing the thickness of a spinner not only conflicts with the weight constraints typical of aircraft construction, but is also ineffective.
Dr. C. Cleve Ashcraft, Roger G. Grimes, and Dr. Robert F. Lucas – Livermore Software Technology Corporation
LS-DYNA models for Implicit Mechanics are getting larger and more complex. We continually see models where the linear algebra problems in Implicit Mechanics have 10M rows, and we know of at least one that is nearly 40M rows. We expect users to want to routinely solve problems with 30M rows very soon. It is these very large linear algebra problems that distinguish the computer requirements for Implicit Mechanics. This paper presents a study of the performance of the MPP implementation of Implicit Mechanics in LS-DYNA, examining issues such as performance, speed-up, and requirements for computer configuration.
Gaurav Nilakantan, Michael Keefe, John W. Gillespie Jr. – University of Delaware, Travis A. Bogetti, Rob Adkinson – US Army Research Laboratory
High strength 2D fabrics and 3D textile composites made of materials such as Kevlar™, Vectran™, Zylon™, and S2-Glass® find applications in protective systems such as personnel armor, spall liners, and turbine fragment containment. Various parameters can significantly affect the response of these fabrics under high rate impact, including yarn/tow geometry (cross section), yarn/tow material (modulus, strength), and architecture (undulations and span). These, however, are only a few among many other parameters, such as projectile characteristics, boundary conditions, number of layers and orientations, weaving degradations, and so forth. Many of these parameters are inter-related, which unfortunately makes a comprehensive study very complex. Therefore, a set of key parameters has been identified for an initial exploratory numerical investigation. These include, on the material front, the yarn/tow axial modulus, strength and frictional coefficient, and on the architectural front, the yarn/tow cross-section shape, size, span, and angle of inclination of through-thickness stitching or Z-tows. This study provides interesting initial insight into the role of through-thickness tows in the overall impact resistance and energy dissipation capabilities for which these high strength fabrics were designed. 3D fabrics with varying Z-tow architectures are compared against each other as well as against 2D fabrics without through-thickness stitching. A special in-house preprocessor, DYNAFAB, is used to automatically generate the entire textile composite mesh. The user inputs basic parameters describing the desired yarn/tow geometry, architecture, and mesh density. The output is an LS-DYNA keyword input file ready to use in the simulation. The geometry and undulations in the FE model closely represent actual micrographs of the textile composite, leading to a realistic representation of the architecture.
G. Oberhofer, H. Gese – MATFEM Partnerschaft Dr. Gese & Oberhofer, A. Bach, M. Franzen, H. Lanzerath – Ford Research & Advanced Engineering Europe
Today the automotive industry is faced with the demand to build light, fuel-efficient vehicles while optimizing their crashworthiness and stiffness. A wide variety of new metallic and polymeric materials have been introduced to meet these increased requirements. Numerical analysis can significantly support this process if the analysis is truly predictive. Within the numerical model, a correct characterization of the material behaviour, including elasto-viscoplastic behaviour and failure, is essential. The particular behaviour of each material group must be covered by the material model. The user material model MF GenYld+CrachFEM allows for a modular combination of phenomenological models (yield locus, strain hardening, damage evolution, criteria for fracture initiation) to give an adequate representation of technical materials. This material model can be linked to LS-DYNA when using the explicit-dynamic time integration scheme. This paper gives an overview of the material characterization of ultra high strength steels (with a focus on failure prediction), non-reinforced polymers (with a focus on anisotropic hardening of polymers), and structural foams (with a focus on compressibility and stress-dependent damage evolution) with respect to crash simulation. It is shown that a comprehensive material model, including damage and failure behaviour, enables a predictive simulation without iterative calibration of material parameters. A testing programme was first carried out for each material group in order to allow a fitting of the material model parameters. In a second step, different component tests were carried out as part of a systematic procedure to validate the predictions of the crash behaviour with LS-DYNA and the user material MF GenYld+CrachFEM for each material group.
Tushar Goel, Willem Roux, Nielen Stander – Livermore Software Technology Corporation
Topology optimization is a very powerful tool to develop new concepts and has been widely used in engineering problems involving static loading conditions. However, there has been relatively little work on topology optimization of industrial-size non-linear dynamic systems. The main issues are non-linear interactions among the material properties, contacts between parts, large strain rates, transient behavior, etc. The hybrid cellular automata (HCA) method, which combines cellular automata theory with the fully loaded design concept, has been demonstrated to be effective in generating new concept designs and is used here as the core algorithm to optimize the topology. This method is implemented in the LS-DYNA framework and will be available shortly. The method has shown encouraging results in solving many engineering problems. In this paper, the details of the methodology and a few engineering examples are provided to demonstrate some capabilities of the code. The main problem solved using the proposed methodology is the development of an optimal topology for a one-million-element box-shaped design domain subject to impact.
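The core HCA update can be sketched as a local rule that drives each element's relative density toward a uniform internal-energy-density target, so that fully loaded material is retained and under-loaded material is removed. This is a simplified, generic version of such a rule, not the LS-DYNA implementation itself:

```python
def hca_update(densities, energy_densities, target, step=0.1,
               rho_min=0.05, rho_max=1.0):
    """One hybrid-cellular-automata style step (illustrative proportional rule):
    elements whose internal energy density exceeds the target gain material,
    while under-loaded elements lose it, within the bounds [rho_min, rho_max]."""
    new = []
    for rho, u in zip(densities, energy_densities):
        rho += step * (u - target) / target   # local proportional update
        new.append(min(rho_max, max(rho_min, rho)))
    return new

# Element 0 is loaded above the target and is densified; element 1 is thinned.
rho = hca_update([0.5, 0.5], energy_densities=[2.0, 0.5], target=1.0)
```

In the full method, the energy densities come from a non-linear dynamic LS-DYNA analysis of the current design, and neighbour averaging over the cellular-automata lattice regularizes the field before the update, which is what makes the approach workable for the million-element impact problem mentioned above.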
Tushar Goel, Nielen Stander – Livermore Software Technology Corporation
The efficient search for globally optimal solutions is an important contemporary subject. Different optimization methods tackle the search in different ways. Gradient-based methods are among the fastest optimization methods, but the final solution depends on the starting point; a global search with these methods is carried out by providing many starting points. Other optimization methods, such as evolutionary algorithms that mimic natural processes like evolution, and simulated annealing, which emulates the metal cooling process of annealing, can find the global optimum but are criticized for their high computational expense. The adaptive simulated annealing (ASA) algorithm has been proposed as an efficient global optimizer and has been implemented in LS-OPT. A few analytical examples and metamodel-based engineering optimization examples are used to demonstrate the efficiency of global optimization using ASA. The optimization results are also compared with the existing LFOPC and genetic algorithm optimization methods.
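Adaptive simulated annealing gains its efficiency from a very fast, dimension-scaled temperature schedule of the form T_k = T_0 exp(-c k^(1/D)). The toy 1-D sketch below illustrates the idea with illustrative parameters; it is not the LS-OPT implementation:

```python
import math
import random

def adaptive_sim_annealing(f, x0, lo, hi, t0=1.0, c=2.0, iters=2000, seed=0):
    """Minimize f on [lo, hi] with simulated annealing using an Ingber-style
    fast-decaying temperature T_k = t0 * exp(-c * k**(1/D)), here with D = 1.
    A toy sketch; the best point seen is tracked and returned."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, iters + 1):
        temp = t0 * math.exp(-c * k)          # fast-decaying schedule (D = 1)
        step = max(temp, 5e-3) * (hi - lo)    # step size shrinks with temperature
        y = min(hi, max(lo, x + rng.gauss(0.0, step)))
        fy = f(y)
        # Metropolis rule: always accept improvements, occasionally accept worse
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(temp, 1e-12)):
            x, fx = y, fy
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Seeded toy run on a quadratic with its minimum at x = 3:
x_opt, f_opt = adaptive_sim_annealing(lambda x: (x - 3.0) ** 2, 0.0, -10.0, 10.0)
```

The early high-temperature iterations allow uphill moves that escape local basins; as the temperature collapses, the search becomes an ever-finer local descent, which is the trade-off that the paper's comparison with LFOPC and the genetic algorithm examines.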
Knut Großmann, Hajo Wiemer, Andrè Hardtmann, Lars Penter, Sebastian Kriechenbauer – TU Dresden
Nowadays, despite powerful simulation programs, the tool design process still involves manual and non-reproducible work. In particular, manual die spotting depends largely on the worker's experience and consumes a lot of time. A large potential for reducing time and costs is seen in decreasing the die maturing effort. This paper introduces an approach to obtain deep drawing tools from FE simulation with LS-DYNA which need less additional manual maturing before good parts can be manufactured. To this end, the current tool design process was analyzed, and it was found that failing to properly account for elastic tool and press properties in FE simulations is one of the causes of additional die spotting effort. Hence, a methodology was developed to compensate for the effects of these elastic properties. Depending on their intensity, the aforementioned machine and tool properties are included in the FE model. Based on former research work at the IWM, the effects of elastic deformations and dislocations of the die surface on the final shape of the part are calculated. From the calculated deformations, a transformation matrix is derived and a new die surface is obtained after a few iterations. The new die surface has the same shape under load as the initial die surface without load. The new method was tested on an experimental set-up which allowed an excessive deformation of the die under load. This experiment does not reflect reality but serves to demonstrate the general compensation approach. As expected, the simulation and experiment show a massive impact of the die deflection on the draw-in of the manufactured part. The die deformation affects the distribution of the blankholder force on the part: a higher pressure was found at the die corners and a lower pressure in the centre.
By means of the compensation method, the die surface was adjusted so that the die surface under load matches the initial surface without deformations. The experiments show that the final shape of the part drawn with the compensated die is very close to the shape predicted without calculating the die deformation.
F. Sautter, H. Hogenmüller – Dr.-Ing. h.c. F. Porsche
The Porsche Panamera brings together for the first time the virtues of a sports car with Porsche's own interpretation of a classical Gran Turismo. This new segment is characterised by the balance between sportiness on the one hand and comfort, luxury and long-range touring characteristics on the other. The fact that the development was entirely new, with no previous model available as a basis for vehicle parameters and characteristics, created a significant challenge. To meet the requirements of a reliable, target-oriented vehicle development, major emphasis was placed on virtual development tools. Advanced simulation methods found their application in the early concept definition and were integrated into the complete vehicle development process. The CAE tools and methods were also developed further during the project to meet specific project needs. The following paper demonstrates these points using the example of design for passive safety. The paper describes the new Digital Product Development Process, with its centrally managed multi-disciplinary Digital Prototypes, which was introduced for the Panamera project. Furthermore, a selection of the CAE tools used in the development of the Panamera and their evolution are discussed. The main focus of the CAE tools was to accelerate and qualitatively improve the product development process.
Z. Ozdemir – Bogazici University / Université de Lille, M. Souli – Université de Lille, Y. Fahjan – Gebze Institute of Technology
Estimating the potential risk of tank failure during an earthquake is very difficult, since the liquid-tank system possesses many different nonlinear behaviour mechanisms which may be triggered simultaneously or separately depending on the characteristics of the earthquake, the properties of the contained liquid, the fluid depth, the dimensions of the tank, the roof type, the material properties, the supporting conditions and the stiffness of the underlying soil medium. These nonlinear behaviour mechanisms can emerge in the form of elephant-foot and diamond-shape buckling of the tank wall, rupture at the junction between the tank wall and base, buckling at the top of the tank and roof, settlement of the tank support system and foundation, and large-amplitude deformations of the base plate. For an unanchored tank, in addition to these mechanisms, uplift of the tank base, sliding of the tank, and successive contact and separation between the base plate and foundation can be observed when the tank is subjected to seismic loading. The analysis tool used to quantify the tank behaviour has to take into account the effects of all the aforementioned factors. Since LS-DYNA is capable of handling the complexities associated with the nonlinear transient seismic response of unanchored tanks, it is utilized in this study. The ALE technique and the contact algorithms of LS-DYNA are used to model the coupling of tank and fluid and the interaction between the tank base and soil, respectively. The results are compared with the provisions given in the tank seismic design codes used in current practice.
Leonard E Schwer – Schwer Engineering & Consulting Services
The focus of the present work is an assessment of a relatively new class of numerical methods, referred to as meshfree methods, that offer analysts an alternative technique for simulating this class of ballistic problems without a priori trajectory knowledge and without resorting to ad hoc criteria. The assessment is made by comparing the projectile residual speeds provided by the various techniques when used to simulate a ballistic impact experiment. The techniques compared are the meshfree method known as Smoothed Particle Hydrodynamics, a Multi-Material Arbitrary Lagrangian-Eulerian (MM-ALE) technique, and a Lagrangian technique with material erosion. Such comparisons inherently have aspects of an apples-to-oranges-to-pears comparison, but an effort has been made to minimize the numerous ancillary aspects of the different simulations and focus on the capability of the techniques. To minimize unintended differences in the simulations, the following three key aspects remain constant: 1. Only one software package (code) is used; 2. The same constitutive model is used; 3. The models were constructed by one analyst with a similar level of experience in the three modeling techniques. Even with these considerable constraints on the simulation comparisons, the results are clearly subject to the analyst's knowledge and skill in applying the various analysis techniques to the impact simulation. Thus the reader should not assess the merits of these techniques on the provided 'answers,' but should instead focus on the relative merits of each technique and their applicability to simulations of interest.
Madhukar Chatiri – CADFEM GmbH, Thomas Güll – Adam Opel GmbH, Prof. Anton Matzenmiller – University of Kassel
One major component of fuel cell vehicles is the hydrogen storage system. A promising and nowadays widely used approach is to store hydrogen in wet-wound carbon fiber reinforced plastic (CFRP) vessels manufactured by the filament winding process with an operating pressure of up to 70 MPa (hereafter referred to as H2 vessels). Due to their inherent complexity and three-dimensional nature, capturing the behavior of such thick composite structures accurately in impact simulations requires an adequate representation of the composite plies. Modeling thick composite structures with two-dimensional elements produces inaccurate results in the transverse normal direction, so 3D modeling should be used; however, modeling each ply with one solid element leads to undesirably large models and is impractical for large structures. A representation of several plies in one solid element, with several such elements across the thickness, is therefore desired. Solid elements are also needed to represent the three-dimensional state of stress and an impact direction normal to the outer vessel surface. A new layered solid element formulation is implemented in LS-DYNA® Version 971 R4, allowing the definition of multiple integration points through the thickness in combination with arbitrary material orientation. This new element formulation is presented in this paper, along with patch simulation results and simulation results for thick composite structures such as hydrogen storage H2 vessels.
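Why several plies can reasonably share one solid element can be illustrated with a simple thickness-direction homogenization: plies act roughly in parallel for in-plane loading and in series through the thickness. The sketch below is an illustrative rule-of-mixtures estimate with assumed ply values, not the LS-DYNA layered-solid formulation:

```python
def layered_thickness_response(plies):
    """Homogenized stiffness estimate for a ply stack represented by one solid
    element with one integration point per ply group.

    plies: list of (thickness, E_inplane, E_transverse) tuples per ply group."""
    total_t = sum(t for t, _, _ in plies)
    # In-plane: plies act in parallel (Voigt, thickness-weighted average)
    e_inplane = sum(t * e_ip for t, e_ip, _ in plies) / total_t
    # Through-thickness: plies act in series (Reuss, harmonic average)
    e_transverse = total_t / sum(t / e_tt for t, _, e_tt in plies)
    return e_inplane, e_transverse

# Two assumed CFRP ply groups (moduli in GPa): stiff hoop layers and softer
# helical layers, with equal transverse ply moduli.
e_ip, e_tt = layered_thickness_response([(1.0, 140.0, 9.0), (1.0, 60.0, 9.0)])
# e_ip is the arithmetic mean (100.0); e_tt stays at the common ply value (9.0).
```

The layered element generalizes this idea by evaluating the actual ply material law, with its own orientation, at each through-thickness integration point instead of collapsing the stack into a single averaged modulus.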
Per-Anders Eggertsen – Chalmers University of Technology, Kjell Mattiasson – Chalmers University of Technology / Volvo Cars Safety Centre
The residual stresses in the blank after forming are the main cause of the subsequent springback in a sheet forming operation. The accuracy of the predicted springback in a Finite Element simulation of the forming operation is very much determined by the quality of the material modeling. Those parts of the workpiece which in particular contribute to the global springback have usually been subjected to a bending/unbending deformation mode, when the sheet material has slipped over a tool radius. It is thus of utmost importance that the material model can accurately describe the material response when it is subjected to such a deformation mode. This is considered by the so-called “hardening law” of the material model. In this context the terms “kinematic” or “mixed” hardening are frequently employed. There are numerous such hardening models described in the literature. Common to them all is that they involve material parameters which have to be determined from some kind of cyclic test. In theory, the simplest and most straightforward test is a tension/compression test of a sheet strip. In practice, however, such a test is very difficult to perform, due to the tendency of the strip to buckle in compression. In spite of these difficulties, some successful attempts to perform cyclic tension/compression tests have been reported in the literature. However, common to these tests has been that rather complicated test rigs have had to be designed and used in the experiments in order to prevent the sheet strip from buckling. Another method that has frequently been used for the determination of material hardening parameters is the three-point bending test. The advantage of this test is that it is simple to perform, and standard test equipment can be used. The disadvantage is that the material parameters have to be determined by some kind of inverse approach. The current authors have previously used this method successfully.
In the test the applied force and the corresponding displacement are recorded. The test has then been simulated by means of the Finite Element code LS-DYNA, and the material parameters have been determined by finding a best fit to the experimental force-displacement curve by means of the optimization code LS-OPT, based on a Response Surface Methodology. A problem, however, is that such simulations can be quite time consuming, since the same Finite Element model has to be analyzed numerous times. In the current paper an alternative numerical methodology is described, in which the Finite Element problem only has to be solved a limited number of times, thus considerably reducing the computational cost. In this new methodology a computed moment-curvature curve is fitted to an experimental one. A complicating factor, however, is that not all information needed to determine a moment-curvature relation is available directly from the experiment. Therefore, the problem has to be solved in two nested iteration loops, where the optimization loop is contained within an outer loop in which the FE-analysis is performed. It is demonstrated that the parameters determined by this new method agree excellently with the ones determined by the conventional method.
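The structure of the two nested loops can be sketched with a toy example. Everything below is an illustrative stand-in, not the authors’ code: a closed-form moment model replaces the real hardening law, the inner loop fits its parameter to a moment-curvature curve by least-squares grid search, and the outer loop stands in for the repeated FE analysis that refines that curve from the measured data.

```python
def moment_model(curvature, c):
    # toy moment-curvature relation M(kappa; c); c plays the role of a
    # hardening parameter to be identified
    return c * curvature / (1.0 + curvature)

def inner_fit(kappas, moments, candidates):
    """Inner optimization loop: least-squares grid search for c."""
    def sse(c):
        return sum((moment_model(k, c) - m) ** 2
                   for k, m in zip(kappas, moments))
    return min(candidates, key=sse)

def identify(kappas, moments_exp, n_outer=3):
    """Outer loop: each pass stands in for one FE analysis that updates
    the moment-curvature curve handed to the inner fit."""
    c = 1.0
    grid = [0.5 + 0.05 * i for i in range(40)]
    for _ in range(n_outer):
        moments_fe = [moment_model(k, c) for k in kappas]   # 'FE' stub
        # blend the computed curve toward the experimental one
        target = [0.5 * (a + b) for a, b in zip(moments_fe, moments_exp)]
        c = inner_fit(kappas, target, grid)
    return c
```

The point of the construction is that the expensive analysis runs only once per outer pass, while the cheap inner optimization may evaluate the model curve as often as it likes.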
A. Sean Duvall – AMEC
This report is an initial investigation into the effect of pre-stress in bolts during impact. The conclusions are not definitive but do indicate that for large strain analysis the effect of pre-stress in bolts is minimal. The effects are more noticeable for lateral impacts than for axial impacts. Further investigation is required to determine the effects of including friction and of other bolted configurations. This report is not an AMEC document and is not subject to AMEC procedures.
Mehrdad Asadi – Cellbond, Brian Walker – ARUP, Hassan Shirvani – Anglia Ruskin University
Cellbond and ARUP launched their advanced crash barrier models in 2006, and since then a continuous study has been carried out to identify customer requirements and review feedback. Existing barrier models are constructed using a solid element configuration in the honeycomb segments along with validated Modified_Honeycomb material cards. Due to a number of requests from car manufacturers for a shell-based honeycomb model in crash barriers, it was decided to investigate this application in detail using full-scale test data. This paper presents the methodology of creating the shell-based ODB and the comparison with the existing solid-based FE model. Frontal offset tests are carried out by a large number of test houses worldwide, according to the European regulation and to FMVSS, as well as by EuroNCAP, Australian NCAP, JNCAP and IIHS. In the frontal offset test, only one side of a vehicle’s front end hits the deformable barrier, which means that a more concentrated area of the vehicle’s structure must sustain the impact of the crash rather than the whole width of the vehicle. The Cellbond ODB barrier investigated here consists of two different sized aluminium honeycomb blocks in the main body and bumper, partially covered in aluminium skins. A number of static compressive tests were performed to characterise the honeycomb and adhesive materials. Adhesive properties were obtained using Climbing Drum, T-Peel, Tensile and Plate Shear test results. The barrier was subjected to four individual test conditions with different impactors and impact speeds.
Dr. Tobias Olsson – Engineering Research Nordic AB
This paper presents a new material model (*MAT_244) in LS-DYNA capable of simulating phase transformations during quenching and forming. Usually the blank is initially heated until it is fully austenitized and then continuously formed and cooled. As the temperature decreases, the austenite decomposes into different product phases. The amount of each phase depends not only on the mechanical history, but also on the cooling rate of the blank. A higher cooling rate increases the amount of the harder phases (bainite and martensite), whereas a slow process gives a higher content of ferrite and pearlite. This thermo-elastoplastic model is based on the isotropic von Mises yield criterion with an associated plastic flow rule. It includes both the decomposition of austenite into ferrite, pearlite, bainite and martensite, and transformation plasticity. The examples show that the model is well suited for hot stamping simulations, and it should be possible to simulate different steel compositions at different cooling rates to obtain a good prediction of the hardening process and the properties of the final product.
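For the diffusionless austenite-to-martensite part of the decomposition, models of this kind commonly use the Koistinen-Marburger relation; the sketch below uses illustrative constants, not *MAT_244 defaults.

```python
import math

def martensite_fraction(T, Ms=425.0, alpha=0.011):
    """Koistinen-Marburger estimate of the martensite volume fraction
    formed on quenching to temperature T (deg C) below the martensite
    start temperature Ms; alpha controls the transformation rate."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-alpha * (Ms - T))
```

The fraction is zero above Ms and approaches one for deep quenching, which reflects why rapid cooling, reaching low temperatures before the diffusive products can form, yields a higher martensite content.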
Dr. Florian Jurecka – FE-DESIGN GmbH
The use of multi-disciplinary optimisation methods (MDO) in the development process of complex automotive structures is often hindered by several problems. The resources required for very expensive simulations such as crash or 3D CFD analyses rapidly exceed the means available – especially when many input parameters, disciplines or load cases are involved. Furthermore, we have experienced that it can be difficult to assure stable runs of simulation processes over a longer period of time. As a result, ‘trivial’ problems such as missing licenses or an overload of network or hard disk resources can lead to a termination of the optimisation process. Moreover, an optimisation run based on different disciplines can only start once all disciplines involved have set up their respective simulation models. Even a simple change in only one affected discipline would necessitate restarting the optimisation run from scratch (with simulations for all load cases/disciplines to be redone). Here, metamodeling techniques can lead to a significant increase in efficiency, since all information on the system behaviour gained from former analyses can be reused, e.g. for optimisation runs or sensitivity analyses. In addition to this data storage functionality, the use of metamodels also decouples the occupation of computing resources from the actual use of the information. This means that idle CPU time can be used to collect more information on the product or system, leading to reduced computation times in the actual optimisation method. Problems in particular simulation runs do not automatically result in a termination of the MDO method, but can easily be repeated. Consequently, it is also possible to evaluate the different disciplines independently even when other disciplines cannot yet provide a final simulation model. All these advantages together result in a much more efficient usage of computation resources.
However, the complexity and diversity of metamodeling techniques often keep the potential user from these benefits. Typically, the choice between the different metamodel formulations is not easy to make. In this paper, an approach is presented which allows for an automated model selection and fitting process. This approach enables the user to employ metamodels without dealing with the complicated selection and fitting process. This task is undertaken by an optimisation algorithm which automatically generates a large variety of metamodels and assesses their respective applicability by means of statistics. As a result, the user gets the most suitable metamodel for each load case or discipline individually, along with important information about the accuracy of the approximation. The approach is illustrated by a typical example of a multi-disciplinary optimisation of automotive structures.
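The automated selection step can be mimicked in a few lines: fit each candidate metamodel to the same samples and keep the one with the lowest leave-one-out cross-validation error. The candidate models and data below are illustrative; the tool described in the paper covers a far wider model variety.

```python
def fit_mean(xs, ys):
    """Constant metamodel: predicts the sample mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Linear metamodel via ordinary least squares (1D)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + b * (x - mx)

def loo_error(xs, ys, fit):
    """Mean squared leave-one-out cross-validation error."""
    err = 0.0
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (model(xs[i]) - ys[i]) ** 2
    return err / len(xs)

def select_metamodel(xs, ys, candidates):
    """candidates: dict name -> fitting function; returns the best name."""
    return min(candidates, key=lambda name: loo_error(xs, ys, candidates[name]))
```

Because the held-out points are never seen by the fit, the cross-validation error is also a usable estimate of the approximation accuracy that the approach reports back to the user.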
Venkatapathi Tarigopula, Odd Sture Hopperstad, Magnus Langseth, Arild Holm Clausen – Norwegian University of Science and Technology (NTNU)
Dual-phase steels are being increasingly considered for application in vehicle structural crash components because of their combined attributes of high strength and good formability. Accurate prediction of the behaviour of such components, which are often formed parts, is necessary to reduce the cost of physical tests. The non-linear finite element method is an efficient and reliable tool for the design of new components. The reliability of finite element analyses depends on the accuracy of the constitutive and fracture models. To support the engineering applications of dual-phase steels in crash and forming events, both strain hardening and strain-rate hardening must be thoroughly modelled for general loading paths. In this paper, an elasto-viscoplastic phenomenological model is adopted to represent the material behaviour of dual-phase steels. The constitutive model is formulated in the framework of phenomenological continuum mechanics. The main ingredients of the model include a non-quadratic yield criterion, the associated flow rule and non-linear isotropic and kinematic hardening. The model is within the class of plasticity models proposed by Chaboche and by Lemaitre and Chaboche for application to monotonic, non-proportional and cyclic loading conditions. The constitutive model is also able to depict viscous characteristics of the material. The material was experimentally characterized under different loading conditions suited for crash and forming events. Figure 1 depicts the approach embraced in this research for identification of the material parameters of the utilized model from material tests. The conventional material tests provide data for the calibration of the model. Simple tension tests were used to characterize the elastic parameters, the yield surface and the isotropic hardening parameters. Non-proportional tension tests were used to identify the kinematic hardening parameters.
Additionally, viscous parameters were identified from the corresponding tension tests over a wide range of strain rates. The predictive capability of the adopted numerical model is assessed against the material behaviour at elevated rates of strain and the material behaviour under strain-path changes. In particular, phenomena like dynamic localisation and large plastic deformations were given due attention in assessing the validity of the numerical model. Moreover, the same model is validated for the crashworthiness performance of generic dual-phase steel components. The model provides results which are in good agreement with the experimental observations. In this work, the non-linear explicit FE code LS-DYNA was used for the numerical computations.
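The nonlinear mixed hardening ingredients can be sketched in one dimension: a Voce-type isotropic term that saturates at Q, and a single Armstrong-Frederick backstress of the kind used in the Chaboche family. The parameter values below are illustrative, not those identified in the paper.

```python
def flow_stress(eps_p_path, Q=150.0, b=10.0, C=2000.0, gamma=50.0):
    """Explicit Euler integration of the isotropic hardening R and the
    kinematic backstress X along a plastic strain path given as a list
    of signed strain increments; returns the final (R, X)."""
    R, X = 0.0, 0.0
    for deps in eps_p_path:
        dp = abs(deps)                 # accumulated plastic strain increment
        R += b * (Q - R) * dp          # Voce: R saturates towards Q
        X += C * deps - gamma * X * dp # Armstrong-Frederick backstress
    return R, X
```

Under monotonic straining the backstress saturates near C/gamma; on load reversal it decays and changes sign, which is what allows such a model to capture the Bauschinger-type response probed by the non-proportional tension tests.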
Y.N. Shmotin, P.V. Chupin, D.V. Gabov – NPO SATURN, А.А. Ryabov , V.I. Romanov , S.S. Kukanov – Sarov Engineering Center
Safety in accidental conditions is one of the important requirements an aviation jet engine must meet. It is well known that one of the most likely and most dangerous accidents is a bird strike on the engine in flight. This case is characterized by the high-speed impact of the bird on the rotating blades of the fan, causing large dynamic deformations of the blades and other elements, which may lead to disintegration of the structure. That is why numerical investigations of bird strike are very important and should be carried out during the design stage of a development process. The proposed paper presents some results of numerical simulations, obtained with the LS-DYNA code, of the dynamic deformations of fan blades loaded by bird impact. The numerical results are analysed and verified by comparison with experimental data.
Ø. Fyllingen – Bergen University College, O.S. Hopperstad, A.G. Hanssen, M. Langseth – Norwegian University of Science and Technology
In previously published literature, deviations in the mean crushing force and deformation pattern have been found between simulations with plane stress shell elements and experimental results for aluminium extrusions subjected to axial crushing. In the current study, simulations with solid and shell element models were carried out and compared to experimental results to study the influence of element type for this class of problems. The mean crushing forces in the simulations using shell elements with through-thickness stretch and solid elements were much closer to the experimental values than those using plane stress shell elements. Concerning the deformation pattern, the solid element simulation exhibited a folding pattern much closer to the experimental one than the simulation with plane stress shell elements. To validate the conclusions drawn here, simulations of profiles with other geometries should be performed and compared to experimental results.
Jens Philippeit, Zoran Petrovic – Siemens Product Lifecycle Management Software
Development processes for more complex new products with better performance necessarily require verification through digital simulation. Due to shortened development cycles, the time available to answer questions about reliable product characteristics is reduced, and an efficient change management has to be established in the process chain from CAD to CAE. Normally CAE processes start with CAD data from a PDM system. To start a CAE analysis it is necessary to easily filter different product configurations from the latest CAD data, automatically prepare the data for simulation (i.e. batch meshing) and pass this to the simulation processes. Depending on the type and number of simulations and the change rate of the CAD data, it will be shown what tools a simulation data management system can provide to efficiently guide the CAE process and shorten the cycles between CAD and CAE. Optimized data handling as well as integration of CAE tools are the key to synchronized and economic development processes. Making decisions on digital simulations mostly involves different simulation disciplines. Therefore digital optimizations have to run in a multi-disciplinary context, access the same data sources and lead to a common decision of all involved analyses. By establishing simulation data management this can be done very efficiently, keeping the data together and running the decision-making process as fast as possible. Tools like workflows and decision tables support this process.
M. Clarke, J.G. Broughton, A.R. Hutchinson – Oxford Brookes University, M. Buckley – Jaguar Land Rover
The analysis of adhesive bonded joints and structures relies on accurate materials data and mathematical models. In the present study the goal was to develop a method for simulating accurately, using LS-Dyna, the behaviour of aluminium structures bonded with a single part, heat curing epoxy adhesive. This required a structure with known boundary conditions and for which the substrates deformed in a predictable manner. A suitable specimen, formed from two folded aluminium tubes bonded into a T-shape, had been developed by Ford Research Center, Aachen. The MAT 169 material card was selected as a method that showed promise for use in the simulation of bonded joints. A test programme was developed to characterise the adhesive for use with the MAT 169 material card, using tests from the British Standards catalogue. These data were then used to analyse the T-shaped structure under quasi static loading. The results of tests and analysis of the T-shaped structures were used to assess the accuracy of the adhesive characterisation and suitability of the MAT 169 material card. A parametric study was then carried out to determine the robustness of the solutions. Once it had been established that a robust solution had been reached the results from the parametric study were used to develop an optimised set of input data for the adhesive.
Mr. Stan Posey, Dr. Bill Loewe – Panasas Inc., Dr. Paul Calleja – University of Cambridge
The parallel efficiency and simulation turn-around times of CAE software continue to be an important factor behind engineering and scientific decisions to develop models at higher fidelity. Most parallel LS-DYNA simulations use scalable Linux clusters for their demanding HPC requirements, but for certain classes of FEA models, data I/O can severely degrade overall scalability and limit CAE effectiveness. As LS-DYNA model sizes grow and the number of processing cores used for a single simulation is increased, it becomes critical for each thread on each core to perform I/O operations in parallel, rather than relying on the master compute thread to collect each I/O process in serial. This paper examines the scalability characteristics of LS-DYNA for implicit and implicit-explicit models on up to 256 processing cores. This joint study, conducted by the University of Cambridge and Panasas, used an HPC cluster environment that combines a 28 TFLOP Intel Xeon cluster with a Panasas shared parallel file system and storage. The motivation for the study was to quantify the performance benefits of parallel I/O in LS-DYNA for large-scale FEA simulations on a parallel file system vs. the performance of a serial NFS file system. The LS-DYNA models used for the study comprise cases that are relevant in size and physics features to current LSTC customer practice. The favourable results demonstrate that LS-DYNA with parallel I/O shows significant benefit for advanced implicit simulations that can be heavy in I/O relative to numerical operations. These performance benefits were shown to extend to a mix of concurrent LS-DYNA jobs that require concurrent data writes to a shared file system, which for an NFS-based file system would still bottleneck at its single data path for I/O. The paper also reviews CAE workflow benefits since, once an LS-DYNA simulation is completed, the same shared storage provides a platform for direct post-processing and visualization without the need for large file transfers.
Prof. Dr. Mariano Pernetti, Dr. Salvatore Scalera – AMET ITALY
The standards that fix guidelines for the execution of crash tests to assess the effectiveness of safety barriers in the USA and Europe define an experiment with a low-weight passenger car. The aim of such an investigation is to evaluate the risks for a vehicle’s occupants in case of impact against the tested device. The congruence of this approach with the philosophy of testing at “the practical worst condition” has been widely demonstrated in the literature. On the other hand, this kind of test is really expensive and many parameters are hard to control and measure. For these reasons, numerical analysis of vehicle collisions against safety barriers has become a convenient methodology that supports and complements testing, especially considering the continuous technological hardware/software progress. Besides, the possibility of controlling and evaluating each factor that influences full-scale crash tests makes such a methodology an important tool for parametric studies to assess the influence of different factors on crashworthiness. This research, carried out with a combined numerical-experimental approach, is intended to assess what happens during the collision of a lightweight passenger car against a steel bridge barrier with a containment energy level of 724 kJ. The work includes three parts. In the first, the fundamental steps of the modelling process are described along with the requirements needed to reproduce four different full-scale tests: the frontal and oblique collisions against a concrete wall and the impacts against two types of steel barrier with different containment energy levels (127 kJ and 724 kJ). The comparison between full-scale and FE simulation data concerns the time histories of the longitudinal and transversal acceleration of the vehicle’s CG, ASI, THIV, PHD, pitch and roll angles, the velocity variation in the vehicle direction and the residual displacements of the barrier.
The second part is aimed at clarifying what happens during the impact of the lightweight passenger car against a steel bridge barrier. The obtained results clearly show that the acceleration of the vehicle’s CG and the values of the impact severity indices are affected in a meaningful manner not only by the transversal kinetic energy but also by the impact angle. The last part defines a simple procedure to estimate the impact severity indices for a very wide range of impact conditions. The tests performed on the procedure show a very good agreement between estimated and calculated values.
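For reference, the ASI among the severity indices above is computed, per EN 1317, from 50 ms moving averages of the vehicle CG accelerations expressed in g and normalised by limit values of 12 g (longitudinal), 9 g (lateral) and 10 g (vertical):

```python
def asi(ax, ay, az):
    """Acceleration Severity Index from 50 ms moving-average CG
    accelerations ax, ay, az given in g (EN 1317 limit values)."""
    return ((ax / 12.0) ** 2 + (ay / 9.0) ** 2 + (az / 10.0) ** 2) ** 0.5
```

The reported ASI of a test is the maximum of this quantity over time; values up to 1.0 correspond to the lowest impact severity class.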
F. Previtali, M. Anghileri, L.-M. L. Castelletti, A. Milanese – Politecnico di Milano
The failure mechanism of common aeronautical structures is influenced by the crash behaviour of the riveted joints. Therefore, crashworthiness analyses of aeronautical structures require accurate models of the joints under crash conditions for a correct prediction of the crash behaviour of the structure. In this work, a method to create reliable FE models able to reproduce the behaviour of rivets under crash conditions is introduced. Using explicit FE codes, it is common practice to model rivets and bolts with rigid links or beams, and to adopt as a failure criterion the allowable force envelope obtained for a single rivet from tests. It is shown here that numerical simulations of tests carried out on a single rivet under different loading conditions can be used to characterise the crash behaviour of riveted joints in place of expensive and time-consuming test campaigns. A specific test device was built in order to apply multi-axial loads to a single rivet and perform tests to evaluate the behaviour of a rivet under different loading conditions: from pure shear to pure tension. Numerical simulations of the single rivet test were then carried out using LS-Dyna to reproduce the experimental tests and to validate the numerical model of the rivet. The rivet was discretised with eight-node solid elements and the piecewise linear plasticity material model was initially used. However, different constitutive laws were then used to characterise areas under either compressive or tensile loads. The whole loading process, from buckling to failure, was simulated. Numerical results and test data were compared, and it was observed that the numerical models are able to correctly represent the behaviour of a rivet after a tuning of the material parameters and can therefore be used to characterise a riveted joint. At this stage of the research, only quasi-static loading conditions were considered.
This assumption reduced the number of parameters that affect the calculations, thus simplifying the model set-up. Future work will investigate the effect of strain rate in order to reproduce crash conditions.
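A force envelope of the kind mentioned above is often written as an interaction between the tensile and shear loads on the rivet; the sketch below uses a quadratic form with illustrative allowables and exponents, not the values calibrated in this work.

```python
def rivet_fails(N, S, N_ult, S_ult, a=2.0, b=2.0):
    """True if the load point (N tension, S shear) lies on or outside the
    interaction envelope (N/N_ult)**a + (S/S_ult)**b = 1; compressive
    normal loads are not counted against the tensile allowable."""
    return (max(N, 0.0) / N_ult) ** a + (abs(S) / S_ult) ** b >= 1.0
```

In a crash model, a beam or rigid-link rivet is deleted once its force pair crosses this envelope, which is exactly the criterion the single-rivet tests and simulations serve to calibrate.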
M. Massenzio, S. Ronel – Université de Lyon, C. Goubel – Université de Lyon / Laboratoire INRETS Equipements de la Route (LIER SA), Lyon Saint Exupéry Aéroport, E. Di Pasquale – SIMTECH
The use of computational mechanics methods is now largely adopted in the field of roadside safety. They are certainly interesting in the context of product development. However, the application of these methods in the certification process raises a number of issues, addressed, among others, within the EU CEN TC226/WG1/TG1/CM-E, in which some of the authors participate. This paper presents some crash test results and their related simulations, and aims to cover a wide range of devices, differing both in their architecture and in the outcome of the crash tests carried out for certification. After a brief presentation of the failure modes observed, we discuss different validation criteria.
Prof. Dr. Uli Göhner – DYNAmore GmbH
The increasing power of GPUs has led to the intent to transfer computing load from CPUs to GPUs. A first example has been the porting of compute-intensive algorithms, e.g. ray-tracing algorithms, from the CPU to the GPU. Through the Compute Unified Device Architecture (CUDA), GPUs can also be used to increase computing speed for High Performance Computing applications. In this paper, different parallelization strategies for different processor architectures are presented. They are compared, and first experiences using GPUs for a collection of numerical applications are given.
Frieder Neukamm, Markus Feucht – Daimler AG, André Haufe – DYNAmore GmbH
Crashworthiness simulations using explicit Finite Element methods are a central part of the CAE process chain of car body development. Since crash tests of prototype cars at an early development stage are very expensive, a maximum in predictive performance of crash simulations can make a substantial contribution to a cost-efficient car development process. A central issue in ensuring this is an accurate prediction of crack formation in crashworthiness simulations. As the use of advanced high-strength materials in modern car body structures is increasing, crack formation is more likely to occur in such parts of the body-in-white. Typically, structural parts of a car body are manufactured by means of deep-drawing processes. Due to this, the local properties of these parts can be changed remarkably compared to the unprocessed material. In order to be able to accurately predict crack formation, the damage history, including local plastic strain and pre-damage, has to be considered. Since the use of forming simulations has become common practice for sheet metal manufacturing, a damage model suitable for both forming and crashworthiness simulations is presented in the following. Based on the well-known failure criterion of Johnson and Cook, a generalized formulation is proposed that can account for complex failure modes in modern high-strength materials. Numerical examples are presented to demonstrate the practical use of the damage model for the process chain of sheet metal manufacturing.
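The Johnson-Cook basis of such a criterion can be sketched as a triaxiality-dependent failure strain with linear damage accumulation over the load history (forming followed by crash). The constants below are illustrative, and the generalized formulation in the paper adds dependencies not shown here.

```python
import math

def jc_failure_strain(triax, D1=0.05, D2=3.44, D3=-2.12):
    """Equivalent plastic strain to failure as a function of stress
    triaxiality (the quasi-static, room-temperature part of the
    Johnson-Cook failure criterion; D1..D3 are illustrative)."""
    return D1 + D2 * math.exp(D3 * triax)

def accumulate_damage(history):
    """history: (delta_eps_p, triaxiality) pairs from forming and crash;
    the element is considered failed once the damage D reaches 1."""
    D = 0.0
    for deps, triax in history:
        D += deps / jc_failure_strain(triax)
    return D
```

Carrying the damage value D accumulated in the forming simulation into the crash simulation is what lets the pre-damage of deep-drawn parts reduce the remaining strain capacity in the crash load case.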
Klaus Wolf – Fraunhofer Institute SCAI, Germany, Dr.-Ing. Robert Schilling – Ford-Werke GmbH, Köln, Dr. Jörn Lütjens, Dr.-Ing. Michael Hunkel – IWT Bremen, Dr.-Ing. Thomas Wallmersperger – ISD Universität Stuttgart, Udo Jankowski – Tecosim GmbH, Dirk Sihling – GNS mbH, Klaus Wiegand, Albrecht Zöller – Daimler AG, Martin Heuse – Faurecia Autositze GmbH
In the automotive industry, there are increasing demands for weight reduction as well as increasingly strict safety requirements. These demands have motivated the use of locally optimized components. This study shows how the local rigidity of crash-relevant side rails made of multi-phase steels can be improved by local hardening, thus avoiding an increase of the cross section. At the same time, simulation process chains were completed and the results validated by experiments. A dedicated software tool for the coupling of a wide range of commercially available FEM software products was developed. Codes for metal forming, heat treatment and crash simulation can now be used in one serial workflow. One major aspect here was the transfer of tensor-valued quantities such as stress or strain states.
Manuel Roth – Pfeiffer Vacuum GmbH, Stefan Kolling – Giessen University of Applied Sciences
The dynamic behaviour of a roots vacuum booster with two rotors is presented. The dynamic response of the structure is investigated using an explicit analysis for the crash behaviour and an implicit analysis for the vibration behaviour. Typically the rotors run at 3000 rpm to 3600 rpm, but because of a desired rise in power density it is necessary to design rotors for operation at 6000 rpm. With increasing rotational velocity the dynamic loading and the inertial forces increase as well. To face this challenge, a rotor with less weight was developed. To draw conclusions about the impact behaviour, dynamic deformation and bearing reactions, a crash between the two rotors has been simulated. These results are compared with those of another rotor, which shows the potential for weight reduction. The rotor and shaft are made from cast iron. For the material model in the crash analysis, a comparison between Mat Piecewise Linear Plasticity and Mat Gurson JC has been carried out. To consider triaxiality, a von Mises yield locus is used together with the Johnson-Cook failure criterion in Mat Gurson JC, i.e. damage and its accumulation have been neglected. In an implicit eigenvalue analysis the eigenfrequencies are determined. Roots pumps are assembled in industrial installations, where it is necessary to know the occurring natural frequencies in order to avoid resonance vibrations.
S. Heimbs, F. Strobl, P. Middendorf, S. Gardner, B. Eddington, J. Key – EADS Innovation Works, Force India Formula One Limited
Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials – like the whole monocoque chassis – and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car’s front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfactory results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.
M. Langseth – Norwegian University of Science and Technology
Lightweight materials such as aluminium offer the automotive industry an opportunity to design and manufacture high-performance vehicles that are safe, energy-efficient and environmentally friendly, and much lighter than traditional designs. However, the introduction of these materials will challenge the automotive design engineers to explore and develop new solutions in design and production technology in order to fully realize the potential that can be gained in the interaction between these materials, product/structural design and the manufacturing process. Even though aluminium is an “old” material, it is relatively new as a load-carrying material in the automotive industry. This implies that material producers and parts suppliers have to develop new knowledge about these materials to gain an increased market share. In order to meet the future challenges with respect to the use of aluminium as a structural material in the automotive industry, product development today is increasingly carried out in virtual environments using computational mechanics. Even though great advances have been made in modelling, the designer must still use knowledge about the physical mechanisms controlling the product performance. The designer must also know what simplifications can be made in the modelling while still retaining sufficient reliability and accuracy. At all levels of modelling, experimental validation of the numerical models is required before the models are accepted. For the period 2007-2014 the SIMLab research group at NTNU has been designated by the Research Council of Norway as a Centre for Research-based Innovation (www.ntnu.no/simlab). One of the objectives of the centre is to provide the industrial partners with reliable and robust engineering models of aluminium to be used in crash analyses.
Thus, the present presentation will focus on some of the modelling activities carried out in the centre as well as the validation of these models, and will try to highlight some of the needs and challenges mentioned above regarding the use of aluminium in the automotive industry. In the introduction an overview of aluminium as a structural material will be given and the strong and weak points of the material will be defined. Examples will be given of typical mechanical properties that have to be taken into account in the developed models in order to obtain good and reliable predictions. Then the use of aluminium in the automotive industry will be discussed as an introduction to the models developed by the SIMLab group on thermoplastics, aluminium foams, self-piercing rivets, aluminium extrusions and plates, aluminium castings and magnesium. Finally, the developed models for aluminium and self-piercing rivets will be validated against component tests in the laboratory. For the self-piercing riveting activity an engineering and research strategy will be shown for developing a shell-based model in which process and component testing interact with process and component numerical simulations.
Dr. Dong-Zhi Sun, Dr. Florence Andrieux – Fraunhofer Institute for Mechanics of Materials, Dr. Markus Feucht – HPC X271
The local mechanical properties, e.g. flow stress and fracture strain, in an automotive component manufactured by deep drawing are inhomogeneous due to different local deformation degrees, which affect the component behaviour under crash loading. A reasonable approach for modelling the damage behaviour of a component produced by deep drawing is a coupling between forming simulation and crash simulation. The open questions are which material model (kinematic or isotropic hardening) and which damage model should be used for an integrated simulation. Since the loading type is mainly biaxial during deep drawing and uniaxial under crash, it should be investigated how the damage development is influenced by deformation history, including changes of stress state. In this work the influence of triaxiality and pre-deformation on the damage behaviour of a TRIP steel was characterized with different specimen tests, e.g. under shear, uniaxial and biaxial tension, and a damage model taking into account shear fracture and dimple rupture was developed. This damage model can also describe the influence of pre-deformation. Validation tests on an automobile component under near-realistic loading were performed and simulated with pre-strains and pre-damage mapped from a forming model to the crash model.
Bharat Chittepu, Matthias Hörmann, Ulrich Stelzmann – CADFEM GmbH, Harald Wels, Thomas Albrecht – KRONES AG
The increasing use of PET bottles has been and continues to be a dramatic growth story in the packaging industry. The adoption of PET bottles for soft drinks, juice drinks, water, food and other products continues to provide exciting packaging opportunities. Increasing use has meant increased demand and the need to save time in the process chain of the packaging industry. One area where a high-speed process is possible is labeling. At higher output rates in labeling technology, PET bottles are subjected to undesirable deformations, which in turn might result in bottle losses in the machine carrousel or in bad placement of the label. Information on this deformation should be available at the earliest possible stage of machine planning, which is where simulation comes into play. Before simulating such a high-speed labeling process, it is necessary to have a reliable filled PET bottle model that is justified for use in the simulation of the real process. The first step in this approach is to simulate the top load performance of an empty PET bottle and validate the simulation results against experimental results by comparing load and buckling deformation. Sensitivity studies are carried out with respect to material, geometry and finite element parameters to obtain an optimized parameter set which ensures a reliable model of the empty PET bottle. This model is then the basis for simulating the top load performance of a liquid-filled PET bottle. For the filled PET bottle the right modeling approach to account for the presence of liquid, i.e. water and its associated physics (inertia, compressibility and hydrostatic pressure), must be determined. Control Volume, Smoothed Particle Hydrodynamics and Arbitrary Lagrangian Eulerian approaches are discussed to highlight the benefits and drawbacks of each approach for accurately simulating the top load performance of a filled PET bottle.
Load-deformation curves and buckling shapes of the top load test are compared with simulation results to justify the use of a reliable filled PET bottle model. The third and final step is to simulate the high-speed labeling process by identifying the right approach to account for the machine kinematics and the inertia effects of the liquid. Added element mass and SPH approaches for accounting for the inertia of the liquid in combination with machine kinematics are investigated in order to identify the most accurate combination for bottle deformation in the labeling process.
Katharina Witowski, Martin Liebscher – DYNAmore GmbH, Tushar Goel – Livermore Software Technology Corporation
Optimization of engineering structures where multiple (more or less conflicting) objectives are simultaneously considered is becoming increasingly attractive in the automotive industry. Such problems usually involve a large number of design variables, and the objectives are subject to certain constraints. Unlike single-objective problems, there are many trade-off solutions. The most common approach of using a single aggregate objective function (AOF), though simple, is not appropriate in most cases because a) it requires a priori information, e.g. the weights associated with each objective in the weighted linear sum of objectives method, which might not be available; and b) this approach yields a single trade-off solution instead of all possible trade-off solutions. Multi-objective evolutionary algorithms (MOEA) seem to be the best choice at the moment to overcome these issues. A set of solutions (Pareto data) is obtained as a result, which reflects distinct trade-off solutions. An (optimal) decision needs to be taken to choose the most suitable trade-off among multiple conflicting objectives. Data mining and visualization techniques for high dimensional data provide helpful information to substantially augment the decision making (alternative design selection) in a multi-objective optimization environment. A graphical approach to visualize the Pareto frontier is an intuitive and suitable way to investigate the trade-off for three or fewer dimensions (objectives). However, it is not trivial to study relations in higher dimensions, hence many visualization methods have been proposed. The basic idea of these techniques is to reduce the dimensionality without losing the relevant information required to recognize and understand relations and characteristics of the high dimensional Pareto data.
Among the several developments in these fields, the Parallel Coordinates Plot (PCP), the Hyper-Radial Visualization (HRV), and the Self Organizing Maps (SOM) have been found to be the most promising. The parallel coordinates plot assigns one axis to each dimension, and the many dimensions are aligned in parallel. A data point is represented as a line connecting the different axes. The HRV is based on a radial calculation and transfers the multi-dimensional data to a two-dimensional data set by grouping the weighted objectives, which leads to a final solution with respect to the selected weights and the grouping. The designer incorporates his preferences by modifying the selection. The SOM algorithm projects the multi-dimensional Pareto data onto a two-dimensional map, whereby similar data is mapped to neighboring locations on the map. The lattices are color-coded to show the variation of the data on the map. The concepts of PCP, HRV and SOM are explained along with the various forms of visualization of Pareto data. All three approaches are investigated and their respective pros and cons identified using a crash-relevant shape optimization application executed with LS-OPT. An implementation in the data mining and visualization framework D-SPEX is also provided.
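The Pareto (non-dominated) set that all three visualization techniques operate on can be illustrated with a simple dominance filter. The snippet below is a generic sketch for minimization problems; it is not taken from LS-OPT or D-SPEX, and the sample designs are hypothetical:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.

    A point p dominates q if p is no worse than q in every objective
    (minimization) and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = False
        for q in points:
            if q is p:
                continue
            # q dominates p: q <= p everywhere and q < p somewhere
            if all(qi <= pi for qi, pi in zip(q, p)) and \
               any(qi < pi for qi, pi in zip(q, p)):
                dominated = True
                break
        if not dominated:
            front.append(p)
    return front

# Three designs with two conflicting objectives (e.g. mass vs. intrusion):
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0)]
print(pareto_front(designs))  # → [(1.0, 5.0), (2.0, 3.0)]
```

In a real workflow the hand-picked tuples would be replaced by the objective vectors returned by the MOEA run, and the resulting front is what PCP, HRV or SOM then visualize.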
B. Hochholdinger, P. Hora – ETH Zurich, H. Grass, A. Lipp – BMW AG
Due to the increasing number of body-in-white parts that are manufactured by hot forming of boron alloyed sheet metal (22MnB5), the demand for a virtual representation of this specific manufacturing process is evident. For a realistic simulation of hot stamping processes, accurate modeling of the flow stress as a function of strain, strain rate and temperature is essential. In recent years a large variety of empirical-analytical as well as physically based models for the yield stress have been proposed. Three existing models that have shown a good capability to represent the flow behavior of 22MnB5 in recent publications are presented and fitted to the experimental data. The underlying experimental data for the determination of the flow stress is obtained by stack compression tests, which were conducted in a high-speed deformation dilatometer. In a first step the model parameters are fitted to the experimental flow curves without considering the friction, which is inherently present in a compression test. Since the friction between die and specimen has a significant influence on the state of stress within the specimen, an inverse, simulation-based approach for the determination of the model parameters is employed. For this purpose a simple 2D FE model for each test configuration is set up. The resulting “friction-free” yield stress is up to 15% lower than the one obtained without considering friction. Regarding the models considered, the approach developed by Tong and Wahlen, which is based on the Zener-Hollomon parameter and a Hockett-Sherby type formulation, provides the best fit of the experimental data.
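The Hockett-Sherby saturation law underlying the Tong and Wahlen formulation can be fitted to measured flow curves with a standard least-squares routine. The sketch below uses synthetic data and hypothetical parameter values rather than the 22MnB5 measurements, and omits the Zener-Hollomon rate/temperature coupling:

```python
import numpy as np
from scipy.optimize import curve_fit

def hockett_sherby(eps, sig_sat, sig_i, a, p):
    """Hockett-Sherby saturation-type flow curve (isothermal form)."""
    return sig_sat - (sig_sat - sig_i) * np.exp(-a * eps**p)

# Synthetic "experimental" flow curve (hypothetical parameters, MPa):
eps = np.linspace(0.02, 0.6, 30)
true = (420.0, 180.0, 6.0, 0.9)
sigma = hockett_sherby(eps, *true)

# Least-squares fit starting from a rough initial guess:
popt, _ = curve_fit(hockett_sherby, eps, sigma, p0=(400.0, 150.0, 5.0, 1.0))
print(np.round(popt, 2))
```

In the inverse, simulation-based approach described above, the residual would instead be formed from the force-displacement response of the 2D FE model, so that die friction is accounted for in the fit.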
Mark Tyler-Street, Joke Luyten – TNO Defence
A research programme is being undertaken at TNO to investigate vulnerability reduction on warships. In this framework, studies have been performed regarding the structural damage due to both an internal missile explosion and close-in explosions. Due to the severity of the explosive loadings the structural deformation is considerable, and an accurate method to predict both the initiation and progression of material damage would be of significant value to assist in the design of more blast resistant structures. The ability to model material failure is available with many of the material models in the explicit finite element code LS-DYNA, although further development is required in order to correlate to observed experimental results. In the material models, the initiation of failure is typically defined by the uni-axial failure strain εf. This parameter is not an independent material constant, and the failure characteristics will vary depending upon the applied stress state (for example sensitivity to tri-axiality), the temperature and the rate of loading. For ship plate steel the manufacturing process of rolling may introduce anisotropic failure characteristics which differ in the rolling and transverse directions. The failure strain needs to be adjusted for coarser meshes, which are unable to represent local strain gradients. This paper describes some of the work that has been done to describe the failure properties of typical ship plate steel with the development of a user defined material model to predict and further understand the failure characteristics.
Shinya Hayashi – JSOL Corporation, Masahiro Awano, Isamu Nishimura – Mitsubishi Motors Corporation
A biofidelic flexible pedestrian legform impactor (Flex-PLI) has been developed by the Japan Automobile Manufacturers Association, Inc. (JAMA) and the Japan Automobile Research Institute (JARI). The Flex-PLI has good biofidelity as well as several knee ligament elongation measurement capabilities, three femur and four tibia bending moment measurement capabilities. For these reasons the Flex-PLI is likely to be used for a future pedestrian Global Technical Regulation. This paper introduces a finite element model of the Flex-PLI type GT for LS-DYNA and compares a full vehicle Flex-GT impact simulation with a test. A very accurate vehicle model is needed to predict Flex-PLI injuries; in this paper, a detailed and correlated vehicle model was used. The type GT is the 5th version of the Flex-PLI and has almost the same structure and performance as the final design type GTR. The Flex-PLI type GT LS-DYNA model was carefully created to ensure every important detail was included. Geometries, masses and material properties of all parts were reproduced from drawings and inspection of the real components. Connectivity and component interaction within the model was determined by thorough experiments. Accurate prediction of injury indices and kinematic behaviour was achieved by correlation to static and dynamic calibration tests. A fine mesh was used, but reasonable calculation cost was assured by imposing an analysis time step of 0.9 microseconds.
Mario Mongiardini, Malcolm H. Ray – Worcester Polytechnic Institute, Marco Anghileri – Politecnico di Milano
This paper describes the development of the Roadside Safety Verification and Validation Program (RSVVP), software that automatically assesses the similarities and differences between two curves. This program was developed to assist engineers and analysts in performing curve comparison during the verification and validation process of a numerical model. RSVVP was designed to automatically pre-process the two input curves to make them comparable. Also, in order to ensure as accurate a comparison as possible, several options are available for the pre-processing of the input curves before the comparison metrics are computed. Data can be filtered and synchronized, and any shift/drift effect can be removed. Once the signals have been pre-processed, the user can select to compute the values of one or more of the sixteen available shape-comparison metrics. Every operation, from the input of the curves and the selection of the pre-processing options to the final visualization of the results, is accessible through an easy and intuitive graphical user interface. The numerical results are automatically saved by the program in a convenient spreadsheet format and the graphs are saved as bitmap images for any further investigation. Simple examples using an analytical shape are presented to illustrate the characteristics of the metrics. Also, the comparison of the acceleration time histories of a full-scale test involving a small car and the corresponding LS-DYNA simulation is presented as an example of application of the metrics in the validation process of a numerical model.
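A representative member of this family of shape-comparison metrics is the Sprague & Geers magnitude-phase-comprehensive (MPC) metric. The sketch below is illustrative and may differ in detail from the RSVVP implementation; it assumes both curves are sampled at the same time instants:

```python
import numpy as np

def sprague_geers(m, c):
    """Sprague & Geers magnitude (M), phase (P) and comprehensive (C) errors.

    m: measured (test) curve, c: computed (simulation) curve, sampled at
    identical time instants (so the time increment cancels in the ratios).
    """
    m, c = np.asarray(m, float), np.asarray(c, float)
    imm = float(np.sum(m * m))
    icc = float(np.sum(c * c))
    imc = float(np.sum(m * c))
    M = np.sqrt(icc / imm) - 1.0
    # Clip guards against round-off pushing the ratio just above 1.
    P = np.arccos(np.clip(imc / np.sqrt(imm * icc), -1.0, 1.0)) / np.pi
    return M, P, np.sqrt(M * M + P * P)

t = np.linspace(0.0, 1.0, 200)
test_curve = np.sin(2 * np.pi * t)
sim_curve = 1.1 * np.sin(2 * np.pi * t)   # 10 % magnitude error, no phase shift
M, P, C = sprague_geers(test_curve, sim_curve)
print(round(M, 3), round(P, 3), round(C, 3))  # → 0.1 0.0 0.1
```

A perfect match gives M = P = C = 0; the magnitude and phase terms separate amplitude errors from timing errors, which is why metrics of this type are popular for crash-pulse validation.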
Dr. Ing. Harald Mandel – BA-Stuttgart, Paul Du Bois, Tim Rzesnitzek – Daimler AG, Stuttgart Germany
Heavy trucks have large masses and only small deformation zones. Because of this, they are loaded relatively severely in case of a crash. Under these conditions the structural response is characterised not only by plastic deformation but also by failure in terms of cracks or fracture. Hence, failure prediction is essential for designing such parts. The following article describes the procedure for generating material models for failure prognosis of solid parts in the Commercial Vehicles Division at Daimler. Sheet metal parts are mostly discretised by shell elements. In this case the state of stress is characterized by the ratio of hydrostatic pressure to von Mises effective stress, the so-called triaxiality. For many real-life load cases which can be modeled by thin shells this ratio is between –2/3 and 2/3. Within this range the Gurson material model with the Tvergaard-Needleman extension leads to sufficiently accurate results. Furthermore, the Gurson material model allows considering the effect of element size, which amongst others is important for ductile materials. Most often, however, in the case of solid parts the state of stress is more complex, which results in a triaxiality smaller than –1 or larger than 2/3. Gurson material models are usually validated based on shell meshes and tensile tests with flat bar specimens. If applied to solid parts, these models tend to underpredict failure. Thus, for solid parts the GURSON_JC material model is used. The Johnson-Cook parameters are derived from an existing Gurson material model. Afterwards the material model is adapted to test results by modifying the load curve giving failure strain against triaxiality. This requires tensile tests with grooved and non-grooved round bars, shear tests and validation tests on actual parts.
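Triaxiality is usually computed as the mean (hydrostatic) stress divided by the von Mises equivalent stress; a pressure-based definition differs only in sign. A small illustrative sketch with generic stress states, not Daimler data:

```python
import numpy as np

def triaxiality(stress):
    """Stress triaxiality: hydrostatic stress over von Mises equivalent stress."""
    stress = np.asarray(stress, float)
    hydro = np.trace(stress) / 3.0
    dev = stress - hydro * np.eye(3)           # deviatoric part
    vm = np.sqrt(1.5 * np.sum(dev * dev))      # von Mises equivalent stress
    return hydro / vm

uniaxial = np.diag([1.0, 0.0, 0.0])                          # uniaxial tension
shear = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)   # pure shear
biaxial = np.diag([1.0, 1.0, 0.0])                           # equibiaxial tension

# uniaxial tension ≈ 1/3, pure shear = 0, equibiaxial tension ≈ 2/3
print(triaxiality(uniaxial), triaxiality(shear), triaxiality(biaxial))
```

The equibiaxial values ±2/3 bound the plane-stress states reachable by thin shells, which is why failure curves calibrated only in that range can misjudge solid parts at higher triaxialities.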
Mazdak Ghajari, Lorenzo Iannucci – Imperial College London, Caroline Deck, Remy Willinger – Strasbourg University, Ugo Galvanetto – Padova University
The Finite Element (FE) method was employed to develop and enhance numerical models that can be used for simulating accidents involving motorcyclists. They are the FE models of a commercial helmet, the human head and the Hybrid III dummy. The composite shell and foam liner of the helmet, which are the most important components in terms of energy absorption, were generated using LS-DYNA preprocessing software. The FE model of the human head, which was developed in the Radioss environment, was converted to the LS-DYNA format. In order to validate the head model with respect to the skull force-deflection response and intracranial pressures, two cadaveric tests reported in the literature were simulated. The model of the head was coupled, through the neck, with the body of the Hybrid III dummy. This new dummy was capable of predicting skull fracture as well as intracranial injuries such as diffuse axonal injury. As an application of the models, the Hybrid III dummy and the new dummy were equipped with the helmet and dropped onto a flat anvil at an impact speed of 7.5 m/s. The protective capability of the helmet was assessed with respect to kinematic injury predictors, such as the maximum linear acceleration of the head, and tissue level injury predictors, such as the von Mises stress in the brain.
A. Gromer, S. Stahlschmidt, R. D’Souza – DYNAmore
In times where simulation methods are well-established in vehicle passive safety development, it is indispensable to provide a virtual equivalent of current hardware dummies. Thus, German car manufacturers decided to build up a WSID50th finite element model. Due to the experience gained in former FAT dummy development projects, the development of the WSID50th model achieves a high level of efficiency. The project started with building up a model using the material data of other dummy projects – release version 0. After an immense number of material and component tests the validation level of the WSID50th model was increased, qualifying the model for production use – release version 1. This paper describes the evolution and features of version 1 in detail, including a detailed description of the individual component test validation and its results.
J. Rasico – FTSS
FTSS has been providing finite element models to the safety community for over a decade. This has resulted in an expansive family of commonly used Anthropomorphic Test Devices (ATDs), dummies primed for future regulation, and complementary safety tools for the examination of unique protection systems, such as ejection mitigation. In conjunction, the last 10 years have seen a continuous demand for increased quality and functionality of existing dummy model products, and for new tools to help safety engineers address the evolving requirements of regulatory bodies and consumer agencies. While an expanding database for development and validation has helped dummy models reach further levels of maturity and accuracy, close involvement with physical product design and development has allowed for early adaptation of hardware updates. Furthermore, collaborative efforts within the automotive community have become a key component of new model development. With this approach, these new models target the upfront requirements of OEMs and their suppliers. Further improvement of existing models and end-user participation in the development of new models is leading to more powerful ready-to-use models for safety engineers in the finite element community.
Dr.-Ing. Matthias Hörmann, Steffen Schiele – CADFEM GmbH, Reinhard Probol – LAMY GmbH
Consumer products like cell phones, personal digital assistants, dishwashers or cookers, to name just a few, are often exposed to drops during transportation to the customer and during use over their lifetime. Pre-damage, failure or malfunction due to a drop is typically not acceptable and will lead to rejection by the customer, together with a corresponding financial and reputational loss. The present work deals with the numerical simulation of a drop test of a LAMY pencil. Special emphasis is put on the drop onto the apex of the pencil, which is most harmful to the lead mechanics. In experiments, failure of the lead mechanics was observed for this drop position, which was a result of localized high stresses in combination with plastification in those regions. It was the goal of the simulations to investigate whether an exchange of the material used for the lead mechanics would meet the requirements. Special emphasis was hereby placed on the reproduction of the overall lead kinematics, both translational and rotational, as well as on accounting for the behavior of the floor material.
R. Cresnik, A. Rieser, H. Schluder – Virtual Vehicle
A growing number of safety systems are implemented in modern vehicles. Vehicles thereby become more complex, and consequently the number of potential error sources increases. Numerical simulation and prototype tests are used to investigate vehicle behaviour and prevent aberrations at an early stage. However, prototype tests at the full vehicle level are not feasible in early development stages. Numerical simulation is an effective tool for reducing development time and costs, but hardware tests are still necessary to verify the simulation results. To handle these challenges in the development process, new development methods are necessary. In this paper an interface which enables the implementation of control systems in finite element solvers is presented. This interface allows a more realistic behavior of these systems in numerical simulation. It is thereby a useful tool to design and adjust mechatronic systems, like integrated safety systems, at an early stage of the development process. This coupling method can also be used to check actuator configurations in substituted mechanical systems. The needed forces and accelerations are known before experimental testing, but disturbance variables cannot be pre-calculated. Therefore this method offers a possibility to verify whether the capacity range of the actuator and the frequency and efficiency of the control algorithm are able to handle the prescribed behaviour. In order to consider the behavior of all systems in a close-to-realistic manner, the associated control units must be built into the finite element model. This will be a prerequisite for the realization of an optimized mechatronic system configuration in future vehicles.
Arun Chickmenahalli – International Automotive Components, Suthy C. Sivalingam – ESI North America, Thomas Weninger – ESI-Group
P. Du Bois – Consultant, Prof. S. Kan, M. Buyuk – George Washington University, J. He – Engineering Technology Associates
To assess the problem of containment after a blade-off accident in an aero-engine by numerical simulation, the FAA has initiated a research effort concerning failure prediction in a number of relevant materials. The program kicked off with aluminium and involved an intensive testing campaign providing failure data under different states of stress, different strain rates and different temperatures. In particular, split Hopkinson bars were used to perform dynamic punch tests on plates of different thicknesses, allowing investigation of the transition between different failure modes such as petaling and plugging. Ballistic impact tests were performed at NASA GRC for the purpose of validation. This paper focuses on the numerical simulation effort, and a comparison with experimental data is presented. The simulations were performed with LS-DYNA, and a tabulated version of the Johnson-Cook material law was developed in order to increase the generality, flexibility and user-friendliness of the material model.
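For reference, the classic analytic Johnson-Cook flow stress that the tabulated formulation generalizes is σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ). A sketch with hypothetical parameter values, not the FAA aluminium data:

```python
import math

def johnson_cook(eps_p, eps_rate, T,
                 A, B, n, C, m,
                 eps_rate_ref=1.0, T_room=293.0, T_melt=893.0):
    """Classic analytic Johnson-Cook flow stress.

    The tabulated variant replaces the analytic hardening and rate terms
    with user-supplied load curves, removing the fixed functional form;
    this analytic version shows the baseline behaviour.
    """
    hardening = A + B * eps_p**n                      # strain hardening
    rate = 1.0 + C * math.log(eps_rate / eps_rate_ref)  # strain-rate term
    T_star = (T - T_room) / (T_melt - T_room)         # homologous temperature
    thermal = 1.0 - T_star**m                         # thermal softening
    return hardening * rate * thermal

# Hypothetical aluminium-like parameters (stress in MPa):
sigma = johnson_cook(0.1, 1.0, 293.0, A=350.0, B=400.0, n=0.3, C=0.01, m=1.0)
```

At the reference strain rate and room temperature the rate and thermal factors both reduce to one, so only the hardening term A + Bεⁿ remains.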
W. Tiu – University of Hertfordshire
This paper describes the work carried out to compare the experimental response of a car on a four-actuator vibration rig with that obtained from an LS-DYNA analysis using the same loading spectrum as the physical experiment. The public domain frontal crash FE model was modified to reduce the solution time. Most of the materials used in the panels were changed to rigid, as these were not expected to deform during a vibration analysis. The spring and damper rates in the FE model were modified in an iterative process until convergence was achieved. Road load data was then obtained by driving along a prescribed circuit with relevant instrumentation. The terrain of the circuit was then laser scanned to obtain a digital model. The digital terrain was then used in the simulation of the correlated car model going around the same circuit. The experimental and simulation responses were then compared against one another. The availability of the same physical and FE model has enabled our M.Sc. Engineering students to obtain a better understanding of suspension analysis and the correlation process. Furthermore, the exposure to both testing and simulation techniques will better equip them to face future challenges.
Prof. Dr. Mariano Pernetti, Dr. Salvatore Scalera – AMET ITALY
Road safety barriers in Europe have to fulfil the European standard EN 1317, which defines a set of crash tests for each safety barrier containment level. Full scale tests of vehicle collisions against road safety barriers are hugely important to assess the outcomes of real accidents and, more generally, to identify barrier and vehicle features which influence crashworthiness in a meaningful manner. On the other hand, this kind of test is really expensive and many parameters are hard to control and measure. For these reasons, numerical analysis of vehicle collisions against safety barriers has become a convenient methodology that supports and complements full scale testing, especially considering the continuous technological hardware/software progress. The paper presents the finite element (FE) development and the early experimental validations of a three dimensional virtual model of a bus. The main objective of this research activity is to create a simplified FE model of this kind of vehicle useful for simulating collisions against road safety barriers in a wide range of impact conditions. Particular attention was paid to modelling features of the bus such as the frame, suspensions and tyres, which influence the behaviour of the vehicle during a collision in a meaningful manner. The bus model complies with the requirements for the homologation of an H2-type barrier (test TB51), in accordance with the European standard EN 1317. To evaluate the general behaviour of the finite element model of the bus, two different impacts were simulated, (i) against a concrete wall and (ii) against an H2-type barrier. These collisions represent two extremely different situations in terms of the transformation of vehicle kinetic energy. Indeed, in the impact against the concrete wall, a large part of the kinetic energy is converted into vehicle internal energy, causing a collapse of a wide portion of the bus.
In contrast, in the case of impact against a steel barrier, vehicle kinetic energy is transformed into device internal energy, but the impact against the posts stresses the tires, axles and suspensions severely. Besides, the roll angle is greater than the one registered during the collision against the wall, because the average height above ground of the global action is lower than in the previous case, causing a larger overturning moment and a significant stress on the suspensions. For these reasons, the collisions against a concrete wall and against a steel barrier represent excellent preliminary tests to verify the numerical robustness of the FE model of the bus and to evaluate the generally good behavior of the vehicle during collisions in a wide range of impact conditions.
A.S. Nemov, A.I. Borovkov – St. Petersburg State Polytechnical University, B.A. Schrefler – University of Padua
Superconducting cables are one of the key technical solutions used for the generation of strong magnetic fields in modern tokamaks. It is very important for engineers to be able to predict the mechanical deformations of superconducting cables because superconductivity depends on strains, temperature and magnetic field. Superconducting cables for ITER, the International Thermonuclear Experimental Reactor currently under construction, have a complex structure that makes any analytical estimations hardly applicable. This paper presents the application of the LS-DYNA finite element code to the solution of different mechanical problems for ITER superconductors. Stretching, twisting and transverse compression are considered, and the results are compared with analytical estimations where possible.
Dmitriy Mikhaluk, Igor Voinov, Prof. Alexey Borovkov – CompMechLab of St. Petersburg State Polytechnical University
Deck arresting gear is a special aircraft-carrier unit designed to provide efficient arrestment of deck jet-fighters with masses of 10-25 tons and high landing velocities between 180 and 240 km/h. The main elements of modern arresting gear are a cable and a hydraulic system. During deck landing the jet-fighter grasps the cable with a hook and draws it. The cable is threaded between a system of blocks forming a block-and-tackle mechanism, designed to transfer the jet-fighter pull to the hydraulic braking machine. In the latter the kinetic energy of the fighter is converted into heat and then dissipated. In the current work a full-scale LS-DYNA model of the deck arresting gear is created. The model contains all basic elements of the real prototype and is used to analyze the dynamic behavior of the arresting gear under different arrestment conditions. Due to the feedback control system, several characteristics of the arresting gear elements vary with run-time changing parameters. Standard capabilities of LS-DYNA do not allow simulation of such a complex system, and for that reason a Delphi code was developed. The code manages the LS-DYNA solution and automatically performs multiple restarts during the simulation to change the definition of the stiffness and damping curves describing the arresting gear parts. The developed model is used to obtain the parameters of the arresting process – fighter displacement, velocity and acceleration vs. time, as well as the pressure in the hydraulic system and the tensile force in the cable.
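Since a standard LS-DYNA run cannot redefine load curves mid-solution, the external controller has to rewrite the curve keywords and restart the solver. A minimal sketch of the keyword-writing step in Python (the original project used Delphi; the card layout is a sketch following the common *DEFINE_CURVE format, and the curve values are hypothetical):

```python
def define_curve_keyword(lcid, points, sfo=1.0):
    """Emit an LS-DYNA *DEFINE_CURVE keyword block for a list of (x, y) points.

    sfo scales the ordinate values, which is one way a controller could
    stiffen or soften a damping curve between restarts.
    """
    lines = ["*DEFINE_CURVE"]
    # Card 1: lcid, sidr, sfa, sfo, offa, offo, dattyp (fixed 10-char fields)
    lines.append(f"{lcid:>10}{0:>10}{1.0:>10.3f}{sfo:>10.3f}"
                 f"{0.0:>10.3f}{0.0:>10.3f}{0:>10}")
    # Point cards: abscissa/ordinate pairs in wide fields
    for x, y in points:
        lines.append(f"{x:>20.6e}{y:>20.6e}")
    return "\n".join(lines) + "\n"

# Hypothetical damping curve (stroke in m, force in kN), scaled up 15 %
# in proportion to a measured cable-force error before the next restart:
damping = [(0.0, 0.0), (5.0, 120.0), (10.0, 300.0)]
print(define_curve_keyword(101, damping, sfo=1.15))
```

The surrounding driver would write this block into the restart input deck and relaunch the solver, repeating until the arrestment is complete.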
A. Reyes, O. S. Hopperstad – Norwegian University of Science and Technology, T. Berstad, O.-G. Lademo – Norwegian University of Science and Technology/ SINTEF Materials and Chemistry
In this study, LS-DYNA was used to predict the experimental forming limit diagrams (FLDs) for pre-strained sheets of the aluminum alloy Al2008-T4 found by Graf and Hosford. The original data of Graf and Hosford include numerous pre-straining situations, but it was chosen here to investigate only pre-straining by biaxial and uniaxial tension. In order to generate the FLDs, several analyses of a square patch were run systematically to construct the different points. Pre-straining was applied by first stretching a somewhat larger patch to a given pre-strain, and then trimming this patch to the standard square patch. The material model used in the analyses includes two instability criteria: a non-local criterion to detect incipient localized necking and a through-thickness shear instability criterion [2-5]. The objective was to study whether the effects of pre-straining on the FLD could be predicted by the chosen modelling approach, and good results were obtained.
Sebastian Lossau – Daimler AG, Bob Svendsen – TU-Dortmund
This work demonstrates a first approach to using virtually obtained material properties as input for forming simulations. The parameters needed to apply a Barlat-Lian89 yield surface are computed in the so-called “Virtual Lab”, which performs FE simulations on previously cold-rolled volume elements to predict the distribution of grains. The resulting stress-strain curves serve as input for a deep drawing simulation. For reference, an ordinary material data file determined from real uniaxial tension tests is compared to the virtually based material data file. Further, the results of both simulations are compared to allow a first evaluation of their deviation from each other. In particular, the Lankford parameters at 0° and 90° with respect to the rolling direction are predicted quite well by the Virtual Lab. Only the 45° value requires improvement for future analyses of material properties. Likewise, the extrapolation of the hardening curve shows a strong deviation at larger deformations. The equivalent plastic strain as well as the thickness reduction is less affected by this problem. However, the calculation of the equivalent stress is strongly influenced by the deviating hardening curves at larger strains. This expresses itself in an overestimation of the stress in the simulation based on the virtually obtained properties.
Dr Tayeb Zeguer – Jaguar Land Rover
The traditional new-vehicle design cycle is very time-consuming due to the sequential approach used. The need to reduce time to market for new vehicles, as well as the increased affordability of high-performance computing, which can process hundreds of simulations concurrently, has led to the increased adoption of MDO processes. The goal of MDO is to provide a more consistent, formalized process for complex system design than that found in traditional approaches, as well as to influence the design cycle through timely, performance-based direction. In essence, MDO aids in the management of the design process workflow itself. The MDO principle allows engineers and analysts to address multiple vehicle attributes such as safety performance, refinement and failure modes, e.g. full frontal, offset, side and rear impacts, occupant restraints and total vehicle level NVH. This paper provides a formal and structured approach to the use of MDO at Jaguar Land Rover to address complex and often conflicting requirements, arriving at better quality designs in a faster and more cost-effective manner. The use of MDO solutions increases the efficiency of the simulation processes by: automating many manual simulation processes to save time; linking multiple simulations such as crash, NVH and restraints to perform trade-off analyses; minimizing vehicle weight while meeting all vehicle attribute requirements; and finding optimal designs to develop better products.
D. Weiss, B. Sonntag, T. Krumenaker, Dr. D. Nowottny, Dr. J. Sprave, W. Hipp – Daimler AG
This paper introduces the integration of LS-DYNA and CATIA V5 into automatic geometry-based topology optimization of an engine hood with regard to pedestrian head impact. In current design processes, such Computer Aided Engineering (CAE) tools, along with structural optimization, have become essential elements in providing efficient and reliable structures. However, the required iterative process of adjustment between simulation and design engineers is still a time-consuming task. In recent years, therefore, automatic multi-criteria and multi-disciplinary optimization simultaneously considering different simulation disciplines has drawn increasing attention. For structure creation or topology variation, FE-based concepts have been developed that work on a discretized design space, whereas geometry-based parameter variation on CAD models has mainly been used for shape and size variation. Although each is a first step toward design process automation, both concepts are a trade-off between accuracy and creativity. The final goal would therefore be to combine the topology variation ability of the FE-based method with the ready-to-use solution of the parameter concept. Hence, extending the idea of parameter variation with the addition and removal of entire geometrical features, automatic topology variation on CAD structures is introduced. However, applying such geometry variation implies further considerations regarding a fully automated optimization loop, such as accurate CAD build-up, update stability, high-quality batch meshing and a rapidly increasing number of free parameters. The project this work is based on aims at the full automation of a geometry-based optimization loop for optimum structure generation using CATIA V5 and LS-DYNA. The concept is applied to pedestrian safety, analyzing different engine hood topologies with regard to their head impact performance.
In a first step, parameter studies and simplified impactor load cases are run using the automatic CAD-FE loop as a pre-stage to a full multi-criteria optimization. The paper’s focus is on the concept’s applicability to industrial processes. Hence, solutions for the automated CAD-FE transition for evaluation are discussed, as well as general limitations of CAD-based topology optimization. In particular, the demanding task of batch meshing for varying topologies and sensitivity analyses to reduce the number of free parameters are addressed.
Uwe Reuter – TU Dresden, Martin Liebscher, Heiner Müllerschön – DYNAmore GmbH
The main purpose of global sensitivity analysis is to identify the most significant model parameters affecting a specific model response. This helps engineers to improve their understanding of the model and provides valuable information for reducing the computational effort in structural optimization. A structural optimization is characterized by a set of design parameters, constraints and objective functions formulated on the basis of model responses. The computational effort of a structural optimization depends, besides the complexity of the computational model, heavily on the number of design parameters. In many cases, however, an objective function is dominated by only a few design parameters. Results of global sensitivity analysis may be used to select the most significant design parameters from a number of potential candidates and thereby reduce the optimization problem by eliminating the insignificant ones. In this paper different sensitivity measures and associated algorithms are evaluated with respect to their capabilities and computational costs. In particular, the variance-based approach after Sobol is compared with correlation analysis, the linear and quadratic ANOVA approaches, and the FAST approach. This is done using a comprehensible academic example as well as an optimization problem from engineering practice.
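As a minimal illustration of the variance-based approach after Sobol mentioned above, the following pure-Python sketch estimates first-order Sobol indices with a Saltelli-type sampling scheme and the Jansen estimator. The sample size and the test model are chosen for illustration only and are not taken from the paper.

```python
import random

def sobol_first_order(model, n_vars, n_samples=20000, seed=1):
    """Estimate first-order Sobol indices S_i of `model`, whose inputs
    are assumed independent and uniform on [0, 1)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_vars):
        # rows of A with the i-th column replaced by the i-th column of B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen estimator: V_i = Var - (1/2N) * sum (f(B) - f(AB_i))^2
        vi = var - sum((yb - yi) ** 2 for yb, yi in zip(yB, yABi)) / (2 * n_samples)
        indices.append(vi / var)
    return indices
```

For an additive model such as y = 4 x1 + 2 x2 the analytic indices are S1 = 16/20 = 0.8 and S2 = 4/20 = 0.2, which the Monte Carlo estimate reproduces to within sampling noise.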
Aleksandra Piotrow, Stephan Pannier, Wolfgang Graf – TU Dresden, Martin Liebscher – DYNAmore GmbH
Structural analysis under consideration of the uncertainty of input parameters, such as loads, material, and geometry, leads to uncertain time-dependent results. Such an uncertain structural process captures, for the uncertain input parameters, all possible behaviours of a structure. Modelling of uncertainty in input parameters when only incomplete or expert-knowledge-based information is available requires the introduction of the uncertainty model fuzziness. A fuzzy process is a fuzzy set of real-valued processes, each of which possesses an assigned membership value indicating its degree of possibility. In order to obtain an engineering interpretation of a fuzzy process, some representative crisp processes have to be chosen from the numerous realisations of this uncertain function. In this paper a cluster-analysis-based approach for grouping similar and detecting different time-dependent structural behaviours is introduced. The similarity of processes within one cluster is assessed with the similarity metrics neighbouring location, affinity, and correlation. The uncertain assignment of real-valued realizations of the fuzzy process to clusters is performed with the Fuzzy-c-Means cluster algorithm. The capability of this approach is demonstrated with the controlled collapse simulation of a reinforced concrete framework structure carried out in LS-DYNA. In this example an analysis of a fuzzy process is performed by means of cluster methods and α-level discretization in order to select collapse sequences that differ significantly from each other and have different degrees of possibility.
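The Fuzzy-c-Means assignment mentioned above can be sketched as follows for scalar features; in the paper the clustered objects would be process realisations compared through the similarity metrics, so this is the generic textbook algorithm, not the authors' implementation.

```python
def fuzzy_c_means(points, c, m=2.0, iters=100, eps=1e-12):
    """Fuzzy-c-Means for 1-D data. Returns (centers, memberships), where
    memberships[k][i] is the degree to which point k belongs to cluster i
    (each row sums to 1)."""
    lo, hi = min(points), max(points)
    # initialise centers spread across the data range
    centers = [lo + (hi - lo) * (i + 0.5) / c for i in range(c)]
    exp = 2.0 / (m - 1.0)
    n = len(points)
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in points:
            d = [abs(x - ci) + eps for ci in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** exp for j in range(c))
                      for i in range(c)])
        # center update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(U[k][i] ** m * points[k] for k in range(n))
                   / sum(U[k][i] ** m for k in range(n))
                   for i in range(c)]
    return centers, U
```

The graded memberships, rather than crisp assignments, are what allow the degrees of possibility of the fuzzy process realisations to be carried through the clustering.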
D. Lopez, U. Jankowski, S. Oldenburg – TECOSIM GmbH
Based on several years of CAE experience, TECOSIM has developed an advanced in-house CAE process. Each TECOSIM tool was developed on the basis of TECOSIM internal projects as well as several projects carried out successfully at customers’ sites. All tools used in this process can run as stand-alone solutions, but their combination provides a very effective CAE process, thus reducing the overall vehicle development cost. The TECOSIM modular principle is one of the differences from other compact CAE process tools. TECOSIM’s process is very flexible because each module can be run separately and can also be added to almost any existing process.
Caroline Deck, Rémy Willinger – Strasbourg University
This paper presents an original numerical human head FE model, followed by its modal and temporal validation against in-vivo human head vibration analysis and cadaver impact tests from the literature. The developed human head FE model has two particular features: one at the brain-skull interface, where fluid-structure interaction is taken into account, and the other at the skull modelling level, where bone fracture simulation is integrated. Validation shows that the model correlates well with a number of experimental cadaver tests, including skull deformation and rupture, intra-cranial pressure and brain deformation. This improved numerical human head surrogate has then been used for numerical real-world accident simulation. Helmet damage from eleven motorcycle accidents was replicated in drop tests in order to define the head’s loading conditions. A total of twenty well-documented American football head trauma cases have been reconstructed, as well as twenty-eight pedestrian head impacts. By correlating head injury type and location with intra-cerebral mechanical field parameters, it was possible to derive new injury risk curves for injuries as different as subdural haematoma and neurological injuries. An illustration of how this new head injury prediction tool can contribute to the optimisation of head protection systems is also provided.
Surviving the economic slowdown by increasing cost-saving technology: don’t spend more; spend more effectively. Today’s trends – more complex features, increased global competition, and more government regulations – increase automakers’ reliance on CAE simulations. HP and Intel help automakers maneuver through these obstacles by providing innovative technologies that boost computational performance and lower costs.
Dr Timothy Lanfear – NVIDIA
The NVIDIA® Tesla™ C1060 transforms a workstation into a high-performance computing machine that outperforms a small cluster. This gives technical professionals a dedicated computing resource at their desk side that is much faster and more energy-efficient than a shared cluster in the data centre. The Tesla C1060 is based on the massively parallel, many-core Tesla processor, which is coupled with the standard CUDA C programming environment to simplify many-core programming. Many applications have been ported to the CUDA architecture and show significant increases in performance compared with equivalent implementations on standard microprocessor architectures.
Prof. Dr.-Ing. habil. Stefan Hiermaier, Matthias Boljen, Dr. Ingmar Rohr – Fraunhofer-Institute for High-Speed Dynamics
Deformation processes of structures under dynamic loading have been investigated both experimentally and by simulation for many years now. Various rate dependencies in many materials, wave and shock wave phenomena, as well as material tests for their quantitative description, have been identified. In parallel, mathematical formulations for the observed material behavior and numerical schemes for time-dependent approximations of the governing partial differential equations have been developed. Since both the experimental characterization and the numerical simulation require assumptions, e.g. about the state and distribution of stress and strain in a specimen or in a discretizing unit, the increasing complexity of materials demands advanced test set-ups and numerical methodologies. In this paper, a brief distinction between the regimes of quasi-static, low-dynamic and high-dynamic loading conditions is given. Related experimental means of material characterization, as well as the components needed in the numerical model to represent the relevant physical aspects, are illustrated by some example cases. Specific emphasis is placed on the characterization of low-impedance materials and on the implementation of a micro-continuum-based fabric model in LS-DYNA. An application of the resulting fabric model to ballistic simulations is shown in the final part.
A. J. Sangi, I. M. May – Heriot-Watt University, Edinburgh
The behaviour of reinforced concrete in quasi-static regimes has been investigated extensively, but there have been few investigations into its transient behaviour, especially under low-velocity regimes. This paper describes the finite element modelling and analysis of reinforced concrete slabs under drop-weight impact loads using LS-DYNA. The results obtained from the numerical simulations have been compared with tests that were carried out at Heriot-Watt University to generate high-quality input data for validating the numerical modelling. The experiments were conducted on four 0.76 m and two 2.3 m square slabs under drop-weight loads. A drop-weight system was used to drop a mass of up to 380 kg at velocities of up to 8.7 m/s. The output from the tests included time histories of impact force, acceleration and strains, as well as video footage from a high-speed camera recording at up to 4,500 frames per second. The simulation results show reasonable agreement with the tests for the overall kinematic response of the slabs.
Muhammad Noman, Bob Svendsen – University of Dortmund
Sheet metal forming processes cover a wide range of applications in industry. In order to model sheet metal forming processes using numerical simulation, an accurate description of the material behavior is required. To this end a material model has been implemented which is capable of capturing the movement and proportional expansion of the yield surface along with the change in the shape of the yield surface. The former are described as kinematic and isotropic hardening, respectively, and the latter is termed distortional (cross) hardening. Once the model is implemented, the second step consists in identifying the material parameters. In this contribution, a strategy for the identification of the material parameters is presented. The strategy is based on identifying the isotropic hardening, the combined hardening (isotropic-kinematic hardening), and the complete hardening model (isotropic-kinematic-cross hardening) sequentially, in such a way that the parameter values identified in the previous step are used as starting values for the next step. Hence, the isotropic and kinematic hardening are first identified using the monotonic shear and Bauschinger shear test data; then the distortional (cross) hardening effect is determined using orthogonal tension-shear data, with the isotropic-kinematic hardening parameter values as starting values. The material model was implemented in LS-DYNA as a user-defined material, and LS-OPT-based parameter identification for the steels LH-800 and DC06 is performed. The identified parameters are first validated and then used in FE simulations using ABAQUS and LS-DYNA. A complete account of the application of the identified material model is presented in the talk “Numerical investigation of draw bending and deep drawing taking into account cross hardening” presented at this meeting.
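The sequential use of earlier-stage results can be illustrated with a deliberately simplified, hypothetical model: linear isotropic hardening fitted to monotonic data first, then a linear kinematic modulus fitted to Bauschinger (reverse-yield) data with the stage-1 values held fixed. The real identification in the paper uses LS-OPT with the full distortional-hardening model; the closed-form fits below are only a sketch of the staging idea.

```python
def fit_isotropic(strain, stress):
    """Stage 1: least-squares fit of sigma = sigma0 + H_tot * eps_p
    to monotonic test data (closed-form linear regression)."""
    n = len(strain)
    mx = sum(strain) / n
    my = sum(stress) / n
    h_tot = (sum((x - mx) * (y - my) for x, y in zip(strain, stress))
             / sum((x - mx) ** 2 for x in strain))
    return my - h_tot * mx, h_tot          # sigma0, H_tot

def fit_kinematic(prestrain, reverse_yield, sigma0, h_tot):
    """Stage 2: with sigma0 and H_tot fixed from stage 1, fit the linear
    kinematic modulus C from reverse-loading yield points, assuming
        |sigma_rev| = sigma0 + (H_tot - 2*C) * eps_p
    (a monotonic test alone cannot separate isotropic and kinematic
    contributions; the Bauschinger data resolve the split)."""
    num = sum(e * (sigma0 + h_tot * e - s)
              for e, s in zip(prestrain, reverse_yield))
    den = sum(2.0 * e * e for e in prestrain)
    return num / den
```

With synthetic data generated from sigma0 = 100, H_iso = 500 and C = 300, stage 1 recovers the total modulus H_tot = 800 and stage 2 recovers C exactly, mirroring how each stage narrows the search space for the next.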
Apostolos Papaioanu, Prof. Dr. Mathias Liewald MBA – Institut für Umformtechnik, Stuttgart, Ralf Schleich – HochschulInstitute Neckarsulm, Neckarsulm
Today’s stretch forming technologies are mainly used for the production of large, flat sheet metal parts, mainly for the aircraft industry (wing fabrication) and for shipbuilding. Because of the high investment costs and long process times, conventional stretch forming technologies are not suitable for the production of car body panels. However, the benefits of present stretch forming methods, such as the improved mechanical properties of the formed parts, today make stretch forming technologies attractive for the automotive industry. For this very reason a new technology for the stretch forming of sheet metal (Short-Cycle Stretch forming, SCS) has been developed at the Institute for Metal Forming Technology (IFU) at Universitaet Stuttgart. The SCS technology combines plane pre-stretching with a subsequent deep drawing operation for the production of small car body panels with high demands on surface quality. SCS technology is based on a low-cost tool which is used in a single-action deep drawing press with short process cycles. Former investigations have shown the tremendous potential of the SCS technology using typical mild steel alloys for car body panels. Investigations of the theoretically achievable effective strain in the stretched region included an experimental validation, which confirmed an effective strain value of φ≈0.09 in the stretched region of the specimen. In order to fulfil increasingly strict environmental regulations, the automotive industry focuses on reducing car body weight by using lightweight materials such as aluminium or high-strength steel. SCS technology offers the possibility of producing car body panels with high surface quality at a minimum of investment costs. It is therefore necessary to verify SCS technology for new lightweight sheet metal materials.
Because of its material properties, it does not make sense to investigate high-strength steel for use with SCS technology with regard to denting resistance and part stiffness. Aluminium, however, is due to its lower material properties well suited to a pre-stretching process that increases these properties. The SCS technology thus offers a huge potential for pre-stretching aluminium blanks and producing parts of significantly better quality with regard to part stiffness and dent resistance.
Dr. Chiara Silvestri, Mario Mongiardini, Prof. Dr. Malcolm H. Ray – Worcester Polytechnic Institute
A detailed review of an existing LS-DYNA finite element (FE) model of the Knee-Thigh-Hip (KTH) complex of a 50th percentile male was accomplished. The main aim was to refine some aspects of the model to obtain a more appropriate and biofidelic tool for investigating the injury mechanics of the KTH in frontal car crashes. Detailed reviews of this model were performed with regard to the material properties of the bone models used to represent the pelvis, femur and patella. To investigate bone fracture mechanisms due to impact, the erosion material failure method was abandoned in favor of a more realistic detection of failure locations using stress contour plots. Qualitative validations of the pelvis and femur bones of the new model were performed against cadaveric specimen tests conducted at the University of Michigan Transportation Research Center. In addition, quantitative validations were performed using the Roadside Safety Verification and Validation Program (RSVVP), developed to validate numerical models in roadside safety. The approach to these validations was also different: earlier work had compared the finite element results to the physical test corridors, whereas this work used a direct comparison of each finite element validation simulation to a specific corresponding test. Validation of the bone models was based on comparison of the impact forces from contact between the dashboard and the knee region of the KTH model. For each case, the simulated forces were in good agreement with the experimental outcomes, and the FE fracture locations matched the failure modes from the cadaveric tests. Quantitative results indicate that the test and FE time histories can be considered the same, and that they therefore represent the same impact event. A new validated dynamic representation of ligaments was adopted for the prediction of avulsion ligament injuries in high-speed frontal automotive collisions when the lower extremities are subjected to high strain rates.
FE results for ligament avulsion agreed with test data and with injury criteria recommended in the literature. A different model of the knee patellar tendon was implemented using the SEATBELT material and introducing slip rings to constrain the patellar tendon to the biomechanically correct line of action. This refined LS-DYNA finite element model of the KTH is a more biofidelic representation of the human KTH and represents a suitable and reliable tool for exploring KTH fracture mechanisms resulting from frontal vehicle crashes.
Ashok L. Ramteke, Ph.D., Prasad B. Nadgouda – Hema Engineering Industries Limited
In general, a main assembly consists of different sub-assemblies. These sub-assemblies are joined together using rivets, bolts, welds etc. Each individual sub-assembly is often verified for performance using commercial CAE software. To save time, rivet and bolt joints are usually modeled with a beam-spider arrangement: the spider represents the rivet or bolt head, and a beam connecting two spiders at the centre represents the rivet or bolt shank. Analysts always try to bring the verification close to practical conditions. In this article, the belt anchorage bracket in a seat track assembly is considered for simulation. The performance of the rivets joining the belt anchorage bracket to the upper rail of the track is studied in detail. In the first simulation these rivets are modeled with solid hexahedral elements; in the second they are modeled with the beam-spider arrangement. The stresses around the holes in the sheet metal belt anchorage bracket are studied in both simulations. It was found that the solid rivet model is the better option compared with the beam-spider arrangement. The simulation is carried out as a quasi-static analysis in LS-DYNA 971.
Dr. Holger Meissner – AUDI AG, Marko Thiele – DYNAmore GmbH
The increasing demand to evaluate vast numbers of different load cases has led to a highly standardized and automated way of model assembly at AUDI. This model assembly is greatly assisted by the software “CAx Load Case Composer”, which has been developed by DYNAmore in cooperation with AUDI. The Load Case Composer (LoCo) provides the user with convenient ways to manage FE model include files and automatically selects the appropriate include files for each load case. Thus the use of redundant includes can be avoided or at least significantly reduced. One major concept of LoCo is the capability of integrating parameters in the FE input files. Parameters for design changes, such as airbag settings or seat/dummy transformations, can be specified and administered within the software. This allows the user to run parameter studies, optimizations and stochastic analyses quickly and easily. Through the integration of LS-OPT in LoCo, powerful optimization algorithms can be employed. In this paper it is shown, with an example, how morphing parameters for geometric shape changes have been integrated in LoCo. The close integration into the standardized simulation workflow allows parameter studies of shape design changes to be performed with minimum effort. In addition, it can be used in conjunction with the LS-OPT integration in LoCo. Together, LS-OPT and LoCo give an engineer at AUDI the ability to set up an optimization with very little effort; optimization, parameter studies and stochastic analysis thus become operations of daily use.
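The parameter concept can be sketched as a simple substitution pass over an include file. Note that LS-DYNA itself offers a native *PARAMETER keyword with &name references, and LoCo's actual mechanism is not public, so the helper below is purely illustrative; the parameter name `dummy_x` is invented for the example.

```python
import re

def substitute_parameters(deck_text, params):
    """Replace &name references in an FE input deck with numeric values.

    A much simplified stand-in for the parameter handling described above:
    every token of the form &name is replaced by params[name], and an
    undefined parameter raises an error instead of reaching the solver.
    """
    def repl(match):
        name = match.group(1)
        if name not in params:
            raise KeyError(f"undefined parameter: {name}")
        return f"{params[name]:.6g}"
    return re.sub(r"&(\w+)", repl, deck_text)
```

For example, substituting {"dummy_x": 12.5} into a transformation card containing `&dummy_x` yields a plain numeric deck ready for a parameter-study run.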
Andreas Wüst, Torsten Hensel, Dirk Jansen – BASF SE
The integrative approach described in this paper incorporates effects of the part’s manufacturing process (here: injection molding) into a new workflow for optimizing part performance. The new approach is able to close the gap between process simulation/optimization and mechanical simulation/optimization. New classes of design variables linked to the manufacturing process complicate the workflow of the optimization. The newly introduced optimization discipline “manufacturing simulation” acts as a preprocessing step for all other disciplines, while it can simultaneously be seen as a full optimization discipline in its own right. Shape optimization by morphing is included as well and further complicates the workflow. The paper outlines the necessary changes in the workflow and discusses their influence in different optimization scenarios. In a first example, the prototype workflow, based on state-of-the-art software packages and newly developed script and interface tools, was designed, defined and proved to work. A screening phase as well as an optimization were carried out with reasonable results. The part considered in this study is a thermoplastic structure manufactured by injection molding. The most important process-induced changes are based on the anisotropic orientation of short glass fibers in the material during filling. These effects were taken into account using BASF’s ULTRASIM™ software. A filling simulation as well as a warpage simulation and a mechanical impact simulation were used as individual optimization disciplines.
Katsuhiko TAKASHINA, Kazuhiro UEDA, Takeo OHTSUKA – MITSUBISHI MOTORS CORPORATION
To improve the accuracy of crashworthiness simulation, it is preferable to consider the effects of metal forming. However, this approach has been difficult in practice, since analyzing the stamping simulation in detail requires much work. This paper describes the influence of the residual strain, work hardening and material thickness changes resulting from the stamping process on the crashworthiness simulation. In almost all impact load cases, the results show that deformation is reduced by the work hardening effects. These results are verified against actual experimental data.
Ralf Schleich – HochschulInstitute Neckarsulm, Christoph Albiez – AUDI AG, Apostolos Papaioanu, Prof. Dr. M. Liewald MBA – Universität Stuttgart
The lack of accuracy of buckling prediction in forming simulation is widely known. This is mainly caused by insufficient element stiffness as well as by buckling criteria that consider only a very simplified strain path and no dependence on sheet thickness. Within this contribution a methodology for investigating this issue is developed. The paper also presents possibilities for an experimental analysis of the buckling sensitivity of AA6016 aluminium sheet metal alloys. For this purpose, the specimen shape according to Yoshida, which cannot be used for aluminium alloys, has been enhanced by means of simulation. Thus, nine geometries ensuring different strain paths have been developed and validated experimentally. Based on this simulative and experimental test set-up, a buckling criterion for plane aluminium sheets under uniaxial tension is given here.
L. Rorris, D. Siskos, Y. Kolokythas – BETA CAE Systems SA
The increasingly demanding and complex requirements in crash analysis require continuous and innovative software development. BETA CAE Systems, in an effort to meet and exceed the demands of the industry, is introducing new cutting-edge technologies, both in the pre-processing area with ANSA and in post-processing with μETA. This paper presents these new technologies. With the introduction of a new version of ANSA in 2009 a new user interface was presented. The new interface is a long-term effort to give the CAE engineer the ability to work in a modern software environment, leading to increased productivity and ease of use. At the same time, the development of highly specialized tools can greatly reduce pre-processing time by automating various difficult operations. Among these are a kinematic solver that allows the manipulation of complex kinematic mechanisms in crash models, and tools that automate the procedures for occupant and pedestrian testing. In the area of post-processing the advances in the latest μETA versions are equally impressive. Better utilization of system resources, such as a smaller memory footprint and a huge speedup in graphics performance, guarantees that the responsiveness and feel of the software environment will not be compromised even by the biggest models. Additionally, advanced functionality, like the direct calculation of section forces, provides the tools needed for the evaluation of the results. Recently, process automation tools have been introduced which, together with advanced report generation functionality, make the automation of post-processing easy.
Bernhard Fellner – Magna Steyr Fahrzeugtechnik AG & CoKG, Thomas Jost – Das Virtuelle Fahrzeug Forschungsgesellschaft mbH
For the customer, passive safety is one of the driving reasons in the decision to buy a new car. To ensure high safety standards, passive safety is demonstrated in vehicle crash tests. Instead of vehicle-to-vehicle crash tests, one vehicle is replaced by an aluminium-honeycomb-based crash barrier. This barrier represents the front of a vehicle in its shape, deformation behaviour and energy absorption. Using the Finite Element Method (FEM) it is possible to show and predict the behaviour of the vehicle’s structure during such a crash test. To obtain simulation results that agree well with reality, it is not only necessary to build up the FE model of the vehicle correctly, but also to simulate the real behaviour of the crash barrier. Experience shows that the deformation behaviour of the FEM crash barrier seriously influences the quality of the full-vehicle simulation. The barrier models currently in use show insufficiently reliable results: the modelling techniques are not able to reproduce the principal deformation and failure behaviour of aluminium honeycomb, and large barrier deformations can cause serious instability problems in the models. This leads to inaccuracy in predicting vehicle safety during a virtually based development process. It has to be considered that CAE-driven design processes are only feasible when the simulation delivers results of reliable prognosis quality. During the last years a new modelling method for aluminium honeycomb structures, based especially on the IIHS side impact barrier, was developed. In the meantime the method has proved to work also with the high relative deformations that occur in a frontal offset crash test. A very specific sequence of tests was carried out to determine the structural properties of the aluminium honeycomb, the cladding and the whole barrier itself.
The tests were planned to show the reproducibility of the results, but also, for example, the dependence on test velocity at the same energy levels. The output of this process is a stable barrier model capable of showing localized deformations. This prevents overestimation of the energy absorption caused by distributing the deformation over the whole barrier. The developed method of simulating crash barriers contributes to the improvement of full-vehicle crash simulations. Reliable calculation results based on more accurate barrier models will help to reduce the risk of changes to already released tooling after the first real crash results are analysed.
Gilad Shainer, Tong Liu – Mellanox Technologies, Jacob Liberman, Jeff Layton, Onur Celebioglu – Dell, Inc., Scot A. Schultz, Joshua Mora, David Cownie – Advanced Micro Devices (AMD), Ron Van Holst – Platform Computing
From concept to engineering and from design to test and manufacturing, engineering relies on powerful virtual development solutions. Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) are used to secure quality and speed up the development process. Cluster solutions maximize the total value of ownership for FEA and CFD environments and extend innovation in virtual product development. Multi-core cluster environments impose high demands on cluster connectivity: throughput, low latency, low CPU overhead, network flexibility and high efficiency are required in order to maintain a balanced system and to achieve high application performance and scaling. Low-performance interconnect solutions, or a lack of interconnect hardware capabilities, will result in degraded system and application performance. Livermore Software Technology Corporation (LSTC) LS-DYNA software was investigated. In all InfiniBand-based cases, LS-DYNA demonstrated high parallelism and scalability, which enabled it to take full advantage of multi-core HPC clusters. Moreover, according to the results, lower-speed interconnects such as GigE or 10 Gigabit Ethernet are ineffective at mid to large cluster sizes and can cause a reduction in performance beyond 16 or 20 server nodes (i.e. the application run time actually gets slower). We have profiled the network communications of LS-DYNA to determine its sensitivity points, which is essential in order to estimate the influence of the various cluster components, both hardware and software. We observed a large number of latency-sensitive small messages through MPI_Allreduce and MPI_Bcast operations that dominate the performance of the application at mid to large cluster sizes. The results also indicated that large data messages are used and that the amount of data sent via large message sizes increases with cluster size.
From these results we have concluded that the combination of very high bandwidth and extremely low latency, with low CPU overhead, is required to increase productivity at mid to large node counts. We have also investigated the increase of productivity from a single job to multiple jobs run in parallel across the cluster. The increase in productivity is based on two facts: first, the good scalability of the AMD architecture, which allows multiple jobs to run on a given compute node without saturating the memory controller; second, the low latency and high bandwidth of the InfiniBand interconnect, which allowed us to offload CPU-to-CPU data traffic from MPI communications onto the interconnect instead of handling it within the compute node. The net result of this practice is an increase in productivity of 200% with respect to the single job run. Finally, the increase of productivity on single job runs with high-speed interconnects has been analyzed from the point of view of power consumption, leading to a 60% reduction in energy when using InfiniBand with respect to Ethernet.
Prof. Dr.techn. Joachim Danckert, Dr. Benny Endelt – Aalborg University, Denmark
Long precision tubes are commonly made using the floating plug tube drawing process. The process has been analyzed using various methods, e.g. the upper bound method and FEM [1-10]. The die land and the plug land are usually cylindrical and form a cylindrical bearing channel between them. The influence of the length of the bearing channel on the drawing force has only been dealt with in very few papers. In  it is recommended to use the shortest possible bearing channel in order to reduce the drawing force. A short bearing channel is also recommended in , both to reduce the drawing force and to increase the stability of the drawing process. The authors have not found any papers dealing with the influence of the shape of the bearing channel. This paper describes an analysis of tube drawing with a floating plug carried out using LS-DYNA®. The analysis shows that the drawing force, with conventional tooling, is heavily influenced by both the length and the shape of the bearing channel. The analysis inspired a new plug design, in which the cylindrical plug land is replaced with a circular profiled plug land. Simulations of tube drawing with the new plug design show that the drawing force can be decreased and that it is nearly independent of the length of the die land and of small variations in the die land angle. With a conventional plug it is necessary at start-up to make a dent in the tube behind the plug in order to force the plug into the right position relative to the die. Without a dent the plug would be pushed ahead of the die and no reduction of the tube wall thickness would take place between the plug land and the die land. The dent is commonly made manually with a hammer, and making it is difficult.
If the dent is not made big enough, the plug may pass the dent without being brought into the right position relative to the die; if the dent is made too large, this may lead to tube fracture. To ease the threading process at start-up it is suggested to make the plug with a conical front end. By doing so the plug becomes self-catching; that is, the frictional forces between the conical front end and the inside tube wall set the plug in the right position relative to the die during start-up. Simulations show that the self-catching plug principle works.
F. Schoenmakers – TASS
While safety legislation becomes more stringent and vehicles intended for global markets must conform to the requirements of a wider range of regulatory bodies, the cost of physically testing crash safety performance continues to rise, and vehicle safety system design is under high pressure to adopt virtual development techniques. Structural analysis and safety system optimization have traditionally been undertaken in two totally different computational environments. MADYMO is the worldwide standard occupant safety simulation software, renowned for its fast simulations, high-quality dummy models and accurate restraint system modelling techniques. LS-DYNA is known for its accurate and robust structural FE calculations, optimizing the vehicle structure for crash integrity and deceleration levels. This presentation describes the mechanism to couple MADYMO and LS-DYNA to take full benefit of the best of these two worlds and further enhance vehicle safety designs. Due to the large number of simulations required, restraint system design and optimization can be done in MADYMO, using input from FE analyses and/or tests. The MADYMO dummy plus the optimized restraint design can then be implemented in the LS-DYNA vehicle model to verify and fine-tune the restraint performance in the full-vehicle crashworthiness analyses. The use of the same MADYMO dummy model in the total design process ensures a transparent and controlled manner of judging restraint performance. Typical use case examples will be presented to show the benefit of the combination of MADYMO and LS-DYNA.
S. Kolling, M. Neubert – Giessen University of Applied Sciences, J. Subke, J. Griesemann – Biomechanics Lab
An experimental setup is presented for the material characterization of rubber-like sensomotoric insoles. This setup consists of local hardness measurements, quasi-static compression tests and dynamic testing using the 4a Impetus II pendulum test system . A correlation between the measured Shore hardness and the stress-strain relation of rubber-like materials is presented and verified in order to account for the inhomogeneous properties of insoles caused by the milling step of the manufacturing process. The dynamic response of the material is modeled by MAT_SIMPLIFIED_RUBBER/FOAM (material no. 181) in LS-DYNA  and MAT_SIMPLIFIED_RUBBER_WITH_DAMAGE, respectively. The presented modeling technique is capable of describing the entire process chain from milling of the insole up to its usage. A further experimental setup is presented for converting the insole and the human foot into a finite element model. By means of Streifenlichttopometrie (SLT, fringe projection topometry)  it is possible to record the complete surface of the object three-dimensionally, in a practically photorealistic fashion. Compared with the classic method of photogrammetry, Streifenlichttopometrie is remarkably faster (10,000 points/s instead of 1 point/s). In this paper we present a modification of this method towards the measurement of dynamic processes.
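The abstract does not state the hardness-stiffness correlation it derives. A widely used empirical relation of this kind is Gent's formula linking Shore A hardness to the small-strain Young's modulus of rubber; the sketch below illustrates the idea only and is not the correlation developed in the paper:

```python
def youngs_modulus_from_shore_a(s):
    """Gent's empirical relation between Shore A hardness `s` and the
    small-strain Young's modulus E (MPa) of a rubber-like material.
    Reasonable for roughly 20 < s < 80.  Illustrative assumption --
    NOT the paper's own Shore-hardness/stress-strain correlation."""
    return 0.0981 * (56.0 + 7.62336 * s) / (0.137505 * (254.0 - 2.54 * s))

# A harder durometer reading maps to a stiffer material:
e_soft = youngs_modulus_from_shore_a(40.0)
e_hard = youngs_modulus_from_shore_a(70.0)
```

A relation like this lets locally measured hardness values be mapped to locally varying stiffness in the insole model.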
Gaetano Caserta, Lorenzo Iannucci – Imperial College London, Ugo Galvanetto – Padova University
A 3D finite element model of an innovative composite material, configured as a layer of expanded aluminium honeycomb placed on top of a layer of expanded polystyrene foam, has been developed and validated against experimental data obtained from quasi-static tests. LS-PrePost was used to generate the model, and LS-DYNA was used to simulate the behaviour of this material under compressive loads. The objective was to reproduce the deformation mechanisms and to compare the numerical load-displacement curves with those obtained from experiments. The loading direction was chosen perpendicular to the plane of alignment of the honeycomb cell walls. Particular emphasis was given to the contact between the aluminium honeycomb cell walls and the surface of the foam. Because of the periodicity of the geometrical and material properties, these composites were modelled as a unit cell according to the principles of the micromechanical analysis of periodic structures. In addition, to further reduce computational costs, the inner symmetries of the unit cell were exploited to generate and validate a smaller unit cell model (here called a sub-cell). The results obtained from the analysis of both the unit cell and the sub-cell were compared with experimental data. Numerical results showed good accuracy even when the smaller unit cell was used.
Dr. P. K. C. Wood, Dr. C. A. Schley, Mr. R. Beaumont – IARC, University of Warwick, UK, Dr. B. Walker – ARUP, ARUP Campus, Blythe Valley Business Park, UK, Mr. T. Dutton – Dutton Simulation Ltd, Kenilworth, UK, Mr. M. A. Buckley – Jaguar Land Rover, UK
The project has developed spotweld failure models capable of industry application for a range of steel grades to support the development of automotive products and their compliance with international crash safety requirements. An important consideration in this project is the requirement to balance the cost of developing the data input to the models against their application capability in CAE-based crash simulation tools to predict spotweld failures. Shear and tension spotwelded joint specimens in a variety of automotive sheet steels with thicknesses in the range 0.8 to 2 mm have been tested at low and high speed. The joint specimens have been spotwelded under controlled laboratory conditions and simulated factory assembly conditions to compare performance and validate spotweld models for industry application. All specimens have been subjected to a heat treatment that simulates the paint bake conditioning applied to the BIW. All spotwelded specimens were tested under controlled laboratory conditions. At low rate, spotwelds are tested at 1 mm/s; these may be referred to as quasi-static tests. At high rate, spotwelds are tested at 2 m/s; these may be referred to as dynamic tests. Accordingly, test procedures were developed and refined to support quasi-static and dynamic test results. In total some two hundred tests were performed. A method to characterise the test results and calibrate models to predict spotweld failure under quasi-static and dynamic impact conditions is described.
H.H. Wisselink, J. Huetink – Materials Innovation Institute (M2i) / University of Twente
Damage and fracture are important criteria in the design of products and processes. Damage models can be used to predict ductile failure in metal forming processes. Nonlocal models avoid the mesh dependency problems of local damage models. A nonlocal damage model has been implemented in LS-DYNA using the user subroutines UMAT and UCTRL1. The implemented model will be compared with results obtained with the available option in LS-DYNA to combine *MAT_PLASTICITY_WITH_DAMAGE with *MAT_NONLOCAL. Advantages and disadvantages of the different implementations will be discussed. The user nonlocal damage model has been applied to a bending and a blanking process. Results of these simulations will be shown.
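The core idea behind a nonlocal damage model, replacing the local damage value at each integration point by a weighted average over a neighbourhood of fixed interaction length, can be sketched as follows. This is an illustration of the principle only, with an assumed bell-shaped weight, not the UMAT/UCTRL1 implementation described above or the exact *MAT_NONLOCAL weighting:

```python
import numpy as np

def nonlocal_average(values, coords, length):
    """Replace each local value by a weighted average over neighbours
    within the interaction radius `length`, using a compact-support
    bell-shaped weight.  Illustrative only -- not LS-DYNA's weighting."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for i, xi in enumerate(coords):
        r = np.linalg.norm(coords - xi, axis=1)       # distances to point i
        w = np.maximum(1.0 - (r / length) ** 2, 0.0) ** 2  # zero beyond radius
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# A localized damage spike at one point gets smeared over its neighbours,
# which is what removes the pathological mesh dependency:
pts = [[float(i), 0.0, 0.0] for i in range(5)]
smoothed = nonlocal_average([0.0, 0.0, 1.0, 0.0, 0.0], pts, length=1.5)
```

The interaction length, a material parameter, sets the width of the localization band independently of the element size.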
Silke Sommer – Fraunhofer Institut Werkstoffmechanik IWM, Frederik Klokkers – Laboratorium für Werkstoff- und Fügetechnik LWF
The lightweight potential in automobile fabrication is increasing due to the development of new high-strength steels. Realising this potential requires joining techniques adjusted to combine optimized material properties with optimized joint properties. With the development of laser welding into a series-production technology, a joining technique is available to exploit the lightweight potential of high-strength steels. The advantages of laser welding compared to conventional welding techniques are the high process velocity, the low thermal influence on the material and the flexibility in joint geometry and position [1, 2]. The flange width can be reduced, which leads to weight reduction and design advantages. Softening in the heat-affected zones beside the welds is avoided because of the concentrated heat input during laser welding, so the strength of the high-strength steels is retained in the joint. The application of laser welding for joining automotive components to the body in white is increasing . But laser welding is used less than conventional joining techniques like spot welding. The reasons are the missing knowledge about the crashworthiness of laser welds and the missing methods to model laser welds in crash simulation. The questions are how laser welds behave under crash loading: What are the experimental methods for a reliable characterisation? What are the numerical methods for an efficient simulation of the load-bearing capacity of the joints? A working group of FAT AK27 (crash and occupant simulation) initiated a research project to answer these questions. Two different weld geometries of laser-welded step welds were investigated: short linear and C-shaped welds as single parts of the step welds. The laser-welded joints were investigated using two different steel grades.
First, a low-strength steel, DC04, with a tensile strength of about 300 MPa, and second, a high-strength steel, TRIP700, with a tensile strength of about 670 MPa, are used. The laser welding of the 1.5 mm and 2.0 mm thick steel sheets was done by Daimler Forschung in Ulm, Germany. Metallographic investigations of the welds were carried out: the welds were cut and ground, and hardness profiles were measured across the weld. The average hardness was about 200 HV0.1 in the weld metal of DC04 linear welds compared to 100 HV0.1 in the base metal; for TRIP700 linear welds it was about 470 HV0.1 in the weld metal compared to 215 HV0.1 in the base metal. The width of the linear laser weld was about 1 mm and the length of a single linear laser weld was about 18 mm. Single welds were characterised using different specimen geometries to realise different loading situations such as shear, tension, bending and combined shear-tension (KSII-0°, -30°, -60°, -90°, coach-peel and shear-tension specimens). To investigate strain rate effects, the loading velocity was varied between quasi-static and 1.5 m/s. The load-bearing capacities of the DC04 linear laser weld under shear-tension and KSII-0° loading are about 10 % lower than under tension (KSII-90°) loading. The linear laser weld is shear loaded in the weld direction with the KSII-0° specimen and perpendicular to the weld direction with the shear-tension specimen. In both shear loading cases the weld fails through interfacial fracture. In all other loading situations (KSII-90°, -30°, -60° and coach-peel) the welds were buttoned or peeled out of the connection. For the linear welds in TRIP700 some changes in fracture mode occurred: here, interfacial fracture also occurred in other loading situations, especially under bending loading in the coach-peel specimen test. As a result of these changes in fracture mode, the scatter of the load-bearing capacity is higher than for the DC04 steel.
For investigation of the local loading situation, a detailed model with solid elements for the sheets and the weld is used, containing different material zones for base metal, heat-affected zone and weld metal. The result of these detailed simulations was that the loading along the weld line is not homogeneous. For example, under tension loading (KSII-90°) there are high normal stresses located in the base metal at the ends of the linear weld. This distribution of loading is also seen in a simplified model, in which the 18 mm long laser weld is modelled using five connected solid elements and the metal sheets are modelled with shell elements. The elastic-plastic material model *MAT_024 without failure strain is used for the shells. The cohesive material model *MAT_ARUP_ADHESIVE is used for the solid elements of the laser weld, defined here as cohesive elements. The fracture parameters of *MAT_ARUP_ADHESIVE are determined by simulating the shear-tension and KSII-90° tests, taking into account the local distribution of shear and normal stresses in the weld. The exponent in the fracture law is optimised using the test results of KSII-30° and -60°. It is then possible to model the coach-peel and KSII-0° tests, which were not used for parameter determination. However, this is only possible for the laser welds in DC04. The same strategy leads to an overestimation of strength of nearly 100 % in the coach-peel simulation for TRIP700 laser welds because of the changes in fracture mode. To take this into account, a separate fracture criterion for bending is necessary, and a material model for spot welds, *MAT_SPOTWELD_DAIMLERCHRYSLER, was used successfully, taking the energy absorption behaviour into account with the new option DG_TYP=3. For verification of the laser weld modelling, component tests are performed using a so-called T-specimen with four or six critically loaded laser welds, depending on the loading direction.
The simulations of the component tests with specimens made of DC04 have shown good agreement with the test results using the cohesive material model for the simplified laser weld model. Simulating the TRIP700 component tests with the spot weld model demonstrated the basic necessity of modelling the energy absorption.
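The kind of fracture law described for the cohesive weld elements, a power-law interaction of normal and shear tractions whose exponent is calibrated from the mixed-mode (KSII-30°/-60°) tests, can be sketched as follows. The strength values, the exponent and the compression cut-off are illustrative assumptions, not the calibrated parameters of the study:

```python
def mixed_mode_failure(sigma_n, tau, s_n, s_s, exponent=2.0):
    """Power-law interaction criterion of the kind used for cohesive
    weld elements: failure when (sn/Sn)^a + (tau/Ss)^a >= 1.
    Normal compression (sigma_n < 0) does not contribute to failure.
    All parameter values here are hypothetical."""
    f = (max(sigma_n, 0.0) / s_n) ** exponent + (abs(tau) / s_s) ** exponent
    return f >= 1.0

# Pure tension at the normal strength fails; half of each strength
# combined does not (with exponent a = 2: 0.25 + 0.25 = 0.5 < 1):
assert mixed_mode_failure(400.0, 0.0, s_n=400.0, s_s=250.0)
assert not mixed_mode_failure(200.0, 125.0, s_n=400.0, s_s=250.0)
```

Calibrating the exponent against intermediate-angle tests and then checking the pure-shear and peel cases not used for calibration mirrors the validation strategy described in the abstract.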
N. Eches, D. Cosson – Nexter Munitions, Q. Lambert – C.T.A. International, A. Langlet – C.T.A. International / Université d’Orléans, J. Renard – Université d’Orléans
This paper deals with the development of a finite element model of a 40 mm Case Telescoped Ammunition and its associated gun, able to describe the in-bore travel of the projectile during firing. This work is part of a PhD thesis, supervised by CTA International and the PRISME Institute, dedicated to the study of the parameters relevant to the accuracy of the kinetic energy round. In order to conduct efficient parametric studies, they asked Nexter Munitions to take the lead on the finite element simulations of the in-bore travel, which is expected to be one of the phases of the firing event contributing most to ammunition consistency. Nexter Munitions has set up a gun/ammunition model, applying the principles used for the same kind of work in 120 mm ( and ). The 40 mm study added some issues such as the progressive rifling of the gun and the slipping obturator, which reduces the projectile spin rate. The model is compared to actual firings carried out by CTAI, through the strains measured on strain gauges lying on the barrel and on an instrumented penetrator fitted with an on-board data recorder. The paper focuses on the correlation between the strain gauge measurements and the simulation of the exact test configuration. It was necessary to run several configurations, with different contact logic and friction coefficients, to match simulation and experiment.
Gaël Le Blanc, Jacques Petit, Pierre-Yves Chanal – Centre d’Etudes de Gramat, Pierre L’Eplattenier – LSTC, Livermore, Gilles Avrillaud – ITHPP
For several years the Centre d’Etudes de Gramat (CEG) has been studying the behaviour of materials by means of experimental devices using high pulsed power technologies. Among them, GEPI is a pulsed power generator devoted to ramp wave (quasi-isentropic) compression experiments in the 1 GPa to 100 GPa pressure range. It may also produce non-shocked high-velocity flyer plates in the 0.1 km/s to 10 km/s velocity range. The basic principle relies on a strong current circulating in electrodes. This current generates within the electrode a magnetic pressure wave (several GPa via the Laplace forces) and a strong rise in temperature (several thousand K) due to the Joule effect. Depending on that temperature, materials may locally undergo phase transitions such as solid to liquid or liquid to vapor. Modelling a GEPI shot requires a coupled electromagnetic/mechanical/thermal 3D solver to study all the physical phenomena. CEG has selected LS-DYNA because a new electromagnetism solver is coupled to the historical solvers (mechanical and thermal) in LS-DYNA beta version 980. However, there is at the moment no equation of state with phase transitions available in the LS-DYNA standard version. For this reason the GRAY multi-phase EOS, developed at LLNL, is implemented as a user subroutine in LS-DYNA. The GRAY EOS allows phase transitions to be taken into account by means of energy thresholds. In this paper, the GEPI device is briefly described, as well as the LS-DYNA EMAG solver. The GRAY EOS is described and its implementation is discussed. Examples of applications are presented, in particular the modelling of a GEPI experiment involving local liquefaction of the electrodes. The numerical free surface velocities are compared to experimental measurements. The liquefaction process is analyzed and compared to post-mortem observations of the electrodes. To conclude, the model limitations and potential improvements are presented.
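The energy-threshold mechanism by which a multi-phase EOS decides the local phase can be sketched very crudely: the specific internal energy deposited by Joule heating is compared against melting and vaporization thresholds. This is only a schematic of the idea; the threshold values below are hypothetical, and the actual GRAY EOS handles mixed phases and pressure dependence far more carefully:

```python
def phase_from_energy(e, e_melt_start, e_melt_end, e_vap):
    """Crude phase selection by specific internal energy thresholds,
    in the spirit of a multi-phase EOS with energy-based transitions.
    Thresholds and the mixed-phase treatment are illustrative only."""
    if e < e_melt_start:
        return "solid"
    if e < e_melt_end:
        return "solid+liquid"   # partial melting between the thresholds
    if e < e_vap:
        return "liquid"
    return "liquid+vapor"

# Hypothetical thresholds (J/kg) for an aluminium-like electrode material:
state = phase_from_energy(6.0e5, e_melt_start=4.0e5, e_melt_end=8.0e5, e_vap=1.1e7)
```

In the GEPI simulations it is exactly this local comparison that flags the electrode regions undergoing liquefaction.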
Georgios Korbetis, Dimitrios Siskos – BETA CAE Systems S.A.
Simple optimization techniques may serve well for the improvement of product performance in early concept design phases. In final phases, though, optimization problems become more complex, with many variables and multiple optimum solutions. This leads to the need for the deployment of multi-disciplinary optimization techniques, where many different load cases and analysis types, such as crash, NVH, CFD etc., are combined to achieve the optimum solution. The combination of the ANSA CAE pre-processor, LS-OPT and the mETA post-processor offers an efficient and reliable tool for solving multi-disciplinary optimization problems. In such a process, starting from a common initial model, multiple outputs for different load cases and disciplines can be defined in ANSA. Design variables that handle model shape and parameters are controlled in a centralized manner by the dedicated Optimization Task tool that is integrated in the core ANSA functionality. Furthermore, the newly released coupling between LS-OPT and mETA provides a valuable tool for the definition of multi-disciplinary optimization scenarios, as mETA is able to extract responses from numerous solvers and load cases and feed them to LS-OPT.
L. Adam, A. Depouhon, R. Assaker – e-Xstream engineering
This paper deals with the prediction of the overall behavior of polymer matrix composites and structures, based on mean-field homogenization. We present the basis of the incremental mean-field homogenization formulation and illustrate the method through the analysis of the impact properties of fiber reinforced structures. The present formulation is part of the DIGIMAT  software and its interface to LS-DYNA, enabling multi-scale FE analysis of these composite structures. Impact tests on glass fiber reinforced plastic structures using DIGIMAT coupled to LS-DYNA allow analysis of the sensitivity of the impact properties to the polymer properties and to fiber concentration, orientation and length. For such impact applications the material models used for the polymer matrix are usually based on nonlinear elasto-viscoplastic laws. Failure criteria can also be defined in DIGIMAT at macroscopic and/or microscopic levels and can be used to predict the stiffness reduction prior to failure (e.g. by using the First Pseudo Grain Failure model). These failure criteria can be expressed in terms of stresses or strains and use strain-rate-dependent strengths. Finally, the interface to LS-DYNA, available for the MPP version, will be used to run such multi-scale FE simulations on Linux DMP clusters. The application thus involves: LS-DYNA MPP to solve the structural problem; DIGIMAT-MF as the material modeler; the strongly coupled DIGIMAT to LS-DYNA MPP interface to perform nonlinear multi-scale FEA; and DIGIMAT-MF composite material models based on an elasto-viscoplastic material model for the matrix, an elastic material model for the fibers together with the fiber volume content, fiber length and fiber orientation coming from an injection code, and failure indicators computed at the microscopic level.
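To illustrate what homogenization does in the simplest possible terms: the elementary Voigt and Reuss mixture rules bracket the effective stiffness of a two-phase composite, and mean-field schemes such as the Mori-Tanaka method used in incremental formulations fall between these bounds. This sketch is only the textbook illustration of the concept, not DIGIMAT's incremental formulation:

```python
def voigt_reuss_bounds(e_matrix, e_fiber, vf):
    """Elementary upper (Voigt, equal strain) and lower (Reuss, equal
    stress) bounds on the effective Young's modulus of a two-phase
    composite with fiber volume fraction `vf`.  Textbook illustration
    of homogenization only."""
    e_voigt = vf * e_fiber + (1.0 - vf) * e_matrix
    e_reuss = 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)
    return e_voigt, e_reuss

# 30 % glass fibre (72 GPa) in a polymer matrix (2 GPa):
upper, lower = voigt_reuss_bounds(2.0, 72.0, 0.30)
```

The wide gap between the two bounds for stiff fibers in a soft matrix is precisely why orientation-aware mean-field models are needed for injection-molded parts.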
Dr. Ahmed Elmarakbi, Mr. Niki Fielding – University of Sunderland, United Kingdom
This paper is an investigation into the design of an energy-absorbing street pole with respect to the frontal impact of a vehicle. Design engineers are now looking at other ways to improve vehicle occupant safety by focusing on the advantages that can be achieved by improving the crashworthiness of street furniture. The axial crush behaviour of metal tubes is investigated along with a number of variables such as cross-sectional shape, shell thickness, material, and the effect of impact velocity on the tubes. Different simulations are carried out on the effects of embedded crumple initiators placed at various heights from the top of the tube, to determine the achievable peak load reduction and the effect on the energy absorption of the tube. With the desired variables for the design of an energy-absorbing tube established, the tubes are placed at 90 degrees to the base of the model street pole to modify the pole design. Simulations of the frontal impact of a vehicle against a street pole are analysed and compared to those of the energy-absorbing street pole concept. The studies are carried out by numerical simulation with the explicit finite element code LS-DYNA. Results compare the absorbed energy and the deflection for each variant, and an optimum design for the pole structure which improves vehicle crashworthiness is recommended.
C. T. Wu – LSTC
In this presentation, an update on the LS-DYNA EFG method for solids and structures analysis will be given. Several features were developed in the past two years to solve specific challenging problems as well as to improve efficiency. This talk will emphasize three new features: an adaptive meshfree scheme based on a local maximum entropy approximation for metal forging and extrusion analysis, a semi-Lagrangian formulation for foam materials under severe compression, and a discrete meshfree approach for the failure analysis of brittle materials. Several practical examples are included to demonstrate these capabilities.
Philip Ho – LSTC
The new LS-PrePost 3.0 will be introduced here. A completely redesigned graphical user interface has been implemented in LS-PrePost 3.0. Toolbars and icons are used for the main menu system to replace the old text-based button system; the icons can be shown with or without text. The new interface provides the maximum possible graphical area for model rendering while at the same time allowing users to define their own toolbar with frequently used icons grouped as they like. Besides using icons from the toolbars, a pull-down menu system can also be used to reach the function interfaces. Popup windows are used for each functional operation, and only one functional operation is active at a time. Users can easily switch between the old and new interfaces if they do not feel comfortable using the new one, and an old-to-new interface button system has been implemented to transition users from the old interface to the new interface. Another major feature of LS-PrePost 3.0 is the newly developed geometry processing engine, based on Open CASCADE Technology 6.3. LS-PrePost 3.0 supports basic geometry entities such as lines, surfaces and solids. It supports shape fixing and reshaping, such as hole fixing, small edge removal, vertex repositioning and deletion, small face removal and face extension. It also supports face stitching to provide better meshing results in the auto mesher. Geometry data can be imported via the IGES or STEP file format, and modified geometry can be exported in the IGES file format. Surfaces can also be created from existing mesh using LSTC’s own reverse engineering module. Besides the new interface and geometry processing engine, new applications have been added to LS-PrePost 3.0, such as roller hemming job setup and LS-DYNA ALE job setup. An application framework has been created so that new applications can easily be added in the future.
Juan Fernández – Takata-Petri AG
An accurate Out-of-Position (OoP) simulation will be a milestone for the development of restraint systems, as it would save hundreds of expensive hardware tests. OoP load cases are currently required by US legislation and, as an in-house specification by many car manufacturers, for other markets. But simulating OoP is a difficult issue, as several challenges have to be addressed, for example an improved dummy model validated against new component tests and an accurate modeling of the folding/unfolding of an airbag including an appropriate gas model. One of the missing pieces for this simulation is the accurate representation of the inflator mass flow. The current method to characterize gas inflators is the tank test. This method has big advantages, being cheap, reproducible and independent of the inflator geometry. The tank test shows, however, some important drawbacks: the bag inflation is not similar with regard to the volumetric work of the gases and the initial conditions; a uniform and immediate pressure distribution must be assumed; the measurement (tank pressure) must be differentiated to obtain the inflator gas mass flow, resulting in a higher measurement error; and the heat losses are high and not uniform during the process.
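The differentiation step criticised above can be made concrete. Under the (simplistic) assumptions of an ideal gas and a constant gas temperature in the tank, the mass in the tank is m = pV/(R T), so the inflator mass flow follows from the time derivative of the measured pressure; any noise in the pressure trace is amplified by the numerical differentiation. A sketch, assuming these idealizations:

```python
import numpy as np

def mass_flow_from_tank_pressure(t, p, volume, r_specific, temperature):
    """Recover inflator mass flow from a tank-test pressure trace:
    ideal gas at (assumed constant) temperature T gives m = pV/(R T),
    hence dm/dt = (V / (R T)) * dp/dt.  The numerical differentiation
    here is exactly the error-amplifying step the text criticises."""
    dpdt = np.gradient(np.asarray(p, dtype=float), np.asarray(t, dtype=float))
    return volume / (r_specific * temperature) * dpdt

# A linear pressure ramp in a 60-litre tank yields a constant mass flow:
t = np.linspace(0.0, 0.05, 6)                 # s
p = 1.0e5 + 2.0e6 * t                         # Pa
mdot = mass_flow_from_tank_pressure(t, p, volume=0.06,
                                    r_specific=287.0, temperature=300.0)
```

In reality the gas temperature in the tank is neither constant nor uniform, which is one reason the tank test characterization is only an approximation of the in-bag behaviour.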
Clemens Barthel, Till Clausmeyer, Bob Svendsen – University of Technology, Dortmund, Germany
Sheet metal forming is one of the most important manufacturing processes. Computer simulation has become the tool of choice for modeling such processes and also for predicting the springback of the final part. The talk will deal with two different metal forming processes, namely draw bending and deep drawing of a cup. In the draw bending case a metal strip is clamped into a testing device in such a way that it is bent over a roller. The strip is then pulled along so that it undergoes bending and back-bending while passing the roller. Subsequently the strip is released, resulting in springback. Due to the geometry of a strip, springback is significant in this process. Since during the draw bending process the state of stress is not the same in the middle of the strip and at the outer edge, high residual stresses remain in the workpiece. Though taken into account in the finite element model, contact is here of minor importance. Contact and also cross hardening play a more significant role in the second process. During deep drawing, compression takes place in the circumferential direction, and in the radial direction the material undergoes bending and unbending; therefore orthogonal loading path changes occur. Some metals exhibit an increase of the yield stress after orthogonal strain path changes, the so-called cross hardening. This is also the case for the air-hardening steel LH 800, which is the material for both processes. To take the cross hardening into account one needs a material model which goes beyond standard isotropic and kinematic hardening. Such a model has been implemented via the user-material interface of LS-DYNA and ABAQUS. A comparison with standard hardening models as well as a comparison of the results for the two different FE codes will be given in the talk. Both processes were also investigated experimentally, which facilitates a comparison of simulation results and test results.
The parameter identification was done with LS-OPT and is the subject of the contribution “Identification of an advanced hardening model for single phase steels” by Muhammad Noman, which will also be presented at this conference.
S. Stahlschmidt – DYNAmore GmbH, A. Hirth – Daimler AG
The BioRID v2.5 model has reached a very refined validation state. In some new tests for further validation of the model, a very strong scatter can be observed in some major signals. Some of these signals are used to calculate injury criteria, which in turn determine the quality of a seat in rear crash scenarios in consumer tests. This paper describes the new validation test setup and gives an overview of the latest validation state of the BioRID model. Furthermore, the scatter in the tests is shown, and possible sources of this scatter have been studied. The main focus is on the parameters of the BioRID model which have an influence on the neck load cell signals.
H. Daiyan, F. Grytten, E. Andreassen, R.H. Gaarder, E.L. Hinrichsen – SINTEF Materials and Chemistry, O.V. Lyngstad – Plastal AS, H. Osnes – University of Oslo / Simula Research Laboratory
Simulation of ductile polymers subjected to impact loading has become an important topic [1, 2], especially for automotive components related to passenger and pedestrian safety. The aim of our work is to establish and validate numerical models for impact response, and this presentation will focus on a study of impact on injection molded polypropylene plates. SAMP-1 (Semi-Analytical Model for Polymers) was selected as the constitutive model in LS-DYNA. This model takes tabulated data from experiments as input, and includes strain rate effects, pressure-sensitive plasticity, plastic volume dilatation and damage. As the material behavior is complex, and data for large strains are needed, a major task is to obtain reliable data from material tests. Uniaxial tension and uniaxial compression tests were performed to calibrate the constitutive model. Three-dimensional digital image correlation (3D-DIC) with two cameras and stereo vision was used to determine full-field displacements during uniaxial tensile tests, in order to quantify plastic volumetric strains and to obtain true stress-strain curves (the isochoric assumption is invalid for the present material). Uniaxial compression tests were performed with short specimens in order to avoid buckling. Falling-weight impact of plates (centrally loaded, circular clamping) and bars (three-point bending), and quasi-static three-point bending of bars, were simulated. Measured force vs. displacement, and permanent deformation of plates, were compared to numerical predictions. Figure 1 shows results for falling-weight impact of a 4 mm thick plate. SAMP-1 is suitable for such materials, but improved material data are needed for e.g. strain rate effects and stress state effects.
Alon Brill – Netvision, Paul A. Du Bois – Consulting Engineer
The blast wave generated by the energy released when a mine is detonated travels through the air at supersonic velocity. It has a high-pressure front and hits a vehicle in about 100 μs, giving rise to overpressure on it. The blast created by the expanding explosion products and the air moving behind the shock front results in dynamic pressure on a vehicle. Soil ejecta, thrown up when mines are buried in the ground, significantly augment the impact of the blast wave. Survivability of military vehicles with respect to mine blast loading has two aspects. First, structural survivability must be ensured, meaning that the vehicle's armour must not be penetrated and the passenger compartment must remain intact; to prevent the overpressure resulting from mine explosions from acting directly on the crew and causing primary injury (lung damage, ear drum puncture), there must be no air paths through which it could propagate. The second aspect is occupant survival, which is mainly a function of acceleration levels in the occupant pelvic, spine and head region due to the initial acceleration pulse. To attenuate the transmission of stress waves, crew seats should be attached to the hull sides or roof and incorporate damping materials. To avoid being struck by the bulging floor, the crew's seats should be well above it, and their feet should rest not on it but on an inner floor spaced from the hull bottom plate or on raised foot rests. A further reduction can be achieved by designing the vehicle to limit its motion due to explosions. The motion of vehicles due to mine explosions is related not only to their weight but also to their size and shape, because of the impulse which acts on them. The impulse is a function of the pressure, angle and duration of the blast and of the vehicle's projected area, and is minimized by a “streamlined” hull with a V-bottom and no sponsons or wheel wells. Another way to keep acceleration levels low is to use energy-absorbing seat systems, which however come at substantial cost.
In this study numerical simulation was used to determine the critical mine blast loads in terms of an equivalent charge of TNT for vehicles of different mass. Although many simplifying assumptions were made (such as a rigid vehicle capsule), the simulation allows a fast estimate of the need for energy absorbing seat systems for a given vehicle class under a variety of loading scenarios. The study’s conclusions were largely confirmed by a series of full scale experiments that were performed 6 months after the start of the simulation work.
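As a rough illustration of how the impulse depends on peak pressure and blast duration, one can integrate an idealized Friedlander overpressure history. The waveform, the decay constant and the example values are textbook assumptions for demonstration, not data from the study:

```python
import math

def friedlander(t, p_s, t_d, b=1.0):
    """Idealized Friedlander overpressure history (assumed waveform):
    peak overpressure p_s decaying to zero over positive-phase duration t_d."""
    if t < 0.0 or t > t_d:
        return 0.0
    return p_s * (1.0 - t / t_d) * math.exp(-b * t / t_d)

def specific_impulse(p_s, t_d, b=1.0, n=10000):
    """Specific impulse = time integral of overpressure (trapezoidal rule, Pa*s)."""
    dt = t_d / n
    return sum(0.5 * (friedlander(i * dt, p_s, t_d, b)
                      + friedlander((i + 1) * dt, p_s, t_d, b)) * dt
               for i in range(n))

# e.g. 500 kPa peak overpressure over a 0.1 ms positive phase
i_sp = specific_impulse(500e3, 1e-4)
```

Multiplying the specific impulse by the vehicle's projected area gives the momentum transferred, which is why hull shaping that reduces the effective projected area lowers the global vehicle response.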
Karl Schweizerhof – Universität Karlsruhe / DYNAmore GmbH, Stephan Kizio – Universität Karlsruhe
Adaptive finite element analysis in structural analysis has reached a fairly mature status; however, in practice only fairly little is visible in engineering applications. This contribution gives an overview of some major aspects, with a specific focus on the structural dynamics of shell-like structures. Adaptive analysis in structural mechanics has mainly focused on static problems, where a large number of linear and nonlinear tasks in 3D continuum problems as well as in shell problems have been tackled. The developed procedures show considerable improvements for the executed numerical problems. In statics, on one side, the competition among the methods is between low- and high-order approximation – the h- and p- or hp-enhancement – and, among error estimation, between global and local estimation. For a good overview of the subject across a large number of problems, and for some mathematical background, the reader is referred to the literature, where the differences between the various approaches are also presented for some simple benchmark problems. More recent developments are concerned with time-dependent problems; thus we are focusing here on structural dynamics. In statics – linear or nonlinear – the spatial error distribution is the dominating quantity, whereas in dynamics both the spatial error distribution over the complete considered time range and the error due to time integration must be considered. Here the consideration of dual problems – known in statics from the Betti-Maxwell principle and extended here with the corresponding reciprocity idea, the Graffi theorem – allows checking the error in specific quantities at certain points in time, the so-called goal-oriented or local error computation. On the basis of such error estimations, in principle, the adaptive modification of the finite element mesh as well as of the time step is possible.
However, while in a semi-discretization approach the time step can be fairly easily adjusted – which is frequently done in the so-called explicit FE programs using the central difference scheme – the modification of the finite element mesh introduces major problems. First, the data have to be properly mapped between meshes, avoiding non-physical artifacts, and second, the dual error estimation scheme has to take into account different time steps and meshes. Both actions introduce further errors into the analysis which can hardly be judged. In addition, the effort for the numerical analysis, concerning the computation as well as the required storage, becomes overly large, leading to the conclusion that adaptive analysis of real-world problems based on dual error estimation cannot be handled – at least with the current computer environment. Thus the focus of this contribution is, first, on a discussion of the importance of different parts of the error estimation and of the adaptive procedure and, second, on how the major ingredients of the adaptive duality-based analysis can still be used for practical engineering problems – restricted here to shell problems – regaining efficiency. For some classes of shell-type problems, simplifications can be suggested while still improving the quality of the analysis considerably by adaptive procedures. In structural dynamics eigenmodes and eigenvalues are also important; thus improvements concerning these are briefly discussed as well. Obviously the dominating quantity for achieving good results with finite element methods in structural mechanics is a consistently refined mesh; not unexpectedly, for the high-frequency excitations of interest to engineers, almost uniformly refined meshes with high mesh densities are required.
It is shown how the developed schemes can be applied to homogeneous problems, and the limits concerning real-world engineering models – which include a large number of violations of standard continuum mechanics – are presented. Also the procedures implemented in LS-DYNA for adaptive analysis are discussed against the background set above. Further, some model adaptivity is discussed for large structural computations in which some parts are – at least in some early stages of the analysis – hardly deforming. This effect can be used to introduce rigid bodies into the analysis; the question then arises how this can be handled.
B. Feng, J. Hallquist – LSTC
Constitutive equations which may be used for modeling dummies during impact are presented. In addition, a unified constitutive equation for dummies and rubber-like materials is presented. The new constitutive equation applies to elastic, viscoelastic, incompressible as well as compressible materials. Some special cases, e.g. neo-Hookean, Mooney-Rivlin, Ogden incompressible, Ogden compressible, and many currently used constitutive equations, are given to demonstrate the versatility of the new constitutive equation. The material constants for the constitutive equation can be determined from uniaxial and biaxial tests. A constitutive equation for chronorheological materials that describes the aging and viscoelastic behaviors of elastomers is presented. A recurrence formula that saves computing time and requires no data storage space for time-dependent physical quantities in viscoelastic constitutive equations is also mentioned briefly. These constitutive equations are used in numerical analyses for selecting materials to improve the performance of a dummy model used in car-crash simulations. Some results are shown. Future work is mentioned briefly.
Thomas Hofer – Altair Engineering GmbH, Peter Karlsson – Saab Automobile AB, Niclas Brännberg – Altair Engineering AB, Lars Fredriksson – Altair Engineering GmbH
Validation of occupant lower-leg injury performance is a difficult procedure due to the complex interaction between occupant and vehicle structure. As a starting point, a carefully validated structural model is crucial to ensure accurate loading of the occupant model in terms of acceleration and applied forces. However, even after a tedious validation of structural performance and occupant environment, the calculated tibia values might still deviate from the test results. Some of these deviations may be caused by restrictions in the occupant model fidelity. This becomes evident in an offset-crash simulation (EuroNCAP), since the complex force behaviour (x-, y-, and z-components) does not seem to be reproduced by existing occupant models to 100% satisfaction. Especially the dummy joint representations for the ankle, knee and pelvis might cause deviations between the occupant model and the test. Modifications of joint parameters were made to demonstrate a significant potential for improving the fidelity of the occupant model and to bring the tibia injury criteria closer to the test results. This paper presents a detailed numerical analysis to point out the discussed difficulties and proposes possible approaches for a more realistic prediction of tibia values in the case of the EuroNCAP front crash. The proposed changes will not replace the need to permanently improve the standard FE dummies, but should be seen as part of the discussion on improving the fidelity of the standard dummies in the future.
Marco Perillo, Vito Primavera, Luca Fuligno – EnginSoft SpA, Giulia Fabbri, Casper Steenbergen, Nicolò Pasini – Automobili Lamborghini SpA
Experimental quasi-static and dynamic tests were conducted on different types of advanced material samples, such as composite sandwiches, in order to derive both mechanical and numerical input parameters for LS-DYNA material models. The characterization of models intended to reproduce the behaviour of real materials is of great importance for accurately simulating complex phenomena such as crash tests and impact events. This work deals with an innovative procedure aimed at calibrating the constitutive parameters of LS-DYNA advanced material models and using them for prediction, design optimization and robustness analysis, hence reducing the need for further expensive experimental tests. This kind of approach also makes it possible to understand the influence of physical and geometrical variables on the dynamic structural response of composites, and to obtain improved solutions for industrial case studies. In more detail, the available experimental data were imported into the modeFRONTIER Process Integration and Design Optimization software. An efficient stochastic optimization algorithm performed the calibration of the mechanical and numerical parameters of the existing LS-DYNA models in a fully automated process. Such models can then be handled by modeFRONTIER to steer LS-DYNA simulation campaigns, improving the design of composite and sandwich laminates. Any kind of free parameter to be investigated can be included in such a process, as well as the constraints to be respected and the multiple objectives to be pursued. A short description of the most innovative techniques for doing so is given. An experimental-numerical procedure example from the Automobili Lamborghini Composite Technical Department is shown.
Laszlo Farkas, Cédric Canadas, Stijn Donders, Tom Van Langenhove, Nick Tzannetakis – LMS International, Johan Tielens, Danny Schildermans – PUNCH Metals N.V.
This paper deals with the design and optimization of a vehicle bumper subsystem, which is a key scenario for vehicle component design. More than ever before, the automotive industry operates in a highly competitive environment. Manufacturers must deal with competitive pressure and with conflicting demands from customers and regulatory bodies regarding the vehicle's functional performance and its environmental and societal impact, which forces them to develop products of increasing quality in ever shorter time. As a result, bumper suppliers are under pressure to increasingly limit the weight, while meeting all relevant design targets for crashworthiness and safety. LMS Virtual.Lab offers an integrated platform to design engineers who are challenged with the multi-attribute design of mechanical structures. For the vehicle bumper subsystem of interest, engineers can start from the CAD design, define a generic assembly model, and define multi-attribute simulation models and meshes as well as multiple analysis cases. The entire process is fully associative, enabling automated iteration of design and model changes, which is key to an efficient optimization process with OPTIMUS. The structural bumper model is created by parameterizing its geometric and sectional properties. A Design of Experiments (DOE) strategy is adopted to efficiently identify the most important design parameters. Subsequently, an optimization is performed on small-sized Response Surface Models (RSM), in order to minimize the vehicle bumper weight while meeting all design targets.
Yih-Yih Lin – Hewlett-Packard Company, Jason Wang – Livermore Software Technology Corporation
Using crash simulation models, we investigate the multicore performance of the newly developed hybrid LS-DYNA, a method whose speedup arises from both shared-memory and message-passing parallelism. Theoretically, the hybrid method gains performance advantages over the traditional message-passing-parallel (MPP) LS-DYNA for two reasons. First, the addition of shared-memory parallelism to the message-passing parallelism reduces the number of messages and their sizes dramatically, which in turn reduces latency and bandwidth requirements on the interconnect. Second, the same addition enhances spatial and temporal locality for both code and data accesses, which in turn allows the size-limited cache to work more efficiently. Armed with this theory, we characterize the performance of the hybrid method with respect to problem size, core count, core placement and interconnect speed, thus providing users with guidance on when and how to use the hybrid method efficiently. We also attempt to verify the theory by examining message patterns and the effect of core placement.
David Salway, Paul-André Pierré – GRM Consulting Ltd., UK, Martin Liebscher – DYNAmore GmbH, Germany
With the ever increasing demand for the efficient use of materials to reduce manufacturing costs and product mass, the use of optimisation techniques has become commonplace in CAE. The optimisation techniques used in the non-linear domain and those used for linear-domain problems have until recently been distinctly separate philosophies. For instance, topology optimisation would be used with linear static analyses, but could not be applied to non-linear problems. VR&D GENESIS provides a fully integrated linear static analysis and optimisation solver code. GRM have developed an interface so that GENESIS can be coupled to non-linear problems solved in LS-DYNA. The coupling allows the advanced analysis capabilities found in LS-DYNA to be combined with the topology, topometry and shape optimisation techniques of VR&D GENESIS. The paper outlines the processes already developed by GRM Consulting Ltd to allow this coupling, and the most recent developments. These developments allow the analysis methods available in LS-DYNA to be driven by the optimisation methods available in VR&D GENESIS. The latest developments have taken the method from a research project to a code suitable for use in production-level optimisation tasks. The practical examples are intended to show how the use of this method allows non-linear domain optimisation to consider thousands of design variables, non-linear and linear load cases, whilst reducing the number of function calls required to converge.
Tanja Clees, Daniela Steffes-lai – Fraunhofer Institute for Algorithms and Scientific Computing SCAI, Martin Helbig – Fraunhofer Institute for Mechanics of Materials IWM, Karl Roll, Markus Feucht – Daimler AG
During the fabrication of products, important material and process parameters, geometry and also external influences (e.g. room temperature) can vary considerably. It is known that they can have a substantial, even critical influence on the quality of the resulting products. Therefore, software tools and strategies supporting an efficient and thorough analysis of sensitivity, stability and robustness aspects as well as a multi-objective robust design-parameter optimization are necessary. This is especially true for parts of a car with a potentially critical influence in crashes, for instance the B-pillar, which consists of several formed and connected blanks. We propose a new strategy, built upon several software tools as well as new material models, supporting an analysis of variations for the process chain from forming to crash. The strategy roughly consists of the following parts and software tools: forming simulation (LS-DYNA) – parameter sensitivity analysis (DesParO) – reduction/compression of input and output (DesParO) – mapping (SCAImapper) – crash simulation (LS-DYNA) – stability analysis (DIFF-CRASH) – sensitivity analysis (DesParO) – reduction/compression of input and output (DesParO) – multi-objective robust design-parameter optimization (DesParO) – comparisons with physical experiments (as far as available). Efficient, novel methods are proposed and employed for the sensitivity analysis of simulation results on fine grids depending on parameter variations, for a reduction of the design space and of the simulation results, and for mapping an appropriately constructed database of the most influential trends, comprising not only thicknesses and strains but also damage information. Including the latter turns out to be a crucial point. Results are shown, in particular, for a ZStE340 metal blank of a B-pillar. Comparisons to experiments demonstrate the abilities of the proposed strategy.
Dr. Bazle A. Gama, Prof. John W. Gillespie Jr. – University of Delaware, Dr. Travis A. Bogetti – US Army Research Laboratory
Progressive damage of plain weave S-2 Glass/SC15 composites under in-plane tension, compression and shear, through-thickness tension and compression, and transverse interlaminar and punch shear loading is presented for a unit single element using the MAT162 composite damage model in LS-DYNA. While the detailed formulation of the MAT162 material model can be found in the Keyword user's manual, the main objective of this paper is to describe a methodology for determining a set of softening parameters using a unit single-element analysis. The analytical formulation of post-yield damage softening is presented with the stress-strain behavior of a single element under different loading conditions. Since MAT162 uses four different softening parameters – AM1 and AM2 for fiber damage along material directions 1 and 2, AM3 for fiber shear and crush, and AM4 for matrix crack and delamination – the choice of a set of these four AM values is not obvious. The stress-strain plots presented in this paper will serve as an additional user guide for selecting a set of AM values for a specific material and a specific application. Unlike the linear-elastic design of composite structures with max-stress/strain or quadratic failure theories, modeling the post-yield softening behavior allows one to simulate the energy-absorbing capabilities of a composite structure. It is important to choose a set of AM values which represents the material's behavior, through single-element analyses and validation of the model with other quasi-static and dynamic experiments. A poor choice of AM values may lead to the prediction of either higher or lower energy absorption capabilities of the composite structure. In order to accomplish this objective, the single-element analysis is presented with appropriate loading and boundary conditions. Model validation studies simulating static and dynamic experiments can be found in the literature, and further studies will be presented elsewhere.
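The role of an AM-type softening exponent can be sketched with a one-dimensional exponential damage law of the Matzenmiller type. This is an illustrative stand-in with our own parameter names; the exact MAT162 expressions should be taken from the LS-DYNA keyword manual:

```python
import math

def softened_stress(eps, E=1.0, eps0=1.0, m=4.0):
    """1-D elastic response with exponential post-peak damage softening.

    eps0 marks damage initiation; m plays the role of a MAT162 AM exponent:
    a larger m gives a steeper (more brittle) post-peak drop. Illustrative only.
    """
    if eps <= 0.0:
        return 0.0
    r = eps / eps0
    # damage variable, continuous at r = 1 (phi = 0 at initiation)
    phi = 1.0 - math.exp((1.0 - r ** m) / m) if r > 1.0 else 0.0
    return (1.0 - phi) * E * eps

# A larger AM-like exponent means a faster loss of load-carrying capacity
# past the peak, hence lower absorbed energy in the softening regime.
stress_ductile = softened_stress(1.5, m=2.0)
stress_brittle = softened_stress(1.5, m=8.0)
```

Plotting such curves for a range of exponents is essentially the single-element exercise the paper recommends before committing to a set of AM values.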
Sayaka ENDOH, Takahiko MIYACHI, Yasuyoshi UMEZU – JSOL Corporation
One of the important issues in the correlation of crash analyses is that press-forming effects have a large influence on the result of the analysis. Several attempts and studies have been carried out to transfer forming results to the crash analysis. However, these approaches have not spread into actual product development because of several difficulties. We address this issue with an inverse solver and have developed the new solver HYCRASH, which can calculate the plastic strain and thickness distribution from the final shape of the product. We would like to present two findings in this study. First, the hardening effect plays an important role in crash energy absorption. Second, HYCRASH can take this correlation factor into account much more easily and faster than the usual press simulation technique.
Dr. John Hallquist – LSTC
In this presentation Dr. John O. Hallquist, founder and president of Livermore Software Technology Corporation (LSTC), will give an overview of recent developments in LS-DYNA. LS-DYNA is a highly advanced general-purpose nonlinear finite element program that is capable of simulating complex real-world problems. The distributed-memory solver provides very short turnaround times on Unix, Linux and Windows clusters. The major development goal of LSTC is to provide within LS-DYNA capabilities to seamlessly solve problems that require “multi-physics”, “multiple stages” and “multi-processing”. Its fully automated contact analysis capabilities and error-checking features have enabled users worldwide to successfully solve many complex crash and forming problems. LSTC develops sophisticated tools for modeling and simulating the large-deformation behavior of structures. In addition to LS-DYNA, the tools LS-PREPOST for pre- and post-processing and LS-OPT for optimization are developed by LSTC. The main applications are: large-deformation dynamics and complex contact simulations, crashworthiness simulation, occupant safety systems, metal forming, explicit/implicit analysis, metal, glass and plastics forming, multi-physics coupling, failure analysis, sophisticated material models, fluid-structure interaction, SPH (Smoothed Particle Hydrodynamics) and EFG (Element Free Galerkin). LSTC was founded in 1987 by John O. Hallquist to commercialize as LS-DYNA the public-domain code that originated as DYNA3D, which had been developed at the Lawrence Livermore National Laboratory by LSTC's founder.
Nielen Stander, Willem Roux, Tushar Goel – Livermore Software Technology Corporation, David Björkevik, Christoffer Belestam – Engineering Research AB, Katharina Witowski – DYNAmore GmbH
This study expounds the multi-objective optimization of a realistic crashworthiness problem, with special reference to the incorporation of uncertainty and the visualization of the Pareto Optimal Frontier (POF). LS-OPT® and LS-DYNA® are used for the optimization, based on the C2500 truck model developed by NHTSA. The design problem is set up as a Reliability-Based Design Optimization (RBDO) problem which includes specifications for the variation of the input parameters. For the purpose of design, reliability-based constraints on the displacements and stage pulses (interval-based integrals over the acceleration history) are specified. Nine thickness variables were assigned to various parts affecting the crashworthiness performance. Solution of the example employs Radial Basis Function networks as surrogate functions with Space Filling sampling, as well as the NSGA-II algorithm for determining the POF starting from an infeasible design. Post-processing is done to determine a subset of optimal points of interest using the Viewer of LS-OPT® Version 4. This post-processor is based on a new architecture which allows window splitting and detachable windows for flexible viewing. It also includes the following new features: (1) Correlation Matrix, (2) Parallel Coordinate plot (POF) and (3) Hyper-Radial Visualization (POF). Thus three types of POF viewing are available, including the current 3D scatter plot. The study shows that a complex decision-making process such as optimal design involving uncertainty and multiple objectives can be simplified by using appropriate analysis and visualization tools.
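A Radial Basis Function surrogate of the kind mentioned above can be sketched in a few lines. The Gaussian kernel, the shape parameter and the toy response are our assumptions for illustration, not the LS-OPT implementation:

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant to sampled responses.

    Illustrative stand-in for the RBF-network surrogates built on the
    Space Filling samples; names and kernel choice are our assumptions.
    """
    X = np.atleast_2d(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    w = np.linalg.solve(np.exp(-(eps * r) ** 2), np.asarray(y, float))

    def predict(Xq):
        Xq = np.atleast_2d(Xq)
        rq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
        return np.exp(-(eps * rq) ** 2) @ w

    return predict

# Fit a cheap surrogate of an 'expensive' response (here a toy quadratic)
Xs = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
ys = Xs[:, 0] ** 2
f = fit_rbf(Xs, ys)
```

The optimizer (e.g. NSGA-II) then queries `f` instead of the crash solver, which is what makes thousands of evaluations affordable.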
Brian Croop, Hubert Lobo – DatapointLabs
Foams are multi-phase materials that exhibit dramatically different properties depending on the matrix material as well as the pore microstructure. This additional degree of freedom from the presence of the gas phase makes material modelling for foams a difficult matter. LS-DYNA offers a variety of material models, each with capabilities designed to capture the unique behaviour of a different type of foam. The selection of the correct material model depends, to a large extent, on the observed behaviour of the foam during the test. Other factors include the actual situation under simulation, which becomes important for highly non-linear materials, where a single material model often cannot capture all the dependencies, forcing a localized material calibration. The material calibration itself is not easy because of the lack of set procedures for characterization. Previous research has devoted a lot of effort to enhancing these material models, to improve their capabilities as well as to make them easier to use. In our current work, we seek to lay down a framework to help us understand the different behavioural classes of foams. Following a methodology that we previously applied to plastics, we then attempt to propose the LS-DYNA material models that best capture these behaviours. Guidelines for model selection are presented, as well as best practices for characterization. Limitations of existing material models are discussed.
Richard Brown, David Coleman, Ian Bruce – Jaguar Land Rover
A biofidelic flexible pedestrian legform impactor (Flex-PLI) has been developed by the Japan Automobile Manufacturers Association, Inc. (JAMA) and the Japan Automobile Research Institute (JARI). The Flex-PLI has good biofidelity as well as several knee ligament elongation measurement capabilities, and three femur and four tibia bending moment measurement capabilities. For these reasons the Flex-PLI is likely to be used for the future pedestrian Global Technical Regulation. This paper introduces a finite element model of the Flex-PLI type GT for LS-DYNA and compares a full-vehicle Flex-GT impact simulation with test. A very accurate vehicle model is needed to predict Flex-PLI injuries; in this paper, a detailed and correlated vehicle model was used. The type GT is the 5th version of the Flex-PLI and has almost the same structure and performance as the final design, type GTR. The Flex-PLI type GT LS-DYNA model was carefully created to ensure every important detail was included. Geometries, masses and material properties of all parts were reproduced from drawings and inspection of the real components. Connectivity and component interaction within the model were determined by thorough experiments. Accurate prediction of injury indices and kinematic behaviour was achieved by correlation to static and dynamic calibration tests. A fine mesh was used, but reasonable calculation cost was assured by imposing an analysis time step of 0.9 microseconds.
S. Edelmann, C. Groß, H. Chladek – INPROSIM GmbH
Clamping rings are used in a wide range of mechanical applications in order to assemble two or more cylindrical parts, e.g. tubes and pipes, pressure vessels and tanks. Another application area of clamping rings is in turbo engines, where they connect compressor, bearing and turbine casings, for example. For normal operating conditions, standard rules or simple static analyses are adequate to determine the relevant design parameters of this device. But these analyses are not sufficient for highly dynamic loading, as in cases of misuse or failure, e.g. shock waves, compressor surge and in particular impeller burst. In these cases the loading of the clamping ring is neither static nor linear. The impulse transmitted and the mass inertia of the connected parts play an essential role in the loading scenario. In addition, the non-linear material behaviour, the large geometric deformation and plastification up to material failure, as well as the complex contact situation, have to be taken into account. For these extensive analyses, explicit simulations using LS-DYNA have proven to be a highly efficient tool. This presentation gives an overview of how to use CAE simulation for designing a clamping ring for highly dynamic loading. As a first step in the process described, a quasi-static pull-out test is used to achieve a high correlation between hardware testing and simulation. The paper also gives an idea of the influence of some typical design parameters of a clamping ring, e.g. the wall thickness and the number of segments of the V-shaped lower strap. A focus of the development needs to be on the balance between the structural stiffness of the clamping ring on the one hand and of the flanges of the connected parts on the other. The presentation concludes by showing a successful simulation, using LS-DYNA Explicit, of the clamping ring highly loaded due to an impeller burst.
Yun Hang – Livermore Software Technology Corporation, Mhamed Souli – University of Lille, Laboratoire de Mécanique de Lille, Rogelio Perez – Schneider Electric Industries Calcul & Simulation
The present work concerns the new capability of LS-DYNA® for solving acoustic and vibroacoustic problems. In vibroacoustic problems, which are assumed to involve weak acoustic-structure interaction, the transient structural response is computed first. By applying the FFT, it is transformed into a frequency response, which is then taken as the boundary condition for the acoustic part of the vibroacoustic problem. Consequently, the radiated noise at any point in space can be calculated. The newly developed LS-DYNA keyword is based on the boundary element method (BEM), in which only the surface of the acoustic domain needs to be discretized. Besides BEM, which solves the Helmholtz equation as a linear system, the new card also allows the use of two other approximate methods, the Rayleigh and Kirchhoff methods. Neither method requires a system of equations to be assembled and solved; consequently, both are faster than BEM. The Rayleigh method assumes that the radiating structure is a plane surface clamped into an infinite rigid plane. In the Kirchhoff method, BEM is coupled to the FEM used for acoustics in LS-DYNA by prescribing a non-reflecting boundary condition; in this case, at least one fluid layer needs to be merged to the vibrating structure.
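The weak-coupling step described above (transient structural response, transformed by FFT into a frequency response that drives the acoustic solve) can be illustrated with a small, hypothetical Python sketch. The sampling interval and the synthetic 120 Hz nodal velocity history below are invented stand-ins for data exported from a transient structural run:

```python
import numpy as np

n = 1000                          # samples from the transient solve (assumed)
dt = 1.0e-4                       # 10 kHz output interval (assumed)
t = np.arange(n) * dt
v_surface = 0.02 * np.sin(2 * np.pi * 120.0 * t)   # synthetic nodal velocity

def transient_to_frequency(signal, dt):
    """FFT a transient signal into a one-sided frequency response."""
    m = len(signal)
    spectrum = np.fft.rfft(signal) / m   # complex amplitudes per frequency
    freqs = np.fft.rfftfreq(m, d=dt)     # frequency axis [Hz]
    return freqs, spectrum

freqs, spec = transient_to_frequency(v_surface, dt)
f_peak = freqs[np.argmax(np.abs(spec))]  # dominant frequency of the response
```

In the actual vibroacoustic workflow this frequency-domain surface response, not the scalar peak, would be handed to the BEM, Rayleigh or Kirchhoff solver as the boundary condition.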
Torodd Berstad, Cato Dørum – Structural Impact Laboratory (SIMLab) / SINTEF Materials and Chemistry, Odd Sture Hopperstad, Tore Børvik – Structural Impact Laboratory (SIMLab) / Department of Structural Engineering
A novel method for simulation of crack propagation has been developed in LS-DYNA. The method combines damage-driven fission adaptivity with element erosion or node splitting to simulate crack propagation in the finite element mesh. A damage model is used to describe the evolution of material damage with plastic straining, and fracture is assumed to occur at a critical value of the damage parameter. Coupled or uncoupled damage models may be used. Mesh refinement by fission adaptivity occurs at user-defined damage levels, and the user further defines the maximum number of subdivisions. If element erosion is adopted, this happens when the critical damage level is reached within the element. When node splitting is used, multiple nodes are generated for the sibling elements and nodal values of damage are estimated. As the critical damage value is reached in a multiple node, selected bonds are released to allow a crack to develop. The direction of crack propagation is determined based on damage values in neighbouring nodes. The method has been developed for 2D continuum elements, axisymmetric elements and shell elements. Applications of the method are shown for two cases: I) tearing of cast aluminium thin-walled profiles discretized with plane-stress elements and II) plugging of steel plates modelled with axisymmetric elements. For each case, simulations with element erosion and node splitting are carried out and the results compared with experimental data.
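The per-element bookkeeping the abstract describes (damage grows with plastic straining, refinement is triggered at user-defined damage levels, erosion at the critical damage) can be sketched minimally in Python. The linear uncoupled damage law and all thresholds below are illustrative assumptions, not the actual LS-DYNA implementation:

```python
# Illustrative constants (assumed, not from the paper)
CRITICAL_DAMAGE = 1.0
REFINE_LEVELS = [0.3, 0.6]     # user-defined damage levels for fission adaptivity
FAILURE_STRAIN = 0.25          # plastic strain at fracture (assumed)

def update_element(eps_p, n_splits, max_splits=2):
    """Return (damage, n_splits, eroded) for one element.

    eps_p    : accumulated plastic strain
    n_splits : subdivisions already performed on this element
    """
    # uncoupled damage law: damage proportional to plastic strain
    damage = min(eps_p / FAILURE_STRAIN, CRITICAL_DAMAGE)
    # fission adaptivity: subdivide once per crossed damage level,
    # up to the user-defined maximum number of subdivisions
    while n_splits < min(len(REFINE_LEVELS), max_splits) and \
          damage >= REFINE_LEVELS[n_splits]:
        n_splits += 1
    eroded = damage >= CRITICAL_DAMAGE   # element erosion variant
    return damage, n_splits, eroded
```

The node-splitting variant would instead release bonds at the failed node rather than deleting the element, which conserves mass along the crack.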
Muhammad Ilyas, Christine Espinosa, Frédéric Lachaud, Michel Salaün – Université de Toulouse
Delamination initiation and propagation in aeronautic composites is an active field of research. In this paper we present a methodology for correlating the critical energy release rate between numerical simulation and experimental data. Mode I critical energy release rate experiments were carried out at quasi-static and pseudo-dynamic loading rates. Cohesive finite elements are used to predict the propagation of delamination in a carbon fiber and epoxy resin composite material. A bilinear material model is implemented via a user-defined cohesive material subroutine in LS-DYNA. The influence of the mode I energy release rate in mixed-mode loading, due to a low velocity impact, is also investigated.
J. L. Lacome – Livermore Software Technology Corp
“Standard” SPH methods are based on an Eulerian kernel. In most cases, the support remains constant and the neighbour search is carried out at each time step. This makes it possible to deal with large deformation problems. However, this technique may suffer from instabilities (such as the so-called tensile instability [Xiao 05]). In the case of a Lagrangian kernel, the neighbour list remains constant throughout the calculation. This formulation overcomes the tensile instability problem, but the treatment of large deformations becomes limited [Xiao 05]. In order to solve this problem, Vidal et al. [Vidal 07] proposed a very interesting approach based on a formulation with an updated Lagrangian kernel. They showed that updating the reference state and the neighbour list allows large deformations to be modelled. Nevertheless, when this reference state is updated too frequently, numerical instabilities can appear. Recent developments based on the coupling of Eulerian and Lagrangian kernel SPH have been introduced in LS-DYNA version 971. First, we present a coupling between Lagrangian kernel particles and Eulerian kernel particles. Then, the impact of a rigid projectile on a composite material modelled with a Lagrangian kernel is shown in order to illustrate the advantages of this new element technology.
Dr. Tatsuo Sakakibara, Dr. Toru Tsuda and Ryo Ohtagaki – ITOCHU Techno-Solutions Corporation
Local damage to a concrete plate is produced by the high-velocity impact of a rigid projectile. SPH is an appropriate method for representing the cracking and perforation behaviour of the concrete because it is completely mesh-free. In this paper, SPH simulations of the local damage of concrete plates due to high-velocity impacts are performed to study the effects of the impact velocity and the strength of the concrete plate. To represent the nonlinear failure behaviour of concrete, MAT_PSEUDO_TENSOR is used as the constitutive model in LS-DYNA. The strain rate effects of concrete are also taken into account. The numerical results for the local damage of the concrete plates are discussed through comparison with experimental results.
Leonard E Schwer – Schwer Engineering & Consulting Services
When concrete impact and penetration simulations are discussed, the question of increased strength due to high strain rates arises. Many concrete material modelers cite and use the seminal work of Bischoff and Perry (1991), or the widely accepted standard reference for concrete, the Comite Euro-International du Beton (1993), or CEB for short. Bischoff and Perry amassed a large amount of concrete laboratory data addressing strain-rate induced Dynamic Increase Factors (DIF), i.e. the ratio of the measured dynamic to quasi-static strength. Figure 1 is taken from Bischoff and Perry (1991) and shows the large amount of data they collected, along with the strain-rate equations recommended by the CEB for two concrete strengths. The data show a large amount of scatter in the reported strength increases. The depicted CEB equations approximately bound the data. The CEB recommended strain-rate induced strength increase equations are: [ … ] “It should be noted that the sharp increase predicted at rates greater than 30/s is only tentative, and other recent recommendations have also been made which disregard this effect for concrete strength in compression.” The data collected by Bischoff and Perry clearly indicate that there is some measurable increase in the unconfined compressive strength of concrete with increasing strain rate, and we can accept the CEB formulas as being representative of the data. However, the unanswered question is: “Does this unconfined compression data translate into the simulations of interest, e.g. blast and penetration of concrete targets, and in particular, do the strain-rate forms used in constitutive models?” This question is addressed in two parts: 1. What do simulations of dynamic unconfined compressive strength tests predict? 2. What do the corresponding simulations of dynamic confined compressive strength tests predict? And what data, if any, can be used to verify these models?
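The CEB equations themselves are omitted in the excerpt above. As an illustrative reconstruction only, the compressive DIF form commonly quoted from the CEB-FIP Model Code 1990 can be sketched as follows; treat every coefficient here as an assumption to be checked against the CEB text rather than as the abstract's own equations:

```python
# Commonly quoted CEB-FIP 1990 compressive DIF form (reconstruction, not
# the abstract's elided equations). eps_dot in 1/s, f_c in MPa.
EPS_DOT_STATIC = 30.0e-6   # quasi-static reference strain rate [1/s] (assumed)

def dif_compression(eps_dot, f_c, f_co=10.0):
    """Dynamic Increase Factor = dynamic / quasi-static compressive strength."""
    alpha = 1.0 / (5.0 + 9.0 * f_c / f_co)
    if eps_dot <= 30.0:
        # moderate rates: mild power-law increase
        return (eps_dot / EPS_DOT_STATIC) ** (1.026 * alpha)
    # rates above 30/s: the "sharp increase" branch flagged as tentative
    gamma = 10.0 ** (6.156 * alpha - 2.0)
    return gamma * (eps_dot / EPS_DOT_STATIC) ** (1.0 / 3.0)
```

By construction DIF = 1 at the quasi-static reference rate, and the second branch grows much faster, which is exactly the tentative high-rate behaviour the quoted caveat warns about.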
Dr. Markus Seitzberger, Richard Graf, Dr. Philipp Heinzl, Andreas Rittenschober – Siemens Transportation Systems GmbH & Co KG, Sebastian Haupt, Gerhard Schmidt – Siemens AG
Rail transit provides a very safe means of public transport, which is due to the railroad-specific principle of track guidance in combination with strong active safety measures during operation. However, train accidents cannot be totally excluded, and in the last two decades the subject of passive safety has become an issue of growing importance in the railway industry as well. Administrations, operators, railway research institutes and manufacturers have been active in investigating train collisions and defining relevant recommendations and standards for the realisation of a crashworthy rail vehicle design, which provides the last means of protection when all possibilities of preventing an accident have failed. An analysis of structurally significant accidents shows that most fatalities and serious injuries of occupants occur as a result of end-on collisions, often accompanied by overriding of the coach bodies. Consequently, the most effective means of reducing passenger and crew casualties in railway accidents is to concentrate on the design of crashworthy vehicle ends and to avoid overriding, which is also reflected in customer specifications and mandatory requirements, e.g. the British Group Standard GM/RT 2100, the new crashworthiness standard EN15227, the TSI requirements for high speed trains, or the US APTA and FRA regulations. In this paper an overview of the current requirements for a structural crashworthiness design of rail vehicles is given. A focus is put on the new EN15227, which will be the relevant passive safety standard for the next generation of rail vehicles in Europe, covering all kinds of passenger-carrying rolling stock, from light rail, metro and commuter trains up to long-distance and high speed main line trains.
With regard to design and verification, a combination of different steps of simulation and prototype testing is usually applied to consider the individual crash zone design, but also the dynamic behaviour and the crash energy management over the whole train rake. An outline of the methods and tools usually applied in the design and verification process is given. Different examples from Siemens for the development of modern crashworthy trains are shown for both steel and aluminium railway vehicle structures, with an emphasis on metro and commuter trains. The principles and main challenges of a crashworthiness design are stated, and different design variants, such as car body structures with fully integrated crash zone areas or deformation zones with replaceable attached crash elements, are shown. For the latter, particular consideration has to be given to the behaviour under non-perfect loading conditions, e.g. caused by vertically offset colliding vehicles, because such a configuration may be particularly prone to overriding.
Sascha Kutschenreuter, Maxime Dagonet – Takata Petri AG
Studies of crash simulations resulted in the assumption that an interaction of the upper occupant extremities with the vehicle interior has effects on the occupant loadings, e.g. the chest acceleration. Quantifying this effect requires a dummy model also valid for the area of upper extremities. A suitable test procedure shall help to identify possible force paths via the arms into the dummy torso and, as a result, to evaluate the resulting dummy loadings. This test procedure will be transferred into a virtual model and compared with the results gained in physical testing. The final analysis shows the potential and the limits of the modular FTSS dummy for the quantitative evaluation of interactions between the upper extremities and the vehicle interior.
Lutz Berger, Micha Lesemann, Christian Sahr – RWTH Aachen University, Simon Hart, Richard Taylor – ARUP
Over recent years, total vehicle weights have risen significantly. Given their direct influence on the power demand of vehicles, weight reduction is one measure among others to decrease fuel consumption and CO2 emissions. The European project SuperLIGHT-CAR (SLC) aims at a weight reduction for the body-in-white (BIW) of a compact class passenger car, following the multi-material approach. In this approach, the most suitable material is chosen for every component of the body structure, based on criteria such as energy absorption, structural integrity, stiffness etc. Simulations are required in order to assess the concept and to show further potential for improvement. LS-DYNA is used in this project to a large extent since it offers excellent capabilities for both the static and dynamic load cases that are considered. The model is therefore built up from different include files, which make it possible to change quickly between load cases and concept versions. In addition, a multidisciplinary optimisation based on LS-OPT reveals further potential for weight reduction. The main goal of the project, a weight reduction of 30 % for the BIW, is exceeded while the structural performance of the reference vehicle is maintained or even improved. The fact that only one model had to be used for all simulations decreased the required time for a full analysis run and hence accelerated the development process.
Dr. Jan Seyfarth, Dr.-Ing. Matthias Hörmann – CADFEM GmbH, Dr.-Ing. Roger Assaker – e-Xstream Engineering, Chandra S. Kattamuri, Bastian Grass – BSH Bosch und Siemens Hausgeräte GmbH
In the context of lightweight construction, the replacement of metal parts with substitutes made from plastic plays a major role. These parts are commonly manufactured through injection moulding and reinforced with different amounts of glass fibres to enhance the strength of the material. In everyday application this poses a challenge to the engineer, as due to this processing the local orientation of the reinforcements varies on a broad scale, leading to pronouncedly different material properties. This can ultimately influence the overall stability of the part, which is especially true for regions where weld lines occur. In the early stages of the virtual development of such plastic parts it is therefore important to take the material microstructure into account when carrying out macroscopic simulations. For explicit calculations, non-linear material properties, strain rate dependency and material failure play an important role in this context. The DIGIMAT to LS-DYNA interface allows microstructure information coming from injection moulding simulations to be integrated into the structural mechanics calculation. Within this approach, DIGIMAT is implemented as an LS-DYNA user material and offers an independent description of the local composite in each element. The interface uses homogenization schemes which take the constitutive laws for fillers and matrix, the percentage of fillers and the filler shape as input and calculate the average macroscopic stiffness of the material based on the local microstructure. In the standard workflow of a coupled analysis, several steps have to be carried out. Within DIGIMAT, the constitutive laws are described by mathematical functions. These functions are fitted to experimental measurements of the material.
Usually these experiments are already carried out on fibre-reinforced samples, so that within DIGIMAT the full composite has to be reverse engineered for the fixed microstructure of the samples. The result is a set of material parameters which can then be used for a coupled analysis connecting the injection moulding with the structural simulation of the full part under multiaxial load. As both types of simulation usually use vastly different meshes, a preparatory step is required in which the local fibre orientations are mapped from the injection moulding mesh to the mesh used in the structural simulation. DIGIMAT offers all the tools necessary to carry out the steps described above. In the presentation, the workflow of a coupled DIGIMAT to LS-DYNA analysis is demonstrated. Within the virtual material laboratory DIGIMAT-MF the composite is reverse engineered. The resulting parameter set is compared to coupled MOLDFLOW/LS-DYNA calculations on tensile bars under uniaxial strain as well as in three-point bending. For application in explicit calculations, failure indicators can also be defined within the coupling scheme. As DIGIMAT automatically offers all information about the microstructure at each step of an analysis, failure criteria can be derived from the matrix phase or the fibre phase separately and used for element deletion within the explicit calculation. All the descriptions necessary for composites in explicit simulations can be defined within DIGIMAT, from nonlinear materials and strain rate dependency to failure. On that basis, the results of a coupled analysis show convincingly better results for an impact on an injection moulded plate than the conventional approach with isotropic material.
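DIGIMAT itself relies on mean-field homogenization schemes such as Mori-Tanaka. As a much simpler illustration of the underlying idea named above (constitutive data for fillers and matrix plus the filler fraction yield an average macroscopic stiffness), the elementary Voigt and Reuss bounds can be sketched in a few lines; the fibre and matrix moduli are generic assumed values, not measured data from this work:

```python
def voigt_reuss_bounds(E_fiber, E_matrix, v_fiber):
    """Upper (Voigt, iso-strain) and lower (Reuss, iso-stress) bounds
    on the modulus of a two-phase composite, moduli in GPa."""
    E_upper = v_fiber * E_fiber + (1.0 - v_fiber) * E_matrix
    E_lower = 1.0 / (v_fiber / E_fiber + (1.0 - v_fiber) / E_matrix)
    return E_upper, E_lower

# Assumed example: 30 % glass fibre (E ~ 72 GPa) in a polymer matrix (E ~ 3 GPa)
upper, lower = voigt_reuss_bounds(72.0, 3.0, 0.30)
```

A mean-field scheme such as Mori-Tanaka, which additionally accounts for the filler shape and the local orientation mapped from the moulding simulation, returns a direction-dependent stiffness between these two bounds.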
Jürgen Kohler, Thomas Frank, Markus Feucht – Daimler AG
Crashworthiness simulations of car body structures are an important part of the CAE development chain for car design. In recent years, the requirements on the passive safety of cars have grown to high standards, leading to a permanent demand for increased simulation accuracy. Additionally, demands for fuel efficiency and CO2 reduction confront car body designers with an immense need for weight reduction. One way to achieve lightweight structures is to replace conventional body-in-white materials, such as conventional deep-draw steels, with more sophisticated materials. Besides metals such as advanced high strength steel grades, aluminium or magnesium alloys, the use of composite materials and hybrid metal-polymeric structural components is increasing in the automotive industry. Since these materials often show rather complex mechanical behaviour, it is of great importance to precisely predict failure under crash loading conditions. Additionally, especially for metals, it seems more and more evident that the preceding treatment of the material through the manufacturing process chain significantly influences the crash performance of the respective material. Here, an emphasis has been laid on the identification of damage parameters and a comparison of relevant damage accumulation models and theories from forming to crash simulation. Special attention was paid to the existing differences in stress states between forming and crash loading, to clarify the corresponding differences in failure prediction models. The numerical simulation of structural parts made from plastics is becoming increasingly important nowadays. The fact that almost any structural requirement can be combined in a lightweight, durable and cost effective structure is the driving force behind their widespread application. More and more structurally relevant parts are being constructed and manufactured from plastics.
This, on the other hand, drives the demand for reliable and robust methods to design such parts and to predict their structural behaviour sufficiently close to reality. The key ingredients that need to be available are verified, calibrated and validated constitutive models for any family of polymeric materials. This holds true not only for crashworthiness applications, which are the main focus of this contribution, but for any other field of application, too. The application of new materials in body-in-white structures also leads to higher requirements on the joining techniques. One example is the conventional spot welding of press-hardened steels, where the heat affected zone loses the properties that were achieved by heat treatment. This local change in properties has to be taken into account for a sufficiently precise description of spot weld failure mechanisms. The current development here is to consider the amount of fracture energy that is dissipated during spot weld failure. This seems to play an important role even for the global behaviour of structural parts in full car crash simulations. Finally, it will be shown that to ensure maximum predictive performance, advanced modelling techniques have to be used simultaneously for all topics described above in order to capture possible interactions of the phenomena.
H. Zimmer, M. Prabhuwaingankar – SFE GmbH, F. Duddeck – Privatdozent
Today’s vehicle development process demands quick evaluation of new designs with respect to various attributes in the conceptual phase. Various CAx tools and methods are essential to realize these assessments in a very narrow time frame. New design variants with the desired criteria should be created and analyzed quickly. Besides NVH behavior and other criteria, crash safety needs to be addressed as early as possible, too. Synergy between CAE analysis and geometry description is an absolute necessity for a seamless vehicle development process. Where to position beads and how to shape them, considering the design space and manufacturing criteria, is a challenging task. Geometry-based shape and topology optimization is an enabler for such a seamless vehicle development process. Function-driven geometry and geometric requirements based on other criteria are the key factors in determining the design space and the non-design space. Application of realistic load cases based on experience and best practices is a prerequisite for optimization. Geometry-based topology and shape optimization offers the necessary flexibility in proposing new design alternatives by modifying the geometry while respecting manufacturability aspects. This procedure includes more valuable “engineering” information compared to the “material distribution” knowledge of standard topology optimization. This paper describes the feasibility of the above-mentioned seamless vehicle development network, in which CAE analyses and geometry description go hand in hand. To demonstrate this, the crash box of a car body structure is optimized by inserting beads and optimizing their position and shape.
Richard Brown – Jaguar Land Rover
Legal and consumer crash tests use crash dummies as the key measurement device in the assessment of crash severity. The dummies are complex assemblies in themselves, and sophisticated DYNA FE models are available, with a significant amount of validation testing to support them. In a large vehicle crash model, the dummy is typically responsible for 10-20% of the CPU time, and there is no strong motivation to reduce its size; in fact, the tendency is to make the dummy models more complex, and the latest versions have undergone a significant refinement, which has increased the CPU time needed to run the dummy on its own by a factor of 2. If the size of the vehicle crash model is reduced, however, the proportion of CPU time taken by the dummy increases, and can constitute 90% of the total. Recent developments in the use of DYNA for frontal occupant modelling at Jaguar Land Rover require a significant reduction in overall run-time, and the standard full dummy models impose a limit on the reduction that can be achieved. For this reason, simpler dummy models have been created by FTSS, which allow the selection of the full, sophisticated representation, where maximum fidelity of measurement is necessary, but provide a simpler model, where this is adequate. The model is constructed using a modular structure, allowing any combination of complex and simple dummy parts to be assembled, and ensuring that geometry, joint configurations and output references are maintained. A set of component validation comparisons has been made at FTSS and Jaguar Land Rover between the simple and complex models, to demonstrate the degree of approximation inherent in the new models. Additionally, a comparison has been made in vehicle sled models to demonstrate the usefulness of the approach. The CPU requirement has been compared, using a number of configurations of dummy and vehicle models. 
The validation results show that the simple models are a valid representation of the dummy in areas where the detail of the local behaviour is not required. In many areas the simple model can also provide adequate dummy measurements. There is scope for further development of the simple dummy parts, but the modular nature allows current limitations to be avoided, through the use of the fine model, where the application requires it. The CPU demands of the dummy model can be significantly reduced, allowing the demands of the smaller vehicle crash models to be met. Additionally, the criteria for usability, such as maintenance of geometry, and of positioning configuration are fulfilled.
Arthur B. Shapiro – LSTC
Presented is a methodology for finite element modeling of the continuous press hardening of car components using ultra high strength steel. The Numisheet 2008 benchmark problem BM03 is selected as the model problem to be solved. LS-DYNA has several features that are useful to numerically model hot sheet metal stamping, such as: (1) modeling high rate dynamics for press forming; (2) conduction, convection and radiation heat transfer; (3) tool-to-part contact conductance as a function of interface pressure; (4) material models that account for temperature dependent properties, phase change, phase fractions, and Vickers hardness prediction; and (5) a CFD solver for tool cooling.
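Feature (3), tool-to-part contact conductance as a function of interface pressure, is supplied to the solver as a tabulated curve. A minimal sketch of such a pressure-dependent lookup is shown below; the table values are invented placeholders for illustration, not Numisheet BM03 data:

```python
import bisect

# Assumed, illustrative conductance-vs-pressure table
PRESSURE = [0.0, 5.0, 10.0, 20.0, 40.0]             # interface pressure [MPa]
CONDUCTANCE = [1.0e3, 2.0e3, 3.0e3, 4.0e3, 4.5e3]   # h [W/(m^2 K)]

def h_contact(p):
    """Piecewise-linear interpolation of contact conductance vs. pressure,
    clamped to the end values outside the tabulated range."""
    if p <= PRESSURE[0]:
        return CONDUCTANCE[0]
    if p >= PRESSURE[-1]:
        return CONDUCTANCE[-1]
    i = bisect.bisect_right(PRESSURE, p)
    t = (p - PRESSURE[i - 1]) / (PRESSURE[i] - PRESSURE[i - 1])
    return CONDUCTANCE[i - 1] + t * (CONDUCTANCE[i] - CONDUCTANCE[i - 1])
```

Higher contact pressure flattens surface asperities and improves heat extraction into the cooled tool, which is why the curve rises with pressure.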
Dr.-Ing. Holger Wenzel – SIMULIA SLM Europe
Optimization and the improvement of robustness and reliability in the early stages of product development is often attempted today using simulation methods together with algorithms that run these simulations in an automated manner. In this context the need often arises to run the simulation models, with small changes of the geometry, hundreds to tens of thousands of times. If these simulations are computationally expensive, like a full vehicle crash analysis, the wall clock time to perform the runs is often prohibitively large. Even the use of big compute clusters cannot always remedy this problem. For this reason approximation methods are often used. But no matter which technique is employed (polynomial approximations, radial basis functions, kriging or support vector machines), these techniques are purely mathematical and have no knowledge of the physical problem they approximate. This paper presents a different approach. Here the same physical phenomenon is modeled using two different simulation models: one is very accurate but computationally expensive, the other is less accurate but computes faster. Both models are used to simulate the baseline design, and the difference is recorded either as an additive correction delta or a multiplicative correction factor. Then the multiple runs of the optimization algorithm or stochastic technique are performed using only the low-fidelity, quick code, and the correction is applied. When the baseline point is sufficiently far away from the actual point, the correction delta or factor needs to be updated. This methodology is explained using a Taguchi Robust Design study for a full vehicle side crash with LS-DYNA. The focus of this paper is to explain the methodology, not to discuss the results.
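The baseline-correction idea can be sketched in a few lines. The two toy functions below merely stand in for the expensive crash model and the quick model; their forms are arbitrary assumptions for illustration:

```python
def hi_fi(x):            # expensive, accurate model (toy stand-in)
    return x**2 + 0.5 * x

def lo_fi(x):            # cheap, less accurate model (toy stand-in)
    return x**2

# Record the difference between the two models once, at the baseline design
x0 = 2.0
delta = hi_fi(x0) - lo_fi(x0)      # additive correction delta
factor = hi_fi(x0) / lo_fi(x0)     # multiplicative correction factor

# All subsequent optimization / robustness runs use only the cheap model
def corrected_additive(x):
    return lo_fi(x) + delta

def corrected_multiplicative(x):
    return lo_fi(x) * factor
```

Both corrections reproduce the high-fidelity value exactly at the baseline; as the paper notes, they degrade with distance from it, which is when the delta or factor must be re-evaluated with a fresh expensive run.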
Prof. Dr.-Ing. Helmut Behler, Jan Göbel, M.Eng. – Hochschule Mannheim, Steffen Heute, M.Eng. – Alpha Engineering Services GmbH
Interference fits are a commonly used means of coupling shafts and wheels, for example. The usual dimensioning is performed by a static verification. As long as the system geometry is not too complicated and the deformation is assumed to be linear elastic, the interference pressure can easily be calculated with the familiar solutions of the equations of elasticity. The maximum static contact forces can then be calculated together with an assumed coefficient of static friction. In order to investigate whether a cylindrical interference fit provides sufficient stability against slip, the real loads have to be known. However, in various applications this is not the case and the interference fit is subjected to dynamic loads, especially impact loads. We simulate a model interference fit that is first axially mounted and later also axially loaded. This is a typical case in hydraulic systems. Similar problems occur in gears, e.g. worm gears, especially if there are reverse torques, as in many applications. The crucial number a design engineer seeks is the safety against slip, S. A dimensional analysis shows that S depends on the length l of the interference fit, its interference Z, the velocity v and mass m of the impacting body, and the static friction coefficient μ. Altogether we find: S ~ Z l μ v^-1 m^-0.5. Numerical experiments have shown that the easiest way is to vary the velocity of the impacting body to find the design with minimum safety S = 1. The desired safety can then be achieved by simply changing the parameters. We investigate the influence of different contact types and find the OSTS contact to be optimal for the shaft-hub contact. In the same way we find the NTS contact to be optimal for the contact between shaft and impacting body. The results also show that the forces due to an impact are huge and that it is not possible to make an appropriate design without a numerical or experimental analysis.
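Since the proportionality S ~ Z l μ v^-1 m^-0.5 fixes the safety factor only up to a constant, one practical use is to scale a baseline safety (e.g. obtained from one simulation) to a modified design. Here is a small sketch under that assumption; all parameter values are made up for illustration:

```python
import math

def scaled_safety(S_base, base, new):
    """Scale a known baseline slip safety S_base to a new parameter set
    using the proportionality S ~ Z * l * mu * v**-1 * m**-0.5."""
    def group(p):
        # dimensionless-analysis grouping from the abstract
        return p["Z"] * p["l"] * p["mu"] / (p["v"] * math.sqrt(p["m"]))
    return S_base * group(new) / group(base)

base = dict(Z=0.05, l=40.0, mu=0.12, v=2.0, m=1.5)   # assumed baseline design
new = dict(base, v=4.0)                              # doubled impact velocity
# S scales with 1/v, so doubling the velocity halves the safety against slip
S_new = scaled_safety(1.6, base, new)
```

This also mirrors the procedure described above: sweep v until S drops to 1, then restore the desired margin by adjusting Z, l or μ.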
Dr. Wolfram Volk, Pierre Charvet – BMW AG
Nowadays the engineering and planning process in sheet metal forming is fundamentally supported by CAD and CAE systems. Besides the full 3D design process for parts and tools, the simulation of sheet metal forming processes has established itself as standard industrial practice over the last 15 years. Nevertheless, virtual engineering and planning consists of more than just CAD and CAE tools. For a coordinated and effective process it is recommended to make use of the so-called process chain model, whereby the interactions between different technologies or single processes can be taken into account. The process chain “Painted Car Body” consists of geometry and functionality development as well as forming, joining and coating processes. The backbone of a process chain is generally called the “Synchroplan”, in which the main technical and business milestones for the different technologies and development processes are fixed. The challenges for the virtual planning process are response time and accuracy with respect to the Synchroplan milestones. In the early phase of product development it is helpful to make use of standards. These standards give guidelines for the product design process with respect to feasibility and robustness without restricting “engineering freedom”, thus enabling new styling and technical innovations. These standards are sometimes much more than just single numbers. For repeatable geometry details (door entrance, rear lights etc.) one can define so-called meta models if these details can be represented by a few parameters. The benefit of these meta models is the quick assessment of parameter combinations with adequate accuracy.
With this argumentation it is clear that an effective and efficient virtual engineering and planning process consists of three major components:
– standards for geometry and process technology
– fast CAD tools for the creation of geometry proposals
– effective CAE tools for fast and accurate assessment, enabling the definition of improvements
The more standards are defined and accepted over the whole process chain, the fewer detailed simulations and CAD loops are necessary. Nevertheless, the realisation of new styling ideas and technological improvements (new materials, improved crashworthiness etc.) always requires CAD and CAE support. The backbone of the CAD process at the BMW Group is currently the CAD system CATIA V5. All geometry information in the process chain has to be finally delivered as native CATIA V5 data. But especially in the early or so-called concept phase of a project it is not necessary for sub-processes that all CAD work is done in the backbone system. A typical example is the concept die face for the geometry definition of a forming simulation. With this geometry no physical tool is built and therefore no native CAD data is required. It is more important to realise the ideas and proposals of the engineer as fast as possible with sufficient accuracy for FE-simulation. Nevertheless, the geometric proposals after the engineering loop should finally be available in the CAD system. For the definition of a concept die face, several working steps are typically necessary:
– import of part geometry (ideally native CAD data)
– flange unfolding and layout of geometry details from subsequent operations
– definition of the basic production idea (double part, symmetry, …)
– definition of the drawing direction
– part preparation (filling of holes, smoothing of the boundary, …)
– creation of the blank holder
– design of the addendum
– preparation for simulation
All these working steps, apart from the preparation for simulation, can obviously also be realised in the standard CAD system.
Compared to specialised alternative solutions, however, the most time-consuming step is the creation of the addendum. This is the main reason why concept die faces are currently not generally designed in CATIA V5. The accuracy and necessary design work for concept die faces strongly depend on the examination objectives. In particular, predicting the surface quality of outer skin panels demands much engineering work on the blank holder, since it determines the first contact of the blank with the forming tools, which sometimes causes unacceptable skid or impact lines. For the FE simulation of concept die faces a powerful CAE tool is necessary. Besides short calculation times, easy applicability is of high interest. Nevertheless, there is a need for well-described complex material models and powerful user interfaces to solve extraordinary boundary value problems, e.g. for the virtual assessment of new forming technologies. LS-DYNA fulfils most of these demands and has a high application rate in research work at universities.

In the past, the main objective of forming simulations was only the assessment of feasibility (e.g. the occurrence of necking and wrinkles). Nowadays, additional and more complex examinations are possible due to improvements of the simulation systems. Some examples are press force calculation, multi-stage forming, spring-back, surface quality, and failure prediction for complex strain paths. Many of these applications need an accurate stress calculation. For new material grades such as ultra-high-strength dual-phase steels, the classical material description is no longer sufficient. The competitive advantage for automotive companies lies in the controllability of the virtual planning and engineering process, even without experience of series production. The more accurate the material description in the simulation tools, the fewer problems and the lower the scrap rate in production.
Normally the first simulation of a concept die face will not lead to a feasible part geometry. In an effective virtual engineering and planning process it is necessary to show the way to feasible and robust production processes. The fast translation of simulation results into geometric proposals is an essential step. The handling of geometry updates is a big challenge in working with concept die faces, and an easy and robust parametric design of concept die faces remains one of the biggest problems in this context. Even for systems specialised in the creation of concept die faces there is still much room for improvement. Because of this, from the general viewpoint of BMW Group, we should not restrict ourselves to single software systems. It is necessary to define useful interfaces and data formats, so that fruitful competition and a market for smaller software companies or university spin-offs can also exist. An example of such an interface is the description of a forming process based on a concept die face. It is necessary to define links to the tool geometry and sheet material, the forming direction, and additional information such as cam positions and directions. Tool meshes and detailed material data should not be included in this interface. The big advantage of intelligent interfaces is the possibility to combine different CAD and CAE systems as well as the possibility of fast modification loops. We expect a higher innovation velocity with widely accepted interfaces, due to a wider market and more competitors.
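Such an interface might look like a lightweight exchange file that references geometries and materials by identifier instead of embedding tool meshes or detailed material data. The following sketch is purely illustrative — the field names and file structure are invented here, not an existing BMW or vendor standard:

```json
{
  "process_type": "single_action_draw",
  "drawing_direction": [0.0, 0.0, -1.0],
  "tools": {
    "die":          { "geometry_ref": "die_face_concept.igs" },
    "punch":        { "geometry_ref": "punch_concept.igs" },
    "blank_holder": { "geometry_ref": "binder_concept.igs", "force_kN": 800 }
  },
  "blank": {
    "outline_ref":  "blank_outline.igs",
    "material_id":  "DP600",
    "thickness_mm": 1.2
  },
  "cams": [
    { "id": "cam_1", "position": [120.0, 45.0, 10.0], "direction": [1.0, 0.0, 0.0] }
  ]
}
```

Because geometry and material appear only as references, any CAD source and any CAE material database can be coupled behind such an interface, which is exactly what enables the fast modification loops and the combination of different systems.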
Karin Hofstetter, Josef Eberhardsteiner, Reinhard Stürzenbecher, Christoph Hackspiel – Vienna University of Technology
Wood is one of the oldest construction materials known to man. Over thousands of years it has mainly been used in a craft framework, so that current design rules are often based on experience and tradition, and the scientific knowledge about the material behavior is often surprisingly poor. In order to exploit the extraordinary ecological potential of the material and to enable its structural use in an industrial framework as well, improved material models are required. Modern timber construction is characterized by an increasing demand for two- and three-dimensional load-bearing components. Dimensioning and design of such sophisticated structures require powerful material models for numerical simulation tools such as the finite element (FE) method. Moreover, the large variability of the macroscopic material properties has to be understood and suitably described to prevent exaggerated safety factors resulting in an uneconomic over-dimensioning of timber members.

In order to understand the variability of the macroscopic properties of solid wood and the underlying phenomena, and to suitably describe them in material models, the hierarchical microstructure of the material has to be considered. At sufficiently small length scales, universal constituents common to all wood species and samples as well as universal building principles can be identified. Namely, lignin, hemicellulose, cellulose, and water are such tissue-independent universal constituents with common mechanical properties across the diverse wood species at the molecular level. They build up cell walls resembling fiber-reinforced composites, which are arranged according to a honeycomb pattern. A mathematical formulation of the universal building principles results in a multiscale micromechanical model for wood which links microstructural characteristics of individual wood samples to the macroscopic mechanical characteristics of these samples. Homogenization techniques are employed for this purpose.
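The homogenization idea can be illustrated in its most elementary form with the classical Voigt and Reuss bounds, which bracket the effective stiffness of a two-phase mixture. This is only a one-dimensional sketch of the concept — the continuum micromechanics schemes used in the paper are far richer — and the moduli below are rough order-of-magnitude values assumed for illustration, not the authors' inputs:

```python
# Elementary homogenization: the Voigt (parallel) and Reuss (series) bounds
# bracket the effective stiffness of a two-phase mixture. One-dimensional
# illustration only; moduli are rough order-of-magnitude values (assumed).
E_cellulose = 140.0  # GPa, axial modulus of crystalline cellulose (approx.)
E_lignin = 3.0       # GPa, isotropic modulus of lignin (approx.)
f = 0.45             # assumed cellulose volume fraction in the cell wall

E_voigt = f * E_cellulose + (1 - f) * E_lignin          # uniform-strain bound
E_reuss = 1.0 / (f / E_cellulose + (1 - f) / E_lignin)  # uniform-stress bound

# Any admissible microstructure yields an effective modulus between the bounds
print(E_reuss, E_voigt)
```

The wide spread between the two bounds for such strongly contrasting phases is precisely why more refined schemes (continuum micromechanics, unit cell methods), which exploit the actual microstructural arrangement, are needed at the cell-wall and cellular scales.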
In particular, the composite structure of the wood cell wall motivates the application of continuum micromechanics for the estimation of its elastic properties. At the cellular scale, plate-type bending and shear deformations dominate the mechanical behavior; these are more suitably represented by a unit cell approach. Formulation of the localization problem corresponding to the multiscale homogenization scheme allows the determination of strain estimates at smaller length scales for given macroscopic loading. Quadratic strain averages (so-called 'second-order estimates') over microstructural components turned out to suitably characterize strain peaks in these components. Combining estimates for such averages with microscale failure criteria delivers predictions for macroscopic elastic limit states. For solid wood, experimental investigations indicate that wood failure is initiated by shear failure of lignin in the wood cell wall. This can be suitably described mathematically by means of a von Mises failure criterion. The multiscale models for wood stiffness and elastic limit states are validated by comparing model predictions for stiffness and strength properties with corresponding experimental results across a multitude of different wood species and samples. The small errors of the model predictions underline the predictive capabilities of the micromechanical model. For example, the mean prediction errors for the elastic moduli and the shear moduli related to the three principal material directions L, R, and T are each below 10 %. The capability of micromechanical approaches to link macroscopic properties to microstructural characteristics also renders such approaches very appealing for wood products. In this paper, models for a representative of strand-based products, namely the Veneer Strand Board (VSB), as well as for a representative of solid wood-based products, namely the DendroLight panel, are shown.
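Returning to the elastic limit criterion described above: combining a stress estimate in the lignin phase with a von Mises criterion amounts to checking the equivalent stress against a critical value set by the lignin shear strength. The sketch below is illustrative only — the stress tensor and the shear strength are assumed numbers, not results from the paper:

```python
import numpy as np

def von_mises_equivalent(sigma):
    """Von Mises equivalent stress of a 3x3 stress tensor [MPa]."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

# Hypothetical stress estimate in the lignin phase for some macroscopic
# loading, and an assumed lignin shear strength (both invented here).
sigma_lignin = np.array([[12.0,  6.0, 0.0],
                         [ 6.0, -4.0, 0.0],
                         [ 0.0,  0.0, 2.0]])
tau_crit = 20.0  # MPa, assumed shear strength of lignin

# Elastic-limit check: pure shear at tau_crit gives an equivalent stress
# of sqrt(3)*tau_crit, so that is the admissible limit.
sigma_eq = von_mises_equivalent(sigma_lignin)
print(sigma_eq, sigma_eq <= np.sqrt(3.0) * tau_crit)
```

In the multiscale model this check is not applied to a single assumed tensor, but to the second-order phase averages delivered by the localization step, which is what turns a microscale criterion into a macroscopic elastic limit surface.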
VSB consists of large-area, flat and slender strands with uniform strand shape and dimensions and is typically built up of several layers with different strand orientations. The high-quality strand material results in increased stiffness and strength of the board compared to conventional strand- and veneer-based panels. The multiscale model for VSB spans three scales of observation: the strand material, a homogeneous board layer, and the multi-layer board. Continuum micromechanics is applied first in order to estimate the elastic properties of a homogeneous board layer from the stiffness of the strands, their shapes, and their orientations. In the second step, the effective stiffness properties of a multi-layer panel are determined by means of classical lamination theory. Thereby, the stacking sequence, the orientation of the principal material directions of the single layers, and the density variation across the board thickness are taken into account. Model validation is again based on independent experiments. Results of tests on specially produced homogeneous boards as well as on inhomogeneous boards with a well-defined vertical density distribution show good agreement with the corresponding model predictions. This underlines the capability of the model to estimate the stiffness of strand-based engineered wood products from microstructural features and renders it a powerful tool for parameter studies and product optimization.

DendroLight is a three-layered lightweight panel consisting of thin outer layers of solid wood or particle board and a middle layer made up of small cells with webs inclined at an angle of 45°, alternately facing upwards and downwards. The periodic microstructure motivates the application of the unit cell method for the prediction of the mechanical behavior of this panel. As for plane periodic media, macroscopic unit curvature states are considered as loadings of the unit cell in addition to macroscopic unit strain states.
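The classical lamination theory step described above for the multi-layer VSB panel can be sketched as follows: each layer's reduced stiffness matrix is rotated into the board coordinate system and summed, weighted by layer thickness, to give the extensional (A) stiffness matrix. The layer stiffness values in the example are invented for illustration and are not the paper's data:

```python
import numpy as np

def q_bar(Q, theta):
    """Rotate a plane-stress reduced stiffness matrix Q (3x3, Voigt order
    11, 22, 12) by the angle theta [rad] about the plate normal."""
    c, s = np.cos(theta), np.sin(theta)
    # Stress transformation matrix; R converts engineering shear strain
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

def extensional_stiffness(Q_layers, angles, thicknesses):
    """CLT A-matrix: thickness-weighted sum of the rotated layer stiffnesses."""
    A = np.zeros((3, 3))
    for Q, theta, t in zip(Q_layers, angles, thicknesses):
        A += q_bar(Q, theta) * t
    return A

# Hypothetical orthotropic strand-layer stiffness [MPa] and a 0/90/0 lay-up
Q = np.array([[12000.0, 400.0, 0.0],
              [400.0, 800.0, 0.0],
              [0.0, 0.0, 600.0]])
A = extensional_stiffness([Q, Q, Q], [0.0, np.pi / 2, 0.0], [2.0, 2.0, 2.0])
print(A)  # stiff direction dominated by the two 0-degree layers
```

The full model additionally assembles the coupling (B) and bending (D) matrices and lets the layer stiffnesses vary with the measured density profile across the board thickness.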
In particular, effective in-plane stiffnesses and bending stiffnesses are obtained. For the purpose of model validation, several panel samples were produced by hand and tested in tension. The experimental results show good agreement with the corresponding stiffness predictions of the model. The multiscale model has already been successfully employed for product characterization and further product development.

Since wood is a naturally grown material, it shows growth irregularities, primarily knots and site-related defects. Knots result in a pronounced reduction of the stiffness and strength of wooden boards. Due to the highly anisotropic material behavior of wood, the influence of the grain orientation on the mechanical properties of a board is very pronounced and results in a high variability of the strength and stiffness of structural timber. The latter is a major difficulty in solid wood utilization and brings about the need for wood grading. This motivates the investigation of the effects of knots on the mechanical behavior of boards by means of physically-based numerical simulations. In particular, the FE method is combined with sophisticated models for the fiber course and the material behavior. For the description of the local fiber course around a knot, a mathematical algorithm based on a fluid flow approach and polynomial functions fitted to the annual ring course is employed. The algorithm is evaluated at every integration point of the FE model and yields the local three-dimensional fiber orientation there. With respect to the mechanical material behavior, the previously described micromechanical model for solid wood is used, enabling consideration of local variations of microfibril angles or of the chemical composition of the wood tissue in the vicinity of knots. First results obtained with the numerical simulation tool indicate its capability to estimate the stiffness and strength reduction of wood boards caused by knots.
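The fluid flow approach to the fiber course mentioned above can be illustrated in its simplest two-dimensional form: the knot is treated as a circular obstacle in a uniform flow, and the local fiber direction follows the streamline direction. This sketch shows only the elementary flow-grain analogy, not the authors' three-dimensional algorithm with polynomial annual-ring fits:

```python
import cmath

def fiber_angle(x, y, a=10.0):
    """Local in-plane fiber direction [rad] near a circular knot of radius a
    centred at the origin, from the potential flow of a uniform unit stream
    (in the +x grain direction) past a cylinder."""
    z = complex(x, y)
    # Complex velocity w(z) = dF/dz for the potential F(z) = z + a**2 / z
    w = 1.0 - a**2 / z**2
    # u = Re(w), v = -Im(w); the fiber angle is the local flow direction
    return cmath.phase(complex(w.real, -w.imag))

# Far from the knot the fibers follow the undisturbed grain direction (~0 rad);
# close to the knot they deviate around it.
print(fiber_angle(200.0, 0.0), fiber_angle(10.0, 10.0))
```

Evaluating such a closed-form direction field at every integration point is cheap, which is what makes the combination with a full FE model of a knotted board practical.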
On the whole, micromechanical models provide accurate estimates for the mechanical properties of wood and wood products in a fully three-dimensional and orthotropic framework. Also various couplings, e.g. between moisture transport and mechanical behavior, are suitably captured by these models. This makes these models highly valuable for structural simulations, whose predictive and also descriptive capabilities are often limited by the lack of suitable input data or the poor accuracy of available data. Hence, micromechanical modeling activities are expected to support structural analyses of wood structures, but also optimization of processes in wood drying technology.