Well, this entry is a bit delayed. April was another interesting month due to the coronavirus pandemic. It has certainly been strange attending classes online – I’m not sure that I like it, nor that I’ll ever get used to it. I hope we will be able to get back to some level of normalcy before too long.
A prime focus in April was my optimization class project. I was joined on the project by two other graduate students – one of whom (Chris) is also a Coreform employee. The primary objective of our project was to wrap Sandia’s DAKOTA optimization code around Coreform’s smooth-spline FEM suite of tools. Due to the nascent status of Coreform’s software stack, and because we were DAKOTA neophytes, we chose to address a problem that was simple to solve with FEM and that lent itself to analytic verification (i.e. we knew what the answer should be). The title of our project was:
Determination of optimal support locations to maximize fundamental frequency of printed circuit boards
The course required the report to be formatted as one would expect for a journal article, and that the report be uploaded to an Open Access repository that would provide a DOI for our submission. We chose to publish our paper to Zenodo and you can find our report here:
As I mentioned, we used DAKOTA to wrap Coreform’s Crunch solver via DAKOTA’s “black-box interface” functionality. We had to write a fairly “sophisticated” set of Python scripts to automate everything, but in the end everything worked fairly smoothly.
That’s mostly it for this post – the report goes into quite a bit more detail regarding our efforts. In June I’ll be virtually attending the online NAFEMS CAASE conference and am particularly interested in this presentation:
I recently had some discussions with coworkers and fellow graduate students and felt a bit lacking in my explanations regarding element inversion and the difference between interpolatory and non-interpolatory basis functions. I decided to make some visual examples, inspired by the unpublished textbook by Prof. Carlos Felippa, and thought I’d share some thoughts in this post.
First, it’s probably good to point out that while there are several approaches to defining a finite element, the approach most commonly used (and the one I’m going to write about) is the isoparametric finite element. Technically, you’ll find siblings of this approach called superparametric or subparametric, so let’s just call the general concept: parametric finite elements. What is meant by this approach is that we can define a mapping between a standardized element that lives in its own parametric coordinate system, and each of the instantiated elements (i.e. transformed elements) of a mesh that live in a unified, global coordinate system. What this enables is the one-time definition (e.g. quadrature) of a library of standardized elements, which can then be readily used for each of the (potentially) millions or billions of elements in the global mesh.
We then use some function that can transform information defined within the standard element into each of the elements in the global mesh. We call this “transformation function” a map, or mapping, or mapping function. So how about we demonstrate this using a “real map”, and let’s use φ to refer to this mapping function. This mapping function is fed some bit of information from the standard element and then spits out the corresponding information in the transformed element. In this case, I’ve chosen to map each pixel’s coordinate of the world-map into its new location in the transformed element. An important criterion, however, is that this mapping be invertible (i.e. bijective) – just as I can take information from the standard element and compute its form in the transformed element, I need to be able to take information from the transformed element and compute its form in the standard element. In other words, I should be able to put my finger on Greenland (ironically, the non-green (white) continent) in the standard world-map and through the mapping find Greenland’s location in the transformed world-map (this is also called a forward mapping), and I should also be able to put my finger on Greenland in the transformed world-map and through the inverse mapping (φ⁻¹) determine where Greenland must be in the standard world-map. What shouldn’t happen is that I put my finger on Greenland in the standard world-map and then find both Greenland and Australia at the same location in the transformed world-map.
In this next part, we’ll take a look at two different kinds of basis functions and explore the effect this choice has on our mapping. But first, let’s talk a bit more about the mapping function, specifically: what is its equation?
The mapping takes the form:

x(ξ) = Σᵢ Nᵢ(ξ) xᵢ

That’s not too much help, as now we’re faced with new terms that we’ve not yet defined! First, let’s just state what these things are, then later we’ll describe where these things come from: Nᵢ corresponds to one of the basis functions used to parameterize the standard element and xᵢ is the coordinate of that basis function’s node in the transformed element. Essentially, this is the user stating “I know how I want these positions to be transformed in the mapping, and that’s enough for the computer to map the rest of the positions (or other quantity of interest) into the transformed element.”
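To make this concrete, here’s a minimal sketch of the forward map for a bilinear (degree-1) quadrilateral, assuming the biunit standard element; the node coordinates are hypothetical, chosen purely for illustration:

```python
import numpy as np

def shape_functions(xi, eta):
    """Bilinear shape functions N_i on the biunit standard element."""
    return 0.25 * np.array([
        (1 - xi) * (1 - eta),   # node at (-1, -1)
        (1 + xi) * (1 - eta),   # node at (+1, -1)
        (1 + xi) * (1 + eta),   # node at (+1, +1)
        (1 - xi) * (1 + eta),   # node at (-1, +1)
    ])

def forward_map(xi, eta, nodes):
    """x(xi, eta) = sum_i N_i(xi, eta) * x_i  (the isoparametric map)."""
    return shape_functions(xi, eta) @ nodes

# hypothetical element: a unit square stretched to width 2 in x
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
print(forward_map(0.0, 0.0, nodes))  # center of the standard element maps to [1.0, 0.5]
```

Evaluating the same shape functions at the parametric corners reproduces the nodes exactly, which is precisely what makes this basis interpolatory.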
That’s all well and good, but how are you, the user, supposed to know where to put all these nodes? Well, we let the computer do that for us at various points, such as during mesh generation (see picture below) and during various stages of the finite element solve (e.g. residual convergence, contact constraints).
Now let’s talk about the basis functions. There are two main classes of basis functions, interpolatory and non-interpolatory. The primary basis function family within the interpolatory class are the Lagrange Basis Polynomials. In the top image below are the basis functions for increasing polynomial degrees. Something that should immediately stand out is that as we increase the polynomial degree, the basis functions begin to take negative values and exhibit ever-wilder oscillations. The negativity is a direct consequence of the interpolation property of these bases.
On the other hand, the Bernstein Basis Polynomials remain positive for all degrees and the magnitude of their oscillations diminishes with higher degrees. Now, it’s important to point out that any function that can be represented by a set of Lagrange basis functions can also be represented by a set of Bernstein basis functions of the same degree – what will change is the value of the scaling coefficients. And something we desire in the finite element method (or really, any approximation) is a well-behaved relationship between the coefficients and the resulting approximation. In a little bit, I’ll make the argument that non-interpolatory basis functions are more well-behaved than their interpolatory brethren.
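Both claims are easy to check numerically. In this sketch (my own illustration – degree 5, equispaced Lagrange nodes on [0, 1], and the Bernstein basis on the same interval) the Lagrange basis dips negative while the Bernstein basis stays nonnegative, and both partition unity:

```python
import numpy as np
from math import comb

def lagrange_basis(deg, x):
    """Lagrange basis at equispaced nodes on [0, 1], evaluated at points x."""
    nodes = np.linspace(0, 1, deg + 1)
    vals = []
    for i in range(deg + 1):
        li = np.ones_like(x)
        for j in range(deg + 1):
            if j != i:
                li *= (x - nodes[j]) / (nodes[i] - nodes[j])
        vals.append(li)
    return np.array(vals)

def bernstein_basis(deg, x):
    """Bernstein basis of the given degree on [0, 1], evaluated at points x."""
    return np.array([comb(deg, i) * x**i * (1 - x)**(deg - i)
                     for i in range(deg + 1)])

x = np.linspace(0, 1, 201)
L = lagrange_basis(5, x)
B = bernstein_basis(5, x)
print(L.min() < 0)    # True: degree-5 Lagrange basis goes negative
print(B.min() >= 0)   # True: Bernstein basis stays nonnegative
print(np.allclose(L.sum(axis=0), 1.0), np.allclose(B.sum(axis=0), 1.0))  # both partition unity
```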
The above basis functions are valid for a univariate element, but in the examples that we’ll eventually get to, we’ll construct bivariate basis functions by taking the tensor product of each set of univariate basis functions – leading to a standard element that is square:
Let’s talk a bit about interpolatory vs. non-interpolatory. If a function exactly passes through a set of points, we say that the function interpolates the points. If a function is constructed such that it accepts a set of points as input and then interpolates said points, it is an interpolatory function. Conversely, a non-interpolatory function accepts a set of points as input and returns an approximation of those input points. So it would seem that interpolatory functions are highly desirable, and in certain cases they are. But consider the figures below, where I’ve plotted some data representative of experimental measurements along with various fits of the data, both interpolatory and non-interpolatory.
Now, if I were to rank these various fits by almost any metric I would rank them, worst to best: Top-Left, Top-Right, Bottom-Left, Bottom-Right. A bit more explicitly and candidly:
Worse than nothing:
- Piecewise interpolatory polynomials

Better than nothing:
- Spline approximation (non-interpolatory)
- Linear approximation (non-interpolatory)
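This ranking can be sanity-checked numerically. In the sketch below (the linear trend, the alternating “measurement error”, and the polynomial degrees are all my own choices for illustration), a full-degree interpolating polynomial is compared against a humble least-squares line:

```python
import numpy as np

def truth(t):
    """Hypothetical underlying linear trend the measurements sample."""
    return 2.0 * t + 1.0

x = np.linspace(0, 1, 9)
noise = 0.1 * (-1.0) ** np.arange(9)   # deterministic alternating "measurement error"
y = truth(x) + noise

interp = np.polyfit(x, y, deg=8)  # interpolatory: passes through every point exactly
line = np.polyfit(x, y, deg=1)    # non-interpolatory: least-squares line

x_fine = np.linspace(0, 1, 401)
err_interp = np.abs(np.polyval(interp, x_fine) - truth(x_fine)).max()
err_line = np.abs(np.polyval(line, x_fine) - truth(x_fine)).max()
print(err_line < err_interp)  # True: the approximant recovers the trend far better
```

The interpolant is forced to reproduce every noisy measurement (so its error is at least the noise amplitude, plus wild oscillations between the nodes), while the line averages the noise away.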
Hopefully this little exercise has removed the possible misconception that “interpolatory functions must be better than non-interpolatory.” So, now that we’ve defined how to compute the mapping, and at least introduced interpolatory and non-interpolatory functions, let’s apply these ideas to our finite element mappings.
As you might imagine, it is valuable to talk not just about the mapped values themselves, but also about the derivative of the mapping. We call the derivative of the mapping with respect to the parametric coordinates the Jacobian, which we write as J and compute by:

J = ∂x/∂ξ

In two dimensions, this has the form:

J = | ∂x/∂ξ  ∂x/∂η |
    | ∂y/∂ξ  ∂y/∂η |
The Jacobian effectively describes the distortion of the mapping and we can use this to show how the orthonormal coordinate system of the standard element is changed for each point that undergoes the mapping.
We can then conveniently summarize this distortion as a scalar by taking the cross product of the transformed coordinate systems. Recall that in two dimensions the magnitude of the cross product is the area of the parallelogram formed from the input vectors, as shown in the below image. In three dimensions, this would be a parallelepiped and the measure would be its volume. Finally, if the sign of the cross product is negative, then the area (volume) is also negative and we would say that the finite element is inverted at this location — we’ve lost our bijective mapping.
An easy way to compute the cross product of these transformed coordinate system vectors, given the form of the Jacobian, is to take the matrix determinant:

det J = (∂x/∂ξ)(∂y/∂η) − (∂x/∂η)(∂y/∂ξ)
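Here’s a minimal sketch of this computation for a bilinear quad; the node coordinates are hypothetical, with the second set deliberately tangled so the element inverts near its bottom-right corner:

```python
import numpy as np

def jacobian_det(xi, eta, nodes):
    """det J for a bilinear quad; a negative value means the element is inverted there."""
    # derivatives of the four bilinear shape functions w.r.t. (xi, eta)
    dN = 0.25 * np.array([
        [-(1 - eta), -(1 - xi)],
        [ (1 - eta), -(1 + xi)],
        [ (1 + eta),  (1 + xi)],
        [-(1 + eta),  (1 - xi)],
    ])
    J = nodes.T @ dN  # 2x2 matrix [[dx/dxi, dx/deta], [dy/dxi, dy/deta]]
    return np.linalg.det(J)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(jacobian_det(0.0, 0.0, square))  # affine map of the biunit square: det J = 0.25 everywhere

# pull the bottom-right node up past the top-left corner: the element inverts
bad = np.array([[0, 0], [-0.5, 1.5], [1, 1], [0, 1]], dtype=float)
print(jacobian_det(1.0, -1.0, bad) < 0)  # True: inverted at that corner
```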
Note that we often also call this value the Jacobian, so from here on I’ll refer to this scalar as the Jacobian, and the matrix as the Jacobian Matrix. Now, if we compute the Jacobian at each point of a 2D quadrilateral element, we can color our element by the value of the Jacobian, which we do in the figure below. Note that I’ve centered the colormap around a value of 1. This means that any region that is reddish is seeing an overall “expansion”, while a blue area means that the region is overall experiencing a “compression”.
And notice what happens if we add in lines of constant ξ and η… they produce a pattern that exactly replicates our transformed coordinate systems, so from here on we’ll use the isocurves instead of plotting the transformed systems. Plus, as quadrilateral shapes emerge from this plotting, the local distortion becomes clearer.
Can we invert this element? Yes, we can. In the animation below we pull the bottom-right node up to the top left node and we can watch as the element inverts — here the colormap goes from -1 to +1, centered around 0, so red means positive and blue means negative.
In this next section we will compare the interpolatory bases against the non-interpolatory bases to see how much we can move a single node (specifically, the bottom-right node) before the element inverts. But before we continue, try to think about what you would expect to see and maybe what the best you could hope for would be. Maybe sketch it out on paper. In each of the figure sets below, the interpolatory basis functions will be on the left, while the non-interpolatory will be on the right.
Hopefully you noticed a pattern in the behavior of both types of basis functions, and noticed the subtle differences between the two types. Admittedly, before this exercise I thought that for both types of basis functions the element wouldn’t invert until the bottom-right node became collinear with the nodes immediately to its left and top, in a manner similar to the linear case. Instead we see that only the non-interpolatory basis functions have that property – for all degrees. And we note that for the interpolatory basis functions, as we increase the polynomial degree, the amount of allowable relative movement decreases.
In the video below we continue exploring the robustness of the different types of basis functions to nodal deformation. The video alternates between sequences of interpolatory and non-interpolatory bases; for each sequence the exact same nodal deformation is prescribed.
One thing that stood out to me was that we can clearly see the Runge phenomenon emerging for the interpolatory basis functions. In one of the sequences, we find that we can actually invert the element by pulling a boundary node away from the element. This just feels wrong; it should give you a feeling of cognitive dissonance. The non-interpolatory basis, by contrast, appears to have a more sensible effect. Below is another example to help us compare the interpolatory vs non-interpolatory basis functions. Both examples have the same nodal displacements, which were determined randomly between 0 and 0.05 (on a biunit domain). The same color scale is used for both and is centered at a value of unity – the closer to white, the better.
You might be asking whether it’s valid to talk about such highly distorted elements, especially those with maybe one or two nodes that are excessively deformed. Wouldn’t the constitutive model spread the deformation across multiple nodes? Well, one area where this can show up, and where there is no constitutive model to “spread out” the effect, is mesh generation. Below is an example of a part that contains a web feature that is filleted. Note that the linear mesh (left) has some highly distorted elements on the fillet boundary, but all elements at least have positive Jacobians. However, if we elevate the elements to degree 2 and project the nodes to the geometry, then we can see that we almost certainly have a zero Jacobian at the curve tangent. Even if it’s slightly positive, it should follow from the discussion above that the quadratic mesh is likely much more prone to inversion – and thus to crashing the simulation. In other words, the linear elements, in this case, form the more robust mesh.
Something I want to talk about now is: why do we care about the values of the Jacobian Matrix and the Jacobian? We often use the (scaled) Jacobian as a standard metric for determining our finite element mesh’s quality. But why? What’s the point? Let’s explore that question using that weird inversion example from the video. Let’s return to using a real map, this time of 19th-century Europe, to understand our mapping.
Now, imagine you’re a navigator on board a ship and, due to some most unfortunate events, the ink on your map runs, resulting in a “new” map that looks like the image below (minus the black lines). You’ve also suffered a concussion whilst staving off a pirate attack, making you forget everything you knew about European geography – all the information you have is the ruined map. You can imagine that if you were traveling between England and Denmark, you wouldn’t have too difficult a time reading the map and determining which direction to travel. However, if you had to travel from Cyprus to Barcelona, it would be quite difficult. First of all, according to your map and due to the element inversion, Cyprus is both in the Black Sea and in the Mediterranean Sea – it’s in two places at the same time! Luckily, as sheer dumb luck would have it, a storm blows you to the west of Crete. Out of curiosity you try to read the names of a few landmasses that appear at the bottom of the map. Notice that while you would have had no problem reading Malta or Tunisia on the undamaged map, on your damaged map the words are completely illegible. The high distortion of the map in this region has rendered it useless. This is analogous to the effect such distortion has in our finite element code: the quality of the mapping directly impacts the ability of the solver to converge to accurate solutions.
Contrast the above exercise with the mapping below, where we used the same amount of nodal deformation. The map appears virtually unchanged in the northern regions and the southern regions, while deformed, are still legible and relatively easy to distinguish. Which map would you rather have?
Now, you might be saying “well that’s not fair, you should apply the same amount of distortion to both maps!” And well, we could. As both types of bases I presented are polynomial, they can exactly match any distortion that the other can produce. For instance, there is some arrangement of the nodes in the interpolatory case that would produce the better map that I showed, and vice versa there is some arrangement of the non-interpolatory basis’ nodes that would produce the horrifically disfigured map. But the point I’m making is this: In optimization problems we typically prefer to have a small relative change in the output for a given change in the input. We also prefer this change to have a monotonic effect on the output. Imagine a contact algorithm where we’re trying to optimize the placement of the finite element nodes to fulfill the contact conditions. Having a well behaved Jacobian is a key enabler to achieving robust convergence.
My penultimate comment hearkens all the way back to the beginning of this post. Not all elements are as dependent on this particular concept of the Jacobian. In fact, there are some formulations that allow for elements that we would normally consider inverted, but of course, they’re not inverted within the contexts of their use. A couple of these kinds of elements are those that use harmonic or barycentric basis functions. These methods don’t use this parametric map to build their basis functions in the global coordinate system, rather they explicitly build the basis functions within the global system. This means you can construct basis functions on elements like those shown below.
My final comment is that while we commonly use the (scaled) Jacobian as a means of quantifying our mesh quality, it is usually shown to the user as a single value for the entire element (see image below), which I can only assume must be the Jacobian as calculated at the quadrature point of a 1-point Gaussian scheme (i.e. the element centroid). What about higher-order quadrature schemes? What about higher-degree elements (for which few if any programs support visualization of Jacobian values)?
Hope you enjoyed reading this post. Let me know your thoughts and please point out any mistakes I’ve made!
Well, March was an interesting month. Between the COVID-19 pandemic shutting down BYU campus, transitioning to online classes, hitting the homestretch for classes, turning 31(!), and work obligations, I found some time to work through the orthoplanar spring problem in Coreform Flex and Crunch. Below is a video I captured of my first start-to-finish simulation in Coreform’s software. The purpose wasn’t necessarily to analyze the problem accurately, but to familiarize myself with the Flex workflow as it exists today.
I also started to explore, conceptually, some of the merits behind immersed, smooth-spline finite elements. The primary value behind the immersed finite element method is the promise it shows for dramatically reducing the amount of model preparation for an analysis, e.g. it removes the need to defeature and partition geometry for the sole purpose of constructing a body-fitted hex mesh. This model prep time has been shown to consume ~85% of the finite element workflow. For example, in the figures below I demonstrate the idea on a deceptively difficult-to-mesh geometry from a NIST repository, which I helped make available to the general public through my employment. Figure 1 shows the CAD BREP as shown in Fusion360.
In figure 2 I show my best attempt at putting a conforming all-hex mesh on this geometry, without any defeaturing. The astute eye will notice that near the front of the part, where the large fillets meet their neighboring, perpendicular faces, the mesh is not conforming — the mesh elements do not share nodes at the interface. Effectively, these regions are disjoint, which means they are not connected across this interface. In conventional FEA applications we might apply a tie constraint to “connect” these elements, but this is undesirable as it leads to incorrect solutions near the tie constraint, and the introduction of constraints negatively impacts convergence of the problem.
Consider instead, the immersed finite element method. In figure 3 we see the unmodified CAD BREP immersed in a rectilinear background grid, which can trivially be generated via a tensor product construction. To be clear, a high-quality, body-fitted hex-mesh will likely always be able to produce more accurate results — it’s just that we should be able to get good-enough results, in a much shorter period of time using an immersed approach.
If there is a specific region that we desire to obtain high-accuracy results (say a mission-critical feature) we aren’t limited to a fully immersed mesh. Instead, we can do a small number of partitions, which result in a background mesh that can be partially body-fitted. In figure 4 we see an example of this applied to the same bracket. Unlike the first, entirely body-fitted mesh, this geometry processing is much simpler and faster to do because we only care about a localized region — in this case, the front cut-out region.
In figures 5 and 6 I demonstrate the immersed concept on a linkage assembly that is chock-full of complex fillets (especially on the white piston). In traditional FEA software, I would immediately begin removing fillets — I may not keep a single fillet. Instead, the immersed approach allows for individual background meshes for each part in the assembly and allows us to retain the full feature set of the geometry.
And finally, in figures 7, 8, and 9 I present a few classes of problems that I believe require the use of immersed finite element techniques, such as generative design for additive manufacturing. Figure 7 is a subsection of a printed circuit board with some mounted components, where I’ve immersed each individual component of the assembly into its own background grid. Having done some analyses at work on circuit boards, I can tell you that these are extremely challenging to partition and mesh.
In figure 8 we see a piece of a metal foam, as captured by Prof. Ashley Spear’s group at the University of Utah. The colors on the foam represent the grain structure of the metal. All-hex meshes tend to be easiest to build for blocky structures, but blocky structures are rare in nature. Once again we choose a nice, blocky background domain to discretize with our finite elements and immerse the organic-looking metal foam into this grid.
And last but not least, in figure 9 we have an entry of the GE Bracket Challenge. This entry was generatively designed using topology optimization. As such, the part has few “blocky” regions – the part has a distinctly organic shape. I honestly don’t know how I would proceed with putting a quality body-fitted hex mesh on this part. But with the immersed method, it’s trivial.
The immersed finite element technique isn’t a new idea, in fact there are already commercial solvers available that provide some form of this technique, namely ANSYS Discovery Live and Midas MeshFree. However, Discovery Live appears to use a very poor basis (constants) for its solutions and thus seems relegated to linear problems. This may be fine for preliminary design studies, but I don’t want to fly on an airplane that used Discovery Live to engineer the landing gear, turbine blades, or wing-box!
This is where it seems to me that we can instead utilize the functionality of Coreform’s U-spline technology to create locally-refined, high-degree, smooth-spline bases in these elements which should allow us to achieve very accurate results for nonlinear problems. This may be the technology that allows FEA analysts to use and trust immersed finite elements on a wide-variety of nonlinear, mission-critical, problems. Chris, a fellow graduate student and current Coreform employee, is working on developing the immersed smooth-spline FEA technology and I will be following his progress closely.
In February, Coreform held its second short-course, which I attended. It was fairly similar to the last short-course, though a bit more polished. The biggest difference, however, was the unveiling of their beta version of Flex preprocessing software — which is built on the Trelis meshing software they obtained through their acquisition of CSimSoft. Inspired by one of my work colleagues, I made a video of their stress-testing of one of the tutorial problems.
After the short-course I began developing a strategy to build a simulation model of the planar spring in Flex. I ran into a few bugs, but was able to work with Coreform to get some fixes. Namely, while trying to construct a U-Spline on the mesh shown in the figure, I hit an infinite loop in their algorithm for applying creases along one of my boundary conditions.
Caleb and Florian were able to quickly troubleshoot the error and get a fix pushed to their master branch. Now I’m just waiting to get a new build of Flex so that I can run the simulation on the working U-Spline. As you can see in figure 2, it’s still not perfect, but this will be an important milestone for me: building an entire simulation from scratch using only Coreform’s software stack.
U-Spline vs. B-Spline
Another thing I worked on with another graduate student was coming up with a simple example demonstrating the difference between a traditional B-Spline and a U-Spline – in 2-D. Remember that a traditional B-Spline is a basis-spline constructed via the Cox-de Boor recursion algorithm in 1-D and then expanded to higher dimensions via tensor products. The figure below shows an unstructured spline topology and demonstrates a quadratic spline basis function with maximal continuity, B-Splines on the left and U-Splines on the right. Notice that the B-Spline basis is only C⁰ and has to be constructed via three carefully parameterized B-Spline tensor-product surfaces. In practice (i.e. modern CAD) this basis wouldn’t even be C⁰ continuous, but rather discontinuous. The U-Spline, on the other hand, is constructed as a single entity and is able to achieve C¹ continuity, except at the extraordinary point – though I’m told this isn’t a theoretical limitation, but rather that Coreform hasn’t yet implemented smooth extraordinary points. Later I’ll talk about the far-reaching implications this may have for constructing watertight CAD models.
That’s it for February, hopefully March proves to be another fruitful month!
It seems to me that many practitioners of finite elements are engineers (myself included), as opposed to mathematicians. Part of the mythos of finite elements is Gauss quadrature. We see its name thrown around when discussing integration in finite elements, and we know that it is an efficient scheme. But we engineers don’t necessarily know what it is, where it comes from, or why it works. I’m going to try to address this gap in knowledge over the next few posts – I hope you find them insightful.
Legendre Basis Polynomials
The Legendre polynomials are an orthogonal set of polynomial functions – a useful consequence of this is that they form a linearly independent basis of the polynomials. In other words, any polynomial function can be represented as a linear combination of Legendre polynomials. Let’s walk through an example to demonstrate:
The normalized (i.e. orthonormal) Legendre polynomials through degree 3 are:

L̃₀(x) = √(1/2)
L̃₁(x) = √(3/2) x
L̃₂(x) = √(5/2) (3x² − 1)/2
L̃₃(x) = √(7/2) (5x³ − 3x)/2
Our goal will be to find coefficients cᵢ such that f(x) = Σᵢ cᵢ L̃ᵢ(x). To accomplish this, let us define a residual function R(x) = f(x) − Σᵢ cᵢ L̃ᵢ(x). When R(x) = 0 we will know that we have decomposed f into a linear combination of Legendre basis polynomials. We start by trying to find the coefficient for the highest-degree term in our residual function. Note that all we need to do in each step is match the coefficient of the highest-degree monomial term in the residual function with the highest-degree monomial term of the Legendre basis.
Legendre Decomposition Example
Matching the degree-3 term determines c₃; we then update R(x) and repeat, matching the degree-2, degree-1, and degree-0 terms to determine c₂, c₁, and c₀ in turn.
We can summarize these results in a vector:
The main result of this is to demonstrate that we can decompose any polynomial function into a linear combination of Legendre polynomials. And we can use this property to express the polynomial’s integral in terms of the Legendre polynomials:

∫₋₁¹ f(x) dx = Σᵢ cᵢ ∫₋₁¹ L̃ᵢ(x) dx
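This decomposition is easy to check numerically. Note that the sketch below uses NumPy’s standard (unnormalized) Legendre polynomials, for which only P₀ has a nonzero integral (equal to 2), rather than the orthonormal scaling used above; the example polynomial is my own, chosen for illustration:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# hypothetical example: f(x) = 4x^3 + 3x^2 + 2x + 1 (power-series coefficients, low to high)
f_power = np.array([1.0, 2.0, 3.0, 4.0])

# decompose f into the (standard) Legendre basis: f = sum_i c_i * P_i
c = leg.poly2leg(f_power)
print(c)  # -> [2.   4.4  2.   1.6]

# the Legendre series reproduces f at arbitrary points
x = np.linspace(-1.0, 1.0, 7)
print(np.allclose(leg.legval(x, c), np.polyval(f_power[::-1], x)))  # True

# over [-1, 1] every P_i integrates to zero except P_0 (which integrates to 2),
# so the integral of f collapses to 2 * c_0
print(2.0 * c[0])  # -> 4.0
```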
All we need to do now is find an efficient means to integrate these basis polynomials (i.e. compute ∫₋₁¹ L̃ᵢ(x) dx). It just so happens that such a means does exist for the Legendre polynomials over the bi-unit domain (i.e. x ∈ [−1, 1]). The method we can use to integrate these polynomials is called Gaussian quadrature, which is a technique that seeks to obtain the best numerical estimate of an integral by picking optimal points at which to evaluate the function, evaluating the function at these optimal points, scaling these function evaluations by scale factors (i.e. weights) associated with each optimal point, and summing the results. Furthermore, the optimal points of an N-point Gaussian quadrature formula are precisely the roots of the degree-N orthogonal polynomial for the same interval and weighting function. When the orthogonal polynomials are the Legendre polynomials, this is called a Gauss-Legendre scheme. In summary, Gaussian quadrature over the bi-unit domain is:
Building Gauss-Legendre Quadrature Rule
Step 1: Optimal points for integration scheme
Let’s first investigate how to find the optimal points for a 3-point Gauss-Legendre scheme. We begin by plotting the functions and noticing that over the bi-unit domain every Legendre basis has an integral of 0, except for L̃₀, which has an integral of √2.
Step 2: Corresponding scale factors (weights)
Now let’s find the corresponding scale factors (i.e. weights) of the three roots, x = −√(3/5), 0, +√(3/5), of L̃₃. First let’s replot the basis functions, this time showing each basis evaluated at these roots.
But remember, we know what these integrals should evaluate to:
We can encode these conditions in a linear system of equations:
Note that the bottom row is trivial, leading to a more compact system:
Solving this linear system of equations yields the familiar 3-point weights:

w₁ = 5/9, w₂ = 8/9, w₃ = 5/9
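The same construction can be sketched numerically. This sketch uses the standard (unnormalized) Legendre polynomials, so the right-hand side is (2, 0, 0) rather than (√2, 0, 0); since that only rescales each equation by a constant, the resulting weights are identical:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# the 3 optimal points are the roots of P_3 (Legendre coefficient vector [0, 0, 0, 1])
pts = np.sort(leg.legroots([0.0, 0.0, 0.0, 1.0]))
print(pts)  # -> [-sqrt(3/5), 0, +sqrt(3/5)]

# conditions: sum_j w_j * P_i(pts_j) = integral of P_i over [-1, 1]
A = np.array([leg.legval(pts, np.eye(3)[i]) for i in range(3)])
b = np.array([2.0, 0.0, 0.0])  # only P_0 has a nonzero integral
w = np.linalg.solve(A, b)
print(w)  # -> [5/9, 8/9, 5/9]
```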
Gaussian Quadrature Example
Let’s now try to apply this 3-point Gauss-Legendre scheme to our example function. We analytically compute the integral so that we can compare results:
It worked! But this example wasn’t exactly true to the fundamental theory of Gaussian quadrature — instead of evaluating the function at the optimal points, we evaluated the Legendre basis polynomials. But here is the key that lets us evaluate the function directly: recall that f(x) is equivalent to some linear combination of Legendre basis polynomials. When we evaluate f at an optimal point, it’s as if we’ve evaluated every Legendre basis at that point simultaneously. Thus instead of needing to evaluate two nested summations (every Legendre basis evaluated at every optimal point), we only need to evaluate a single summation of f evaluated at each optimal point. Observe:
True Gauss-Legendre Quadrature
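The single-summation form can be sketched as follows, using a hypothetical cubic as the integrand (any polynomial through degree 5 is integrated exactly by the 3-point rule):

```python
import numpy as np

def f(x):
    """Hypothetical cubic integrand, chosen for illustration."""
    return 4.0 * x**3 + 3.0 * x**2 + 2.0 * x + 1.0

exact = 4.0  # analytic integral of f over [-1, 1]

pts, wts = np.polynomial.legendre.leggauss(3)  # 3-point Gauss-Legendre points and weights
approx = float(np.sum(wts * f(pts)))           # one sum over plain function evaluations
print(approx)  # -> 4.0
```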
There you have it. A thorough, if a bit pedantic, walkthrough of how Gauss quadrature works. There are some additional points that I didn’t mention, that are important from the perspective of implementing in a finite element code – for both traditional and smooth-spline finite elements. I will try to cover these in a future post.
January was a bit of a slow month – most of my thesis time was spent building some exemplar CAD models and constructing example finite element meshes on them. I’ve sent the meshes to the rest of Dr. Scott’s research team to begin discussing the maturity of their production code for handling the various volumetric meshing schemes available in Cubit.
Cast Iron C-Frame
This example problem comes from a textbook on machine design: a cast-iron C-frame used for bearings. A few things are worth pointing out about the geometry and analysis. The purpose of this analysis will be to predict the onset of brittle failure under various loading conditions.
The load conditions will be applied on the flat regions surrounding the through-holes and will “pull-apart” and “squeeze” on the frame similar to pulling apart a wishbone or squeezing a hand strengthener. Another thing to notice is the prevalence of fillets – particularly the very small radius fillets. Fillets can make it extraordinarily difficult to manually decompose a geometry for hex-meshing and we would expect such small fillets to not have much effect in minimizing stress concentrations – so we’re going to remove those tiny fillets (i.e. defeature). We’ll keep the large fillets, because they’re large enough that they may have a significant effect on the result.
Below is a linear finite element mesh that I would probably use if I were using a conventional FEM code like Abaqus. Almost the entire geometry can be meshed using the sweep scheme in Cubit. There are four small regions (blue) that need to be meshed using the polyhedron scheme. Currently, Dr. Scott’s code can construct volumetric U-Spline basis functions on swept mesh schemes, but not any other schemes (soon…?).
Since I want to be testing workflows for smooth-spline based FEM, I want to build as coarse a mesh as possible (analysis-suitable geometry). Below is an example of how coarse I’m talking:
We then refine the geometry as necessary for accuracy. One way we might refine is by degree elevation. Below is the same mesh, but elevated to degree-2 elements. See how much better this represents the geometry? Since these elements are isoparametric, the geometry and the solution are represented with the same basis functions, meaning that the solution is approximated more accurately as well. Note that I’m using C0 basis functions, since those can currently be constructed, but obviously the eventual goal is to build a rational, volumetric U-spline with maximal continuity – everywhere except at extraordinary points – so that we match the geometry exactly.
I’ll be much briefer on these next two examples. Below is the coarse mesh I’ve made of an orthoplanar spring, as described in Professor Larry Howell’s research in BYU’s Compliant Mechanisms Research group. I’ve colored the mesh by the block decomposition I used to “guide” the construction of the mesh. Note that some adjacent blocks share the same color – ParaView isn’t yet smart enough to prevent this.
I’ll be treating this spring as a copper trace in an electrical connector and will be determining force-displacement curves and predicting the number of cycles to fatigue failure based on the equivalent plastic-strain increment and the Coffin-Manson law.
The last problem I worked on in January was a single column of a lattice structure test sample. This structure would be additively manufactured and includes integrated platens to improve experimental testing and numerical analysis. Traditional foams don’t have integrated platens, and it can be difficult to prep the test equipment’s platens to control the boundary condition between the foam and the plate.
Perhaps the most important concept to understand in smooth-spline FEA is called Bézier extraction – extraction of the piecewise Bézier elements (composed of Bernstein polynomials) that constitute a basis spline (aka B-spline). One can actually extract any arbitrary family of piecewise elements (e.g. Lagrange), but we’ll cover that in a future post.
Bézier extraction is effectively a process through which the continuity across element boundaries is systematically reduced. Each reduction creates a new degree of freedom (i.e. node / control point), and a corresponding basis function, that had previously been linearly dependent on its neighboring degrees of freedom. A single iteration of this process is often referred to as knot insertion, while the complete process that results in a global C0 spline (or, in some implementations, a globally discontinuous spline) is called Bézier decomposition. Neither knot insertion nor Bézier decomposition changes the resultant spline – each knot insertion step creates a new spline space that produces an exactly equivalent spline (albeit with a different parameterization).
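As a quick sanity check of the claim that knot insertion doesn’t change the resultant spline, here’s a minimal sketch using SciPy’s FITPACK wrappers on a hypothetical quadratic B-spline (the knot vector and coefficients are made up for illustration):

```python
import numpy as np
from scipy.interpolate import insert, splev

# Hypothetical quadratic (degree-2) B-spline
knots = np.array([0., 0., 0., 1., 2., 2., 2.])
coefs = np.array([0., 1., 3., 4.])
tck = (knots, coefs, 2)

# Insert a knot at u = 1.5: a new spline space with one more
# degree of freedom, but exactly the same curve
tck_refined = insert(1.5, tck)

# The refined spline reproduces the original everywhere
u = np.linspace(0.0, 2.0, 101)
assert np.allclose(splev(u, tck), splev(u, tck_refined))
```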
Because the new node was previously linearly dependent, and because this process is exact, a linear transformation exists that relates the new basis functions to the original spline basis functions. This linear transformation can be trivially generated during the Bézier decomposition process – the constructed transformation is called the Bézier extraction operator.
Question: How is it that you can trivially generate the extraction operator through the decomposition process?
Figure 1 below demonstrates the general idea of constructing a transformation operator during a decomposition process. Let’s pretend that we want to decompose a line segment into two equivalent line segments. As shown in figure 1 we can insert a node at the midpoint of the line segment. While the midpoint could be trivially identified via averaging the two end-points, note that we can encode this subdivision via a matrix operating on a column vector of the two endpoints.
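The subdivision in figure 1 might be encoded like this (a sketch; the endpoint values are made up):

```python
import numpy as np

# Endpoints of the line segment (hypothetical values)
endpoints = np.array([0.0, 2.0])

# Subdivision operator: keeps both endpoints and inserts their average
S = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])

subdivided = S @ endpoints  # [0.0, 1.0, 2.0]
```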
Ironically, the importance of Bézier extraction is not due to the extraction of individual Bézier components, but rather the use of the extraction operator to transform the extracted basis functions back into the smooth-spline basis functions. This relationship is described in the equation below.
It’s dangerous to go alone; Let’s use an example to guide us
First we define the spline space to be a partitioning of the domain into four elements, where all internal element interfaces are smoothly continuous. The basis spline for this spline space is shown in figure 2 below. A challenge with implementing a general, smooth-spline finite element method on this spline space is that, unlike with a traditional C0 basis, we would need to define two distinct reference elements for the finite element code to reference.
Next we construct the global extraction operator by decomposing the smooth spline into a C0 spline, the latter of which is shown in figure 3 below.
And here is the global extraction operator for this example:
Plugging this into the smoothness equation defined above, we can more readily see how this operator reconstructs the smooth-spline basis functions from the Bézier basis functions.
A key insight that we can discern from figures 2 and 3 is that the number of basis functions supported by each element is invariant to the continuity of the spline. Thus, as shown in figure 4 below, we can identify square sections of the global extraction operator that operate on just the basis functions for a given element. We call these sub-matrices local extraction operators or element extraction operators.
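To make this concrete with a standard example (not the one from the figures): for a quadratic B-spline on two elements with knot vector [0,0,0,1,2,2,2], the element extraction operators are known in closed form, and we can verify that the transformed Bernstein basis inherits partition of unity and matches in value across the element interface:

```python
import numpy as np

def bernstein2(xi):
    """Quadratic Bernstein basis on the reference element [0, 1]."""
    return np.array([(1 - xi)**2, 2*xi*(1 - xi), xi**2])

# Element extraction operators for a quadratic B-spline with
# knot vector [0,0,0,1,2,2,2] (a textbook example, not from this post)
C1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.5],
               [0.0, 0.0, 0.5]])
C2 = np.array([[0.5, 0.0, 0.0],
               [0.5, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

xi = 0.3
N1 = C1 @ bernstein2(xi)  # smooth-spline basis restricted to element 1

# Partition of unity is inherited from the Bernstein basis
assert np.isclose(N1.sum(), 1.0)

# Shared basis functions agree in value at the element interface
left = C1 @ bernstein2(1.0)   # element 1 at its right end
right = C2 @ bernstein2(0.0)  # element 2 at its left end
assert np.isclose(left[1], right[0]) and np.isclose(left[2], right[1])
```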
So how does this relate to the finite element method?
The value of these local extraction operators is made clear in figure 5 below, where we show how a single reference Bézier element can be linearly transformed into a smooth-spline element. Not only does the local extraction operator define the transformation of the Bézier basis functions into the smooth-spline basis functions, it can also be used to compute the transformation of the Bézier element’s nodes into the smooth-spline element’s nodes.
The overall effect of these local extraction operators is that we can easily adapt the traditional finite element method into a smooth-spline finite element method. For example, below are two algorithms for computing the local element stiffness matrices, first for a traditional C0 FEA code and then for a smooth-spline FEA code.
There you have it, a simple change is all it takes! Note that if the local extraction operator in the second example were the identity matrix, it would be functionally equivalent to the first example. The construction of a smooth-spline finite element’s local stiffness matrix is a generalization of the local stiffness matrix calculation found in the traditional finite element method! In a future post we’ll dive deeper into global assembly operators in smooth-spline FEA, but for now I think this is a good place to stop.
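Here’s a minimal 1D sketch of that claim (the Laplace operator on a unit element, with a hypothetical quadratic extraction operator): the same quadrature loop serves both codes, and the only difference is whether the extraction operator is the identity.

```python
import numpy as np

# 3-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
q_pts = (np.array([-np.sqrt(3/5), 0.0, np.sqrt(3/5)]) + 1.0) / 2.0
q_wts = np.array([5/9, 8/9, 5/9]) / 2.0

def d_bernstein2(xi):
    """Derivatives of the quadratic Bernstein basis on [0, 1]."""
    return np.array([-2*(1 - xi), 2 - 4*xi, 2*xi])

def local_stiffness(C_e):
    """1D Laplace element stiffness on [0, 1]; the identity recovers
    the traditional (Bezier) element."""
    k = np.zeros((3, 3))
    for xi, w in zip(q_pts, q_wts):
        dN = C_e @ d_bernstein2(xi)  # transform Bezier derivatives
        k += w * np.outer(dN, dN)
    return k

k_bezier = local_stiffness(np.eye(3))  # traditional element

# Hypothetical element extraction operator for a quadratic spline
C_e = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 0.5]])
k_spline = local_stiffness(C_e)        # smooth-spline element

# Because the transform is linear, the spline stiffness is exactly
# the Bezier stiffness conjugated by the extraction operator
assert np.allclose(k_spline, C_e @ k_bezier @ C_e.T)
```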
Glossary of Symbols
p: polynomial degree
C^k: k-continuous. Depending on the context this may refer to an entire spline-space, or a single spline-element, having these properties.
N: the smooth (i.e. C^k, k ≥ 1) spline basis functions
B: the Bernstein (Bézier) basis functions
C: the global Bézier extraction operator
C^e: the local or element Bézier extraction operator
Ω^e: the domain of a finite element
Footnotes
1. For example, the Hermite basis functions.
2. These smooth-spline nodes are almost universally called spline control-points in the literature, but hopefully this makes it clear why I prefer to call them smooth-spline nodes instead.
References Borden, M.J., Scott, M.A., Evans, J.A. and Hughes, T.J.R. (2011), Isogeometric finite element data structures based on Bézier extraction of NURBS. Int. J. Numer. Meth. Engng., 87: 15-47. doi:10.1002/nme.2968
Scott, M.A., Borden, M.J., Verhoosel, C.V., Sederberg, T.W. and Hughes, T.J.R. (2011), Isogeometric finite element data structures based on Bézier extraction of T-splines. Int. J. Numer. Meth. Engng., 88: 126-156. doi:10.1002/nme.3167
Cottrell, J. Austen., Yuri Bazilevs, and Thomas J.R. Hughes. Isogeometric Analysis: toward Integration of CAD and FEA. Wiley, 2009.
After graduating from RPI in 2012 I began working as a finite element analyst in support of my employer’s manufacturing mission. In 2016 I began investigating a flavor of the finite element method called isogeometric analysis (IGA), which eventually led to a full-fledged research project into the method. The project consisted primarily of a collaboration with the IGA research group headed by Prof. Michael Scott at Brigham Young University (BYU) in Provo, Utah. In 2019 my employer granted me a two-year technical fellowship to attend BYU and study under Prof. Scott in order to gain a deeper technical understanding of IGA.
One strong opinion that I’ve developed over the duration of my investigations is that IGA is a misleading moniker for the technology; a more precise name is simply spline-based finite element analysis. Splines are a rich mathematical construct that was first formally described in the 1940s, and by the end of the 1970s the contemporary description of splines was complete. Their use in interpolation and approximation has been well documented, but while splines have been highly successful in interpolation, they have had almost no utility (outside of fundamental research) in approximation – largely due to the limitations of our spline descriptions.
In the 2000s new research into splines began to change the way we describe and utilize splines, especially in finite element methods. In fact, it appears that these new spline descriptions may change the way we view, teach, and apply the finite element method. A purpose of this blog is to try to explain some of these new spline technologies, pedantically breaking them down and demystifying them for other FEA analysts.
My thesis will focus on applying a commercial spline-based FEA software to solve various classes of structural mechanics problems and compare the workflow and results to those attained by a conventional FEA software package. Thus a second purpose of this blog will be to track the progress on these problems and share interesting results with my coworkers, graduate committee, and the FEA community at-large.