A couple of days ago, I saw a conversation thread on Twitter about geometric modeling kernels. It wasn’t much of a thread—just a few comments back and forth to the effect that modeling kernels are like car engines, and that if you can’t tell the difference without looking under the hood, it doesn’t matter which one you have.

CAD users don’t think too much about what kernel their software uses. I suppose most of them can’t tell anyway. But that doesn’t mean kernels don’t matter.

There are all kinds of potential problems that can crop up with modeling kernels. A while back, I published a couple of articles about interoperability problems (which are inherently related to kernels), one from an academic perspective, and one from the perspective of a kernel guru.

About a month ago, I wrote a series of articles on configuration modeling, pointing out that no modern CAD systems can really do this. A couple of days ago, I made an off-hand comment in an article that a picture I showed (of a sphere) was really a cube that had its edges blended (e.g., start with a 2” cube, and fillet all the edges at 1”.) I learned that trick 15 years ago with SolidWorks. Several readers wrote or commented that they were unable to do it with their modern CAD systems.
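There's a neat way to see why that trick yields an exact sphere. Rounding a box by radius r is equivalent to shrinking its half-extents by r and then offsetting the result outward by r; with a 2” cube filleted at 1”, the inner box shrinks to a single point, and offsetting a point by 1” gives a sphere of radius 1”. Here's a quick sanity check in plain Python, using a signed-distance-function sketch rather than any particular CAD system:

```python
import math
import random

def rounded_box_sdf(p, half_extents, r):
    """Signed distance to a box with the given half-extents,
    with every edge and corner rounded by radius r."""
    # Shrink the box by r, then offset its surface outward by r.
    inner = [h - r for h in half_extents]
    q = [abs(c) - h for c, h in zip(p, inner)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q), 0.0)
    return outside + inside - r

# A 2-unit cube (half-extent 1) filleted at radius 1: the inner box
# degenerates to a point, leaving a sphere of radius 1.
random.seed(0)
for _ in range(1000):
    # Random point on the unit sphere; the distance there should be 0.
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    p = [c / n for c in v]
    assert abs(rounded_box_sdf(p, [1.0, 1.0, 1.0], 1.0)) < 1e-12
print("all sampled surface points lie exactly on the unit sphere")
```

Any fillet radius smaller than half the edge length leaves flat patches of the original faces; it's only at exactly half the edge length that the faces vanish and the blends close up into a sphere. Whether a given kernel can actually compute that limiting case is, of course, the point.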

The most common sign of a kernel-based problem shows up when a CAD user tries to create a geometric feature, and the operation fails.

Think about that for a moment.  You’re working on a CAD system, trying to create a feature, and the system does something unexpected.  That’s a big red flag saying the modeling kernel can’t handle what you’re asking it to do.

As an aside, I think it’s mighty interesting that one of the signs of an expert CAD user is their ability to work around limitations in the kernels of their CAD programs that would otherwise create modeling failures.

So, yes, geometric modeling kernels matter. Even to CAD users who don’t realize it.

Yet, there is no best alternative when it comes to geometric modeling kernels. ACIS, Parasolid, CGM, Granite and the proprietary kernels out there each have their own kinks. None is so much better than its competitors that I want to jump up and down and say “everybody look at this!”

The spark that set off the Twitter thread that inspired this article was an announcement, from Siemens PLM, of a webinar, to be held on November 8. Here’s the description from the Siemens website:

At the core of your mechanical CAD software is the modeling kernel, an often overlooked tool. The kernel is key to your ability to compute 3D shapes and models and output 2D drawings from 3D geometry. In this webcast, learn the basics about kernels and what impacts a change in this core code can have on your company’s existing and future design data. Dan Staples, development director for Solid Edge at Siemens PLM Software, is joined by medical device designer Billy Oliver from Helena Laboratories to explore the issues facing hundreds of thousands of designers and millions of CAD files.

    • The math inside your feature tree
    • Real-world lessons learned in changing kernels
    • Modeling loss, data protection, and reuse risks
    • Impact on hundreds of thousands of designers and millions of CAD files
    • Case study: Helena Laboratories ensures data protection

You can register for the webinar here.

While I expect the webinar will be, by its nature, slanted towards Siemens PLM and its Parasolid kernel, I suspect that quite a lot of what will be discussed will be interesting to people who have no intention of changing their CAD tools. I’m planning on listening in.

I doubt that most CAD users will ever spend much energy thinking about their CAD programs’ modeling kernels. But CAD users should spend some energy thinking about broader issues, such as usability and interoperability, which are affected by modeling kernels.

Archimedes Palimpsest

How long is “Long-term?” Over 2,000 years, in the case of Archimedes’ The Method. This palimpsest is the only known copy of it–and it was almost lost. The story of its discovery and conservation reads like it was made for the movie theater. You can read about it here.

The need for long-term archival storage of CAD data varies, depending on its use.  For the Long Now clock, it might be 10,000 years.  For a nuclear waste repository, it could be far more than 10,000 years.  Being realistic, for many consumer products, CAD data that’s more than a couple of years old isn’t of much use anymore. For automotive companies, product lifecycles are longer, but still not interminable. For aerospace and defense products, lifecycles can stretch on for many decades. Consider the joke among US Air Force pilots: “it isn’t your father’s Air Force, but it is your father’s plane.” (If it’s a B-52, it might even be your grandfather’s plane.)

How can 3D CAD data, with product lifecycles of sometimes more than 30 years, be reliably documented, communicated, stored and retrieved? And how can users access that data, when the CAD systems that generated it have long been obsolete?

The answer is LOTAR.

LOTAR International is developing standards for long-term archiving of 3D CAD and PDM data. These standards will define auditable archiving and retrieval processes, and are harmonized with the German Association of the Automotive Industry (VDA), and the Open Archival Information System (OAIS) Reference Model. The LOTAR International project is conducted by leading OEMs and suppliers in the aerospace and defense industry under the joint auspices of ASD-STAN, AIA, PDES Inc. and the ProSTEP iViP Association. (A shout out to Bob Bean, of Kubotek USA, who was the first person to tell me about LOTAR.)

This Friday, September 30th, 2011 at 2 p.m. (CET – Central European Time), ProSTEP iViP is hosting a 45-minute webinar on LOTAR. And, unusually for this sort of thing, it’s available to the public. (Most of their webinars are for members only.) I’ve asked for, and received, permission from ProSTEP iViP to tell others about the webinar, so that’s what I’m doing right here.

If having long-term access to your CAD data might be important to you at some point in time, consider listening in on this webinar. To register, send an email to nora.tazir@prostep.org. Participation is free of charge and you will receive access information back via email. (Don’t wait too long — I suspect that Nora has to manually respond to all the emails.)

3D Elephant

O how they cling and wrangle, some who claim
For preacher and monk the honored name!
For, quarreling, each to his view they cling.
Such folk see only one side of a thing.

 

For the last couple of weeks, I’ve written mostly about general themes affecting users of engineering software. Interoperability and usability are common (and related) themes I’m interested in—because they touch so many users, and are so important.

Today, I’d like to be a bit more specific, and talk about the Third Boeing/Northrop Grumman Global Product Data Interoperability Summit, which will be held this November 7-10, in Arizona.

Ken Tashiro of Elysium writes about the summit, and the need for it, in this month’s Aerospace Manufacturing and Design, in an article titled Interoperability is Still the 3D Elephant in the Room. (Please do click on the link, and read the article.)

The bottom line is that data interoperability is still a big problem, not just in aerospace, but in nearly any industry that uses CAD, CAM, CAE, or PLM software.

Ken makes a strong case for the summit, saying “In our rapidly evolving world, data interoperability is too important to be left to the vendor community. It is everyone’s problem and we all need to be part of the solution.”

Boeing and Northrop Grumman also make a strong case for the summit, because they’ve insisted that it be open and agnostic—an opportunity to share ideas and solutions about data interoperability challenges.

Still, open and agnostic doesn’t mean “free for all.” It’s an invitation-only conference, where you must be “sponsored” by someone from Boeing or Northrop Grumman. But I think, if you’re serious about interoperability, it’s worth the effort to find a sponsor. You can find out more information here.

 

Dr. Paul Stallings, VP of R&D for Kubotek USA, looks like a typical guy.  When I first met him at COFES, he seemed like a typical guy. Then I got to have a little conversation with him, and discovered that he’s not a typical guy.  You can do your own research on him, if you like, but suffice it to say that he’s a heavy-hitter in the world of geometric modeling kernels.  The kind of guy who doesn’t need to brag, because he’s done it.

Last week, I sent Dr. Stallings a copy of a paper that I’d found, titled Geometric Interoperability with Epsilon Solidity. I wrote about it here yesterday.  He kindly replied to me, and, surprisingly, took the time to tell me how he views interoperability. With his permission, I’m reprinting his email here, almost in its entirety.


Hi Evan,

Thanks for the paper on interoperability. It is interesting to see how the paper-writing academia views the problems from time to time. While reading it, I was left with the thought of “if only it were that easy”. I find that the tolerance problems, near-tangent intersections, and topology mismatches are the smallest and easiest part of the problem.

The biggest problem for me is that the formats are constantly changing, and in some cases are intentionally encrypted. Ironically, the intended solution to this problem—the creation of neutral standards such as IGES and STEP—creates a new problem that accounts for the other half of all the problems that I see: with an open standard, anyone—competent or not—can attempt to write files in the format. All too often we will see all types of mistakes made in simply making the file, ranging from small things like how a line is ended to larger things like missing data.

The other big problem is that geometry and topology are defined in radically different ways in different systems. One of my favorite examples is that in ACIS a cylinder is a cone, and in ProE a sphere is a torus. Not a big problem, but just one of many things that needs to be taken into account. Larger differences include such things as how the surfaces are parameterized. In some systems a sphere is parameterized as (latitude, longitude); in other systems, (longitude, latitude). However, that is a simple flip; in the case of cones, how the lateral parameterization is scaled, shifted, and flipped is more difficult. Nevertheless, the most difficult case is when it comes to advanced procedural surfaces such as blends, lofts, and such.

In the case of advanced procedural surfaces there can be many undocumented, cryptic, even unimplemented options. To add to the problem, these surfaces are quite often near tangent with their neighboring surfaces, or difficult to fit very accurately with general surface types such as NURBS. However, the largest problem of all is the existence of multiple flipping flags: flags to flip faces, edges, curves… get one of them wrong, and all the understanding of topology, geometry, and tolerances is irrelevant.

However, I digress. The most difficult problem with tolerances is not that one system uses one tolerance and another system uses another. The biggest problem is that some systems depend on curves in three-dimensional space, and other systems depend on curves in the two-dimensional parameter space of the surfaces they lie on. The mismatch between parameter space and three-dimensional space is a very big problem, with ACIS and Parasolid using three-dimensional space, and CATIA and ProE using parameter space. IGES and STEP punt by including one or both of the formats. The problem quite often arises when both formats are included: all too often I will see a file that contains both 2D and 3D curves, and the curves that were not used by the writing system are bad. IGES tries to fix this problem by providing a flag for the writer to set, telling which curves to trust. However, the existence of such a flag is a near admission of guilt: if both curves were always good, it would not be needed.

However, the largest interoperability problem is not the format or tolerance, but the marketplace. When files are translated from expensive systems, people buy fewer seats of those systems, and just rely on the ability to translate the files to less expensive systems. Hence, there is market pressure to not make the process easy. Nevertheless, no one wants to look like their files are impossible to translate, because that could also decrease sales. Hence, we are left with the current situation, where fleas might be a problem until one considers how many people are employed in the flea collar industry.

I heard from an old-timer that there was a time when CAD interoperability wasn’t a problem. He said it didn’t become a problem until the second CAD program was written.
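To make Dr. Stallings’ sphere example concrete, here’s a toy sketch in Python. The conventions below are invented for illustration (they are not any particular kernel’s actual definitions): System A evaluates a sphere as (latitude, longitude), System B as (longitude, latitude), and the translator’s whole job for this pair is a parameter swap. The cone remap shows the nastier scale-shift-flip case he mentions, again with made-up conventions:

```python
import math

def sphere_lat_lon(r, lat, lon):
    """System A: sphere parameterized as (latitude, longitude)."""
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def sphere_lon_lat(r, u, v):
    """System B: the same sphere, parameters in (longitude, latitude) order."""
    return sphere_lat_lon(r, v, u)

def a_to_b(lat, lon):
    """Translator's job for this pair: just swap the parameters."""
    return (lon, lat)

def cone_angle_a_to_b(theta):
    """A made-up cone remap: System B measures the lateral angle in the
    opposite direction from a shifted seam (a scale, shift, and flip)."""
    return (math.pi - theta) % (2 * math.pi)

# Same surface point, two parameterizations, one remap between them.
lat, lon = 0.4, 1.1
assert all(abs(a - b) < 1e-12
           for a, b in zip(sphere_lat_lon(2.0, lat, lon),
                           sphere_lon_lat(2.0, *a_to_b(lat, lon))))
print("swap remap verified; cone seam remap of 0 rad ->", cone_angle_a_to_b(0.0))
```

The swap is trivial once you know it's needed; the trouble, as he says, is that nothing in the file tells you which convention the writing system used.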
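His point about 2D parameter-space curves versus 3D curves can also be checked mechanically: evaluate the pcurve through the surface, evaluate the 3D curve directly, and measure the worst gap between them. In this toy Python sketch, the surface, the curves, and the deliberate drift in the 3D curve are all invented for illustration; the payoff is that the very same file is “good” at one reader’s tolerance and “bad” at another’s:

```python
import math

def cylinder(u, v):
    """Surface: a radius-1 cylinder; u is the angle, v the height."""
    return (math.cos(u), math.sin(u), v)

def pcurve(t):
    """Trimming curve stored in (u, v) parameter space: a helix."""
    return (2 * math.pi * t, t)

def curve3d(t):
    """The 'same' curve stored directly in 3D, with a slight drift in z
    standing in for a sloppily written file."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), t + 5e-4 * t)

def max_deviation(n=100):
    """Worst 3D gap between the two representations of the curve."""
    worst = 0.0
    for i in range(n + 1):
        t = i / n
        worst = max(worst, math.dist(cylinder(*pcurve(t)), curve3d(t)))
    return worst

dev = max_deviation()
# Whether this file is "good" depends entirely on who is reading it:
assert dev < 1e-3       # acceptable to a loose reader
assert not dev < 1e-6   # rejected by a tight one
print(f"max 3D gap between pcurve and 3D curve: {dev:.2e}")
```

Which is exactly why the IGES preference flag he describes exists, and exactly why its existence is an admission that the two representations don't always agree.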

I’m always interested to learn more about the underpinnings and sources of interoperability problems, especially in the realm of 3D solids. Recently, I came across a paper by Jianchang Qi and Vadim Shapiro, published in the Journal of Computing and Information Science in Engineering, entitled Geometric Interoperability with Epsilon Solidity. The abstract caught my attention:

Geometric data interoperability is critical in industrial applications where geometric data are transferred (translated) among multiple modeling systems for data sharing and reuse. A big obstacle in data translation lies in that geometric data are usually imprecise and geometric algorithm precisions vary from system to system. In the absence of common formal principles, both industry and academia embraced ad hoc solutions, costing billions of dollars in lost time and productivity. This paper explains how the problem of interoperability, and data translation in particular, may be formulated and studied in terms of a recently developed theory of epsilon-solidity. Furthermore, a systematic classification of problems in data translation shows that in most cases epsilon-solids can be maintained without expensive and arbitrary geometric repairs.

I will tell you, at the outset, that epsilon-solidity is not something that the average engineer is likely to understand. No worries—it’s the background information on interoperability issues that makes this article really interesting to non-PhDs. While you may download a copy of the article yourself (it’s available at the link above), I’m going to excerpt some of the interesting bits here.


A typical geometric data translation problem between two systems is illustrated in Fig. 1. A geometric representation can be thought of as a composition of geometric primitives by rules specific to a given representation scheme. In data translation, such a representation is transferred explicitly by various translators. However, the meaning of any representation is determined by the corresponding evaluation algorithms that usually also differ from system to system.

Perhaps the most widespread difficulty arises from the mismatch between the accuracy of the geometric representation and the precision of the evaluation algorithms used in a modeling system. For example, if the sending and receiving systems rely on different precisions, the points on surface intersections may classify differently (ON or OFF) in the two systems. As a result of such data translation, many design, manufacturing, and analysis tasks cannot be performed in the receiving system until the geometric models are either corrected (“healed”) or remodeled.


Many references have illustrated various data translation problems. We will not attempt to add to the long list of well-known difficulties, but rather consider a few carefully chosen but real examples that provide important insights into the nature and intrinsic sources of the general translation problem. The choice of commercial systems in the following examples is not important, because the problems are generic. The described difficulties are representative of the current state of the art and do not indicate inferiority of any specific systems.

Example 1. The first example illustrates the well-known fact that even minor changes in geometric representation may invalidate the model, causing irreparable difficulties in data translation. In this case, the model shown in Fig. 2(a) is created in SolidWorks and saved in the STEP (STandard for the Exchange of Product model data) neutral data exchange file format. Then the STEP model is reloaded into SolidWorks, but was found to be invalid. The built-in healing algorithm attempted but was unable to recover a valid solid, generating instead the model shown in Fig. 2(b). The above situation is common when geometric representations are archived in another non-native format. For example, saving the same model in ACIS format instead of STEP leads to similar difficulties. This double data translation corresponds to a situation in Fig. 1 where no new errors are introduced in the evaluation algorithm by the receiving system (because it is the same as the sending system.) The problem arises because primitives in the boundary representation—in this case, filleting surfaces and intersection spline curves in the original model—are mapped approximately into the STEP format by the translator. (Similar translation problems are common whenever tangent surfaces are approximated in the course of translation.)

Example 2. The second example Fig. 3(a) is intended to show that even when geometric healing is successful in repairing the received model, the result may not be always acceptable. The double translation procedure is identical to the first example, except, in this case, the healing algorithm is successful and generates the model shown in Fig. 3(b). The smooth blends near the corner have been replaced by sharp corners in the translated model; such drastic changes are not acceptable for engineering applications where blend radius is an important parameter.

Example 3. The third example shows that differences in precision of evaluating algorithms are also key ingredients of the translation difficulties, even when the changes in geometric representations are negligible. The solid model in Fig. 4(a) was created in SolidWorks using only planar and cylindrical primitives with integer and fixed-precision coordinates. The dimensions of the model range from 0.001 mm (the minimum thickness of the part) to 1000 mm (the length of the part.) The model is translated into Pro-Engineer through the STEP format, and both formats support exact representation of the original primitives. Therefore, it is reasonable to assume that the changes in geometric representation during the translation process remain negligible. Figure 4(b) shows the translated model after it is evaluated in Pro-Engineer. It is certainly a valid solid, but with a drastically different shape that is not likely to be consistent with the intended use of the original solid.

This last example demonstrates clearly that a geometric representation alone does not uniquely define a set of points. Rather, the set of points, and therefore all its properties, are also determined by the properties (in particular, precision) of the evaluation algorithm. In this case, SolidWorks relies on incidence testing algorithms with a default tolerance of 10^-6 mm, while Pro-Engineer uses a relative tolerance of 10^-6 times the maximum size of the bounding box of the model measured in meters. The latter effectively determines the smallest feature size to be 10^-6 m, matching the minimum thickness of the model in Fig. 4(a). The evaluation algorithm includes the process of merging what Pro-Engineer now considers coincident geometric entities, and results in the “repaired” model shown in Fig. 4(b).
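The arithmetic behind Example 3 is worth spelling out. Taking the paper's figures at face value (an assumption; real kernel defaults vary by version and settings): an absolute tolerance of 10^-6 mm easily resolves a 0.001 mm wall, while a relative tolerance of 10^-6 times a 1000 mm (1 m) bounding box works out to exactly 0.001 mm, at which point the wall's two faces look coincident and get merged:

```python
# Dimensions from the paper's Example 3 (in mm).
bbox_max = 1000.0      # longest dimension of the part
min_feature = 0.001    # thinnest wall of the part

# Sending system: absolute incidence tolerance of 1e-6 mm.
abs_tol_mm = 1e-6

# Receiving system: relative tolerance of 1e-6 times the bounding
# box measured in meters, converted back to mm for comparison.
rel_tol_mm = 1e-6 * (bbox_max / 1000.0) * 1000.0

print(f"absolute tolerance: {abs_tol_mm} mm")
print(f"relative tolerance: {rel_tol_mm} mm")

# The sender can tell the wall's two faces apart; the receiver cannot.
assert min_feature > abs_tol_mm
assert min_feature <= rel_tol_mm   # faces merge: the wall vanishes
```

A three-order-of-magnitude spread between part size and feature size is all it takes; neither system is "wrong," they simply disagree about what coincident means.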


It is generally accepted that modern mass production and most of the manufacturing technologies of the past century would not be possible without the concept of interchangeable parts. The doctrine of interchangeability dictates that a mechanical part may be replaced by another “equivalent” component without affecting the overall function of the product….

With the emergence of computer-aided design and manufacturing over the last 40 years, most engineering tasks today are performed virtually, by simulating them on computer representations in place of physical parts and processes. One could argue that virtual engineering has become an enterprise for manufacturing virtual components themselves. The object of manufacturing, in this case, is the computer model of a physical artifact, and the manufacturing processes are the above computer transformations involved throughout the life cycle of this model. It is our belief that tolerancing and metrology of interchangeable virtual components is as important to the future of virtual engineering, as interchangeability of mechanical components was critical for emergence of mass production and modern manufacturing practice.


It’s worthwhile to note that the research presented in this paper was supported in part by UG PLM Solutions (now part of Siemens PLM Software).

After I discovered this paper, I passed along a copy of it to Dr. Paul Stallings, VP of R&D for Kubotek USA. Dr. Stallings is one of the heavy-hitters in the geometric modeling kernel world. He wrote back to me, and his comments were far more than just interesting. I’ve gotten his permission to share them with you, and will do so. Tomorrow.