It’s pretty common to see application stories, in which engineering software vendors try to demonstrate why their software is wonderful.

I find application stories interesting, but they don’t really tell me what I want to know: What is the right tool for the job? Or rather, what are the right tools for the job?

I started thinking this way maybe 28 years ago, when I’d see different types of manufactured goods, and ask myself if the CAD software I was using could handle that kind of design.

Even up to 10 years ago, the answer was too often “no.” Fortunately, things have gotten better—but choosing the right tools for the job hasn’t gotten particularly easier. There are just so many options today.

In the spirit of understanding engineering software tools, I’d like to start a thought experiment: looking at particular products, and considering what toolsets would be best for their design.

Not that I actually know what the best toolsets would be. I’m not that smart. But I do know that a lot of smart people—on both the user and vendor sides of the market—read this blog. So, consider this a request for feedback. I’d like to hear from software vendors about their tools, and from people who have real-world design/engineering experience with a particular type of product and the relevant tools. I’ll gather what I learn, and write a follow-up article.

Inspired by an upcoming webinar, the first product type I’d like to look at is a golf club head.

Golf Lessons

This Wednesday (December 7, 11:00 AM CST), Pointwise, Intelligent Light and the University of Tennessee at Chattanooga SimCenter are presenting a free webinar that illustrates the various steps of the complete computational fluid dynamics (CFD) process typically followed in aerodynamic analyses of realistic geometries.

They’ll be creating meshes with Pointwise, and with tools developed at the UTC SimCenter. Steady and unsteady CFD solutions will be computed on a distributed-memory Linux compute cluster with TENASI, a UTC SimCenter parallel unstructured Reynolds-averaged Navier-Stokes code. Post-processing will be performed using FieldView by Intelligent Light.

For this webinar, they’re using a pretty well-known type of geometry: a golf club head (a wood.)

I assume they chose a golf club head as an example because it’s interesting, and it lets them demonstrate what their tools can do—not because their software is the only (or even best) choice for analyzing golf club head designs. (I’m thinking it’s possibly massive overkill.  But there’s nothing wrong with that, is there?)

What tools does it take to design a golf club head (specifically, a wood)?

The USGA has a set of rules that govern the design of golf clubs (and their heads.) They cover all kinds of arcane details, including everything from the geometry of grooves to the volume of heads.

The goal of a club designer is to work within those constraints: maximizing range and accuracy, providing as much forgiveness for swing variations and errors as possible, and making an aesthetically desirable product. (What, you don’t think golfers care about aesthetics?)

The fact that different numbered woods have differing face angles suggests that an ideal CAD program for golf club head design might have parametric capabilities. The USGA rules provide a set of constraints that also hint at using a parametric CAD program.

Still, not all CAD systems can effectively use the USGA constraints as parameters. For example, one of the rules is that a wood head’s volume must not exceed 460 cubic centimeters. For many CAD systems, it’s simply impossible to drive geometric dimensions using volume. It gets worse: the USGA limits the moment of inertia of a club head to 5,900 g·cm². See if you can plug that constraint into most CAD systems.
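To make that concrete, here is a toy sketch in Python (with made-up numbers, and no claim that any CAD system works this way) of what driving geometry from a volume constraint actually requires: treating the shape as a function of its parameters, and solving backwards. The head is crudely modeled as a half-ellipsoid, and bisection finds the uniform scale factor that lands exactly on the 460 cc limit.

```python
import math

USGA_MAX_VOLUME_CC = 460.0  # USGA volume limit for a wood head

def head_volume_cc(scale: float) -> float:
    """Volume of a toy club-head model: a half-ellipsoid with semi-axes
    a, b, c (cm, hypothetical numbers) scaled uniformly by `scale`."""
    a, b, c = 6.0, 5.5, 3.5
    return 0.5 * (4.0 / 3.0) * math.pi * (a * scale) * (b * scale) * (c * scale)

def solve_scale_for_volume(target_cc: float, lo=0.1, hi=3.0, tol=1e-9) -> float:
    """Bisection: find the uniform scale whose volume equals target_cc.
    Volume grows monotonically with scale, so bisection is safe."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if head_volume_cc(mid) < target_cc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

scale = solve_scale_for_volume(USGA_MAX_VOLUME_CC)
```

A parametric CAD system that could treat volume as a driving dimension would, in effect, have to run this kind of inverse solve inside its constraint engine on every regeneration.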

Woods are aerodynamic clubs, designed to be swung fast. While a wood’s face may be flat, not much else about it is. This implies that an ideal CAD system should be able to handle class-A surfaces, certainly with G2, and possibly G3, surface continuity.
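For the curious, the continuity conditions themselves are checkable with nothing more than calculus: two segments are G1 at a joint if their tangent directions match, and G2 if their curvatures match as well. Here is a minimal 2D sketch (curves rather than surfaces, and a contrived example: splitting one cubic Bézier in half, which is guaranteed smooth at the seam).

```python
def _lerp(a, b, u):
    return tuple(ai + u * (bi - ai) for ai, bi in zip(a, b))

def bezier_derivs(ctrl, t):
    """Point, first and second derivative of a 2D cubic Bezier at t,
    computed from the de Casteljau intermediate points."""
    p0, p1, p2, p3 = ctrl
    a, b, c = _lerp(p0, p1, t), _lerp(p1, p2, t), _lerp(p2, p3, t)
    d, e = _lerp(a, b, t), _lerp(b, c, t)
    point = _lerp(d, e, t)
    d1 = tuple(3 * (ei - di) for di, ei in zip(d, e))                   # B'(t)
    d2 = tuple(6 * (ai - 2 * bi + ci) for ai, bi, ci in zip(a, b, c))   # B''(t)
    return point, d1, d2

def curvature(ctrl, t):
    """Planar curvature: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    _, (dx, dy), (ddx, ddy) = bezier_derivs(ctrl, t)
    return abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

def split_cubic(ctrl, t=0.5):
    """de Casteljau split of one cubic into two cubics meeting at B(t)."""
    p0, p1, p2, p3 = ctrl
    a, b, c = _lerp(p0, p1, t), _lerp(p1, p2, t), _lerp(p2, p3, t)
    d, e = _lerp(a, b, t), _lerp(b, c, t)
    m = _lerp(d, e, t)
    return (p0, a, d, m), (m, e, c, p3)

# Two halves of one cubic meet with matching tangents and curvature: G2.
left, right = split_cubic(((0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)))
```

Class-A surface work is this same idea extended to surfaces: curvature has to agree across every patch boundary, in every direction, which is what makes it hard.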

With any product that’s aerodynamic in design, it’s a given that CFD should be in the design toolset. At least, if you want to compete with the market leaders. If you really wanted to complicate the analysis, you could optimize for underwater shots, for when players need to hit their balls out of a water hazard. (Or you could add many-body dynamics analysis for when they need to hit out of a sand trap.)

It’s also a given that FEA should be in the bag of tricks, to optimize strength and stiffness within the USGA geometric constraints.

Modal, vibration, and acoustic analysis might make sense too (though these might imply analyzing a full club, not just a club head.) Modal response and vibration figure into performance, but sound figures into aesthetics. Guess which is more important? Karsten Solheim built a golf club empire based on the sound his putter made when hitting a ball: Ping.

Beyond CAD and CAE, there’s the issue of optimization. To do real justice to the problem of golf head design requires going beyond the “red is bad, green is good” school of static FEA thinking. It requires going to the Pareto frontier, to find the set of optimal design solutions. There are a number of interesting tools available to help you get there.
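The core idea is simple enough to sketch in a few lines of Python: a design is on the Pareto frontier if no other candidate beats it on every objective at once. (This is only the bookkeeping; the hard part, which real optimization tools handle, is generating good candidates in the first place.) Assume each candidate is scored on two hypothetical objectives to minimize, say drag and face stress.

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.
    A point p is dominated if some other point o is <= p in all
    objectives (being a different point, o is strictly better in one)."""
    return [p for p in points
            if not any(o != p and all(oi <= pi for oi, pi in zip(o, p))
                       for o in points)]

# Hypothetical (drag, stress) scores for five candidate head designs.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 0)]
front = pareto_front(candidates)  # (3, 4) loses to (2, 3) on both counts
```

Everything on the front is a defensible trade-off; everything off it is simply worse than something else. The “red is bad, green is good” school stops one step short of this.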

Chances are that, if you’re going to do a truly rigorous design of a golf club head, you might want to model a golf swing. For that, you’ll need a computer algebra system. And, since you’ll eventually want to do testing with a physical prototype, you’ll probably want an instrumentation/data acquisition system, to capture and use test data.

And you thought golf clubs were simple.

Well, golf clubs look simple, at least. But they require real engineering, based on real science. That implies that they require serious engineering software tools. I don’t think you can get away with using SketchUp and AutoCAD LT. (As nice as they are for some things.)

The toolset for designing commercially competitive wood heads (which are not usually made out of wood anymore) includes CAD/CAID, meshing, FEA, CFD, post-processing, optimization, math, instrumentation, and probably a half-dozen things I’ve forgotten. I’m not counting manufacturing tools, because, at least for wood heads, most are produced by foundries using investment casting.

While I could tell you, off the top of my head, what toolsets I think might work well for this design problem, I’m far more interested in hearing what toolsets you think would work best. If you have some thoughts, either leave a comment, or write me a note at evan@yares.com.

 

Oct 17, 2011

Here’s a simple test for you: Use your CAD system to model a raw chicken egg.

It sounds pretty simple, but it can be maddeningly difficult, depending on how accurate you want your model to be.

As a start, the outside of the shell is a single class-A surface. Though eggs in general seem quite symmetrical, in specific, they aren’t. They have some variance. Pull one out of your refrigerator and measure it, and you’ll see. (I’m assuming that all good engineers keep fresh eggs in their refrigerator, and have calipers handy to measure them.)

If you’re going to model a real-world egg, you’ll need to account for its asymmetry. You may not find two diametrically opposite points of symmetry to use as a sweep axis–which may make using a parametric modeler a bit difficult.
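If you want to experiment without reaching for calipers, there’s a classical closed-form stand-in: the Hügelschäffer egg curve, an ellipse with one extra parameter that shifts the fat end off-center. It captures the end-to-end asymmetry of an egg (a real egg also deviates from being a perfect surface of revolution, which this does not capture). A short Python sketch, with invented dimensions:

```python
import math

def egg_half_breadth(x, L=6.0, B=4.5, w=0.4):
    """Hugelschaffer egg profile: half-breadth at axial position x.
    L = length, B = max breadth, w = asymmetry offset (hypothetical cm).
    With w = 0 this degenerates to an ellipse; w > 0 makes it egg-shaped."""
    return (B / 2.0) * math.sqrt((L * L - 4 * x * x) /
                                 (L * L + 8 * w * x + 4 * w * w))

# Sample the profile from tip to tip; revolving it about the x-axis
# gives an idealized shell surface.
xs = [-3.0 + 6.0 * i / 600 for i in range(601)]
ys = [egg_half_breadth(x) for x in xs]
```

Note that the widest point does not fall at mid-length (find the maximum of `ys` and see), which is exactly the asymmetry that frustrates the naive sweep-about-a-symmetry-axis approach.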

But let’s say you pay attention, and use a parametric, direct, or surface modeler to model the egg shell accurately as a single NURBS surface. You’re still not done.

You need to model the inside of the egg. The whites and yolk are fluid structures, and you need to deal with their interfaces (which, I’m guessing, are non-manifold), as well as their viscosities and surface tensions.

Suppose, though, I let you off the hook, and say you don’t need to model anything on the egg that you can’t see from the outside.

That doesn’t really let you off the hook.

There’s an old test to determine whether an egg is raw, or hard-boiled. You spin it on a hard surface. If it spins easily and quickly, it’s hard-boiled. If it spins with more difficulty, and slows down quickly (because the liquid yolk and whites are damping its motion), it’s raw. (Here’s a variant of this test that works even better.)
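You can get a feel for why the spin test works from a toy lumped-parameter model: treat the raw egg as two coupled inertias (shell and liquid interior) and the boiled egg as one rigid body. The numbers below are invented, and this is nowhere near real egg dynamics (which involve actual fluid mechanics), but the qualitative behavior falls out.

```python
def spin_down(raw: bool, omega0=50.0, dt=1e-3, t_end=2.0):
    """Toy spin-down model with hypothetical parameters.
    Boiled egg: one rigid body, slowed only by table friction.
    Raw egg: the shell drags a fluid interior through a viscous
    coupling, which bleeds energy out of the visible spin."""
    I_shell, I_fluid = 0.2, 0.8   # inertia split between shell and interior
    c, mu = 2.0, 0.05             # viscous coupling, table friction rate
    if not raw:
        w = omega0
        t = 0.0
        while t < t_end:
            w += dt * (-mu * w)   # simple frictional decay
            t += dt
        return w
    w_s, w_f = omega0, 0.0        # shell starts spinning, fluid at rest
    t = 0.0
    while t < t_end:              # forward-Euler integration
        torque = c * (w_s - w_f)  # internal viscous torque
        w_s += dt * (-torque / I_shell - mu * w_s)
        w_f += dt * (torque / I_fluid)
        t += dt
    return w_s

raw_spin = spin_down(raw=True)
boiled_spin = spin_down(raw=False)
```

Run it, and the “raw” shell ends up spinning far slower than the “boiled” egg, because the fluid interior soaks up most of the angular momentum. That difference is what your fingers feel.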

You may be able to model the outside surface of an egg shell, but it’s a lot harder to model the egg as a self-contained system.

There is a point to this exercise: it’s to get you thinking about abstraction. All CAD models are abstractions of the real-world objects they represent. The real issue with abstractions is their appropriateness for purpose.

NURBS-based b-rep surface models are appropriate abstractions for many purposes. But not for all purposes. Consider some examples: FEA, CFD, CNC, and RP. All of these require different abstractions. If you look outside the realm of purely geometric representations, there are many more useful abstractions.

Today’s CAD, CAM, CAE, and PLM systems have a difficult time managing multiple abstractions. I suspect this has a lot to do with their underlying object models. I believe it’s something that will change over time. But I don’t believe it’s something that can be easily patched onto old programs.

 

A couple of days ago, I saw a conversation thread on Twitter about geometric modeling kernels. It wasn’t much of a thread—just a few comments back and forth to the effect that modeling kernels are like car engines, and that if you can’t tell the difference without looking under the hood, it doesn’t matter which one you have.

CAD users don’t think too much about what kernel their software uses. I suppose most of them can’t tell anyway. But that doesn’t mean kernels don’t matter.

There are all kinds of potential problems that can crop up with modeling kernels. A while back, I published a couple of articles about interoperability problems (which are inherently related to kernels), one from an academic perspective, and one from the perspective of a kernel guru.

About a month ago, I wrote a series of articles on configuration modeling, pointing out that no modern CAD systems can really do this. A couple of days ago, I made an off-hand comment in an article that a picture I showed (of a sphere) was really a cube that had its edges blended (e.g., start with a 2” cube, and fillet all the edges at 1”.) I learned that trick 15 years ago with SolidWorks. Several readers wrote or commented that they were unable to do it with their modern CAD systems.
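Incidentally, the fillet-a-cube-into-a-sphere trick is easy to verify mathematically, even if your CAD system refuses to execute it. In signed-distance-function terms, rounding a box’s edges with radius r is equivalent to shrinking the box by r and inflating it back; fillet a 2” cube at 1” and the shrunken box collapses to a point, leaving exactly a 1” sphere. Here’s a quick numerical check (an SDF identity, not a b-rep fillet, so it says nothing about whether any particular kernel can perform the operation):

```python
import math

def sd_rounded_box(p, half, r):
    """Signed distance to a box of half-extents `half` with all edges
    rounded at radius r (the standard shrink-then-inflate construction)."""
    q = [abs(pi) - (h - r) for pi, h in zip(p, half)]
    outside = math.sqrt(sum(max(qi, 0.0) ** 2 for qi in q))
    inside = min(max(q), 0.0)
    return outside + inside - r

def sd_sphere(p, r):
    """Signed distance to a sphere of radius r at the origin."""
    return math.sqrt(sum(pi * pi for pi in p)) - r

# A 2" cube (half-extent 1") filleted at 1" leaves no flat face at all:
# the distance field is identical to that of a 1" sphere everywhere.
pts = [(0.9, 0.2, -0.5), (0.0, 0.0, 1.7), (-1.0, 1.0, 1.0)]
same = all(abs(sd_rounded_box(p, (1.0, 1.0, 1.0), 1.0) - sd_sphere(p, 1.0)) < 1e-12
           for p in pts)
```

The geometry is unambiguous; whether a given modeling kernel can actually carry out the blend is the interesting question, and it’s exactly what those readers ran into.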

The most common sign of a kernel-related problem is when a CAD user tries to create a geometric feature, and the result is a failure.

Think about that for a moment.  You’re working on a CAD system, trying to create a feature, and the system does something unexpected.  That’s a big red flag saying the modeling kernel can’t handle what you’re asking it to do.

As an aside, I think it’s mighty interesting that one of the signs of an expert CAD user is their ability to work around limitations in the kernels of their CAD programs that would otherwise create modeling failures.

So, yes, geometric modeling kernels matter. Even to CAD users who don’t realize it.

Yet, there is no best alternative when it comes to geometric modeling kernels. ACIS, Parasolid, CGM, Granite and the proprietary kernels out there each have their own kinks. None is so much better than its competitors that I want to jump up and down and say “everybody look at this!”

The spark that set off the Twitter thread that inspired this article was an announcement from Siemens PLM of a webinar to be held on November 8. Here’s the description from the Siemens website:

At the core of your mechanical CAD software is the modeling kernel, an often overlooked tool. The kernel is key to your ability to compute 3D shapes and models and output 2D drawings from 3D geometry. In this webcast, learn the basics about kernels and what impacts a change in this core code can have on your company’s existing and future design data. Dan Staples, development director for Solid Edge at Siemens PLM Software, is joined by medical device designer Billy Oliver from Helena Laboratories to explore the issues facing hundreds of thousands designers and millions of CAD files.

    • The math inside your feature tree
    • Real-world lessons learned in changing kernels
    • Modeling loss, data protection, and reuse risks
    • Impact on hundreds of thousands designers and millions of CAD files
    • Case study: Helena Laboratories ensures data protection

You can register for the webinar here.

While I expect the webinar will be, by its nature, slanted towards Siemens PLM and its Parasolid kernel, I suspect that quite a lot of what will be discussed will be interesting to people who have no intention of changing their CAD tools. I’m planning on listening in.

I doubt that most CAD users will ever spend much energy thinking about their CAD programs’ modeling kernels. But CAD users should spend some energy thinking about broader issues, such as usability and interoperability, which are affected by modeling kernels.

A curious thing that I’ve noticed about social product development initiatives is that they tend to leave out designers and engineers (except the ones on the payroll.)

I can understand this when design and engineering are at the heart of a company’s sustainable competitive advantage–but in many social product development projects, that isn’t the case.

I’m going to use Quirky as an example. I’ve watched a few development projects on Quirky, and felt largely unmotivated to contribute. Not because I don’t have anything to contribute, but rather because Quirky wasn’t soliciting contributions where mine would particularly stand out. Choosing product names or colors may be important, but my opinions on these sorts of things are no more valuable than anyone else’s. Answering this sort of polling question holds about as much interest for me as participating in the customer service poll advertised on Home Depot receipts.

Now, if Quirky (or any other social site) were to ask questions where I have some domain expertise, the story would be different. Ask me about trade-offs between stepper and servo motors, and not only would I have an educated opinion, but I’d also be willing to contribute it. I might even be willing to help with motor sizing and drive design. (Once upon a time, I used to design motion/logic systems for a living.)

Of course, Quirky attracts a lot of contributors. I suspect it’s not because those contributors feel compelled to share their domain expertise, but rather because Quirky has gamified the process of contributing. It seems rather akin to voting for your favorite performer on American Idol.

An example of a social help site that takes good advantage of domain expertise is Stack Overflow, which provides social answers to programming questions. Through a combination of techniques, including moderation, voting, reputation building, and pure coolness, Stack Overflow manages to attract heavy hitters to answer serious programming questions.

Stack Overflow has been so successful that its creators have expanded the concept to a variety of other sites, covering everything from mathematics to garden gnomes. Well, maybe not garden gnomes. But my point is that, with the right combination of pixie dust, it’s possible to attract serious contributors who have deep expertise.

Two other sites that do seem to attract real expertise are Innocentive and GrabCAD. They do this by offering serious challenges, coupled with appropriate compensation.

In the case of Innocentive, the challenges are often “non-trivial.” Not quite as hard as “fix global warming,” but in the same general neighborhood. Challenges tend to be in areas that just happen to have corresponding Nobel (or other international) prizes. And the rewards offered seem to be in line with the value of the challenge—ranging up to $1 million.

GrabCAD doesn’t offer such high rewards, but it does offer challenges that seem more up the alley of design engineers. One recent challenge was to design a triple clamp for an electric racing superbike, with the winner to receive an iPad 2. In a matter of weeks, site members had submitted over 150 designs—not just pretty pictures, but high-quality solid models. I can’t say that the design of a triple clamp is a particularly challenging engineering problem, as these things go, but I looked at some of the photos of the submitted designs, and was impressed. This was pro-level work. The challenge sponsor got way more than their money’s worth.

Something I’ve noticed about CAD designers and engineers is that they consider some things fun, and some things not so fun. Solving design problems is fun–which is probably why GrabCAD can get so much participation from its community members. To engage designers and engineers in a social product development enterprise, you have to focus on fun things, and make the not so fun things as transparent as possible.

 

Oct 11, 2011

Remember the old folktale about stone soup? Here’s how Wikipedia relates it:

Some travellers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the hungry travellers. So the travellers go to the neck of the stream and fill the pot with water, drop a large stone in it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travellers answer that they are making “stone soup”, which tastes wonderful, although it still needs a little bit of garnish to improve the flavor, which they are missing. The villager does not mind parting with just a little bit of carrot to help them out, so it gets added to the soup. Another villager walks by, inquiring about the pot, and the travellers again mention their stone soup which has not reached its full potential yet. The villager hands them a little bit of seasoning to help them out. More and more villagers walk by, each adding another ingredient. Finally, a delicious and nourishing pot of soup is enjoyed by all.

Does the story remind you of anything? How about social product development?

There are quite a number of companies doing their own version of stone soup these days. Off the top of my head, I can think of GrabCAD, Local Motors, Quirky, The LEGO CL!CK Community, Innocentive, Instructables, and Thingiverse. I’ve probably missed a dozen or two other truly high profile projects, and hundreds of smaller projects.

Each of these projects has lessons to teach. But none of them cover the entire range of the new product development process—from the fuzzy front end to commercialization. They each start with a lot of cabbage in the soup. (Sorry about the strained metaphor there.)

Something I’ve been mulling over recently is this: What is the best way to make stone soup? That is, if you wanted to build a best-in-class hyper-social product development business—incorporating the best ideas in co-creation and open innovation—what people, processes and resources would you want to have?

Is reducing variability in the product development process a good idea, or a bad idea?

It’s a trick question. Reducing the economic impact of variability is good. Reducing variability itself can drive innovation out of the development process. Hardly the result you’d want.

Don Reinertsen, a thought leader in the field of product development for over 30 years, says that 65 percent of product developers he surveys consider it desirable to eliminate as much variability as possible in product development. He also says this view is completely disconnected from any deep understanding of product development economics.

In his 2009 book, The Principles of Product Development Flow, Reinertsen provides a compelling economic analysis of product development, and makes the case that today’s dominant paradigm for managing product development is fundamentally wrong—to its very core. You can download and read the first chapter of the book here. I think you ought to do so right now. (It’ll only take a short while, and the rest of this article can wait until you’re done.)

Let’s look at a few of Reinertsen’s key points on variability:

First, without variability, we cannot innovate. Product development produces the recipes for products, not the products themselves. If a design does not change, there can be no value-added. But, when we change a design, we introduce uncertainty and variability in outcomes. We cannot eliminate all variability without eliminating all value-added.

Second, variability is only a proxy variable. We are actually interested in influencing the economic cost of this variability.

Third… we can actually design development processes such that increases in variability will improve, rather than worsen, our economic performance.
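Reinertsen’s third point can be made concrete with a toy calculation. Suppose a project’s outcome is a random draw, and the payoff is asymmetric: the downside is capped (you can kill a bad design early and cheaply), while the upside is not. Then raising variability raises the expected payoff. The sketch below is mine, not Reinertsen’s (a hypothetical option-like payoff function and plain numerical integration), but it shows the effect:

```python
import math

def expected_payoff(sigma, floor=-0.5, n=20001, span=10.0):
    """E[max(X, floor)] for X ~ Normal(0, sigma), by trapezoid quadrature
    over +/- span standard deviations. The floor models killing a bad
    design early: the downside is capped, the upside is not."""
    total = 0.0
    h = 2.0 * span * sigma / (n - 1)
    for i in range(n):
        x = -span * sigma + i * h
        pdf = math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * max(x, floor) * pdf * h
    return total

low_var = expected_payoff(sigma=0.2)   # tame projects: little upside to capture
high_var = expected_payoff(sigma=1.0)  # risky projects: more upside, same capped downside
```

Truncate the bad outcomes, and variance stops being your enemy. That’s the economic asymmetry the 65 percent are missing.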

Reinertsen provides a number of possible solutions for dealing with variability in his book. An important one is flexibility:

In pursuit of efficiency, product developers use specialized resources loaded to high levels of utilization. Our current orthodoxy accepts inflexibility in return for efficiency. But what happens when this inflexibility encounters variability? We get delays…

Flow-based Product Development suggests that our development processes can be both efficient and responsive in the presence of variability. To do this, we must make resources, people, and processes flexible.

Resources—data and tools—are an important area of interest for me. So, the question occurs to me: how can resources be made flexible?

That’s not really a question I can answer in a short article. Maybe over the next couple of years on this blog, I could start to do some justice to the question. But, as a beginning, let me suggest these concepts:

  • Data must be consumable. What matters most is that you’re able to use your data, with the tools of your choice, to get your work done. The key thing to look for is the capability of your core tools to save data accurately, at the proper level of abstraction, in formats that can be consumed by many other tools.
  • Monoculture may sometimes boost efficiency, but it often kills flexibility. Figure on using engineering software tools from more than one vendor.
  • One size does not fit all. Different people and processes need different tools. You may need more than one CAD, CAM, or CAE program.

I’d be very interested in hearing your thoughts on efficiency vs. flexibility in engineering software.

Are you curious what the next-generation platform for social product development might look like? How about who it might come from?

Michael Fauscette is the lead analyst in IDC’s Software Business Solutions Group, and writes often about software ecosystems and emerging software business models. His thinking is that the next-generation enterprise platform has to be built on a foundation of people-centric collaboration:

New “social” collaboration tools must connect people inside and outside the enterprise but do it in a way that provides real time communications and real time access to supporting content, data and systems in the context of the activity. Moreover this tool (or tools) must support ad hoc work groups that need to reach beyond traditional enterprise boundaries and at times include customers, partners and suppliers, while protecting enterprise intellectual property and providing flexible security. Contextual collaboration also implies that the tool resides inside employees’ workflow and thus inside current enterprise applications. Embedded, contextual, real time, ad hoc, people-centric collaboration.

To date, I’ve not seen any PLM or engineering software vendors provide a toolset that meets these criteria. But that’s not to say I haven’t seen flashes of bits and pieces of it:

  • PTC’s Windchill SocialLink, built on Microsoft SharePoint, provides a more product development-centric social graph than other enterprise microblogging platforms (e.g., SocialCast, SocialText, Novell Vibe, Salesforce Chatter.) You’d expect that, since it is, after all, integrated with Windchill. PTC also put their money where their mouth is with SocialLink, and used it as the social backbone for the development of their Creo products. Yet, it’s still a young product. A new version will be coming out soon, so it’ll likely grow quite a bit in capabilities.
  • Dassault Systemes has a number of tools that fit in the realm of social product development. In the V6 portfolio of products, 3DLive is a 3D search/viewing and collaboration tool that’s integrated with Microsoft Communication Server. It serves as a foundation for a number of other “Live” products, including Live Collaborative Review, Live Fastener Review, Live Process Review, and Live Simulation Review.
  • Siemens PLM’s Active Workspace isn’t out just yet, but, based on previews, looks to be a seriously interesting tool.
  • SpaceClaim, though not explicitly focusing on social product development, has found that their software is getting regularly used by customers (in conjunction with GoToMeeting and similar streaming tools) for digital mockup and design review.

I could probably go on for a long time talking about interesting tools that support social product development in one way or another. But what I can’t do is talk about tools that meet Fauscette’s criteria of providing embedded, contextual, real time, ad hoc, people-centric collaboration. Such tools don’t seem to exist yet.

One problem I see with existing PLM tools, in the context of social product development, is that they distinguish too sharply between first-class users, and those who are stuck in economy-class. While they provide an optimal set of capabilities for people inside the enterprise boundaries, they provide a far more limited set of capabilities for people outside the enterprise boundaries. They don’t do a very good job of connecting the voice of the customer with the voice of the process.

I do wonder whether the “next-generation enterprise platforms” for social product development are going to come from the traditional PLM vendors, or from new players—companies which have been built, from the ground up, as socially integrated enterprises.

“I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write…

“I, Pencil, simple though I appear to be, merit your wonder and awe, a claim I shall attempt to prove. In fact, if you can understand me—no, that’s too much to ask of anyone—if you can become aware of the miraculousness which I symbolize, you can help save the freedom mankind is so unhappily losing. I have a profound lesson to teach. And I can teach this lesson better than can an automobile or an airplane or a mechanical dishwasher because—well, because I am seemingly so simple.

“Simple? Yet, not a single person on the face of this earth knows how to make me.”

Leonard E. Read wrote this essay, entitled I, Pencil, in 1958, a month after I was born. In it, he described the complexity of something so seemingly simple, yet requiring the knowledge and effort of thousands of minds to create.  It is an essay you must read.

Product development is not the realm of the lone genius. It is an inherently social and collaborative process. Commenting on Read’s essay, Milton Friedman said:

None of the thousands of persons involved in producing the pencil performed his task because he wanted a pencil. Some among them never saw a pencil and would not know what it is for. Each saw his work as a way to get the goods and services he wanted—goods and services we produced in order to get the pencil we wanted…

It is even more astounding that the pencil was ever produced. No one sitting in a central office gave orders to these thousands of people… These people live in many lands, speak different languages, practice different religions, may even hate one another—yet none of these differences prevented them from cooperating to produce a pencil.

Friedman was an economist, and the lessons he drew from I, Pencil were within this realm. I’m an engineer, so the lessons I draw from Read’s essay are different. Though I first read I, Pencil years ago, the question it raised in my mind has never really changed:

How can we give people better tools to help them work together, and create better products?

No, my question is not “how can we give enterprises better tools.” It is “how can we give people better tools.” Product development may be practiced within enterprises, but it is a people-centric process.

 

Imagine that you were unemployed, and wanted to learn how to use a new Mechanical CAD program. What would you do?

The first challenge would be to get your hands on the software. In some cases, that’d be easy. In other cases, not so easy.

Autodesk makes over 30 of their products available for free through the Autodesk Assistance Program website.

PTC makes a free version of Creo Elements/Direct Modeling Express available to anyone who wants it. The software has some limits (I’m particularly irritated by the lack of a shell function), but it is a decent program for learning Creo Elements/Direct (aka CoCreate.)

SolidWorks used to offer free software through their Engineering Stimulus Package website. No more. Their website says “Having achieved the goal of helping retrain the unemployed workforce, this program has officially ended.” (I’m certainly glad they achieved that goal.)

There are not too many programs tailored to get CAD software in the hands of unemployed engineers. Unless you happen to be a student:

The requirements to get student versions of software vary. In some cases, you need to provide a copy of a student ID. In other cases, you need to show proof of enrollment. In still other cases (such as with Autodesk), you need to have a .edu email account. Beyond this, software availability varies country by country. And, as you might imagine, the software is not licensed for commercial use, only for personal learning.

In the United States, it seems that, in most cases, enrolling to take a single course at a local community college might be enough to make you eligible to get student software.

I’d like to argue that the bar to get access to student versions of MCAD software should be even lower, but vendors might find quibbling over $100 or $150 a bit precious. And it is–unless you’re unemployed, and don’t have that kind of money to spare.

Because I’m not a student, and haven’t tried to get these programs, I can’t say how the verification requirements are enforced. (If you have some experience with student MCAD software, though, I’d be interested in hearing about it.)

Ultimately, the important question I have for CAD vendors is this: Why make it harder than necessary for people to learn to use your software?

 

Last Friday, ProSTEP iViP held a webinar on long-term archiving (LTA). It was worth being up at 4:00 AM to listen in.

While the presenter, Andreas Trautheim, covered quite a bit of information in less than an hour, the thing that especially caught my attention was the part where he described why long-term archiving is important.

Back in 2006, the US Federal Rules of Civil Procedure were changed to require “eDiscovery.” That is, in the event your company ends up in litigation, it will be required to provide the opposing party with electronically stored information—including, potentially, CAD files. Other jurisdictions, including the European Community, have similar evidentiary rules.

While time periods vary depending on jurisdiction, you can generally count on a statute of repose lasting about 10 years after a product has been sold or put into service. Producer liability is typically much longer (Trautheim cited a 30-year period in his presentation.) Your company must maintain its CAD files, in readable condition, for at least those periods.

The following, from VDA4958-1, speaks to archiving formats:

There are no special archiving formats prescribed by law. However, storing documents in proprietary data formats for periods of 12 years and longer could prove to be extremely difficult technically and/or cost intensive. Any loss of data, among other things, could be interpreted in a way detrimental to the manufacturer. To ensure LTA capability, the archiving of proprietary data formats and/or binary data should therefore be avoided.

Both VDA4958 and LOTAR are standards-based initiatives addressing long-term archiving. (You can find a good presentation on them here.)

A number of years ago, it occurred to me that if anything would ultimately drive the support of interoperable CAD data formats (which is essentially what archiving formats are), it would be legal requirements. It appears that’s what’s happening.

The important question is this: Is LTA something you need to pay attention to now, or can you afford to wait?

Here are the reasons Trautheim thinks the answer is “now”:

  • It reduces the risks of legal demands, potentially saving a lot of money, as well as your good reputation.
  • You get synergy effects of LTA and drawing-less processes.
  • You save process time and resources in design, in communicating engineering (3D/2D/LDM) data, and in collaborating with your customers/OEMs and partners.
  • You get documentation that is independent of native CAD/PDM systems/releases over years, without migrations.
  • It makes you innovative, and puts you out in front of your competitors.