Autodesk, which has been running a teaser ad campaign promising that on 11/29/2011 “everything changes,” has announced their entry into the PLM game: 360 Nexus.

As part of their rollout, they’ve published a paper titled “Autodesk Extends Benefits of PLM to Everyone at Anytime, from Anywhere,” Volume 1. Catchy title – I wonder whether they’ve bought the domain name.

Of its 10 pages, the first 2 and the last 4 are marketing content. The middle 4 pages are a summary of some research from Gartner analyst Marc Halpern.

That’s the interesting part.

Halpern points out some things that big PLM vendors might not be thrilled about having said out loud. Here are some of his research findings:

“A growing number of Gartner clients are frustrated by the high cost of purchasing, deploying and upgrading PLM software.”

“PLM software vendors do a better job providing technical support for their software than providing more general business-related advice and services.”

“A Gartner survey suggests manufacturers pay substantially more when they contract PLM software vendors for business consulting beyond the technical details of implementations.”

“Several of Gartner’s manufacturing clients have commented that software vendors often recommend the purchase of more PLM software as a routine part of service engagements.”

“Transitions to PLM platforms such as Dassault Enovia v.6, Oracle Fusion, PTC’s Creo [I think that's a typo, and he meant Windchill], SAP’s ECC 6 and Siemens Teamcenter Unified are stimulating more needs for services that these software vendors want to profit from.”

In short, the big PLM vendors are charging a ton for services, and customers are still disappointed.

Halpern’s recommendations based on these findings center on the idea that manufacturers should not use PLM vendors’ service organizations for anything but the technical details of deploying their software. He suggests manufacturers either do business process re-engineering in-house (which Gartner, in the past, has said most companies lack the skills to do), or engage professional services organizations (which presumably won’t try to load them up with extra software).

I’m guessing that Halpern isn’t getting any Christmas cards from SAP, Oracle, Siemens, Dassault, or PTC.

The Autodesk Approach

I’m guessing that Autodesk wouldn’t have paid Gartner for the right to use their research if it didn’t play into their PLM strategy with 360 Nexus.

I’ve not seen 360 Nexus. The best I can say is that I’ve read articles about it, including Al Dean’s article at Develop3D, and Ken Wong’s article at Desktop Engineering.

My impression is that, with 360 Nexus, Autodesk probably has a really interesting SaaS cloud-based BPM system bolted to a PDM.

But, even if I give it the benefit of the doubt for its technical chops (and, remember, it’s not shipping yet), I can’t see it changing “everything” (as promised by Autodesk’s teaser ad campaign).

Why?

First, because PLM is traditionally enterprise software. Trying to compete in the PLM space is like trying to compete in the ERP space. It’s not going to be easy — the big vendors are nearly impossible to displace (think: competitive lock-out).

And, second, because Autodesk isn’t the only company that’s thinking out of the PLM box. There are others.

Last month, Siemens PLM announced that it had entered into a partnership with Local Motors, a company that uses crowdsourcing for collaborative design and development of cars.

As a part of the agreement, Local Motors has adopted Solid Edge as the design tool for its recently launched Open Electric Vehicle project. Beyond this, Siemens didn’t really give too many details.

Today, they’ve announced the rest of the story. And it’s really interesting.

As a start, let’s talk about Local Motors. There are a whole bunch of “social product innovation” companies out there now. Local Motors is something more: a “social product engineering” company. Its 13,000-member community not only contributes to the conceptual design of Local Motors cars, it contributes to the detail design and engineering of those cars.

One of the challenges that Local Motors has faced is that not everyone who wants to participate has access to professional-level CAD software. John “Jay” Rogers, the CEO of Local Motors, talked to all the biggest CAD vendors, asking them for a CAD product that he could offer to his community at a price they could afford. All but one of those companies blew him off (or, at least, didn’t take him seriously).

The company that did take him seriously was Siemens PLM.

Siemens has developed two new products, and is making them available through Local Motors.

The first is a browser-based version of its JT viewer. JT is the most widely used lightweight 3D file format in the automotive industry. With this viewer, a community member can view, section, and measure 3D models from directly within the Local Motors website. For free.

The second is a special version of Solid Edge, called Design1. Solid Edge has traditionally been a feature-based parametric solid modeling CAD system. Several years ago, Siemens added direct modeling to Solid Edge, in the form of Synchronous Technology. The new Solid Edge Design1 product is a Synchronous Technology-only version. It has no feature-based parametric modeling tools. Which is to say, no “history-based” modeling. (You can read a bit more about Design1 at the Siemens PLM blog.)

Given the likely context of use by Local Motors community members, a direct modeling CAD system is probably a better choice than a history-based CAD system. With a direct modeling system, users can edit CAD files no matter how they were created. Design1, incidentally, has the capability to read and write a wide range of formats, including Parasolid, JT, NX, ACIS, Pro/E, IGES, Inventor, SolidWorks, STEP, STL, and PLM XML.

Because Design1 is based on the same Synchronous Technology as the full version of Solid Edge, it has a capability that few (if any) other direct modeling CAD systems have: model reparameterization. In short, a user can add driving dimensions to a dumb CAD model, and when saved (in native Solid Edge format), those dimensions are persistent.

The only thing (other than history) that I can find that’s not in Design1 is support for class-A surfaces. That would be a useful thing for people who want to design car bodies, but given the end-user price that Local Motors negotiated for Design1, it’s not surprising that Siemens didn’t include it.

That price, incidentally, is $19.95 per month, with no long-term contract.

For the immediate future, Siemens is offering Design1 only through Local Motors. To get it, you’ll need to go to the Local Motors website, and join the community. For the next couple of months, the software is being offered only to a limited number of users. After the beginning of the year, it will be opened up to anyone who wants it.

The Bigger Picture

While I think the Local Motors deal is interesting, what I find more interesting is the potential Design1 might have in Siemens’ (and its competitors’) major accounts, as a low-cost interstitial CAD tool for use by engineers and others who are not full-time CAD users, or who simply don’t need history-based CAD. I could imagine some companies (particularly large automotive companies) signing up for literally thousands of copies. It could make things pretty interesting in the CAD business.

UPDATE: Solid Edge Design1 native files are not compatible with commercial Solid Edge licenses, and Design1 includes no rendering or adjustable-component design. Thanks to Josh Mings and Al Dean for pointing these out. (I don’t think these limitations are significant problems for most use cases of Design1.)

SECOND UPDATE: I checked with Mark Burhop, from Siemens. Here’s what he said: “Design1 can use any Solid Edge files. However, Solid Edge cannot read Design1 [native] files without conversion. Having said that, Local Motors will convert all Design1 files to regular Solid Edge files when uploaded to their server. So, within the context of Local Motors it is fully open.”

Oct 17, 2011

Here’s a simple test for you: Use your CAD system to model a raw chicken egg.

It sounds pretty simple, but it can be maddeningly difficult, depending on how accurate you want your model to be.

As a start, the outside of the shell is a single class-A surface. Though eggs in general seem quite symmetrical, any specific egg isn’t. They have some variance. Pull one out of your refrigerator and measure it, and you’ll see. (I’m assuming that all good engineers keep fresh eggs in their refrigerator, and have calipers handy to measure them.)

If you’re going to model a real-world egg, you’ll need to account for its asymmetry. You may not find two diametrically opposite points of symmetry to use as a sweep axis, which may make using a parametric modeler a bit difficult.
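To make the asymmetry concrete, here’s a minimal Python sketch of one classic asymmetric egg profile, the Hügelschäffer curve. (The curve is a reasonable stand-in, not a claim about any particular egg; the default dimensions are illustrative, not measured.)

```python
import numpy as np

# Hügelschäffer egg curve: rotationally symmetric about the long axis,
# but asymmetric end-to-end (the blunt end is fuller than the pointed one).
# L = overall length, B = maximum breadth, w = asymmetry offset (all mm).
def egg_profile(L=57.0, B=42.0, w=3.0, n=200):
    x = np.linspace(-L / 2, L / 2, n)
    y = (B / 2) * np.sqrt((L**2 - 4 * x**2) / (L**2 + 8 * w * x + 4 * w**2))
    return x, y  # revolve this half-profile about the x-axis to get a shell

x, y = egg_profile()
print(f"breadth at mid-length: {2 * y[len(y) // 2]:.1f} mm")  # less than B: max breadth is off-center
```

Even this is idealized: it’s still a perfect surface of revolution, which a real egg isn’t.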

But let’s say you pay attention, and use a parametric, direct, or surface modeler to model the egg shell accurately as a single NURBS surface. You’re still not done.

You need to model the inside of the egg. The whites and yolk are fluid structures, and you need to deal with their interfaces (which, I’m guessing, are non-manifold), as well as their viscosity and surface tension.

Suppose, though, I let you off the hook, and say you don’t need to model anything on the egg that you can’t see from the outside.

That doesn’t really let you off the hook.

There’s an old test to determine whether an egg is raw, or hard-boiled. You spin it on a hard surface. If it spins easily and quickly, it’s hard-boiled. If it spins with more difficulty, and slows down quickly (because the liquid yolk and whites are damping its motion), it’s raw. (Here’s a variant of this test that works even better.)
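A crude way to capture that behavior (a toy model of my own, not a serious fluid analysis) is to treat the shell and the liquid interior as two rotors coupled by viscous drag:

```latex
\[
  I_s\,\dot{\omega}_s = -c\,(\omega_s - \omega_f), \qquad
  I_f\,\dot{\omega}_f = +c\,(\omega_s - \omega_f)
\]
```

Here I_s and I_f are the moments of inertia of the shell and the fluid, and c is a viscous coupling coefficient. A hard-boiled egg is the rigid limit (c → ∞, shell and interior spinning as one); a raw egg has finite c, so the spinning shell sheds angular momentum into the lagging fluid and visibly slows. Add table friction, and even this cartoon gets messy, which is rather the point.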

You may be able to model the outside surface of an egg shell, but it’s a lot harder to model the egg as a self-contained system.

There is a point to this exercise. It’s to get you thinking about abstraction. All CAD models are abstractions of the real-world objects they represent. The real issue with abstractions is their appropriateness for purpose.

NURBS-based b-rep surface models are appropriate abstractions for many purposes. But not for all purposes. Consider some examples: FEA, CFD, CNC, and RP. All of these require different abstractions. If you look outside the realm of purely geometric representations, there are many more useful abstractions.

Today’s CAD, CAM, CAE, and PLM systems have a difficult time managing multiple abstractions. I suspect this has a lot to do with their underlying object models. I believe it’s something that will change over time. But I don’t believe it’s something that can be easily patched onto old programs.
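To illustrate what I mean, here’s a toy sketch in Python (my own illustration, not a description of any shipping system’s object model) of a design object that carries several purpose-specific abstractions of the same part:

```python
from dataclasses import dataclass, field

# Toy sketch: one design object carrying several purpose-specific
# abstractions (representations) of the same part.
@dataclass
class Representation:
    purpose: str   # e.g., "visualization", "FEA", "RP"
    kind: str      # e.g., "NURBS b-rep", "tet mesh", "STL mesh"

@dataclass
class DesignObject:
    name: str
    representations: list = field(default_factory=list)

    def for_purpose(self, purpose: str):
        # Hand back the abstraction appropriate to the task at hand.
        return next((r for r in self.representations if r.purpose == purpose), None)

shell = DesignObject("egg shell")
shell.representations += [
    Representation("visualization", "NURBS b-rep"),
    Representation("FEA", "quadratic tet mesh"),
    Representation("RP", "STL triangle mesh"),
]
print(shell.for_purpose("FEA").kind)  # -> quadratic tet mesh
```

Most of today’s systems effectively invert this: the b-rep is the model, and every other abstraction is a derived file, loosely coupled at best.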


A couple of days ago, I saw a conversation thread on Twitter about geometric modeling kernels. It wasn’t much of a thread—just a few comments back and forth to the effect that modeling kernels are like car engines, and that if you can’t tell the difference without looking under the hood, it doesn’t matter which one you have.

CAD users don’t think too much about what kernel their software uses. I suppose most of them can’t tell anyway. But that doesn’t mean kernels don’t matter.

There are all kinds of potential problems that can crop up with modeling kernels. A while back, I published a couple of articles about interoperability problems (which are inherently related to kernels), one from an academic perspective, and one from the perspective of a kernel guru.

About a month ago, I wrote a series of articles on configuration modeling, pointing out that no modern CAD systems can really do this. A couple of days ago, I made an off-hand comment in an article that a picture I showed (of a sphere) was really a cube that had its edges blended (e.g., start with a 2” cube, and fillet all the edges at 1”). I learned that trick 15 years ago with SolidWorks. Several readers wrote or commented that they were unable to do it with their modern CAD systems.
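For the curious, here’s one way to see why the trick should work, assuming exact constant-radius, rolling-ball blends:

```latex
% Rounding all edges and corners of a cube of side s at radius r yields
% the outer parallel body of a smaller cube: a Minkowski sum with a ball.
\[
  \operatorname{round}_r(C_s) \;=\; C_{\,s-2r} \oplus B_r
\]
% With s = 2 and r = 1, the inner cube degenerates to a point,
% so the result is exactly the ball B_1: a sphere of radius 1.
```

A kernel has to handle the degenerate limit, where the cube’s faces and edges shrink away to nothing, and that, presumably, is where some of them give up.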

The most common sign of a kernel problem appears when a CAD user tries to create a geometric feature, and the result is a failure.

Think about that for a moment. You’re working on a CAD system, trying to create a feature, and the system does something unexpected. That’s a big red flag saying the modeling kernel can’t handle what you’re asking it to do.

As an aside, I think it’s mighty interesting that one of the signs of an expert CAD user is their ability to work around limitations in the kernels of their CAD programs that would otherwise create modeling failures.

So, yes, geometric modeling kernels matter. Even to CAD users who don’t realize it.

Yet, there is no best alternative when it comes to geometric modeling kernels. ACIS, Parasolid, CGM, Granite and the proprietary kernels out there each have their own kinks. None is so much better than its competitors that I want to jump up and down and say “everybody look at this!”

The spark that set off the Twitter thread that inspired this article was an announcement from Siemens PLM of a webinar to be held on November 8. Here’s the description from the Siemens website:

At the core of your mechanical CAD software is the modeling kernel, an often overlooked tool. The kernel is key to your ability to compute 3D shapes and models and output 2D drawings from 3D geometry. In this webcast, learn the basics about kernels and what impacts a change in this core code can have on your company’s existing and future design data. Dan Staples, development director for Solid Edge at Siemens PLM Software, is joined by medical device designer Billy Oliver from Helena Laboratories to explore the issues facing hundreds of thousands of designers and millions of CAD files.

    • The math inside your feature tree
    • Real-world lessons learned in changing kernels
    • Modeling loss, data protection, and reuse risks
    • Impact on hundreds of thousands of designers and millions of CAD files
    • Case study: Helena Laboratories ensures data protection

You can register for the webinar here.

While I expect the webinar will be, by its nature, slanted towards Siemens PLM and its Parasolid kernel, I suspect that quite a lot of what will be discussed will be interesting to people who have no intention of changing their CAD tools. I’m planning on listening in.

I doubt that most CAD users will ever spend much energy thinking about their CAD programs’ modeling kernels. But CAD users should spend some energy thinking about broader issues, such as usability and interoperability, which are affected by modeling kernels.

Oct 11, 2011

Remember the old folktale about stone soup? Here’s how Wikipedia relates it:

Some travellers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the hungry travellers. So the travellers go to a stream and fill the pot with water, drop a large stone in it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travellers answer that they are making “stone soup”, which tastes wonderful, although it still needs a little bit of garnish to improve the flavor, which they are missing. The villager does not mind parting with just a little bit of carrot to help them out, so it gets added to the soup. Another villager walks by, inquiring about the pot, and the travellers again mention their stone soup which has not reached its full potential yet. The villager hands them a little bit of seasoning to help them out. More and more villagers walk by, each adding another ingredient. Finally, a delicious and nourishing pot of soup is enjoyed by all.

Does the story remind you of anything? How about social product development?

There are quite a number of companies doing their own version of stone soup these days. Off the top of my head, I can think of GrabCAD, Local Motors, Quirky, The LEGO CL!CK Community, Innocentive, Instructables, and Thingiverse. I’ve probably missed a dozen or two other truly high-profile projects, and hundreds of smaller projects.

Each of these projects has lessons to teach. But none of them cover the entire range of the new product development process—from the fuzzy front end to commercialization. They each start with a lot of cabbage in the soup. (Sorry about the strained metaphor there.)

Something I’ve been mulling over recently is this: What is the best way to make stone soup? That is, if you wanted to build a best-in-class hyper-social product development business—incorporating the best ideas in co-creation and open innovation—what people, processes and resources would you want to have?

Is reducing variability in the product development process a good idea, or a bad idea?

It’s a trick question. Reducing the economic impact of variability is good. Reducing variability itself can drive innovation out of the development process. Hardly the result you’d want.

Don Reinertsen, a thought leader in the field of product development for over 30 years, says that 65 percent of product developers he surveys consider it desirable to eliminate as much variability as possible in product development. He also says this view is completely disconnected from any deep understanding of product development economics.

In his 2009 book, The Principles of Product Development Flow, Reinertsen provides a compelling economic analysis of product development, and makes the case that today’s dominant paradigm for managing product development is fundamentally wrong—to its very core. You can download and read the first chapter of the book here. I think you ought to do so right now. (It’ll only take a short while, and the rest of this article can wait until you’re done.)

Let’s look at a few of Reinertsen’s key points on variability:

First, without variability, we cannot innovate. Product development produces the recipes for products, not the products themselves. If a design does not change, there can be no value-added. But, when we change a design, we introduce uncertainty and variability in outcomes. We cannot eliminate all variability without eliminating all value-added.

Second, variability is only a proxy variable. We are actually interested in influencing the economic cost of this variability.

Third… we can actually design development processes such that increases in variability will improve, rather than worsen, our economic performance.
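Here’s a toy simulation of my own (not from the book, though it reflects Reinertsen’s point about asymmetric payoffs): when the downside of a design experiment is capped, because bad designs get abandoned, but the upside isn’t, more variability raises the expected payoff:

```python
import random

# Toy illustration: with an asymmetric payoff (bounded downside,
# unbounded upside), increasing variability raises the expected outcome.
random.seed(1)

def payoff(outcome):
    return max(outcome, -1.0)  # losses capped at the cost of one failed try

def expected_payoff(sigma, trials=100_000):
    return sum(payoff(random.gauss(0.0, sigma)) for _ in range(trials)) / trials

for sigma in (0.5, 1.0, 2.0):
    print(f"sigma = {sigma}: expected payoff = {expected_payoff(sigma):+.3f}")
```

Run it, and the expected payoff climbs as sigma grows, the opposite of what the variability-elimination orthodoxy would predict.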

Reinertsen provides a number of possible solutions for dealing with variability in his book. An important one is flexibility:

In pursuit of efficiency, product developers use specialized resources loaded to high levels of utilization. Our current orthodoxy accepts inflexibility in return for efficiency. But what happens when this inflexibility encounters variability? We get delays…

Flow-based Product Development suggests that our development processes can be both efficient and responsive in the presence of variability. To do this, we must make resources, people, and processes flexible.

Resources—data and tools—are an important area of interest for me. So, the question occurs to me: how can resources be made flexible?

That’s not really a question I can answer in a short article. Maybe over the next couple of years on this blog, I could start to do some justice to the question. But, as a beginning, let me suggest these concepts:

  • Data must be consumable. What matters most is that you’re able to use your data, with the tools of your choice, to get your work done. The key thing to look for is the capability of your core tools to save data accurately, at the proper level of abstraction, in formats that can be consumed by many other tools.
  • Monoculture may sometimes boost efficiency, but it often kills flexibility. Figure on using engineering software tools from more than one vendor.
  • One size does not fit all. Different people and processes need different tools. You may need more than one CAD, CAM, or CAE program.

I’d be very interested in hearing your thoughts on efficiency vs. flexibility in engineering software.

Are you curious what the next-generation platform for social product development might look like? How about who it might come from?

Michael Fauscette is the lead analyst in IDC’s Software Business Solutions Group, and writes often about software ecosystems and emerging software business models. His thinking is that the next-generation enterprise platform has to be built on a foundation of people-centric collaboration:

New “social” collaboration tools must connect people inside and outside the enterprise but do it in a way that provides real time communications and real time access to supporting content, data and systems in the context of the activity. Moreover, this tool (or tools) must support ad hoc work groups that need to reach beyond traditional enterprise boundaries and at times include customers, partners and suppliers, while protecting enterprise intellectual property and providing flexible security. Contextual collaboration also implies that the tool resides inside employees’ workflow and thus inside current enterprise applications. Embedded, contextual, real time, ad hoc, people-centric collaboration.

To date, I’ve not seen any PLM or engineering software vendors provide a toolset that meets these criteria. But that’s not to say I haven’t seen flashes of bits and pieces of it:

  • PTC’s Windchill SocialLink, built on Microsoft SharePoint, provides a more product development-centric social graph than other enterprise microblogging platforms (e.g., Socialcast, Socialtext, Novell Vibe, Salesforce Chatter). You’d expect that, since it is, after all, integrated with Windchill. PTC also put their money where their mouth is with SocialLink, and used it as the social backbone for the development of their Creo products. Yet, it’s still a young product. A new version will be coming out soon, so it’ll likely grow quite a bit in capabilities.
  • Dassault Systemes has a number of tools that fit in the realm of social product development. In the V6 portfolio of products, 3DLive is a 3D search/viewing and collaboration tool that’s integrated with Microsoft Communication Server. It serves as a foundation for a number of other “Live” products, including Live Collaborative Review, Live Fastener Review, Live Process Review, and Live Simulation Review.
  • Siemens PLM’s Active Workspace isn’t out just yet, but, based on previews, looks to be a seriously interesting tool.
  • SpaceClaim, though not explicitly focusing on social product development, has found that their software is getting regularly used by customers (in conjunction with GoToMeeting and similar streaming tools) for digital mockup and design review.

I could probably go on for a long time talking about interesting tools that support social product development in one way or another. But what I can’t do is talk about tools that meet Fauscette’s criteria of providing embedded, contextual, real time, ad hoc, people-centric collaboration. Such tools don’t seem to exist yet.

One problem I see with existing PLM tools, in the context of social product development, is that they distinguish too sharply between first-class users and those who are stuck in economy class. While they provide a rich set of capabilities for people inside the enterprise boundary, they provide a far more limited set for people outside it. They don’t do a very good job of connecting the voice of the customer with the voice of the process.

I do wonder whether the “next-generation enterprise platforms” for social product development are going to come from the traditional PLM vendors, or from new players—companies that have been built, from the ground up, as socially integrated enterprises.

“I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write…

“I, Pencil, simple though I appear to be, merit your wonder and awe, a claim I shall attempt to prove. In fact, if you can understand me—no, that’s too much to ask of anyone—if you can become aware of the miraculousness which I symbolize, you can help save the freedom mankind is so unhappily losing. I have a profound lesson to teach. And I can teach this lesson better than can an automobile or an airplane or a mechanical dishwasher because—well, because I am seemingly so simple.

“Simple? Yet, not a single person on the face of this earth knows how to make me.”

Leonard E. Read wrote this essay, entitled I, Pencil, in 1958, a month after I was born. In it, he described the complexity of something so seemingly simple, yet requiring the knowledge and effort of thousands of minds to create. It is an essay you must read.

Product development is not the realm of the lone genius. It is an inherently social and collaborative process. Commenting on Read’s essay, Milton Friedman said:

None of the thousands of persons involved in producing the pencil performed his task because he wanted a pencil. Some among them never saw a pencil and would not know what it is for. Each saw his work as a way to get the goods and services he wanted—goods and services we produced in order to get the pencil we wanted…

It is even more astounding that the pencil was ever produced. No one sitting in a central office gave orders to these thousands of people… These people live in many lands, speak different languages, practice different religions, may even hate one another—yet none of these differences prevented them from cooperating to produce a pencil.

Friedman was an economist, and the lessons he drew from I, Pencil were within this realm. I’m an engineer, so the lessons I draw from Read’s essay are different. Though I first read I, Pencil years ago, the question it raised in my mind has never really changed:

How can we give people better tools to help them work together, and create better products?

No, my question is not “how can we give enterprises better tools.” It is “how can we give people better tools.” Product development may be practiced within enterprises, but it is a people-centric process.


I’ve been following the concept of “social product development” for a while now. It seems different people have widely varying definitions of what the term comprises.

One company that’s become high-profile in this space is Quirky, a developer of consumer products. Quirky’s development process begins with crowd-sourced ideas, which are voted on by a jury of community members, then developed by an in-house team (again, with input from community members).

Quirky gives amateur “inventors” a way to see their ideas become real, and potentially earn money from the result. The company has a fast product development process (days, not months), and has captured the imagination of many people—including producers at the Sundance Channel, who are producing a documentary series on the company.

Quirky’s take on social product development is intriguing, in that it rewards community participation. Influencers—people who contribute to the development process—are paid from a royalty pool generated from sales of products (which are available on the Quirky website, as well as through retailers such as Bed Bath and Beyond). Top influencers have earned literally tens of thousands of dollars.

Consider Jake Zien, for example. While Jake has contributed to 11 projects, his greatest influence was on the design of an innovative power strip. Mostly from this idea, he has earned $33,395.62 from Quirky. Not bad.

I wish I could be more enthusiastic about Quirky.

Most of Quirky’s products are banal exercises in industrial design. Lots of kitchen and bathroom gadgets. Very few products that require any serious engineering.

As much as I like Quirky’s social model, I’m just not impressed with its actual product development process. Certainly the community has the opportunity to influence product development (by submitting and/or voting on concepts, features and ideas), but they aren’t actually brought into the heart of the design or engineering processes.

In any serious product development process, hundreds of decisions must be made, with thoughtful rationale for each. Consider Jake’s power strip: How many questions can you come up with that would be important in its design process? I can think of a bunch off the top of my head: fault current, contact tension, wiping patterns, dielectric constant of the plastic, and many more. Then there are CAD, CAE, and CAM related issues.

As a practical matter, it probably makes sense for Quirky to handle serious product development issues internally. I can’t see many community members getting enthusiastic about progressive die design or mold flow analysis.

Yet, I can’t help but think: There are people out in the community who have tremendous domain knowledge. Why not design a social product development process that can capture and use all the expertise you need for a project, no matter where that expertise may be found?

Why not think much bigger?

Last Friday, Prostep iViP held a webinar on long-term archiving (LTA). It was worth being up at 4:00 AM to listen in.

While the presenter, Andreas Trautheim, covered quite a bit of information in less than an hour, the thing that especially caught my attention was the part where he described why long-term archiving is important.

Back in 2006, the US Federal Rules of Civil Procedure were changed to require “eDiscovery.” That is, in the event your company ends up in litigation, it will be required to provide the opposing party with electronically stored information—including, potentially, CAD files. Other jurisdictions, including the European Community, have similar evidentiary rules.

While time periods vary depending on jurisdiction, you can generally count on a statute of repose lasting about 10 years after a product has been sold or put into service. Producer liability is typically much longer (Trautheim cited a 30-year period in his presentation). Your company must maintain its CAD files, in readable condition, for at least those periods.

The following, from VDA4958-1, speaks to archiving formats:

There are no special archiving formats prescribed by law. However, storing documents in proprietary data formats for periods of 12 years and longer could prove to be extremely difficult technically and/or cost intensive. Any loss of data, among other things, could be interpreted in a way detrimental to the manufacturer. To ensure LTA capability, the archiving of proprietary data formats and/or binary data should therefore be avoided.

Both VDA4958 and LOTAR are standards-based initiatives addressing long-term archiving. (You can find a good presentation on them here.)
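The bookkeeping side of this is mundane but essential. Here’s a minimal sketch (my own illustration, not anything specified by VDA4958 or LOTAR) of one sensible practice: archive neutral-format files together with a manifest of checksums, so integrity can be verified years later:

```python
import hashlib
import json
from datetime import date
from pathlib import Path

# Toy sketch: archive neutral-format CAD files (here, STEP) with a JSON
# manifest of SHA-256 checksums for later integrity verification.
def build_manifest(archive_dir: Path) -> dict:
    entries = []
    for f in sorted(archive_dir.glob("*.stp")):
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        entries.append({"file": f.name, "sha256": digest,
                        "format": "STEP"})  # format tag is illustrative
    return {"archived": date.today().isoformat(), "entries": entries}

if __name__ == "__main__":
    archive = Path("archive")  # hypothetical directory of exported files
    archive.mkdir(exist_ok=True)
    (archive / "manifest.json").write_text(
        json.dumps(build_manifest(archive), indent=2))
```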

A number of years ago, it occurred to me that if anything would ultimately drive the support of interoperable CAD data formats (which is essentially what archiving formats are), it would be legal requirements. It appears that’s what’s happening.

The important question is this: Is LTA something you need to pay attention to now, or can you afford to wait?

Here are the reasons Trautheim thinks the answer is “now”:

  • It reduces the risks of legal demands, potentially saving a lot of money, as well as your good reputation.
  • You get synergy effects between LTA and drawing-less processes.
  • You save process time and resources in design, in communication of engineering (3D/2D/LDM) data, and in collaboration with your customers, OEMs, and partners.
  • You get documentation that is independent of native CAD/PDM systems/releases over years, without migrations.
  • It makes you innovative, and puts you out in front of your competitors.