Is reducing variability in the product development process a good idea, or a bad idea?

It’s a trick question. Reducing the economic impact of variability is good. Reducing variability itself can drive innovation out of the development process. Hardly the result you’d want.

Don Reinertsen, a thought leader in the field of product development for over 30 years, says that 65 percent of product developers he surveys consider it desirable to eliminate as much variability as possible in product development. He also says this view is completely disconnected from any deep understanding of product development economics.

In his 2009 book, The Principles of Product Development Flow, Reinertsen provides a compelling economic analysis of product development, and makes the case that today’s dominant paradigm for managing product development is fundamentally wrong—to its very core. You can download and read the first chapter of the book here. I think you ought to do so right now. (It’ll only take a short while, and the rest of this article can wait until you’re done.)

Let’s look at a few of Reinertsen’s key points on variability:

First, without variability, we cannot innovate. Product development produces the recipes for products, not the products themselves. If a design does not change, there can be no value-added. But, when we change a design, we introduce uncertainty and variability in outcomes. We cannot eliminate all variability without eliminating all value-added.

Second, variability is only a proxy variable. We are actually interested in influencing the economic cost of this variability.

Third… we can actually design development processes such that increases in variability will improve, rather than worsen, our economic performance.

Reinertsen provides a number of possible solutions for dealing with variability in his book. An important one is flexibility:

In pursuit of efficiency, product developers use specialized resources loaded to high levels of utilization. Our current orthodoxy accepts inflexibility in return for efficiency. But what happens when this inflexibility encounters variability? We get delays…

Flow-based Product Development suggests that our development processes can be both efficient and responsive in the presence of variability. To do this, we must make resources, people, and processes flexible.

Resources—data and tools—are an important area of interest for me. So, the question occurs to me: how can resources be made flexible?

That’s not really a question I can answer in a short article. Maybe over the next couple of years on this blog, I could start to do some justice to the question. But, as a beginning, let me suggest these concepts:

  • Data must be consumable. What matters most is that you’re able to use your data, with the tools of your choice, to get your work done. The key thing to look for is the capability of your core tools to save data accurately, at the proper level of abstraction, in formats that can be consumed by many other tools. (A sketch of what that might look like follows this list.)
  • Monoculture may sometimes boost efficiency, but it often kills flexibility. Figure on using engineering software tools from more than one vendor.
  • One size does not fit all. Different people and processes need different tools. You may need more than one CAD, CAM, or CAE program.
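
To make the first bullet concrete, here is a minimal sketch of what consumable data might look like in practice: every time a native model is saved, neutral-format copies get written alongside it. The exporter callables are stand-ins for whatever export API your CAD tool actually provides; the names and signature below are my assumptions, not any vendor’s API.

```python
from pathlib import Path
from typing import Callable, Mapping

# Stand-in for a CAD tool's export function: (native file, target path) -> None.
# Hypothetical signature; real vendor APIs will differ.
Exporter = Callable[[Path, Path], None]

def save_neutral_copies(native_file: Path,
                        exporters: Mapping[str, Exporter],
                        out_dir: Path) -> list[Path]:
    """Write neutral-format copies (e.g. STEP, IGES) of a native model,
    so the data stays consumable by tools from other vendors."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for ext, export in exporters.items():
        target = out_dir / f"{native_file.stem}.{ext}"
        export(native_file, target)  # delegate to the tool's own exporter
        written.append(target)
    return written
```

The point isn’t the code; it’s the habit. If your core tools can’t populate a routine like this with accurate neutral output, your data is less flexible than it looks.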

I’d be very interested in hearing your thoughts on efficiency vs. flexibility in engineering software.

Are you curious what the next-generation platform for social product development might look like? How about who it might come from?

Michael Fauscette is the lead analyst in IDC’s Software Business Solutions Group, and writes often about software ecosystems and emerging software business models. His thinking is that the next-generation enterprise platform has to be built on a foundation of people-centric collaboration:

New “social” collaboration tools must connect people inside and outside the enterprise, but do it in a way that provides real-time communications and real-time access to supporting content, data and systems in the context of the activity. Moreover, this tool (or tools) must support ad hoc work groups that need to reach beyond traditional enterprise boundaries and at times include customers, partners and suppliers, while protecting enterprise intellectual property and providing flexible security. Contextual collaboration also implies that the tool resides inside employees’ workflow and thus inside current enterprise applications. Embedded, contextual, real-time, ad hoc, people-centric collaboration.

To date, I’ve not seen any PLM or engineering software vendors provide a toolset that meets these criteria. But that’s not to say I haven’t seen bits and pieces of it:

  • PTC’s Windchill SocialLink, built on Microsoft SharePoint, provides a more product development-centric social graph than other enterprise microblogging platforms (e.g., Socialcast, Socialtext, Novell Vibe, Salesforce Chatter). You’d expect that, since it is, after all, integrated with Windchill. PTC also put their money where their mouth is with SocialLink, and used it as the social backbone for the development of their Creo products. Yet, it’s still a young product. A new version will be coming out soon, so it’ll likely grow quite a bit in capabilities.
  • Dassault Systemes has a number of tools that fit in the realm of social product development. In the V6 portfolio, 3DLive is a 3D search, viewing, and collaboration tool that’s integrated with Microsoft Office Communications Server. It serves as a foundation for a number of other “Live” products, including Live Collaborative Review, Live Fastener Review, Live Process Review, and Live Simulation Review.
  • Siemens PLM’s Active Workspace isn’t out just yet, but, based on previews, looks to be a seriously interesting tool.
  • SpaceClaim, though not explicitly focused on social product development, has found that its software is regularly used by customers (in conjunction with GoToMeeting and similar screen-sharing tools) for digital mockup and design review.

I could probably go on for a long time talking about interesting tools that support social product development in one way or another. But what I can’t do is talk about tools that meet Fauscette’s criteria of embedded, contextual, real-time, ad hoc, people-centric collaboration. Such tools don’t seem to exist yet.

One problem I see with existing PLM tools, in the context of social product development, is that they distinguish too sharply between first-class users and those stuck in economy class. They provide a rich set of capabilities for people inside the enterprise boundaries, but a far more limited set for people outside them. They don’t do a very good job of connecting the voice of the customer with the voice of the process.

I do wonder whether the “next-generation enterprise platforms” for social product development are going to come from the traditional PLM vendors, or from new players—companies which have been built, from the ground up, as socially integrated enterprises.

“I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write…

“I, Pencil, simple though I appear to be, merit your wonder and awe, a claim I shall attempt to prove. In fact, if you can understand me—no, that’s too much to ask of anyone—if you can become aware of the miraculousness which I symbolize, you can help save the freedom mankind is so unhappily losing. I have a profound lesson to teach. And I can teach this lesson better than can an automobile or an airplane or a mechanical dishwasher because—well, because I am seemingly so simple.

“Simple? Yet, not a single person on the face of this earth knows how to make me.”

Leonard E. Read wrote this essay, entitled I, Pencil, in 1958, a month after I was born. In it, he described the complexity of something so seemingly simple, yet requiring the knowledge and effort of thousands of minds to create.  It is an essay you must read.

Product development is not the realm of the lone genius. It is an inherently social and collaborative process. Commenting on Read’s essay, Milton Friedman said:

None of the thousands of persons involved in producing the pencil performed his task because he wanted a pencil. Some among them never saw a pencil and would not know what it is for. Each saw his work as a way to get the goods and services he wanted—goods and services we produced in order to get the pencil we wanted…

It is even more astounding that the pencil was ever produced. No one sitting in a central office gave orders to these thousands of people… These people live in many lands, speak different languages, practice different religions, may even hate one another—yet none of these differences prevented them from cooperating to produce a pencil.

Friedman was an economist, and the lessons he drew from I, Pencil were within this realm. I’m an engineer, so the lessons I draw from Read’s essay are different. Though I first read I, Pencil years ago, the question it raised in my mind has never really changed:

How can we give people better tools to help them work together, and create better products?

No, my question is not “how can we give enterprises better tools.” It is “how can we give people better tools.” Product development may be practiced within enterprises, but it is a people-centric process.

I’ve been following the concept of “social product development” for a while now. It seems different people have widely varying definitions of what the term comprises.

One company that’s become high-profile in this space is Quirky, a developer of consumer products. Quirky’s development process begins with crowd-sourced ideas, which are voted on by a jury of community members, then developed by an in-house team (again, with input from community members).

Quirky gives amateur “inventors” a way to see their ideas become real, and potentially earn money from the result. The company has a fast product development process (days, not months), and has captured the imagination of many people—including producers at the Sundance Channel, who are producing a documentary series on the company.

Quirky’s take on social product development is intriguing, in that it rewards community participation. Influencers—people who contribute to the development process—are paid from a royalty pool generated from sales of products (which are available on the Quirky website, as well as through retailers such as Bed Bath & Beyond). Top influencers have earned tens of thousands of dollars.

Consider Jake Zien, for example. While Jake has contributed to 11 projects, his greatest influence was on the design of an innovative power strip. Mostly from this idea, he has earned $33,395.62 from Quirky. Not bad.

I wish I could be more enthusiastic about Quirky.

Most of Quirky’s products are banal exercises in industrial design. Lots of kitchen and bathroom gadgets. Very few products that require any serious engineering.

As much as I like Quirky’s social model, I’m just not impressed with its actual product development process. Certainly the community has the opportunity to influence product development (by submitting and/or voting on concepts, features and ideas), but they aren’t actually brought into the heart of the design or engineering processes.

In any serious product development process, hundreds of decisions must be made, with thoughtful rationale for each. Consider Jake’s power strip: How many questions can you come up with that would be important in its design process? I can think of a bunch off the top of my head: fault current, contact tension, wiping patterns, dielectric constant of the plastic, and many more. Then there are CAD, CAE, and CAM related issues.

As a practical matter, it probably makes sense for Quirky to handle serious product development issues internally. I can’t see many community members getting enthusiastic about progressive die design or mold flow analysis.

Yet, I can’t help but think: There are people out in the community who have tremendous domain knowledge. Why not design a social product development process that can capture and use all the expertise you need for a project, no matter where that expertise may be found?

Why not think much bigger?

Imagine that you were unemployed, and wanted to learn how to use a new Mechanical CAD program. What would you do?

The first challenge would be to get your hands on the software. In some cases, that’d be easy. In other cases, not so easy.

Autodesk makes over 30 of their products available for free through the Autodesk Assistance Program website.

PTC makes a free version of Creo Elements/Direct Modeling Express available to anyone who wants it. The software has some limits (I’m particularly irritated by the lack of a shell function), but it is a decent program for learning Creo Elements/Direct (aka CoCreate).

SolidWorks used to offer free software through their Engineering Stimulus Package website. No more. Their website says “Having achieved the goal of helping retrain the unemployed workforce, this program has officially ended.” (I’m certainly glad they achieved that goal.)

There are not too many programs tailored to get CAD software in the hands of unemployed engineers. Unless you happen to be a student:

The requirements to get student versions of software vary. In some cases, you need to provide a copy of a student ID. In other cases, you need to show proof of enrollment. In still others (such as with Autodesk), you need a .edu email account. Beyond this, software availability varies country by country. And, as you might imagine, the software is not licensed for commercial use, only for personal learning.

In the United States, it seems that, in most cases, enrolling to take a single course at a local community college might be enough to make you eligible to get student software.

I’d like to argue that the bar for access to student versions of MCAD software should be even lower, but vendors might find quibbling over $100 or $150 a bit precious. And it is, unless you’re unemployed and don’t have that kind of money to spare.

Because I’m not a student, and haven’t tried to get these programs, I can’t say how the verification requirements are enforced. (If you have some experience with student MCAD software, though, I’d be interested in hearing about it.)

Ultimately, the important question I have for CAD vendors is this: Why make it harder than necessary for people to learn to use your software?

Last Friday, ProSTEP iViP held a webinar on long-term archiving (LTA). It was worth being up at 4:00 AM to listen in.

While the presenter, Andreas Trautheim, covered quite a bit of information in less than an hour, the thing that especially caught my attention was the part where he described why long-term archiving is important.

Back in 2006, the US Federal Rules of Civil Procedure were changed to require “eDiscovery.” That is, in the event your company ends up in litigation, it will be required to provide the opposing party with electronically stored information—including, potentially, CAD files. Other jurisdictions, including the European Community, have similar evidentiary rules.

While time periods vary depending on jurisdiction, you can generally count on a statute of repose lasting about 10 years after a product has been sold or put into service. Producer liability is typically much longer (Trautheim cited a 30-year period in his presentation). Your company must maintain its CAD files, in readable condition, for at least those periods.

The following, from VDA4958-1, speaks to archiving formats:

There are no special archiving formats prescribed by law. However, storing documents in proprietary data formats for periods of 12 years and longer could prove to be extremely difficult technically and/or cost-intensive. Any loss of data, among other things, could be interpreted in a way detrimental to the manufacturer. To ensure LTA capability, the archiving of proprietary data formats and/or binary data should therefore be avoided.

Both VDA4958 and LOTAR are standards-based initiatives addressing long-term archiving. (You can find a good presentation on them here.)

A number of years ago, it occurred to me that if anything would ultimately drive the support of interoperable CAD data formats (which is essentially what archiving formats are), it would be legal requirements. It appears that’s what’s happening.
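
As an illustration of the bookkeeping such requirements imply (my sketch, not the VDA4958 or LOTAR process itself), here’s a minimal archive manifest writer: for each archived file, it records a declared format and a checksum, so a retrieval years from now can at least detect silent corruption. The format and schema values are illustrative.

```python
import hashlib
import json
import time
from pathlib import Path

def manifest_entry(path: Path, fmt: str, schema: str) -> dict:
    """Record what a future reader needs: the declared format and schema
    (e.g. STEP AP203/AP242 -- illustrative values), plus an integrity hash."""
    return {
        "file": path.name,
        "format": fmt,
        "schema": schema,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        "archived_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

def write_manifest(files: dict[Path, tuple[str, str]], out: Path) -> None:
    """Write one JSON manifest covering every file in the archive set."""
    entries = [manifest_entry(p, fmt, schema)
               for p, (fmt, schema) in files.items()]
    out.write_text(json.dumps(entries, indent=2))
```

The real standards go much further (auditable processes, validation properties, and so on), but even this much buys you something: evidence, years later, that what you retrieved is what you stored.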

The important question is this: Is LTA something you need to pay attention to now, or can you afford to wait?

Here are the reasons Trautheim thinks the answer is “now”:

  • It reduces the risks of legal demands, potentially saving a lot of money, as well as your good reputation.
  • You get synergy effects of LTA and drawing-less processes.
  • You save process time and resources in design, in communication of engineering (3D/2D/LDM) data, and in collaboration with your customers, OEMs, and partners.
  • You get documentation that is independent of native CAD/PDM systems/releases over years, without migrations.
  • It makes you innovative, and puts you out in front of your competitors.

Suppose you were to take everyone in the engineering department of your company, and line them up based on CAD proficiency. You might end up with something like this:

A few experts, a good number of average users, a bunch of beginners, and a whole lot of people who simply can’t use CAD at all.

Next, suppose you were able to make CAD easier for “normal” people to use, lowering the threshold of entry.  Here’s a guess about how your line-up might change:

It probably doesn’t look like much of a change. Here’s a version that shows what the change is:

It probably still doesn’t look like much of a change. Except for one thing: The people who went from being non-users and beginners to being average CAD users are most likely domain experts.
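
If you’d rather poke at numbers than squint at my charts, here’s a toy version of the scenario. The headcounts are invented, purely for illustration:

```python
# Invented headcounts for a 200-person engineering department.
before = {"expert": 5, "average": 30, "beginner": 45, "non-user": 120}
# Lowering the barrier to entry nudges some beginners and non-users upward:
after = {"expert": 5, "average": 55, "beginner": 35, "non-user": 105}

for level in before:
    shift = after[level] - before[level]
    print(f"{level:>8}: {'#' * (after[level] // 5):<25} {after[level]:3d} ({shift:+d})")
```

The totals barely move. What moves is who sits in the “average” bucket.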

CAD is a force multiplier. Give it to a person who has no domain knowledge and it multiplies nothing. Give it to a domain expert, and the result can be powerful.

I’ll admit the scenario I’m painting here is simplistic.  Individual and enterprise productivity with CAD is not a simple subject, and the research in the field is sparse and dated.  But feel free to throw away my scenario, and paint your own.

How do you think lowering the barrier to entry on CAD could affect your company?

Over the last week or so, I’ve talked about cognitive load, and how it affects CAD usability. It’s time to talk more about how user interface plays into this.

A few years ago, researchers from Yonsei University, Georgia Institute of Technology, and National Cheng Kung University (Ghang Lee, Charles M. Eastman, Tarang Taunk, and Chun-Heng Ho) published a research study titled Usability principles and best practices for the user interface design of complex 3D architectural design and engineering tools, in the International Journal of Human-Computer Studies.

The reason they undertook the research study was that there was plenty of research on user interface design for generic desktop and web applications, but none for complex 3D parametric architectural design and engineering software.

Here is a summary of the user interface principles recommended by the authors:

Principles for general system design

  • Consistency: Uniformity of system semantics across similar situations.
  • Visibility: Making relevant information conspicuous and easily detectable to the user.
  • Feedback: Response of the system to the user’s actions in order to provide information regarding the internal state of the system.
  • Recoverability: Providing the user with options to recognize and recover from errors.

Principles specific to 3D parametric design

  • Maximization of Workspace: Providing maximum screen space for carrying out the primary functions of the CAD system.
  • Graphical Richness: Replacing textual information with graphical information like imagery or animation to enhance user comprehension where appropriate.
  • Direct Manipulation: Providing interaction that is perceived by the user as directly operating on an object or entity within the system.

Principles for user support

  • Familiarity: Leveraging user’s knowledge and experience in other real-world or computer-based domains when interacting with a new system.
  • Customizability: Support to explicitly modify the interface or operability of the system based on the user’s preference.
  • Assistance: Providing support to the user both explicitly, by tutoring, and implicitly, by prompting the user in the right direction.
  • Minimalist design: Keeping the design simple and minimizing redundancy of information when it threatens to be the cause of confusion to the user.
  • Context recognition: Automatic adjustment of the interface or operability of the system based on user mode and system context.

Here’s a significant comment from the study’s summary:

Complex 3D design and engineering systems are usually composed of several hundred menu items. If options for each menu item are considered, the combination of possible operations grows exponentially. Since this number exceeds the cognitive load that a person can handle, an efficient and user-friendly UI is critical to the users of these systems.
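
A back-of-the-envelope calculation (my numbers, not the study’s) shows how quickly that space grows: with $n$ commands, and tasks that chain $d$ of them together, there are $n^d$ possible sequences.

```latex
% Illustrative numbers only:
n = 300,\quad d = 3
\quad\Longrightarrow\quad
n^{d} = 300^{3} = 2.7\times 10^{7}\ \text{possible operation sequences}
```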

Cognitive Load. Just like I’ve been talking about for the last week or so of posts here.

If you’re interested in CAD user interface issues, you should read the study from the link above. It’s well worth the time.

How long is “long-term?” Over 2,000 years, in the case of Archimedes’ The Method. The Archimedes Palimpsest is the only known copy of it, and it was almost lost. The story of its discovery and conservation reads like it was made for the movie theater. You can read about it here.

The need for long-term archival storage of CAD data varies, depending on its use. For the Long Now clock, it might be 10,000 years. For a nuclear waste repository, it could be far more than 10,000 years. Realistically, for many consumer products, CAD data that’s more than a couple of years old isn’t of much use anymore. For automotive companies, product lifecycles are longer, but still not endless. For aerospace and defense products, lifecycles can stretch on for many decades. Consider the joke among US Air Force pilots: “it isn’t your father’s Air Force, but it is your father’s plane.” (If it’s a B-52, it might even be your grandfather’s plane.)

How can 3D CAD data, with product lifecycles of sometimes more than 30 years, be reliably documented, communicated, stored and retrieved? And how can users access that data, when the CAD systems that generated it have long been obsolete?

The answer is LOTAR.

LOTAR International is developing standards for long-term archiving of 3D CAD and PDM data. These standards will define auditable archiving and retrieval processes, harmonized with recommendations from the German Association of the Automotive Industry (VDA) and with the Open Archival Information System (OAIS) Reference Model. The LOTAR International project is conducted by leading OEMs and suppliers in the aerospace and defense industry under the joint auspices of ASD-STAN, AIA, PDES Inc., and the ProSTEP iViP Association. (A shout-out to Bob Bean, of Kubotek USA, who was the first person to tell me about LOTAR.)

This Friday, September 30th, 2011, at 2 p.m. CET (Central European Time), ProSTEP iViP is hosting a 45-minute webinar on LOTAR. And, unusually for this sort of thing, it’s available to the public. (Most of their webinars are for members only.) I’ve asked for, and received, permission from ProSTEP iViP to tell others about the webinar, so that’s what I’m doing right here.

If having long-term access to your CAD data might be important to you at some point in time, consider listening in on this webinar. To register, send an email to nora.tazir@prostep.org. Participation is free of charge and you will receive access information back via email. (Don’t wait too long — I suspect that Nora has to manually respond to all the emails.)

“Entities must not be multiplied beyond necessity.”
- William of Ockham

“Whenever possible, substitute constructions out of known entities for inferences to unknown entities.”
- Bertrand Russell

CAD is a complex cognitive skill, comprising a large set of interrelated constituent skills with different characteristics and different learning processes underlying their acquisition.

One of the most effective ways of making CAD more usable is to reduce the number of constituent skills it comprises.

I’ve never seen an even reasonably complete listing of the constituent skills required for CAD. It might be interesting to try to put together such a list, but, for the moment, let’s look at just one of the important constituent skills:

Knowing how to deconstruct models, assemblies and drawings in order to modify them.

This is not a trivial skill. Even when working with 2D AutoCAD drawings, it can be a challenge to make changes without knowing ahead of time how the drawings are structured. When it comes to history-based 3D (what we’ve commonly, if not a little dismissively, come to call parametric feature-based solid modeling), the problem sometimes becomes intractable.

Not a Sphere

Bet you can't edit this. (See end of article.)

There is plenty of research showing that editing history-based models is a big problem for CAD users. This is primarily because the task requires not just the skill of deconstructing model geometry (e.g., figuring out how the geometry should be changed), but also the skill of deconstructing the history of how that geometry was originally created.

The history trees of typical models can have from dozens to hundreds of entries. In order to effectively edit one of these models, you need to dig through all (or many) of these entries, to find their dependencies—which are often unobvious. The process is no easier than trying to read through the source code of a complex computer program, to figure out how it works.
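
A toy sketch makes the problem concrete. Treat the history tree as a map from each feature to the earlier features it references (the features below are invented); editing a feature safely means first finding everything downstream that depends on it:

```python
from collections import defaultdict

# Toy history tree: each feature lists the earlier features it references.
# (Invented example; real trees run to hundreds of entries.)
history = {
    "sketch1": [],
    "extrude1": ["sketch1"],
    "sketch2": ["extrude1"],          # sketched on a face of extrude1
    "cut1": ["sketch2", "extrude1"],
    "fillet1": ["cut1"],
}

def downstream_of(feature: str) -> set[str]:
    """Everything that directly or indirectly depends on `feature`,
    i.e., everything an edit to `feature` could break."""
    dependents = defaultdict(set)
    for feat, refs in history.items():
        for ref in refs:
            dependents[ref].add(feat)
    found, stack = set(), [feature]
    while stack:
        for child in dependents[stack.pop()]:
            if child not in found:
                found.add(child)
                stack.append(child)
    return found

print(downstream_of("extrude1"))  # {'sketch2', 'cut1', 'fillet1'}
```

Even in this five-feature toy, a change to extrude1 touches three other features. Scale that to a few hundred entries with unobvious references, and you have the editing problem described above.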

The challenge is to find a way of modifying CAD models without needing to deconstruct their history trees. Work on this has been ongoing in academia for about 20 years. In the commercial CAD industry, it’s taken a bit longer to get right.

Direct (or explicit) modeling CAD systems have been around far longer than history-based systems. Ivan Sutherland’s 1963 Sketchpad was an incredibly intelligent CAD system (don’t miss this discussion of Sketchpad by Alan Kay), and most commercial CAD systems developed from that time until the late 1980s were direct modeling systems, in which you directly edited the geometry of the model. Though Pro/E ushered in the era of history-based modeling (or rather, parametric feature-based solid modeling), it did not kill the direct-modeling business. Direct modeling CAD programs such as CADKEY, Autodesk Mechanical Desktop, ME/30 (from HP, and then CoCreate), and many others continued selling in significant, if dwindling, quantities.

IronCAD and CoCreate started to introduce intelligent editing capabilities to their direct modeling CAD programs in the mid to late 1990s, but it wasn’t until a few years ago that the game really changed, with a number of CAD programs adding feature inference on top of direct modeling.

These products, from companies such as Siemens PLM, SpaceClaim, Kubotek USA, PTC, and IronCAD, are now commonly called “direct modeling” CAD programs. (I’ve pointed out far too many times that direct modeling has been around since the mid-1960s, so I’ll just go with the flow for now, and use the same term everyone else does.)

What makes today’s direct-modeling CAD programs significant is their usability. With one of these programs, a CAD user doesn’t need to learn the skill of deconstructing model history to be both effective and efficient.

I’ve seen a lot of discussion about whether direct modeling or history-based modeling is “better.” That’s not a discussion I really want to get into yet. It’s reasonable to mention that major aerospace and automotive companies use direct modeling software for a growing number of applications. PTC, which sells both direct and history-based tools, has major customers using both types on different product development programs, apparently with great success.

What’s really interesting to me is the potential of comparing the effectiveness and efficiency of direct modeling versus history-based tools. While there’s a lot of anecdotal information about this floating around, to my knowledge, there are no carefully constructed research studies available.

If you dig into Google Scholar to look for academic articles on learning CAD, you’ll find one name comes up more than any other:  Professor Ramsey F. Hamade, of the American University of Beirut.  Dr. Hamade’s research on CAD learning is published in a number of academic and technical journals, is cited by nearly all researchers in the field, and makes for really interesting reading.

I exchanged email with Dr. Hamade recently. Here’s what he had to say on the subject:

[Direct modeling] comes across as more natural and less restrictive. Therefore, I would tend to think that such modeling should be faster and less complex perhaps resulting in shifting the learning components, both declarative and procedural to faster and ‘simpler’, respectively. Unfortunately, I have not had the opportunity to perform experiments on Creo (or the like) in order to evaluate whether these ‘logical’ expectations will hold water. I teach the CAD course (where I collect data) in the Spring so it may be a while before we can make a determination.

Dr. Hamade’s research to date supports the notion that CAD is a complex cognitive skill, and points to significant differences in usability between different systems. It’ll be interesting to see what he finds when he gets a chance to formally compare the learning processes for direct modeling versus history-based modeling systems.


Note on the sphere image: It’s not a sphere. It’s a filleted cube. Here’s a challenge for you: Make a 3D model, with as many convoluted features as possible, that looks just like a sphere and has a class-A surface (G2, I think; G3 continuity wouldn’t apply to a fixed-radius curve).
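
For anyone who hasn’t run into the G-numbers: they grade how smoothly adjoining surface patches meet. Roughly (my summary, not a formal definition):

```latex
\begin{itemize}
  \item $G^{0}$: positional continuity (the patches meet, but may crease)
  \item $G^{1}$: tangent continuity (tangent planes agree across the joint)
  \item $G^{2}$: curvature continuity (curvature agrees; the usual class-A bar)
\end{itemize}
```

A true sphere of radius $r$ has curvature $1/r$ everywhere, so any model that convincingly passes for a sphere must be curvature-continuous across all of its patch boundaries.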