A couple of days ago, I saw a conversation thread on Twitter about geometric modeling kernels. It wasn’t much of a thread—just a few comments back and forth to the effect that modeling kernels are like car engines, and that if you can’t tell the difference without looking under the hood, it doesn’t matter which one you have.

CAD users don’t think too much about what kernel their software uses. I suppose most of them can’t tell anyway. But that doesn’t mean kernels don’t matter.

There are all kinds of potential problems that can crop up with modeling kernels. A while back, I published a couple of articles about interoperability problems (which are inherently related to kernels), one from an academic perspective, and one from the perspective of a kernel guru.

About a month ago, I wrote a series of articles on configuration modeling, pointing out that no modern CAD systems can really do this. A couple of days ago, I made an off-hand comment in an article that a picture I showed (of a sphere) was really a cube that had its edges blended (e.g., start with a 2” cube, and fillet all the edges at 1”.) I learned that trick 15 years ago with SolidWorks. Several readers wrote or commented that they were unable to do it with their modern CAD systems.
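As a back-of-envelope check of that trick (plain Python arithmetic, not any CAD system's API; the decomposition is my own), the surface of a filleted cube splits into flat faces, quarter-cylinder edge strips, and sphere-octant corners. When the fillet radius reaches half the edge length, the first two vanish and the corners close up into a complete sphere:

```python
import math

def rounded_cube_surface_area(edge, radius):
    """Surface area of a cube with all edges and corners filleted at `radius`.

    Decomposition: 6 shrunken flat faces, 12 quarter-cylinder edge strips,
    and 8 sphere-octant corners (which together form one full sphere).
    Valid for 0 <= radius <= edge / 2.
    """
    flat = edge - 2 * radius                           # side of each remaining flat face
    faces = 6 * flat ** 2
    edges = 12 * (0.25 * 2 * math.pi * radius) * flat  # quarter-cylinder strips
    corners = 4 * math.pi * radius ** 2                # 8 octants = one full sphere
    return faces + edges + corners

# Start with a 2" cube and fillet all edges at 1": the flats and edge
# strips shrink to zero, leaving exactly a sphere of radius 1.
area = rounded_cube_surface_area(2.0, 1.0)
print(math.isclose(area, 4 * math.pi))  # True
```

Which is why the picture is genuinely ambiguous: the same surface can be a primitive sphere or the tail end of a feature history.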

The most common sign of a kernel-based problem is a failure when a CAD user tries to create a geometric feature.

Think about that for a moment.  You’re working on a CAD system, trying to create a feature, and the system does something unexpected.  That’s a big red flag saying the modeling kernel can’t handle what you’re asking it to do.

As an aside, I think it’s mighty interesting that one of the signs of an expert CAD user is their ability to work around limitations in the kernels of their CAD programs that would otherwise create modeling failures.

So, yes, geometric modeling kernels matter. Even to CAD users who don’t realize it.

Yet, there is no best alternative when it comes to geometric modeling kernels. ACIS, Parasolid, CGM, Granite and the proprietary kernels out there each have their own kinks. None is so much better than its competitors that I want to jump up and down and say “everybody look at this!”

The spark that set off the Twitter thread that inspired this article was an announcement from Siemens PLM of a webinar to be held on November 8. Here’s the description from the Siemens website:

At the core of your mechanical CAD software is the modeling kernel, an often overlooked tool. The kernel is key to your ability to compute 3D shapes and models and output 2D drawings from 3D geometry. In this webcast, learn the basics about kernels and what impacts a change in this core code can have on your company’s existing and future design data. Dan Staples, development director for Solid Edge at Siemens PLM Software, is joined by medical device designer Billy Oliver from Helena Laboratories to explore the issues facing hundreds of thousands of designers and millions of CAD files.

    • The math inside your feature tree
    • Real-world lessons learned in changing kernels
    • Modeling loss, data protection, and reuse risks
    • Impact on hundreds of thousands of designers and millions of CAD files
    • Case study: Helena Laboratories ensures data protection

You can register for the webinar here.

While I expect the webinar will be, by its nature, slanted towards Siemens PLM and its Parasolid kernel, I suspect that quite a lot of what will be discussed will be interesting to people who have no intention of changing their CAD tools. I’m planning on listening in.

I doubt that most CAD users will ever spend much energy thinking about their CAD programs’ modeling kernels. But CAD users should spend some energy thinking about broader issues, such as usability and interoperability, which are affected by modeling kernels.

Is reducing variability in the product development process a good idea, or a bad idea?

It’s a trick question. Reducing the economic impact of variability is good. Reducing variability itself can drive innovation out of the development process. Hardly the result you’d want.

Don Reinertsen, a thought leader in the field of product development for over 30 years, says that 65 percent of product developers he surveys consider it desirable to eliminate as much variability as possible in product development. He also says this view is completely disconnected from any deep understanding of product development economics.

In his 2009 book, The Principles of Product Development Flow, Reinertsen provides a compelling economic analysis of product development, and makes the case that today’s dominant paradigm for managing product development is fundamentally wrong—to its very core. You can download and read the first chapter of the book here. I think you ought to do so right now. (It’ll only take a short while, and the rest of this article can wait until you’re done.)

Let’s look at a few of Reinertsen’s key points on variability:

First, without variability, we cannot innovate. Product development produces the recipes for products, not the products themselves. If a design does not change, there can be no value-added. But, when we change a design, we introduce uncertainty and variability in outcomes. We cannot eliminate all variability without eliminating all value-added.

Second, variability is only a proxy variable. We are actually interested in influencing the economic cost of this variability.

Third… we can actually design development processes such that increases in variability will improve, rather than worsen, our economic performance.

Reinertsen provides a number of possible solutions for dealing with variability in his book. An important one is flexibility:

In pursuit of efficiency, product developers use specialized resources loaded to high levels of utilization. Our current orthodoxy accepts inflexibility in return for efficiency. But what happens when this inflexibility encounters variability? We get delays…

Flow-based Product Development suggests that our development processes can be both efficient and responsive in the presence of variability. To do this, we must make resources, people, and processes flexible.

Resources—data and tools—are an important area of interest for me. So, the question occurs to me: how can resources be made flexible?

That’s not really a question I can answer in a short article. Maybe over the next couple of years on this blog, I could start to do some justice to the question. But, as a beginning, let me suggest these concepts:

  • Data must be consumable. What matters most is that you’re able to use your data, with the tools of your choice, to get your work done. The key thing to look for is the capability of your core tools to save data accurately, at the proper level of abstraction, in formats that can be consumed by many other tools.
  • Monoculture may sometimes boost efficiency, but it often kills flexibility. Figure on using engineering software tools from more than one vendor.
  • One size does not fit all. Different people and processes need different tools. You may need more than one CAD, CAM, or CAE program.

I’d be very interested in hearing your thoughts on efficiency vs. flexibility in engineering software.

Suppose you were to take everyone in the engineering department of your company, and line them up based on CAD proficiency. You might end up with something like this:

A few experts, a good number of average users, a bunch of beginners, and a whole lot of people who simply can’t use CAD at all.

Next, suppose you were able to make CAD easier for “normal” people to use, lowering the threshold of entry.  Here’s a guess about how your line-up might change:

It probably doesn’t look like much of a change. Here’s a version that shows what the change is:

It probably still doesn’t look like much of a change. Except for one thing: The people who went from being non-users and beginners to being average CAD users are most likely domain experts.

CAD is a force multiplier. Give it to a person who has no domain knowledge and it multiplies nothing. Give it to a domain expert, and the result can be powerful.

I’ll admit the scenario I’m painting here is simplistic.  Individual and enterprise productivity with CAD is not a simple subject, and the research in the field is sparse and dated.  But feel free to throw away my scenario, and paint your own.

How do you think lowering the barrier to entry on CAD could affect your company?

Over the last week or so, I’ve talked about cognitive load, and how it affects CAD usability. It’s time to talk more about how user interface plays into this.

A few years ago, researchers from Yonsei University, Georgia Institute of Technology, and National Cheng Kung University (Ghang Lee, Charles M. Eastman, Tarang Taunk, and Chun-Heng Ho) published a research study titled Usability principles and best practices for the user interface design of complex 3D architectural design and engineering tools, in the International Journal of Human-Computer Studies.

The reason they undertook the research study was that there was plenty of research on user interface design for generic desktop and web applications, but none for complex 3D parametric architectural design and engineering software.

Here is a summary of the user interface principles recommended by the authors:

Principles for general system design

  • Consistency: Uniformity of system semantics across similar situations.
  • Visibility: Making relevant information conspicuous and easily detectable to the user.
  • Feedback: Response of the system to the user’s actions in order to provide information regarding the internal state of the system.
  • Recoverability: Providing the user with options to recognize and recover from errors.

Principles specific to 3D parametric design

  • Maximization of Workspace: Providing maximum screen space for carrying out the primary functions of the CAD system.
  • Graphical Richness: Replacing textual information with graphical information like imagery or animation to enhance user comprehension where appropriate.
  • Direct Manipulation: Providing interaction that is perceived by the user as directly operating on an object or entity within the system.

Principles for user support

  • Familiarity: Leveraging user’s knowledge and experience in other real-world or computer-based domains when interacting with a new system.
  • Customizability: Support to explicitly modify the interface or operability of the system based on the user’s preference.
  • Assistance: Providing support to the user both explicitly, by tutoring, and implicitly, by prompting the user in the right direction.
  • Minimalist design: Keeping the design simple and minimizing redundancy of information when it threatens to be the cause of confusion to the user.
  • Context recognition: Automatic adjustment of the interface or operability of the system based on user mode and system context.

Here’s a significant comment from the study’s summary:

Complex 3D design and engineering systems are usually composed of several hundred menu items. If options for each menu item are considered, the combination of possible operations grows exponentially. Since this number exceeds the cognitive load that a person can handle, an efficient and user-friendly UI is critical to the users of these systems.

Cognitive Load. Just like I’ve been talking about for the last week or so of posts here.
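To make the study’s arithmetic concrete (the counts below are illustrative assumptions, not figures from the paper), even a modest menu structure generates far more operation sequences than working memory can track:

```python
menu_items = 300        # "several hundred menu items" (assumed count)
options_per_item = 5    # assumed average options per menu item

operations = menu_items * options_per_item   # distinct single operations
sequences = operations ** 3                  # distinct 3-step command sequences

print(operations)  # 1500
print(sequences)   # 3375000000 -- billions, from just three steps
```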

If you’re interested in CAD user interface issues, you should read the study from the link above. It’s well worth the time.


“Entities must not be multiplied beyond necessity.”
- William of Ockham
“Whenever possible, substitute constructions out of known entities for inferences to unknown entities.”
- Bertrand Russell

CAD is a complex cognitive skill, comprising a large set of interrelated constituent skills with different characteristics and different learning processes underlying their acquisition.

One of the most effective ways of making CAD more usable is to reduce the number of constituent skills it comprises.

I’ve never seen any even reasonably complete listing of the constituent skills required for CAD. It might be interesting to try to put together such a list, but, for the moment, let’s look at just one of the important constituent skills:

Knowing how to deconstruct models, assemblies and drawings in order to modify them.

This is not a trivial skill. Even when working with 2D AutoCAD drawings, it can be a challenge to make changes without knowing ahead of time how the drawings are structured. When it comes to history-based 3D (what we’ve commonly, if not a little dismissively, come to call parametric feature-based solid modeling), the problem sometimes becomes intractable.

Not a Sphere

Bet you can't edit this. (See end of article.)

There is plenty of research showing that editing history-based models is a big problem for CAD users. This is primarily because the task requires not just the skill of deconstructing model geometry (e.g., figuring out how the geometry should be changed), but also the skill of deconstructing the history of how that geometry was originally created.

The history trees of typical models can have from dozens to hundreds of entries. In order to effectively edit one of these models, you need to dig through all (or many) of these entries, to find their dependencies—which are often unobvious. The process is no easier than trying to read through the source code of a complex computer program, to figure out how it works.

The challenge is to find a way of modifying CAD models without needing to deconstruct their history trees. Work on this has been ongoing in academia for about 20 years. In the commercial CAD industry it’s taken a bit longer to get right.

Direct (or explicit) modeling CAD systems have been around far longer than history-based systems. Ivan Sutherland’s 1963 Sketchpad was an incredibly intelligent CAD system (don’t miss watching this discussion of Sketchpad by Alan Kay), and most commercial CAD systems developed from that time until the late 1980s were direct modeling systems, in which you directly edited the geometry of the model. Though Pro/E ushered in the era of history-based modeling (or, rather, parametric feature-based solid modeling), it did not kill the direct-modeling business. Direct modeling CAD programs such as CADKEY, Autodesk Mechanical Desktop, ME/30 (from HP and then CoCreate) and many others continued selling in significant, if dwindling, quantities.

IronCAD and CoCreate started to introduce intelligent editing capabilities to their direct modeling CAD programs in the mid to late 1990s, but it wasn’t until a few years ago that the game really changed, with a number of CAD programs adding feature inference on top of direct modeling.

These products, from companies such as Siemens PLM, SpaceClaim, Kubotek USA, PTC and IronCAD are now commonly called “direct modeling” CAD programs. (I’ve pointed out that direct modeling has been around since the mid-1960s far too many times in the past, so I’ll just go with the flow for now, and use the same term everyone else does.)

What makes today’s direct-modeling CAD programs significant is their usability. With one of these programs, a CAD user doesn’t need to learn the skill of deconstructing model history to be both effective and efficient.

I’ve seen a lot of discussion about whether direct modeling or history-based modeling is “better.” That’s not a discussion I really want to get into yet. It’s reasonable to mention that major aerospace and automotive companies use direct modeling software for a growing number of applications. PTC, which sells both direct and history-based tools, has major customers using both types on different product development programs, apparently with great success.

What’s really interesting to me is the potential of comparing the effectiveness and efficiency of direct modeling versus history-based tools. While there’s a lot of anecdotal information about this floating around, to my knowledge, there are no carefully constructed research studies available.

If you dig into Google Scholar to look for academic articles on learning CAD, you’ll find one name comes up more than any other:  Professor Ramsey F. Hamade, of the American University of Beirut.  Dr. Hamade’s research on CAD learning is published in a number of academic and technical journals, is cited by nearly all researchers in the field, and makes for really interesting reading.

I exchanged email with Dr. Hamade recently. Here’s what he had to say on the subject:

[Direct modeling] comes across as more natural and less restrictive. Therefore, I would tend to think that such modeling should be faster and less complex perhaps resulting in shifting the learning components, both declarative and procedural to faster and ‘simpler’, respectively. Unfortunately, I have not had the opportunity to perform experiments on Creo (or the like) in order to evaluate whether these ‘logical’ expectations will hold water. I teach the CAD course (where I collect data) in the Spring so it may be a while before we can make a determination.

Dr. Hamade’s research to date supports the notion that CAD is a complex cognitive skill, and points to significant differences in usability between different systems. It’ll be interesting to see what he finds when he gets a chance to formally compare the learning processes for direct modeling versus history-based modeling systems.


Note on the sphere image: It’s not a sphere. It’s a filleted cube. Here’s a challenge for you: Make a 3D model, with as many convoluted features as possible, that looks just like a sphere and has a class-A surface (G2, I think. G3 continuity wouldn’t apply to a fixed radius curve.)

Chui the draftsman
Could draw more perfect circles freehand
Than with a compass.
His fingers brought forth
Spontaneous forms from nowhere. His mind
Was meanwhile free and without concern
With what he was doing.
No application was needed
His mind was perfectly simple
And knew no obstacle.

-From The Way of Chuang Tzu, by Thomas Merton

What makes someone fast at using CAD?

I’d argue that it’s a combination of aptitudes (things such as fine motor skills, and good working memory), coupled with finely tuned rule automation and schema acquisition (I talked about these last week, in http://www.evanyares.com/cad-usability-sucks-part-3-its-not-cad-its-you/)

But, not to get too complicated, the things that make a CAD user fast are analogous to the things that make writers fast: They can type fast, and think fast.

In the case of CAD, typing fast is not so critical, but being fast at interacting with the system, by navigating around models, selecting objects, and entering commands and parameters is.

CAD systems vary greatly in how easy they make this human-computer interaction.  It’s a major factor in usability.

Last Friday, I said “possibly the most effective way of reducing extraneous cognitive load in CAD is by converting recurrent skills from controlled processes to automatic processes. Make the low-level tedious things simple enough to do that you don’t have to think about them, and you’re most of the way there.”

I was talking about human-computer interaction (or user interface, if you prefer.)

Rather than talking about how software developers can improve their user interfaces (a subject I’ll get to in another post), I’d like to talk about how users can improve them, by using a real-world example: Brett Graffin, the fastest AutoCAD user in the world.

I met Brett 25 years ago or so, when he was an AutoCAD user, and I was an AutoCAD dealer.  Even then, Brett was incredibly fast.  He was already a two-time national drafting champion.  But he was always trying to figure out how to be more effective and efficient.

One of the common tricks of the day, with AutoCAD, was to program keyboard macros for the most common commands: “L” for line, “C” for circle, and so on. Brett extended this concept in an interesting way. He analyzed how he used AutoCAD and refined his use of commands down to a core set with which he could accomplish any drafting task he needed. He then mapped those commands to a set of 2-digit macros, which could be entered via keypad. Of course, he memorized all the 2-digit macros, which was actually quite a bit easier than memorizing normal CAD command sequences. Eventually, he developed muscle memory, so he didn’t even need to think about what the macro number was.

But that wasn’t quite good enough for him—because the requirement to hit the enter key on the keypad cost him an extra keystroke for each command, and slowed him down. So, he got ahold of some standalone numeric keypads (the original ones he had were for old Apple Macintosh computers), and modified them so that they’d feed the 2-digit macros to the computer without the need to hit the enter key. The standalone keypad also let him mouse with his right hand, and keypad with his left hand.
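A sketch of the idea in Python (the codes and command names here are hypothetical; Brett’s actual macro set and hardware were his own): a two-digit code fires its command the instant the second digit arrives, with no Enter keystroke.

```python
# Hypothetical 2-digit macro table -- these codes are made up for illustration.
MACROS = {
    "10": "LINE",
    "11": "CIRCLE",
    "12": "ARC",
    "20": "TRIM",
    "21": "EXTEND",
}

class Keypad:
    """Accumulates digits and fires a command when a 2-digit code completes."""

    def __init__(self, macros):
        self.macros = macros
        self.buffer = ""

    def press(self, digit):
        """Feed one digit; return a command name once two digits are in."""
        self.buffer += digit
        if len(self.buffer) == 2:
            command = self.macros.get(self.buffer)  # None for unknown codes
            self.buffer = ""
            return command
        return None

pad = Keypad(MACROS)
print(pad.press("1"))  # None -- first digit, still waiting
print(pad.press("0"))  # LINE -- second digit fires the command immediately
```

The design point is exactly the one Brett's hardware hack made: dropping the confirming Enter keystroke turns a three-stroke sequence into two, and lets the command fire without a conscious "commit" step.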

The result of Brett’s keypad hack was electrifying. Watching him (and the drafters who worked for him) tear through work could make an observer a little queasy. (He’s refined his keypad over the years. You can check it out at www.launchpadoffice.com.)

Brett never claimed to be the fastest AutoCAD drafter in the world, but I suspect that, at the time, he was.  And even today, he’s probably faster than you, or anyone you know. His experience, over 20 years, is that aggressive users of his keypad (called a Launchpad) are around 300% faster than normal.

My goal isn’t to try and sell Launchpads for Brett, but to point out that sometimes seemingly simple changes in interface can make a big difference in the usability (particularly the efficiency, or speed) of a CAD package.

In the case of the Launchpad, the improvements come from two areas:

  • Converting controlled cognitive processes (things you have to think about, such as finding and selecting commands) to automatic cognitive processes (things you don’t have to think about.)
  • Reducing “split attention,” where you have to shift focus to different parts of the screen (for command entry.)

I think it’s worthwhile for CAD developers to think more about how user interfaces can be improved, to support these concepts.  Pretty icons and menus are great, but fast interfaces that don’t get in the way of productivity are what serious users really need.

It also makes sense for users to take a look at how they’re interacting with their software. Minimally, it’s wise to optimize the use of existing user interfaces, through the use of hot-keys and macros, and thoughtful configuration of tool palettes and other menu objects. But, beyond this, there are relatively low-cost options, such as the Space Navigator, that help streamline the recurrent processes of using CAD.

This week, I’ve been writing about CAD usability, and particularly about its human side.

If you want to make CAD more usable, it’s better to try and understand how the mind works first, then tune the software based on that understanding.

Here are some of the things I’ve discussed over the last few days:

  • Individual users have a single, limited cognitive resource (working memory.)
  • This resource is used to process intrinsic, germane, and extraneous cognitive load.
  • If working memory is used for processing extraneous cognitive load, it’s not available for intrinsic or germane cognitive load.
  • Individuals systematically differ in their processing capacity.
  • Using CAD is a complex cognitive skill, comprising a large set of interrelated constituent skills with different characteristics and different learning processes underlying their acquisition.
  • The constituent skills that make up a complex cognitive skill may be classified as rule-based (recurrent skills), or schema-based (non-recurrent skills.)
  • Constituent skills may be performed as automatic processes, which primarily occur with little or no attention required, or as controlled processes, which primarily require focused attention, and are easily overloaded and prone to errors.

It’s a lot to take in, but here’s the most important part:

The secret to improving CAD usability is to reduce extraneous cognitive load.

Possibly the most effective way of reducing extraneous cognitive load in CAD is by converting recurrent skills from controlled processes to automatic processes. Make the low-level tedious things simple enough to do that you don’t have to think about them, and you’re most of the way there.

The next most effective way of reducing extraneous cognitive load in CAD is to reduce the number of constituent skills which it comprises.

Do these two things, and you’ve gone a long way towards making CAD more usable, not just for experts, but for average people too.

On Monday, I’ll explore the concept of converting controlled processes to automatic processes. I’ll also tell you the story of a man who figured out how to do this, and who, as a result, became the fastest AutoCAD drafter in the world.

On Tuesday, I’ll explore the concept of reducing constituent skills, and give you examples of where it is revolutionizing the CAD industry.

The road to wisdom? – Well, it’s plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
 
- Piet Hein, Danish inventor and poet.


When talking about CAD usability, it’s easy to focus on failings in the software. There are plenty of good bad examples out there that help prove the point. But, let’s be fair: even an ideal CAD program would be difficult for an average person to master.

That is because using CAD is a complex cognitive skill.

It can’t be made easy. Only easier.

To explain what I’m talking about, I’d like to start with a little cognitive science background:

Complex cognitive skills are goal-directed, and comprise a large set of interrelated constituent skills with different characteristics and different learning processes underlying their acquisition. In the context of CAD, those constituent skills range from simple drag-and-dropping to advanced surface modeling, to serious engineering problem solving.

The constituent skills that make up a complex cognitive skill have these characteristics:

  • Some are performed as automatic processes, which primarily occur with little or no attention required, and others are performed as controlled processes, which primarily require focused attention, and are easily overloaded and prone to errors.
  • They may be classified as rule-based (recurrent skills), schema-based (non-recurrent skills), or, for complete novices, knowledge-based.

The process of learning constituent skills includes rule automation for recurrent skills, and schema acquisition for non-recurrent skills.

Rule automation is a concept that refers to learning processes that, with practice, allow a person to solve familiar aspects of problems with little or no conscious control. In the context of CAD, rule automation is important for constituent skills that must be performed accurately, quickly, and simultaneously to other constituent skills.

The term schema refers to a mental model or representation. Schema include cognitive maps (mental representations of familiar parts of one’s world), images, concept schema (categories of objects, events, or ideas with common properties), event scripts (schema about familiar sequences of events or activities) and mental models (clusters of relationships between objects or processes). In the context of CAD, schema acquisition is important for constituent skills related to unfamiliar or difficult problems, which require conscious thought and problem solving.

Most symbolic cognitive models of human information processing describe it in terms of data structures (representations), and the processes that operate on those representations. This is similar to the distinction in computer science between data and instructions. The term declarative knowledge refers to representations of objects and their relationships to other objects (e.g., the propositions we believe to be true.) It is “knowing what.” The term procedural knowledge refers to processes that operate on representations. It is “knowing how.”
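The data-versus-instructions analogy can be made literal with a toy sketch (my illustration, not something from the cognitive science literature): declarative knowledge as a representation, procedural knowledge as an operation on it.

```python
# Declarative: "knowing what" -- facts about a part, held as a representation.
part = {"shape": "cube", "edge_length": 2.0, "fillet_radius": 0.0}

# Procedural: "knowing how" -- a process that operates on the representation.
def fillet_all_edges(model, radius):
    """Return a new representation with every edge filleted at `radius`."""
    updated = dict(model)  # leave the original facts untouched
    updated["fillet_radius"] = radius
    return updated

rounded = fillet_all_edges(part, 1.0)
print(rounded["fillet_radius"])  # 1.0
```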

How Does This Apply to CAD?

Automatic processes, controlled processes, recurrent skills, non-recurrent skills, rule automation, schema acquisition, declarative knowledge, procedural knowledge—you won’t find these terms discussed in the CAD for Dummies book. But they provide a foundation for discussing, and understanding, CAD usability.

Let me put them all into one paragraph:

When you sit down in front of your CAD system, you use a complex mixture of declarative knowledge (knowing what you want to do), and procedural knowledge (knowing how to do it.) You do recurrent processes automatically, without much thought, relying upon rote (rule automation). You do non-recurrent processes with control (conscious thought), sometimes solving problems you’ve never seen before by using your experience and domain knowledge (schemas).

A few days ago, I posted an article that discussed a broader definition of usability. While I went pretty metaphysical in my definition of usability (usability is a measure of a tool’s capability to use what you have, to help you get what you want), I was trying to explain that usability isn’t just about user interface. It’s about your ability to get your work done.

But, let’s talk about user interface for a moment. How does user interface really affect usability?

Let’s try a thought exercise: Imagine you run a large aerospace company that designs and manufactures airliners. You have a choice between two different CAD systems: One with an exceedingly intuitive user interface (I’ll call this hypothetical CAD system “SpaceClaim”), and one with a modern, but arguably less intuitive, user interface (I’ll call this hypothetical CAD system “CATIA V6.”) Which CAD system is more usable?

Ignoring the false dichotomy in this scenario, you have to look at usability in the context of using what you have (a few thousand engineers and designers, among other things), to get what you want (high-quality airliners, delivered on schedule, on budget.)

Intuition in a user interface is primarily related to rule automation, and comes into play with recurrent processes that ought to be automatic (that is, you shouldn’t have to think too much about them.)

The not-so-hypothetical SpaceClaim, it can be argued, has a usability advantage in recurrent processes—including things such as navigation, and basic geometric modeling and editing. CATIA V6 has a usability advantage in many non-recurrent processes—including arcana such as knowledge-based engineering and aerospace surface design.

Most CAD industry analysts and pundits would argue that CATIA V6 is the more usable tool when it comes to designing and manufacturing airliners. I’d tend to agree with them (however, if I were being ornery, I’d point out that a lot of aircraft flying today were designed with CATIA V4—a CAD program that is primitive compared to today’s best programs. They’d then point out that airplanes used to be designed on paper, and I’d have to concede that they had a point.)

CAD Usability Doesn’t Really Suck

I admit—I went for the dramatic title in this series of articles. Maybe I should have called it “here are some reasons why CAD usability is a challenge.” That title wouldn’t have sounded as good on Twitter.

CAD usability is actually pretty good, compared to what it used to be. Yet, learning a complex cognitive skill such as CAD (and I’m talking about serious expert-level CAD, not just pretty-picture CAD) is inherently a lengthy process that requires high amounts of effort, and is constrained by human cognitive processing capacity.

Mastering CAD is never going to be easy, but there are a lot of things CAD developers can do to make it easier. I’ll talk about some of those in an upcoming post.

For further reading: Training Complex Cognitive Skills: A Four-Component Instructional Design, by Jeroen J.G. Van Merriënboer

 

Historically, developers of CAD software have focused most of their energies on improving the intrinsic capabilities of the tools they create, rather than upon making those tools fundamentally more usable. It’s not that developers have ignored usability. Rather, given a choice between making a program more capable versus making it more usable, most software developers have understandably chosen the former.

Over the last decade or so, CAD programs have become powerful enough that they’re equal to the task of even very tough applications. The problem now is that people have to be too smart to use them effectively.

As I pointed out in last Friday’s post, the term “usability” actually has an ISO definition: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.”

By that definition, CAD programs are highly usable–but only when the “specified users” are experts in the “specified context of use.”

Individual conversations I’ve had with CAD users (and the few surveys that have been published) support the notion that only a small fraction of engineers are able to use CAD programs proficiently, and only a much smaller fraction are able to use those tools at an expert level. At best, CAD proficiency might fit in a Pareto distribution, but reality is probably far bleaker.

Differences in proficiency spring both from differences among CAD programs (both in inherent capabilities, and in human computer interface design), and differences in the aptitudes among users.

There is quite a bit of research available on improving usability of computer interfaces, in general. Yet, the research seems to be targeted to the “average” users. Within the context of CAD software, there don’t seem to be that many average users. There are a handful of really proficient users, who seem to have an aptitude for CAD, and there are a large number of people who seem not to have that aptitude, and who barely get by.

Here’s a question that I think is important: How do you improve CAD usability while broadening the scope of “specified users” to include all people, with their various aptitudes, who might benefit from using it?

And, a related question: What would benefit a company more: Improving the productivity of its handful of very proficient CAD users, or improving the productivity of the much larger number of CAD users who barely get by?