A couple of days ago, I saw a conversation thread on Twitter about geometric modeling kernels. It wasn’t much of a thread—just a few comments back and forth to the effect that modeling kernels are like car engines, and that if you can’t tell the difference without looking under the hood, it doesn’t matter which one you have.

CAD users don’t think too much about what kernel their software uses. I suppose most of them can’t tell anyway. But that doesn’t mean kernels don’t matter.

There are all kinds of potential problems that can crop up with modeling kernels. A while back, I published a couple of articles about interoperability problems (which are inherently related to kernels), one from an academic perspective, and one from the perspective of a kernel guru.

About a month ago, I wrote a series of articles on configuration modeling, pointing out that no modern CAD system can really do it. A couple of days ago, I made an off-hand comment in an article that a picture I showed (of a sphere) was really a cube that had its edges blended (e.g., start with a 2” cube and fillet all the edges at a 1” radius). I learned that trick 15 years ago with SolidWorks. Several readers wrote or commented that they were unable to do it with their modern CAD systems.
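
If you'd like to try the trick yourself, here's a minimal sketch using CadQuery, an open-source Python API that sits on the OpenCASCADE kernel rather than Parasolid. The library choice and output file name are mine, not from the original exercise, and whether the final fillet succeeds is exactly the kernel-dependent question at issue.

    # A minimal sketch of the cube-to-sphere trick, using CadQuery/OpenCASCADE.
    # Filleting every edge of a 2-unit cube at a radius of 1 (half the edge
    # length) is a degenerate case; some kernels produce a sphere, others
    # raise an error. That difference is the point of the exercise.
    import cadquery as cq
    from cadquery import exporters

    cube = cq.Workplane("XY").box(2.0, 2.0, 2.0)

    try:
        maybe_sphere = cube.edges().fillet(1.0)   # blend all twelve edges
        exporters.export(maybe_sphere, "maybe_sphere.step")  # hypothetical output file
        print("The kernel handled it.")
    except Exception as exc:
        print("The kernel gave up:", exc)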

The most common sign of a kernel-related problem shows up when a CAD user tries to create a geometric feature and the operation fails.

Think about that for a moment.  You’re working on a CAD system, trying to create a feature, and the system does something unexpected.  That’s a big red flag saying the modeling kernel can’t handle what you’re asking it to do.

As an aside, I think it’s mighty interesting that one of the signs of an expert CAD user is their ability to work around limitations in the kernels of their CAD programs that would otherwise create modeling failures.

So, yes, geometric modeling kernels matter. Even to CAD users who don’t realize it.

Yet there is no clear best choice when it comes to geometric modeling kernels. ACIS, Parasolid, CGM, Granite, and the proprietary kernels out there each have their own kinks. None is so much better than its competitors that I want to jump up and down and say “everybody look at this!”

The spark that set off the Twitter thread that inspired this article was an announcement from Siemens PLM of a webinar to be held on November 8. Here’s the description from the Siemens website:

At the core of your mechanical CAD software is the modeling kernel, an often overlooked tool. The kernel is key to your ability to compute 3D shapes and models and output 2D drawings from 3D geometry. In this webcast, learn the basics about kernels and what impacts a change in this core code can have on your company’s existing and future design data. Dan Staples, development director for Solid Edge at Siemens PLM Software, is joined by medical device designer Billy Oliver from Helena Laboratories to explore the issues facing hundreds of thousands designers and millions of CAD files.

    • The math inside your feature tree
    • Real-world lessons learned in changing kernels
    • Modeling loss, data protection, and reuse risks
    • Impact on hundreds of thousands designers and millions of CAD files
    • Case study: Helena Laboratories ensures data protection

You can register for the webinar here.

While I expect the webinar will be, by its nature, slanted towards Siemens PLM and its Parasolid kernel, I suspect that quite a lot of what will be discussed will be interesting to people who have no intention of changing their CAD tools. I’m planning on listening in.

I doubt that most CAD users will ever spend much energy thinking about their CAD programs’ modeling kernels. But CAD users should spend some energy thinking about broader issues, such as usability and interoperability, which are affected by modeling kernels.

Is reducing variability in the product development process a good idea, or a bad idea?

It’s a trick question. Reducing the economic impact of variability is good. Reducing variability itself can drive innovation out of the development process. Hardly the result you’d want.

Don Reinertsen, a thought leader in the field of product development for over 30 years, says that 65 percent of product developers he surveys consider it desirable to eliminate as much variability as possible in product development. He also says this view is completely disconnected from any deep understanding of product development economics.

In his 2009 book, The Principles of Product Development Flow, Reinertsen provides a compelling economic analysis of product development, and makes the case that today’s dominant paradigm for managing product development is fundamentally wrong—to its very core. You can download and read the first chapter of the book here. I think you ought to do so right now. (It’ll only take a short while, and the rest of this article can wait until you’re done.)

Let’s look at a few of Reinertsen’s key points on variability:

First, without variability, we cannot innovate. Product development produces the recipes for products, not the products themselves. If a design does not change, there can be no value-added. But, when we change a design, we introduce uncertainty and variability in outcomes. We cannot eliminate all variability without eliminating all value-added.

Second, variability is only a proxy variable. We are actually interested in influencing the economic cost of this variability.

Third… we can actually design development processes such that increases in variability will improve, rather than worsen, our economic performance.

Reinertsen provides a number of possible solutions for dealing with variability in his book. An important one is flexibility:

In pursuit of efficiency, product developers use specialized resources loaded to high levels of utilization. Our current orthodoxy accepts inflexibility in return for efficiency. But what happens when this inflexibility encounters variability? We get delays…

Flow-based Product Development suggests that our development processes can be both efficient and responsive in the presence of variability. To do this, we must make resources, people, and processes flexible.

Resources—data and tools—are an important area of interest for me. So, the question occurs to me: how can resources be made flexible?

That’s not really a question I can answer in a short article. Maybe over the next couple of years on this blog, I could start to do some justice to the question. But, as a beginning, let me suggest these concepts:

  • Data must be consumable. What matters most is that you’re able to use your data, with the tools of your choice, to get your work done. The key thing to look for is the capability of your core tools to save data accurately, at the proper level of abstraction, in formats that can be consumed by many other tools.
  • Monoculture may sometimes boost efficiency, but it often kills flexibility. Figure on using engineering software tools from more than one vendor.
  • One size does not fit all. Different people and processes need different tools. You may need more than one CAD, CAM, or CAE program.

I’d be very interested in hearing your thoughts on efficiency vs. flexibility in engineering software.

Last Friday, ProSTEP iViP held a webinar on long-term archiving (LTA). It was worth being up at 4:00 AM to listen in.

While the presenter, Andreas Trautheim, covered quite a bit of information in less than an hour, the thing that especially caught my attention was the part where he described why long-term archiving is important.

Back in 2006, the US Federal Rules of Civil Procedure were changed to require “eDiscovery.” That is, in the event your company ends up in litigation, it will be required to provide the opposing party with electronically stored information—including, potentially, CAD files. Other jurisdictions, including the European Community, have similar evidentiary rules.

While time periods vary depending on jurisdiction, you can generally count on a statute of repose lasting about 10 years after a product has been sold or put into service. Producer liability is typically much longer (Trautheim cited a 30-year period in his presentation). Your company must maintain its CAD files, in readable condition, for at least those periods.

The following, from VDA4958-1, speaks to archiving formats:

There are no special archiving formats prescribed by law. However, storing documents in proprietary data formats for periods of 12 years and longer could prove to be extremely difficult technically and/or cost-intensive. Any loss of data, among other things, could be interpreted in a way detrimental to the manufacturer. To ensure LTA capability, the archiving of proprietary data formats and/or binary data should therefore be avoided.

Both VDA4958 and LOTAR are standards-based initiatives addressing long-term archiving. (You can find a good presentation on them here.)

A number of years ago, it occurred to me that if anything would ultimately drive the support of interoperable CAD data formats (which is essentially what archiving formats are), it would be legal requirements. It appears that’s what’s happening.

The important question is this: Is LTA something you need to pay attention to now, or can you afford to wait?

Here are the reasons Trautheim thinks the answer is “now”:

  • It reduces the risks of legal demands, potentially saving a lot of money, as well as your good reputation.
  • You get synergy effects of LTA and drawing-less processes.
  • You spend less process time and fewer resources on design, on communicating engineering (3D/2D/LDM) data, and on collaboration with your customers/OEMs and partners.
  • You get documentation that is independent of native CAD/PDM systems/releases over years, without migrations.
  • It makes you innovative, and puts you out in front of your competitors.

How long is “long-term?” Over 2,000 years, in the case of Archimedes’ The Method. The Archimedes Palimpsest is the only known copy of it–and it was almost lost. The story of its discovery and conservation reads like it was made for the movie theater. You can read about it here.

The need for long-term archival storage of CAD data varies, depending on its use. For the Long Now clock, it might be 10,000 years. For a nuclear waste repository, it could be far more than 10,000 years. Being realistic, for many consumer products, CAD data that’s more than a couple of years old isn’t of much use anymore. For automotive companies, product lifecycles are longer, but still not interminable. For aerospace and defense products, lifecycles can stretch on for many decades. Consider the joke among US Air Force pilots: “it isn’t your father’s Air Force, but it is your father’s plane.” (If it’s a B-52, it might even be your grandfather’s plane.)

How can 3D CAD data, with product lifecycles of sometimes more than 30 years, be reliably documented, communicated, stored and retrieved? And how can users access that data, when the CAD systems that generated it have long been obsolete?

The answer is LOTAR.

LOTAR International is developing standards for long-term archiving of 3D CAD and PDM data. These standards will define auditable archiving and retrieval processes, and are harmonized with the German Association of the Automotive Industry (VDA), and the Open Archival Information System (OAIS) Reference Model. The LOTAR International project is conducted by leading OEMs and suppliers in the aerospace and defense industry under the joint auspices of ASD-STAN, AIA, PDES Inc. and the ProSTEP iViP Association. (A shout out to Bob Bean, of Kubotek USA, who was the first person to tell me about LOTAR.)

This Friday, September 30th, 2011 at 2 p.m. (CET – Central European Time), ProSTEP iViP is hosting a 45-minute webinar on LOTAR. And, unusually for this sort of thing, it’s available to the public. (Most of their webinars are for members only.) I’ve asked for, and received, permission from ProSTEP iViP to tell others about the webinar, so that’s what I’m doing right here.

If having long-term access to your CAD data might be important to you at some point in time, consider listening in on this webinar. To register, send an email to nora.tazir@prostep.org. Participation is free of charge and you will receive access information back via email. (Don’t wait too long — I suspect that Nora has to manually respond to all the emails.)

O how they cling and wrangle, some who claim
For preacher and monk the honored name!
For, quarreling, each to his view they cling.
Such folk see only one side of a thing.

For the last couple of weeks, I’ve written mostly about general themes affecting users of engineering software. Interoperability and usability are common (and related) themes I’m interested in—because they touch so many users, and are so important.

Today, I’d like to be a bit more specific, and talk about the Third Boeing/Northrop Grumman Global Product Data Interoperability Summit, which will be held this November 7-10, in Arizona.

Ken Tashiro of Elysium writes about the summit, and the need for it, in this month’s Aerospace Manufacturing and Design, in an article titled Interoperability is Still the 3D Elephant in the Room. (Please do click on the link, and read the article.)

The bottom line is that data interoperability is still a big problem, not just in aerospace, but in nearly any industry that uses CAD, CAM, CAE, or PLM software.

Ken makes a strong case for the summit, saying “In our rapidly evolving world, data interoperability is too important to be left to the vendor community. It is everyone’s problem and we all need to be part of the solution.”

Boeing and Northrop Grumman also make a strong case for the summit, because they’ve insisted that it be open and agnostic—an opportunity to share ideas and solutions about data interoperability challenges.

Still, open and agnostic doesn’t mean “free for all.” It’s an invitation-only conference, where you must be “sponsored” by someone from Boeing or Northrop Grumman. But I think, if you’re serious about interoperability, it’s worth the effort to find a sponsor. You can find out more information here.

Dr. Paul Stallings, VP of R&D for Kubotek USA, looks like a typical guy.  When I first met him at COFES, he seemed like a typical guy. Then I got to have a little conversation with him, and discovered that he’s not a typical guy.  You can do your own research on him, if you like, but suffice it to say that he’s a heavy-hitter in the world of geometric modeling kernels.  The kind of guy who doesn’t need to brag, because he’s done it.

Last week, I sent Dr. Stallings a copy of a paper that I’d found, titled Geometric Interoperability with Epsilon Solidity. I wrote about it here yesterday.  He kindly replied to me, and, surprisingly, took the time to tell me how he views interoperability. With his permission, I’m reprinting his email here, almost in its entirety.


Hi Evan,

Thanks for the paper on interoperability. It is interesting to see how the paper writing academia views the problems from time to time. While reading it I was left with the thought of “if only it were that easy”. I find that the tolerance problems, near tangent intersections, and topology mismatches are the smallest and easiest part of the problem.

The biggest problem for me is that the formats are constantly changing, and in some cases are intentionally encrypted. Ironically the intended solution to this problem—the creating of translated standards such as IGES, STEP and such—creates a new problem that accounts for the other half of all the problems that I see; that with an open standard, anyone—competent or not—can attempt to write files in the format. All too often we will see all types of mistakes made in simply making the file that range from small things like how a line is ended to larger things like missing data.

The other big problem is that geometry and topology are defined in radically different ways in different systems. One of my favorite examples is that in ACIS a cylinder is a cone and in ProE a sphere is a torus. Not a big problem but just one of many things that needs to be taken into account. Larger differences include such things as how the surfaces are parameterized. In some systems a sphere is parameterized as (latitude, longitude) in other systems (longitude, latitude). However, that is a simple flip; in the case of cones how the lateral parameterization is scaled, shifted, and flipped is more difficult. Nevertheless, the most difficult case is when it comes to advanced procedural surfaces such as blends, lofts, and such.
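
(An aside from me, not from Paul’s email: to make the parameterization mismatch concrete, here’s a toy Python sketch. The conventions shown are hypothetical examples of the kind of per-system differences a translator has to reconcile, not the actual conventions of any particular kernel.)

    import math

    def sphere_a(u, v, r=1.0):
        """System A: u = longitude in [0, 2*pi), v = latitude in [-pi/2, pi/2]."""
        return (r * math.cos(v) * math.cos(u),
                r * math.cos(v) * math.sin(u),
                r * math.sin(v))

    def sphere_b(u, v, r=1.0):
        """System B: the roles are flipped -- u = latitude, v = longitude."""
        return sphere_a(v, u, r)

    # Translating a (u, v) pair from A to B is a simple swap of parameters...
    uA, vA = math.radians(30), math.radians(60)
    assert sphere_a(uA, vA) == sphere_b(vA, uA)

    # ...but a cone's lateral parameter may also be scaled, shifted, or
    # reversed, so the translator has to carry a remapping like this one.
    def remap_lateral_param(t, scale=2 * math.pi, shift=0.0, flipped=True):
        """Map system A's parameter t in [0, 1] onto system B's convention."""
        if flipped:
            t = 1.0 - t
        return t * scale + shift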

In the case of advanced procedural surfaces there can be many undocumented, cryptic, even unimplemented options. To add to the problem these surfaces are quite often near tangent with their neighboring surfaces or difficult to fit very accurately with general surface types such as NURBs. However, the largest problem of all is the existence of multiple flipping flags, flags to flip faces, edges, curves… get one of them wrong and all the understanding of topology, geometry, and tolerances is irrelevant.

However, I digress. The most difficult problem with tolerances is not that one system uses one tolerance and another system uses another tolerance. The biggest problem is that some systems depend on curves in three dimensional space, and other systems depend on curves in the different two dimensional parameter space of the surfaces that they are on. The mismatch between parameter space and three dimensional space is a very big problem, with ACIS and Parasolid using three dimensional space and Catia and ProE using parameter space. IGES and STEP punt by including one or both of the formats. The problem quite often comes from when both formats are included. All too often I will see a file that contains both 2D and 3D curves and the curves that were not used by the writing system are bad. IGES tries to fix this problem by actually providing a flag for the writer to set that tells which curves to trust. However, the existence of such a flag is a near admission of guilt, in that if both curves were always good, then it would not be needed.
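
(Another aside, mine rather than Paul’s: here’s a toy sketch of the 3D-curve versus parameter-space-curve mismatch he’s describing. The surface, the curves, and the size of the “writing error” are all made up for illustration.)

    import math

    def surface(u, v):
        """A cylinder of radius 10: u = angle, v = height."""
        return (10.0 * math.cos(u), 10.0 * math.sin(u), v)

    def pcurve(t):
        """Trim curve stored in the surface's (u, v) parameter space."""
        return (2.0 * math.pi * t, 5.0 * t)

    def curve3d(t):
        """The 'same' trim curve stored directly in 3D, written imprecisely."""
        u, v = pcurve(t)
        x, y, z = surface(u, v)
        return (x + 1e-4, y, z)   # a small error introduced by the writer

    def gap(t):
        """Distance between the two representations at parameter t."""
        return math.dist(surface(*pcurve(t)), curve3d(t))

    # A receiver that trusts both curves, with a tolerance of 1e-6, sees two
    # different edges; a receiver with a 1e-3 tolerance sees just one.
    worst = max(gap(i / 100.0) for i in range(101))
    print(worst)   # about 1e-4: inside one system's tolerance, outside the other's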

However, the largest interoperability problem is not the format or tolerance, but the marketplace. When files are translated from expensive systems people buy less seats of that system and just rely on the ability to translate the files to less expensive systems. Hence, there is market pressure to not make the process easy. Nevertheless, no one wants to look like their files are impossible to translate because that could also decrease sales. Hence, we are left with the current situation, where fleas might be a problem until one considers how many people are employed in the flea collar industry.

I heard from an old-timer that there was a time when CAD interoperability wasn’t a problem. He said it didn’t become a problem until the second CAD program was written.

I’m always interested to learn more about the underpinnings and sources of interoperability problems, especially in the realm of 3D solids. Recently, I came across a paper by Jianchang Qi and Vadim Shapiro, published in the Journal of Computing and Information Science in Engineering, entitled Geometric Interoperability with Epsilon Solidity. The abstract caught my attention:

Geometric data interoperability is critical in industrial applications where geometric data are transferred (translated) among multiple modeling systems for data sharing and reuse. A big obstacle in data translation lies in that geometric data are usually imprecise and geometric algorithm precisions vary from system to system. In the absence of common formal principles, both industry and academia embraced ad hoc solutions, costing billions of dollars in lost time and productivity. This paper explains how the problem of interoperability, and data translation in particular, may be formulated and studied in terms of a recently developed theory of epsilon-solidity. Furthermore, a systematic classification of problems in data translation shows that in most cases epsilon-solids can be maintained without expensive and arbitrary geometric repairs.

I will tell you, at the outset, that epsilon-solidity is not something that the average engineer is likely to understand. No worries—it’s the background information on interoperability issues that makes this article really interesting to non-PhDs. While you may download a copy of the article yourself (it’s available at the link above), I’m going to excerpt some of the interesting bits here.


A typical geometric data translation problem between two systems is illustrated in Fig. 1. A geometric representation can be thought of as a composition of geometric primitives by rules specific to a given representation scheme. In data translation, such a representation is transferred explicitly by various translators. However, the meaning of any representation is determined by the corresponding evaluation algorithms that usually also differ from system to system.

Perhaps the most widespread difficulty arises from the mismatch between the accuracy of the geometric representation and the precision of the evaluation algorithms used in a modeling system. For example, if the sending and receiving systems rely on different precisions, the points on surface intersections may classify differently (ON or OFF) in the two systems. As a result of such data translation, many design, manufacturing, and analysis tasks cannot be performed in the receiving system until the geometric models are either corrected (“healed”) or remodeled.


Many references have illustrated various data translation problems. We will not attempt to add to the long list of well-known difficulties, but rather consider a few carefully chosen but real examples that provide important insights into the nature and intrinsic sources of the general translation problem. The choice of commercial systems in the following examples is not important, because the problems are generic. The described difficulties are representative of the current state of the art and do not indicate inferiority of any specific systems.

Example 1. The first example illustrates the well-known fact that even minor changes in geometric representation may invalidate the model, causing irreparable difficulties in data translation. In this case, the model shown in Fig. 2(a) is created in SolidWorks and saved in the STEP (STandard for the Exchange of Product model data) neutral data exchange file format. Then the STEP model is reloaded into SolidWorks, but was found to be invalid. The built-in healing algorithm attempted but was unable to recover a valid solid, generating instead the model shown in Fig. 2(b). The above situation is common when geometric representations are archived in another non-native format. For example, saving the same model in ACIS format instead of STEP leads to similar difficulties. This double data translation corresponds to a situation in Fig. 1 where no new errors are introduced in the evaluation algorithm by the receiving system (because it is the same as the sending system.) The problem arises because primitives in the boundary representation—in this case, filleting surfaces and intersection spline curves in the original model—are mapped approximately into the STEP format by the translator. (Similar translation problems are common whenever tangent surfaces are approximated in the course of translation.)

Example 2. The second example Fig. 3(a) is intended to show that even when geometric healing is successful in repairing the received model, the result may not be always acceptable. The double translation procedure is identical to the first example, except, in this case, the healing algorithm is successful and generates the model shown in Fig. 3(b). The smooth blends near the corner have been replaced by sharp corners in the translated model; such drastic changes are not acceptable for engineering applications where blend radius is an important parameter.

Example 3. The third example shows that differences in precision of evaluating algorithms are also key ingredients of the translation difficulties, even when the changes in geometric representations are negligible. The solid model in Fig. 4(a) was created in SolidWorks using only planar and cylindrical primitives with integer and fixed-precision coordinates. The dimensions of the model range from 0.001 mm (the minimum thickness of the part) to 1000 mm (the length of the part.) The model is translated into Pro-Engineer through the STEP format, and both formats support exact representation of the original primitives. Therefore, it is reasonable to assume that the changes in geometric representation during the translation process remain negligible. Figure 4(b) shows the translated model after it is evaluated in Pro-Engineer. It is certainly a valid solid, but with a drastically different shape that is not likely to be consistent with the intended use of the original solid.

This last example demonstrates clearly that a geometric representation alone does not uniquely define a set of points. Rather, the set of points, and therefore all its properties, are also determined by the properties (in particular, precision) of the evaluation algorithm. In this case, Solidworks relies on incidence testing algorithms with a default tolerance of 10E-6 mm, while ProEngineer uses a relative tolerance of 10E-6 times the maximum size of the bounding box of the model measured in meters. The latter effectively determines the smallest feature size to be 10E-6 m, matching the minimum thickness of the model in Fig. 4(a). The evaluation algorithm includes the process of merging what Pro-Engineer now considers coincident geometric entities, and results in the “repaired” model shown in Fig. 4(b).
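
(An aside from me, not from the paper: the arithmetic behind that last example is worth spelling out. The numbers come from the excerpt above; the little Python sketch below is mine.)

    # Absolute vs. relative incidence tolerance for the Fig. 4(a) model:
    # 1000 mm long, with a minimum wall thickness of 0.001 mm.
    part_length_mm = 1000.0
    min_thickness_mm = 0.001

    abs_tol_mm = 1e-6                      # sending system: 1e-6 mm, absolute
    bbox_m = part_length_mm / 1000.0       # bounding box size in meters
    rel_tol_mm = 1e-6 * bbox_m * 1000.0    # receiving system: 1e-6 x bbox (m), in mm

    print(abs_tol_mm)   # 1e-06 mm: the 0.001 mm wall is a thousand times larger
    print(rel_tol_mm)   # 0.001 mm: the wall is right at the tolerance

    # The receiving system treats the two faces of the thin wall as coincident,
    # merges them, and "repairs" the feature out of existence.
    print(min_thickness_mm <= rel_tol_mm)   # True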


It is generally accepted that modern mass production and most of the manufacturing technologies of the past century would not be possible without the concept of interchangeable parts. The doctrine of interchangeability dictates that a mechanical part may be replaced by another “equivalent” component without affecting the overall function of the product….

With the emergence of computer-aided design and manufacturing over the last 40 years, most engineering tasks today are performed virtually, by simulating them on computer representations in place of physical parts and processes. One could argue that virtual engineering has become an enterprise for manufacturing virtual components themselves. The object of manufacturing, in this case, is the computer model of a physical artifact, and the manufacturing processes are the above computer transformations involved throughout the life cycle of this model. It is our belief that tolerancing and metrology of interchangeable virtual components is as important to the future of virtual engineering, as interchangeability of mechanical components was critical for emergence of mass production and modern manufacturing practice.


It’s worthwhile to note that the research presented in this paper was supported in part by UGS PLM Solutions (now Siemens PLM Software).

After I discovered this paper, I passed along a copy of it to Dr. Paul Stallings, VP of R&D for Kubotek USA. Dr. Stallings is one of the heavy-hitters in the geometric modeling kernel world. He wrote back to me, and his comments were far more than just interesting. I’ve gotten his permission to share them with you, and will do so. Tomorrow.

This is what I looked like before I'd heard of interoperability or licensing.

When I started writing this blog, back in 2005, I talked a lot about CAD interoperability and licensing issues.  Those issues were really central to my interests, and no one else (well, hardly anyone else) wanted to talk about them.

I thought I did a pretty good job of shining a light on those issues. Over time, though, I really got burnt out talking about things that users considered a fait accompli, and that vendors were unwilling to change without a proverbial gun to their virtual heads.

So, I’m going to stop talking about interoperability and licensing.

Psych.

Yea, I’ll stop talking about them as soon as they’re no longer great big festering problems.

But, in the meantime, I’d like to welcome you to my rebooted blog, and tell you why you should follow me.

My goal is to go beyond parroting the press release of the day, or retweeting Martyn Day (though I shouldn’t rule it out, because often enough he impresses the hell out of me.) I’d like to make you think, and help bring clarity to the issues that users and creators of engineering tools face.

Whether you’re a user (newbie or journeyman), manager, or CAD programmer (rocket scientist), I want you to come away from reading this blog feeling that you got your money’s worth. (Since I’m not charging for the blog, that’ll be pretty easy.)

Before you go, be sure to sign up to follow me (see the box at the upper right corner of this page?)

You don’t want to miss next Monday’s post: I’m going to kick the butts of nearly every CAD vendor in the world, by pointing out how their software is incapable of solving a large class of problems that many of their users face.  I’m predicting two things will happen:  First, someone will say “but… FooCAD can solve that problem.”  And second, they’ll go back to FooCAD’s developers, and be told “no… it really can’t.”

Stay tuned.

Deelip Menezes, a well-known CAD blogger, asks “Do the Creo 1.0 apps store their data in proprietary file formats or not?”

It seems like a rather good question.

Creo is the “reboot” of the PTC product line. There are a variety of Creo products, including Sketch, Layout, Parametric, Direct, Simulate, Schematics, Illustrate, and more. Each is derived from earlier PTC products, such as Pro/E and CoCreate.

PTC says that Creo solves four previously unaddressed problems for MCAD users: usability, interoperability, assembly management, and technology lock-in.

Deelip points out that, in his experience as a developer of data exchange tools, the term “proprietary file formats” cannot be used in the same sentence as “interoperability” and “lock-in.” And, as a result, he wants to know whether the Creo 1.0 apps do actually use proprietary file formats.

I have a hard time arguing with his asking this question. Yet, having observed PTC almost since their entrance into the market, I strongly suspect that the answer is going to be “yes, the Creo 1.0 apps do store their data in proprietary file formats.”

Now what? Does the use of proprietary file formats give the lie to PTC’s statements about interoperability and technology lock-in?

I think not.  PDF was, for many years, a proprietary Adobe file format, and it ended up being pretty interoperable. McNeel’s Rhino3D uses a proprietary file format, but McNeel also provides a file format specification and C++ source code libraries for reading and writing those files.
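To make the Rhino point concrete: because McNeel publishes both the 3dm file specification and open-source toolkits for it (openNURBS, and the rhino3dm Python bindings sketched below), anyone can get at the data without a Rhino license. The file name here is hypothetical, and this is just an illustrative sketch of reading a file, not a complete program.

    import rhino3dm

    # Read a .3dm file with the openly documented toolkit; no Rhino required.
    model = rhino3dm.File3dm.Read("example.3dm")
    if model is not None:
        for obj in model.Objects:
            # Each object carries its geometry plus attributes such as a name.
            print(type(obj.Geometry).__name__, obj.Attributes.Name)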

Wikipedia says “A proprietary format is a file format where the mode of presentation of its data is the intellectual property of an individual or organization which asserts ownership over the format.”

Even if a vendor such as PTC uses a proprietary format, there’s nothing to stop them from doing something similar to what McNeel did, and giving users the tools they need to access their data independently.

I don’t know what plans PTC has in this realm. But, rather than asking them whether they’re using proprietary file formats, I’d prefer to ask them something where the answer might be a bit more enlightening:

What, specifically, will PTC do to solve interoperability and technology lock-in problems for their customers?

(By the way: Interoperability involves both importing and exporting customer data.  Any answer that addresses only data import is B.S.)

Were I at the PlanetPTC conference this week in Las Vegas, I could ask the question in person. Since I’m not there, I’ll cross my fingers, and hope someone else asks it.

Snowflake Software is in the geospatial software business, focusing on data exchange. Recently, when browsing CADwire.net, I ran across their Open Data Manifesto. I’m going to reprint it here, in whole, because I believe it sends a powerful message.

Here at Snowflake, we’re completely committed to Open Data. But to truly achieve Open Data, you need to embrace Open Standards. With Open Standards being at the heart of our technology, and the importance of community being key to driving Open Data, you can be sure that Snowflake ticks all the boxes.

Ian Painter
Managing Director, Snowflake Software

An Open Future for all

Our commitment to Open Data means that Snowflake will:

  • Embrace Open Standards at the heart of its technology
  • Support your legal obligations to Open Data (e.g. INSPIRE)
  • Champion an Open Data architecture
  • Maintain interoperability of data
  • Provide OS OpenData in Open Standards (GML) to every customer
  • Supply products that let you open up your data
  • Listen to you
  • Feed your requirements for Open Data into our product development
  • Encourage re-use of existing infrastructure to support Open Data
  • Never knowingly over sell – we’re open and honest too – no bull
  • Offer Open Data knowledge as part of our customer support
  • Continue to innovate at the forefront of technology
  • Have some fun with you along the way!

By choosing Snowflake, you’re voting for a future-proof, truly open based innovation. Choose an open standards company.

Were I a GIS user, I’d put Snowflake Software at the top of my list of potential suppliers. Why? Because, by embracing this manifesto, they’ve made it clear that they have their priorities in order.

The frustration I have is that, within the engineering software market as a whole, there are few companies (or rather, company leaders) that really recognize the importance of a commitment to open data (and, more generally, open interoperability) as a foundation to a trust relationship with their customers.

Open data/open interoperability is not about doing “good things” for customers, or even about not “being evil” (as Google puts it). It’s about good business.

Are you the leader of an engineering software company?  Do you have your own open data/open interoperability manifesto?  If so, I’d love to hear about it.