
A Developer’s Perspective on Agile

By Eric

Gregg and Stefanie have described some management perspectives on Agile programming.  As a participant in several of the team rooms they mentioned, I would like to make a few comments.

Likes:           

  • Active involvement with customers:  the more the developers know about what end users will want, the better.
  • Emphasis on refactoring: rearranging code to avoid duplication while adding capabilities is critical for any developer.
  • Retrospective: with any important activity it is important to take some time to think about what you are doing right and what could be done better.

Mixed:

  • Pair programming.

When done correctly, pair programming is exhausting.  One especially rewarding session (where Mayank and I coded up how quad trees are intersected with face boundaries) produced very good code, but left us hoarse every day for a week!  What made that session work is that we both challenged each other’s intuitions freely.  The end result was reasonably well tested and reliable.  We had enough clash to write code better than either of us would have alone.

Unfortunately, a pace like that cannot be sustained for long.  It is much easier to develop a hierarchy, where someone “knows the most” about a particular area of the code, and the other partner either watches for typos, or is supervised by the more knowledgeable person.  Even this mode of pairing is tiring.

Anything that makes a team’s efforts better than the sum of individual efforts (if you just divide the work by N and give everyone something to do) is good.  But pairing requires continuous effort, and won’t improve the code without everyone’s sustained effort.  There is a lot of middle ground between pairing and not pairing.  Code reviews and having people frequently bounce ideas off each other get a lot of the benefit with less stress.

  • Unit testing

If tools are set up correctly, most of the errors are semantic (i.e., the code looks good and compiles, but it’s not doing what it is supposed to).  Unit testing only helps when you know the right tests to apply.  It can’t catch poor scaling (e.g., using an n-squared algorithm where an O(n) or O(n log n) algorithm would work).  I have become a big fan of writing out the contract for a function before I write the code, then placing assertions to specify pre- and post-conditions.
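Here is a minimal sketch of that contract-first style in C++ (the helper name and tolerances are mine, not from any particular codebase): write the contract as a comment first, then turn the pre- and post-conditions into assertions so that a violation fails loudly during testing.

    #include <cassert>
    #include <cmath>

    // Contract (written before the code):
    //   normalize(v, tol)
    //   Pre:  v has length greater than tol (i.e., it is not degenerate).
    //   Post: v is scaled in place to unit length, within a loose tolerance.
    void normalize(double v[3], double tol = 1e-12) {
        double len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        assert(len > tol && "precondition violated: degenerate vector");

        v[0] /= len;
        v[1] /= len;
        v[2] /= len;

        double new_len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        assert(std::fabs(new_len - 1.0) < 1e-10 &&
               "postcondition violated: result is not unit length");
    }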

I think the big takeaway should be: if you are testing your code correctly, mistakes should be obvious.  When you do something wrong,

  • Your code should fail to compile
  • Half a dozen or more unit tests should fail
  • Assertions should be going off all over the place, etc.

Long-undiscovered bugs cost more than those found early in testing.  Good programming demands a high level of focus on details: the more time you have to forget the code you wrote, the harder it is to fix.

  • Thin Vertical Slices/Design as you go.

Positives: Thin vertical slices make sense because there is business value in quickly getting small but usable pieces of functionality to customers.  If they like it, you follow up and develop it further until it meets their needs fully.  If no one buys it, the project stops and you haven’t really lost that much (because you didn’t develop more than you needed).

Negatives: The notion that software can be redesigned on the fly is only partially true.  The more people are using something, the more risk there is in changing it.  No amount of testing eliminates regression risk.  If customers aren’t on board with iterative development, having a new drop every few weeks could cost you some credibility (why didn’t they do it right the first time?).  Finally, it takes a lot of skill and good judgment to balance the goal of refactoring for better code quality against regression risk.

What do you think?  I was reading a recent survey on Agile and the results seemed largely positive.  Does this fit with your experience?

Posted: May 9th, 2012
Tags:
  • Agile

Subtleties of B-rep Translation (Part 2); Why Form Matters

By Gregg

'Form' in mathematics manifests itself in all manners of perspective and discussion. From your earliest mathematics courses, professors drilled home the discipline: “return all answers in simplest form”. My youthful efforts to dismiss the need yielded discussions such as this: “OK, please graph the equation y = (x² − 1)/(x + 1).” In a quick second I would naively suggest that at x = −1 the equation is undefined, and then I would start plotting points. But alas, this is why form is important. (x² − 1)/(x + 1) gets factored to (x + 1)(x − 1)/(x + 1), which simplifies to x − 1 whenever x isn’t −1. Wait, that’s a line. My eighth grade algebra teacher, Mr. Sower, was right: simplest form is important.

As you advance in your course work, you start to define forms of equations by their mathematical representation and to understand the advantages and disadvantages of each. Farin, in his book Practical Linear Algebra, does a nice job outlining the three main forms of the equation of a line and the advantages of each in computer graphics:

  • Explicit Form: y = mx + b. This is the form in every basic algebra book. It’s very conceptual; the coefficients have clear geometric meaning. In computer graphics it’s the preferred form for Bresenham’s line drawing algorithm and scan-line fill algorithms.
  • Implicit Form: a · (x − p) = 0, given a point p on the line and a vector a that is perpendicular to the line. The implicit form is very useful for determining whether an arbitrary point is on the line.
  • Parametric Form: l(t) = p + t·v. The scalar value t is a parameter. In this form, we can easily calculate points along the line by use of the parameter, t. (A short code sketch contrasting the three forms follows this list.)
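
To make the contrast concrete, here is a small sketch in C++ (the type and function names are mine, purely for illustration) showing how each form answers a different question about the same line, y = 2x + 1:

    #include <cmath>
    #include <cstdio>

    struct Point2D  { double x, y; };
    struct Vector2D { double x, y; };

    // Explicit form y = mx + b: convenient for scan-line style evaluation.
    double explicit_y(double m, double b, double x) { return m * x + b; }

    // Implicit form a . (x - p) = 0: a quick on/off-line test for a point q,
    // given a point p on the line and a normal vector a.
    bool on_line(Point2D p, Vector2D a, Point2D q, double tol = 1e-9) {
        double d = a.x * (q.x - p.x) + a.y * (q.y - p.y);
        return std::fabs(d) < tol;
    }

    // Parametric form l(t) = p + t * v: easy to generate points along the line.
    Point2D at_parameter(Point2D p, Vector2D v, double t) {
        return { p.x + t * v.x, p.y + t * v.y };
    }

    int main() {
        // The same line, y = 2x + 1, interrogated through each form.
        std::printf("explicit  : y(3) = %g\n", explicit_y(2.0, 1.0, 3.0));
        std::printf("implicit  : (3,7) on line? %d\n",
                    on_line({0.0, 1.0}, {-2.0, 1.0}, {3.0, 7.0}));
        Point2D q = at_parameter({0.0, 1.0}, {1.0, 2.0}, 3.0);
        std::printf("parametric: l(3) = (%g, %g)\n", q.x, q.y);
        return 0;
    }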

I’m not certain when I internalized that inherent in mathematics is the art, strategy and beauty of 'form'.  (I’m a slow learner, it wasn’t Mr. Sower’s fault.) But as my career developed into the commercial implementation of B-rep modeling kernels and their translation technologies, 'form' again became a principal concern.

So, for the purpose of this discussion we define the 'form' of geometric curves and surfaces in three ways: analytic, B-spline and procedural representations. All three of the major solid modeling kernels, ACIS, CGM, and Parasolid, maintain their geometry in one of these three forms, or sometimes as dual representations. [1]

  • Analytic: geometry which can be represented explicitly by an equation (algebraic formula), for example planes, spheres, and cylinders. These equations are very lightweight and they intrinsically hold characteristics of the surface, for example the centroid of a sphere.
  • B-spline: geometry represented by smooth polynomial functions (in parametric form) that are piece-wise defined. Generally used to represent free-form surfaces. Advantages of B-splines are their ability to represent many types of geometry and the ease of computing bounding boxes.
  • Procedural: geometry represented as an implicit equation or algorithm. For example, the IGES specification has tabulated cylinders and offset surfaces as defined procedural surfaces. The advantages are precision and the knowledge of progenitor data to understand design intent.

From this perspective, each of the major kernels has thought long and hard about the best form for each geometry type. In some cases it’s self-evident and easy. If you are modeling a sphere, the analytic form is clearly best. It’s a lighter-weight representation and the full extents of the surface are defined. Even more, imagine doing a calculation requiring the distance between a point and a sphere. In this “special” case, you simply compute the distance between the point and the centroid of the sphere, subtract the radius and you’re done. If the sphere were in the form of a B-spline, the same calculation would be much more computationally expensive. Despite this, translation solutions still don’t get this preferred form right. Now imagine you’re building a CMM application and you purchased the solution that translates spheres to B-splines: your app is horribly slow.
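
As a rough illustration of that analytic shortcut (the names below are mine, not ACIS, CGM, or Parasolid API calls), the point-to-sphere distance collapses to one subtraction; with a B-spline sphere you would instead need an iterative closest-point search over the surface:

    #include <cmath>

    struct Point3D { double x, y, z; };

    // Distance from a point to a sphere held in analytic form: one subtraction
    // after the point-to-centroid distance. No iteration, no tessellation.
    double distance_to_sphere(Point3D p, Point3D center, double radius) {
        double dx = p.x - center.x;
        double dy = p.y - center.y;
        double dz = p.z - center.z;
        double to_center = std::sqrt(dx * dx + dy * dy + dz * dz);
        return std::fabs(to_center - radius);  // 0 on the surface, >0 inside or outside
    }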

Although spheres are a trivial example, more complex geometries become intriguing. In what form should you prefer a helical surface? Or an offset surface?  ACIS has preferred multiple versions of helical surfaces over the years. Early on, the preferred version was a procedural sweep surface with either a procedural rail curve or a B-spline rail curve. (The rail curve is what defines the helical nature of the surface.)  If the surface was translated in from Parasolid, it came in as a generic B-spline surface. But the need to understand different characteristics of the helical surface soon became apparent. For example, the hidden-line removal algorithm and the intersectors all needed to understand the pitch and handedness to work efficiently with the geometry. To that end, ACIS moved to a procedural surface definition with an analytic representation of the helical rail curve.
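
For a sense of why the analytic rail curve helps, here is a toy sketch (my own naming, not the actual ACIS class) of a helix rail in parametric form: pitch and handedness are stored explicitly, so algorithms can query them directly instead of inferring them from a generic B-spline.

    #include <cmath>

    struct Point3D { double x, y, z; };

    // A hypothetical analytic helix rail curve: radius, pitch, and handedness
    // are explicit members, so algorithms such as hidden-line removal can ask
    // for them directly rather than reverse-engineering a spline fit.
    struct HelixRail {
        double radius;
        double pitch;         // axial rise per full turn
        bool   right_handed;  // winding direction

        Point3D evaluate(double t) const {  // t = turning angle in radians
            const double two_pi = 6.283185307179586;
            double s = right_handed ? 1.0 : -1.0;
            return { radius * std::cos(t),
                     s * radius * std::sin(t),
                     pitch * t / two_pi };
        }
    };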

The offset surface is an excellent example where the CGM developers and the ACIS developers came to different conclusions. In ACIS the offset surface is procedural: evaluate the progenitor and shoot up the surface normal the offset distance. ACIS chose this representation for precision and compactness of data. In addition, in ACIS, if you offset an offset surface, the progenitor of the original offset becomes the progenitor for the second or third or fourth offset, and more geometry sharing is possible. But all of this comes at a cost. Procedural surfaces, although exact, may have a performance penalty and may introduce unwanted discontinuities. The CGM developers decided the best strategy here was to create B-splines for offsets.
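
A toy sketch of the procedural idea (again, my own types, not the actual ACIS implementation): the offset surface stores only a reference to the progenitor's evaluators and the offset distance, so it is exact and compact, but every evaluation pays for a progenitor point and normal evaluation.

    #include <functional>

    struct Point3D  { double x, y, z; };
    struct Vector3D { double x, y, z; };  // assumed to be unit length here

    // A toy procedural offset surface: no geometry of its own, just the
    // progenitor's evaluators and the offset distance. Exact and compact,
    // but each query costs a progenitor evaluation plus a normal evaluation.
    struct OffsetSurface {
        std::function<Point3D(double, double)>  progenitor_point;
        std::function<Vector3D(double, double)> progenitor_normal;  // unit normal
        double offset;

        Point3D evaluate(double u, double v) const {
            Point3D  p = progenitor_point(u, v);
            Vector3D n = progenitor_normal(u, v);
            return { p.x + offset * n.x,
                     p.y + offset * n.y,
                     p.z + offset * n.z };
        }
    };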

So what does this all have to do with translation? The key point here is: you need to understand what the preferred forms are for each of the modeling kernels. In each of these systems you can easily slip in geometry in non-optimal forms, causing immense grief when doing downstream modeling operations. I spoke earlier about the translator solution that goofed up even a simple conversion of spheres. And the CMM application that purchased that translation solution? In short, don’t let that be you.



[1] For this discussion I’m going to leave off polyhedral form.

Posted: May 4th, 2012
Tags:
  • ACIS
  • CGM

Everything You Wanted To Know About Scanning But Were Afraid To Ask

By John S.

For this post, I thought I would talk about SPAR 2012, which I attended in Houston last week.   

For those of you who are not familiar with it, SPAR is a conference/trade show for the medium- and long-range scanning industries.  From a geometry perspective, this means dealing with point clouds.  Lots of point clouds.  Lots of really big point clouds.  For example, at one of the talks I attended, the speaker was discussing dealing with thousands of terabytes of data.  Another speaker discussed the fact that developing the systems just to manage and archive the huge amount of data being produced is going to be a major challenge, let alone the need for processing it all.

As an example of this, the very first thing I noticed when I walked into the exhibit hall was the huge number of companies selling mobile terrestrial scanners.  These are laser scanning units that you strap onto the back of a van, or an SUV, or an ultra-light airplane, or a UAV - there was even a lidar-equipped Mini Cooper on display.  You then drive (or fly) along the path you want to scan, acquiring huge amounts of data.  The data is then processed to tell you about, for example, potholes or highway signs or lane lines on roads (the scanners are often coupled with photographic systems), or vegetation incursions on power lines (typically from aerial scans).  When I attended two years ago, this was a fairly specialized industry; there were only a few vans on display, the companies that made the hardware tended to be the ones doing the scans, and they also wrote the software to display and interpret the data.

This year, it seemed like this sector had commoditized: there were at least eight vehicles on display, the starting price of scanning units had come down to about $100K, and it seemed that there were vendors everywhere you looked on the display floor (and yes, it does sound odd to me that I’m calling a $100K price point “commoditized”).  Another thing that I was looking for, and think I saw, was a bifurcation into hardware and software vendors.  I asked several of the hardware vendors about analysis; this year they uniformly told me that they output their data in one of several formats that can be read by standard software packages.  I view this specialization as a sign of growing maturity in the scanning industry; it shows that the industry is moving past the pioneer days when a hardware manufacturer had to do it all.

On the software side, I saw a LOT of fitting pipes to point clouds.  This is because a large part of the medium-range scanning market (at least as represented at SPAR) is capturing “as built” models of oil platforms and chemical plants, especially the piping.  The workflow is to spend a few days scanning the facility, and then send the data to a contractor who spends a few months building a CAD model of the piping, from which renovation work on the facility can be planned.  One of the sub-themes that ran through many of the talks at the conference was “be careful of your data – even though the scanner says it’s accurate to 1mm, you’re probably only getting ½ inch of accuracy”.  This was driven home to us at Spatial a few years ago when we bought a low-end scanner to play around with and discovered that a sharp black/white transition caused a “tear” in the surface mesh spit out by the scanner (due to differential systematic errors between white and black).

A practical example of this was discussed in a talk by one of the service providers; he gave a case study of a company that tried to refit a plant using the workflow described above.  Early on they discovered that the purported “as built” model (obtained by humans building models from scanned data) wasn’t accurate enough to do the work – new piping that should have fit correctly according to the model wouldn’t fit in reality (for example, all the small-diameter pipes had been left out of the model completely).  This is because a real-world plant isn’t a bunch of clean vertical and horizontal cylinders; pipes sag, they’re stressed (bent) to make them fit pieces of equipment, and so on.  The company went back and had the job re-done, this time keeping a close tie between the original scans and the model at all stages.  I really appreciated the attention to detail shown in this and other talks; in my opinion it’s just good engineering to understand and control for the systematic errors that are introduced at every stage of the process.
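
For readers curious what “fitting pipes to point clouds” looks like in its simplest form, here is a deliberately naive sketch (using the Eigen library; the names are mine): for one pre-segmented, roughly straight pipe segment, the dominant principal direction approximates the axis and the mean distance to that axis approximates the radius.  Production as-built fitting has to cope with exactly the sag, bends, noise, and missing small-diameter pipes discussed above.

    #include <vector>
    #include <Eigen/Dense>

    // Deliberately naive fit of one straight pipe segment to a pre-segmented
    // point cloud: the dominant principal direction approximates the axis, and
    // the mean distance of the points from that axis approximates the radius.
    struct PipeFit { Eigen::Vector3d axis_point, axis_dir; double radius; };

    PipeFit fit_pipe(const std::vector<Eigen::Vector3d>& pts) {
        Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
        for (const auto& p : pts) centroid += p;
        centroid /= static_cast<double>(pts.size());

        Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
        for (const auto& p : pts) {
            Eigen::Vector3d d = p - centroid;
            cov += d * d.transpose();
        }

        // Eigenvalues come back in increasing order; the last eigenvector is
        // the direction of greatest spread, i.e. roughly the pipe axis.
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
        Eigen::Vector3d axis = es.eigenvectors().col(2).normalized();

        double radius = 0.0;
        for (const auto& p : pts) {
            Eigen::Vector3d d = p - centroid;
            radius += (d - d.dot(axis) * axis).norm();  // distance to the axis line
        }
        radius /= static_cast<double>(pts.size());

        return { centroid, axis, radius };
    }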

Two more quick observations:

  • Several people mentioned the MS Kinect scanner (for the gaming console) as a disruptive technology.  My gut is telling me that there’s a good chance that this will truly commoditize the scanning world, and that photogrammetry might take over from laser scanning.
  • I didn’t expect my former life as a particle physicist to be relevant at a scanning conference.  Imagine my surprise when I saw not one but TWO pictures of particle accelerators show up in talks (one of them in a plenary session!).

Next year’s SPAR conference is in Colorado Springs – I hope to see you there!

Posted: April 27th, 2012
Tags:
  • Point Clouds

How to Train Eight People Without a Trainer

By Stefanie

John's recent post on
