Three Rules Of All Testing

There are three rules applicable to all testing environments.  I’ve articulated one of the rules before but now I’ll add two more.

Rule 1 – Dead Simple

Tests must be dead simple to run.  It doesn’t matter what you’re testing.  It doesn’t matter if you’ve got 100% code coverage.  It doesn’t matter if you’ve tested everything from the smallest unit to the largest UI.  If the tests aren’t easy to run, the people who most need to run them (that is, the developers and the analysts) will figure out ways to game the system and avoid running the tests.  The rule is that running the tests must be easier than any workaround the developers and analysts can come up with to avoid running them.

Rule 2 – Some Tests Are Better Than No Tests

I can’t count the number of times we’ve been discussing testing and someone will say “well, we can’t test X” with the implication being that since we can’t test everything there’s nothing to be gained from any testing.  This is analogous to saying the only way we can engineer a rocket is to build the whole thing and then fire it off.  Never mind building prototypes or doing any of that silly math to check our assumptions.

Now, granted, there can be some cases where certain properties are so essential to a system that, if they can’t be tested, the system really isn’t worth testing at all.  But those are extremely rare cases.  Even if you can only run tests on 50% of your code, that’s still 50% that’s tested that would not get tested otherwise.

Rule 3 – Don’t Test the Language or the Libraries

I’ve seen unit tests where someone will set some property of a class and then immediately read the property back to check that it hasn’t changed.  While it’s valid to test an invariant on a class, it’s not valid to test it immediately after it’s set.  If you set a value, then read it back and get something different, you’ve got problems more substantial than a unit test will ever solve.
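
To make that concrete, here’s a sketch of the anti-pattern using NUnit-style assertions (the Widget class and its Size property are hypothetical stand-ins invented for this example):

using NUnit.Framework;

public class Widget
{
    public int Size { get; set; }
}

[TestFixture]
public class WidgetTests
{
    // Anti-pattern: this only "tests" that property assignment works,
    // which is the language's job, not ours.
    [Test]
    public void SettingSizeStoresSize()
    {
        var widget = new Widget();
        widget.Size = 42;
        Assert.AreEqual(42, widget.Size); // exercises the runtime, not our logic
    }
}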

These are just my big three rules for testing.  I’d be interested to hear the thoughts of others on this subject.

Why I Don’t Care If You Think Functional Programming Matters

Jon Harrop, who is a strong advocate for functional programming, recently tweeted some links to questions on programmers.stackexchange.com (here and here).  I’m generalizing a bit (not much, though), but the basic substance of these questions amounted to “Why should I learn functional programming?”

It’s a fair question.  I’ve been a developer, and around developers, long enough to have learned that most of them won’t take anything on faith, and justifiably so.  I’ve even coined a term for them: “Missouri Developers.”  For those who wonder what that means, the state of Missouri in the United States has the nickname “The ‘Show Me’ State.”  A lot of developers, when they feel comfortable with the subject they’re discussing, are very much in the “Show Me” crowd; skeptics in the most pejorative sense of that word.

But, honestly, I’m sort of tired of hearing the question, because I hear a common subtext, an unstated question: “If I learn functional programming, will I be able to work on cool new technology?”  I’ve been developing software long enough to remember when there was serious debate as to whether or not Java would take off.  I remember reading articles about how foolish Sun was to try to build a VM when Microsoft had so much expertise in building VMs (VB6, anyone?).  The same kind of question came up about Java (“Why should I learn Java?”) with the same subtext.  And before that, I can recall reading debates in software development magazines about whether or not it was worthwhile to learn C++.

Some people who learned Java back in the mid-90’s got to work on some cool tech; some did not.

The fact of the matter is that any cutting edge technology has certain risks built into learning it.  If you’re trying to decide whether or not you should learn a new technology based on whether or not you might be able to land a cool job if you learn it—do us both a favor and find another career.  The developers I’ve known over the years who I have the deepest respect for are those developers who love to learn.  They don’t need anyone to tell them to learn new technologies and new ideas; they do so because they love to do it.  And they don’t worry about what kind of cool new jobs might open up to them if they learn a new technology—they learn because they want to know. 

I’ve been playing with functional programming for about two years now.  I can empathize with developers who had years of experience in C and Pascal around the early 90’s; seeing the large shift from procedural to OO had to be hard to adjust to.  But I can also see some real strides forward that you can get by adopting functional programming, just as the adoption of OO brought about a lot of strides forward in development.

Now I know some of you will read this and think to yourselves, “Wow, he drank the whole pitcher of the Kool-Aid.”  I don’t think functional programming is a panacea.  On the other hand, I never thought of OO as a panacea either, yet it seems that many of the Java and C# programmers in the world cannot conceive of software without objects.  And I don’t think of improving software engineering as a zero-sum game either; that is, I don’t believe that if we improve one area of software engineering, another area must necessarily suffer.  Default mutability in a language is a source of accidental complexity.  Mutability has its place, but it should not be the default.

Either way, if you want to learn and expand your mind, then I’ll be glad to share anything I’ve learned with you.  I love talking tech with others who are intellectually curious.  Otherwise, if you aren’t willing to play with functional programming until you’re convinced there will be plentiful jobs in cool technologies for those who learn it, then don’t; I don’t care.  But stop asking me why you should take the time to learn it.  If you don’t possess any intellectual curiosity, feel free to sit on the sidelines and watch the revolution.

You Might As Well Make All Your Class Members Public

So recently some of us were discussing the fact that F# 3.0 is going to add a feature to make it more amenable to OO programmers: automatic “getters” and “setters” for members of a class.  A small digression: I prefer the terms “inspector” and “mutator” because they seem more precise.
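
For comparison, C# has had the same shorthand (auto-implemented properties) since C# 3.0; the compiler generates the hidden backing field along with the trivial inspector and mutator.  The class and member names here are just illustrative:

public class Employee
{
    // The compiler generates a private backing field plus the
    // trivial inspector (get) and mutator (set).
    public int HeadCount { get; set; }
}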

If it were up to me, though, most OO code would not use inspectors and mutators.  Why?  Because the use of inspectors and mutators can defeat information hiding, which is one of the principal benefits of object orientation.  I’ll explain.  Let’s pretend we have a class C which contains a data member _m1, an integer.  Something like this:

public class C
{
    private int _m1;

    public int m1
    {
        get
        {
            return _m1;
        }
        set
        {
            _m1 = value;
        }
    }
}

Now let’s say you need to reference m1 from other places in your code.  Everywhere you reference m1, you’re depending on the fact that m1 is an integer.  Every place in code where you set m1, you have to supply an integer.  Later on, we may find that m1 needs to be a floating point number.  Because I’ve got a public getter and setter, I now need to know about all the code which references my class, and I may need to modify that code.  This violates the whole notion of encapsulation that we adopted OO to get.  And what if I’m sharing my code with other teams?  I might break their code and never even know about it.  At this point I might as well dispense with the private _m1 member, because it’s not helping anything; in fact, it’s just more work to maintain.  I might as well not bother using private members.
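
To make the breakage concrete, here’s a hypothetical consumer of C (the Report class is invented for this example); if m1’s type later changes from int to float, the first line in the method stops compiling, because there’s no implicit conversion from float back to int:

public class Report
{
    public void Print(C c)
    {
        int count = c.m1;   // breaks the moment m1 becomes a float
        c.m1 = count + 1;   // every caller is coupled to m1's concrete type
        System.Console.WriteLine(count);
    }
}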

Now some might say: well, what are you suggesting?  No inspectors or mutators?  I’d suggest we don’t need them nearly as much as they get used; I’ve seen some OO programmers abuse this facility.  I’d also suggest that one way to get back the encapsulation is to hide the actual type of the member behind a name that indicates the significance of the value.  Like this example:

using EmployeeCount = System.Int32;

public class C
{
    private EmployeeCount _m1;

    public EmployeeCount m1
    {
        get
        {
            return _m1;
        }
        set
        {
            _m1 = value;
        }
    }
}

If I hide the actual type of _m1 behind EmployeeCount, then if I ever need to change the type of _m1, I only need to change it in one place, and I don’t have to worry about code which accesses my class.  (One caveat: a C# using alias is scoped to the file that declares it, so consuming files need to declare the same alias to share in the benefit.)  I’ve regained the encapsulation that I lost by opening up part of the class’s internal details.
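
For instance, if employee counts later need fractional values (say, full-time equivalents), only the alias is retargeted.  This is a sketch under the assumption above (every consuming file adopts the same alias); the Demo class is invented for illustration:

using EmployeeCount = System.Single; // the only line that changes (was System.Int32)

public class C
{
    private EmployeeCount _m1;

    public EmployeeCount m1
    {
        get { return _m1; }
        set { _m1 = value; }
    }
}

public class Demo
{
    static void Main()
    {
        var c = new C();
        c.m1 = 12.5f; // fractional counts now flow through unchanged code
        System.Console.WriteLine(c.m1);
    }
}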

Programming Rules Of Thumb

I have a few quick rules of thumb which I follow when writing code.  Nothing earth-shaking or non-obvious; just simple ways that I avoid making the worst and most obvious mistakes.

  1. If you use a number or a string more than once, make it a constant.  Maybe even if you only use it once.  (See the sketch after this list.)
  2. Do not check in commented-out code.  You’ve got version control to show you what your code used to look like.
  3. Double-check your code before checking it in to ensure that you’re only checking in the change you intended, not code experiments you were playing with to try to figure out a bug.
  4. If you find something in the code or application behavior which is not a bug but which seems like a bug, document it when you figure it out.  The developer time you save later may be your own.  (Corollary: if there’s not already a knowledge base of some sort at your workplace, create one.  A wiki usually works best, but even a text file is better than nothing.  Make it easy for everyone to get to and easy for anyone to update or add to.)
  5. If a routine gets longer than one screen, look at what you’re doing in the routine very closely.
  6. If you can identify a potential point of failure ahead of time, chances are very good that it’s not an exception.
  7. Almost always, explicit is better than implicit.  If you can spell things out explicitly, do so.
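
To illustrate the first rule, here’s a minimal sketch (the class, names, and values are all invented for the example):

public class ShippingCalculator
{
    // Named constants instead of scattering the literals through the code.
    private const decimal FlatRatePerKg = 4.25m;
    private const int MaxPackageKg = 30;

    public decimal Cost(int weightKg)
    {
        if (weightKg > MaxPackageKg)
            throw new System.ArgumentOutOfRangeException(
                "weightKg", "Exceeds " + MaxPackageKg + " kg limit.");
        return weightKg * FlatRatePerKg;
    }
}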