Thread: Agile programming (was Re: [P|C] Banksel)
From: Gerhard Fiedler, 2009-02-03



Vitaliy wrote:

>>> I think the nature of the project is irrelevant. [...]
>>
>> IMO this is exactly one of the major management fallacies. The
>> nature of the project does matter, [...]
>
> Consider the possibility that we're both right. :)

I'd like to. Can you explain this WRT these two statements?


>> I'm not sure, but the way I read the principles in the Agile
>> Manifesto, this is something you could read into it: that the nature
>> of the project /is/ relevant, and needs to be considered when
>> creating the practices for that project.
>
> I think of it as "adapting" the practices to a given project. All
> agile projects share some basic features, it is what makes them
> "agile". They follow from the principles, and do not change [...]

This is exactly my problem with /any/ school of management (be that
Agile or waterfall or whatever): once people subscribe to one, they tend
to apply the limited set of practices of that school to any given
project, rather than choose from whatever tools are available to
approach a given problem. Like using a Gantt chart to make complex
task dependencies visible. Like finding out how the transmission I'm
supposed to control actually works before I have damaged five of them by
iterating through a learning process.


> [...]

I am experiencing this on an ongoing basis. Try to write a really
useful test for a non-trivial application, and you'll experience it,
too.

> It seems to me that the test function is almost always simpler than
> the function being tested. The classical example is a function that
> uses a complex formula, and returns a value. You don't reproduce the
> formula in the test function. It simply calls the user function, gets
> a value, and compares it to a hand-calculated value. If they don't
> match, the test fails.

This sort of test exists, but it is rarely useful. Running such a test
on a single value in a single-threaded calculation is likely to catch
only the defects that would make the whole application act "weird" --
defects that would be rather obvious even without the test.
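
For concreteness, here's roughly what that sort of single-value test
looks like -- a minimal C sketch, where compound_interest() and its
expected value are made up for illustration:

  #include <assert.h>
  #include <math.h>

  /* Target function: a "complex formula" (made-up example). */
  double compound_interest(double principal, double rate, int years)
  {
      return principal * pow(1.0 + rate, years);
  }

  /* The test doesn't reproduce the formula; it calls the target and
     compares against one hand-calculated value. */
  void test_compound_interest(void)
  {
      /* 1000 at 5% over 10 years, hand-calculated: 1628.894627 */
      double result = compound_interest(1000.0, 0.05, 10);
      assert(fabs(result - 1628.894627) < 0.0001);
  }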

The tests that are really useful as tests do much more. First, a single
value usually doesn't cut it; one needs a distribution of values across
the possible inputs. This means that you need to either reproduce the
formula (that is, create a second implementation, ideally written by a
programmer who didn't write and doesn't know the target implementation)
-- typically done for technical problems -- or simulate the environment
the program interacts with, or create a sufficiently large case base
with all inputs and outputs defined (e.g. for a billing system). All
three are considerable work.
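
As a sketch of the "second implementation" approach, assume a
hypothetical optimized reciprocal-square-root routine as the target;
the test sweeps a distribution of inputs and compares the target
against an independently written reference:

  #include <assert.h>
  #include <math.h>

  /* Hypothetical target: the "optimized" implementation. */
  double target_rsqrt(double x)
  {
      return pow(x, -0.5);
  }

  /* Independent reference implementation ("oracle"), ideally written
     by someone who doesn't know the target implementation. */
  double reference_rsqrt(double x)
  {
      return 1.0 / sqrt(x);
  }

  /* Sweep a distribution of inputs, not a single value. */
  void test_rsqrt_against_reference(void)
  {
      for (double x = 1e-6; x < 1e6; x *= 1.1) {
          double got  = target_rsqrt(x);
          double want = reference_rsqrt(x);
          assert(fabs(got - want) <= 1e-9 * want);
      }
  }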

However, this is only the start; it tests the "normal" operation. Then
come all the limit cases. Then come the out-of-bounds cases. And if the
target (sub)system happens to be multi-threaded or has any form of
real-time behavior, this all needs to be embedded in a test harness that
creates and checks different timings; again the "normal" cases, the
limit cases, the out-of-bounds cases.
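
Continuing the hypothetical target_rsqrt() sketch from above, the limit
and out-of-bounds cases might look like this -- where the expected
behavior at the boundaries is an assumed contract, not a given (a
timing harness for the multi-threaded cases would be considerably more
involved and isn't sketched here):

  #include <assert.h>
  #include <float.h>
  #include <math.h>

  void test_rsqrt_edge_cases(void)
  {
      /* Limit cases: extremes of the valid domain. */
      assert(!isnan(target_rsqrt(DBL_MIN)));   /* smallest normal  */
      assert(!isnan(target_rsqrt(DBL_MAX)));   /* largest finite   */

      /* Out-of-bounds cases: assumed contract is "return inf/NaN,
         don't crash" rather than undefined behavior. */
      assert(isinf(target_rsqrt(0.0)));        /* division by zero */
      assert(isnan(target_rsqrt(-1.0)));       /* outside domain   */
  }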

You get into many combinations here, and running an exhaustive test is
impossible for all but the most trivial programs. So you need to create
something like a Monte Carlo test strategy.
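
A minimal sketch of such a strategy, again using the hypothetical
rsqrt functions from above: sample the input space at random instead
of enumerating it, with a fixed seed so failures stay reproducible:

  #include <assert.h>
  #include <math.h>
  #include <stdlib.h>

  void test_rsqrt_monte_carlo(void)
  {
      srand(12345);   /* fixed seed: failures are reproducible */
      for (int i = 0; i < 100000; i++) {
          /* Random sample from the input space. */
          double x = 1e-6 + (double)rand() / RAND_MAX * 1e6;
          assert(fabs(target_rsqrt(x) - reference_rsqrt(x))
                 <= 1e-9 * reference_rsqrt(x));
      }
  }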

Writing a good test (one that actually tests what you need, not a small
subset of it) is in many cases as much work as writing the target, or
more. And it can only be done after a good collection of requirements; I
don't see a way around this.

Test yourself: take a useful program, and try to design a test, asking
yourself: would the program be useful (that is, would I pay money for
it) if it /only/ satisfied my test -- everything else being undefined
(that is, for everything that's not part of the test, the program could
do or not do whatever the programmer wants)? Thinking like this, testing
even something simple like Windows Notepad becomes a major undertaking.


>> To get out of this recursion, you need to have a means to write
>> target programs without writing test programs first. Otherwise,
>> you'd first need a test program for the test program, and a test
>> program for that test program, and so on... :) And if you have a
>> means to write a program without writing a test program first... why
>> not use it in the first place?
>
> I think you just created a straw man. :) I've never heard anyone
> recommend writing a test program for the test program.

No straw man. Think about this for a minute -- why did you never hear
anyone recommend this? Because it's obviously recursive, of course, and
nobody wants to state something obviously recursive. So they don't say
it this way, but that doesn't mean the recursion isn't real.

Given that a useful test is a quite complex program, and if we assume
that complex programs should be created by writing the tests first...
How do you distinguish a complex program that is a test program from
another complex program that is not a test program? How can someone
propose a coding strategy that doesn't work for half the programs I have
to write? (If I take the "write the test before the target" seriously,
at least half of the programs I write are tests.)

Also, think about this... I've been contracted to write a test program
-- only the test program. And I did write a test program for the test
program :)


> What you have is the program under test (the part that the users use),
> and the tests.

Where do the tests come from? The ones I write are usually complex
programs that I need to code, using my normal coding procedures and
paradigms. I wouldn't want to use a coding paradigm that I can't use for
writing test programs, given the fact that this is part of my normal
coding life. I write these in the same way I write the target programs.
I can (and do) do both, and my coding paradigms work for both. I don't
have to have a "coding mode" for "normal" programs and another one for
test programs.

In some ways, the test program for the test program is the target
program. But this means that you need to debug both, observe both, and
have a means to verify results that is outside of both. (This may be
data files that you manually verify from time to time, or a scope hooked
up to the inputs and outputs where you manually verify the timing of
certain events, etc.)

This all is not to say that tests are not a good thing. This is to say
that they are a complex and potentially expensive undertaking. And
again, there are tradeoffs... while you spend all the time writing an
exhaustive test, you could do other work. What produces more business
value depends on many factors (of which the definition of "business
value" is not the least important).

If (or where) customers put more emphasis on stability and reliability
than on features, more tests get written. Unfortunately, though, there
are few areas where the balance does not tend strongly towards features.
IME this is in most cases not programmer or method driven; it is
customer driven.

Gerhard