Agile programming (was Re: [P|C] Banksel)
From: Vitaliy
Gerhard Fiedler wrote:
>>>> I think the nature of the project is irrelevant. [...]
>>> IMO this is exactly one of the major management fallacies. The
>>> nature of the project does matter, [...]
>> Consider the possibility that we're both right. :)
> I'd like to. Can you explain this WRT these two statements?
Projects have similarities and differences. There are things that will
always be true (or at least, most of the time). There are other things that
will be different.
I was saying that for the things that are always true, the nature of the
project is irrelevant.
First of all, I thought we established that the waterfall model doesn't work
in the real world.
Second, Agile does not preclude one from using any tools or techniques, as
long as they don't contradict the principles. And I thought you said you
agree with the Manifesto? :)
> Like e.g. using a Gantt chart to make complex
> task dependencies visible.
As long as you don't waste time projecting the deadlines six months into the
future, and creating a monster chart that has every single trivial task
listed, I don't have a problem with it. Like I said, I found Gantt charts to
be a waste of time. There are other, more effective ways to make task
dependencies visible.
> Like finding out how the transmission I'm
> supposed to control actually works before I have damaged five of them by
> iterating through a learning process.
I think that's a given.
> I am experiencing this, on an ongoing basis. Try to write a really
> useful test for a non-trivial application, and you'll experience it, [...]
I'm seriously considering it. On a few occasions, I got bit by "small"
changes that I made that broke the code in subtle ways. I'm also tired of
running manual tests.
Today I fixed a function with a small bug which only manifested itself under
certain conditions. I can remember several other similar instances where
small bugs resulted in intermittent problems, and were not entirely obvious.
> Test yourself: take a useful program, and try to design a test, thinking
> like this: would the program be useful (that is, would I pay money for
> it) if it /only/ satisfied my test -- everything else being undefined
> (that is, with everything that's not part of the test the program could
> do or not do whatever the programmer wants)? Thinking like this, testing
> something simple like Windows Notepad becomes a major undertaking.
Sure, writing a suite of tests that tests *everything* is a waste of time.
But it probably makes sense to automate the testing of some functions. The
law of diminishing returns says that at some point writing more tests would
have a net negative impact on productivity.
>> I think you just created a straw man. :) I've never heard anyone
>> recommend writing a test program for the test program.
> No straw man. Think about this for a minute -- why did you never hear
> anyone recommend this? Because it's obviously recursive, of course, and
> nobody wants to state something obviously recursive. So they don't say
> it this way, but this doesn't mean that it is real.
You lost me. Of course it's not real, nobody does it this way. People
understand that you don't write tests to test the tests.
> Given that a useful test is a quite complex program, and if we assume
> that complex programs should be created by writing the tests first...
> How do you distinguish a complex program that is a test program from
> another complex program that is not a test program? How can someone
> propose a coding strategy that doesn't work for half the programs I have
> to write? (If I take the "write the test before the target" seriously,
> at least half of the programs I write are tests.)
Gerhard, are you playing the devil's advocate? :) You write tests for your
software, so you must know the answer to this question.
The only difference is the order: traditionally, tests are written after the
user functions have been implemented. TDD advocates writing the tests before
the user functions. It's the same amount of work, with the following
advantages:
- It establishes clear requirements for the target function
- The programmer can't "forget" to write a test
- The user function will be designed for testability
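A minimal sketch of that workflow (hypothetical names, not from any real
project): the test is written first and pins down the requirement, and the
function is then written to make it pass.

```python
def test_checksum():
    # Requirement (stated by the test, before the code exists):
    # 8-bit two's-complement checksum, so that
    # (sum of data bytes + checksum) & 0xFF == 0.
    assert checksum([0x01, 0x02, 0x03]) == 0xFA
    assert checksum([]) == 0x00
    assert (sum([0x10, 0xF0]) + checksum([0x10, 0xF0])) & 0xFF == 0

def checksum(data):
    """Return the 8-bit two's-complement checksum of a byte list."""
    return (-sum(data)) & 0xFF

test_checksum()
```

Note that the test doubles as a requirements document: anyone reading it
knows exactly what `checksum` is supposed to return.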
> Also, think about this... I've been contracted to write a test program
> -- only the test program. And I did write a test program for the test
> program :)
You mean, you wrote a program that the test program was testing? Kind of
like creating test data?
>> What you have is the program under test (the part that the users use),
>> and the tests.
> Where do the tests come from?
They are based on the requirements. For example, you could have a test
function that checks the return value of an encryption function. You write
the test first: it calls the encryption function with a plain-text argument,
and compares the return value with the expected value (obtained by other
means -- hand calculated, using a calculator, or another program).
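As a sketch of that idea, with a trivial XOR cipher standing in for the real
encryption routine (the function and values here are made up for
illustration):

```python
def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    """Toy stand-in for a real encryption function: XOR each byte with key."""
    return bytes(b ^ key for b in plaintext)

def test_xor_encrypt():
    # Expected value obtained by other means (here, hand-calculated):
    # 'A' (0x41) ^ 0x2A = 0x6B, 'B' (0x42) ^ 0x2A = 0x68
    assert xor_encrypt(b"AB", 0x2A) == bytes([0x6B, 0x68])
    # Encrypting twice with the same key must give back the plaintext.
    assert xor_encrypt(xor_encrypt(b"hello", 0x2A), 0x2A) == b"hello"

test_xor_encrypt()
```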
The tests I'm talking about are internal; they often reside in the same
modules as the functions under test.
I have also written external test programs. For example, I wrote a Delphi
program that checks that the PIC inside a device was correctly programmed,
and functions properly. It automated what was previously a manual process,
saving time and eliminating human error. Its operation is extremely simple:
it sends out messages to the PIC, and compares the responses to a list of
expected responses. Same idea, just "distributed" differently.
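The core of such a tester is just a send-and-compare loop. A hypothetical
sketch (the commands, replies, and `fake_device` transport are made up; a
real tester would talk to the PIC over a serial port instead):

```python
# Each entry pairs a command sent to the device with the reply we expect.
EXPECTED = [
    (b"PING", b"PONG"),
    (b"VER?", b"1.02"),
    (b"SELFTEST", b"OK"),
]

def fake_device(message: bytes) -> bytes:
    """Simulated device under test; swap in real serial I/O here."""
    replies = {b"PING": b"PONG", b"VER?": b"1.02", b"SELFTEST": b"OK"}
    return replies.get(message, b"ERR")

def run_test(send=fake_device):
    """Send every command and collect (command, expected, actual) mismatches."""
    failures = []
    for message, expected in EXPECTED:
        reply = send(message)
        if reply != expected:
            failures.append((message, expected, reply))
    return failures

assert run_test() == []  # the device passes when every reply matches
```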
> This all is not to say that tests are not a good thing. This is to say
> that they are a complex and potentially expensive undertaking.
Simple tests don't cost much, and can be useful.
You once again prove that we agree more often than not. :)