Thread: Agile programming (was Re: [P|C] Banksel)
By: Gerhard Fiedler



Vitaliy wrote:

{Quote hidden}

What are the things that are always true? Really, I haven't found much
in this category. People come and go, and schools of thought, too --
this is probably one :)


> First of all, I thought we established that the waterfall model
> doesn't work in the real world.

You may have done that, but not me. I agree that the "pure" waterfall
model usually doesn't work well. But I also think that there are
projects where it is good to have an almost formal requirements
gathering phase before doing any coding, and there are situations in
general where completing one thing before starting another is highly
recommended. Which is the basic principle of the waterfall model
(according to me :).

> Second, Agile does not preclude one from using any tools or
> techniques, as long as they don't contradict the principles.

See, this is what I have a problem with. I couldn't care less whether a
tool or technique contradicts the principles a few guys came up with. If
I have good reason to think it works (and I wouldn't think that it does
if I didn't think I had good reason :), I'm going to use it -- and not
waste a second of thought whether or not this contradicts somebody's
principles.

> And I thought you said you agree with the Manifesto? :)

I said I can agree with most of what's written in the Manifesto. But the
Manifesto is rather generic; you can bring a lot into alignment with it,
seemingly more than most would consider "Agile".


>> Like e.g. using a Gantt chart to make complex task dependencies
>> visible.
>
> As long as you don't waste time projecting the deadlines six months
> into the future, and creating a monster chart that has every single
> trivial task listed, I don't have a problem with it.

I'm glad to hear that you don't have a problem with me using Gantt
charts. Seriously... :)

> Like I said, I found Gantt charts to be a waste of time.

That may be due to the way you used them, or just a matter of personal
preference.

> There are other, more effective ways to make dependencies visible.

Like for example?


>> Like finding out how the transmission I'm supposed to control
>> actually works before I have damaged five of them by iterating
>> through a learning process.
>
> I think that's a given.

This is what I'm talking about. This looks like a requirements
gathering phase, before even starting to think about code.

>> I am experiencing this, on an ongoing basis. Try to write a really
>> useful test for a non-trivial application, and you'll experience it,
>> too.
>
> I'm seriously considering it. On a few occasions, I got bit by
> "small" changes that I made that broke the code in subtle ways. I'm
> also tired of running manual tests.

Yes, I know what you're talking about. But there is a reason that it is
not so common: it takes a lot of time and effort to do it in a useful
way. Think about it... every time you add a feature to or change one in
your target, you need to update two applications -- and the test apps
are by no means simpler than the target apps. While they add to the
overall stability, they also add a lot to the schedule.
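To make that concrete, here's a minimal sketch of what I mean (all
names and numbers made up): even a trivial automated test encodes the
requirement a second time, so a change to the target usually forces a
matching change to the test.

/* target: scale a raw 10-bit ADC reading to millivolts (hypothetical) */
#include <assert.h>
#include <stdint.h>

uint16_t adc_to_mv(uint16_t raw)
{
    /* 10-bit ADC, 5000 mV reference */
    return (uint16_t)(((uint32_t)raw * 5000UL) / 1023UL);
}

/* test: encodes the same requirement a second time */
static void test_adc_to_mv(void)
{
    assert(adc_to_mv(0) == 0);
    assert(adc_to_mv(1023) == 5000);
    assert(adc_to_mv(512) == 2502);   /* midpoint, truncated */
}

int main(void)
{
    test_adc_to_mv();
    return 0;
}

Change the reference voltage or the ADC resolution, and both
adc_to_mv() and test_adc_to_mv() have to be touched. That's the
schedule cost I mean.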


> Sure, writing a suite of tests that tests *everything* is a waste of
> time. But it probably makes sense to automate the testing of some
> functions. The law of diminishing returns says that at some point
> writing more tests would have a net negative impact on productivity.

Exactly. But you also need to put considerable effort into it before
you start seeing a productivity gain. I don't know where the
balance is, and it probably depends a lot on the specific project and
the team. (You're getting tired of hearing this, right? :) But neither
testing nor finding the balance is generally trivial IMO.


{Quote hidden}

(Side note: Some do, some don't. As I said, I've been contracted before
to write test software, and I did write tests for my test software. So
it is being done, in certain cases. Olin also provided an example.)

Just explain this to me:

Let's say it can be shown (scientifically, no less) that software of any
kind turns out better when its test software is written first. Since
this applies to software of any kind, it also applies to test software.
So if I don't first write tests for my test software, does that mean the
test software is of inferior quality? If it isn't, what does that mean
for the premise? If it is, what does that mean for the reliability of
the tests?
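To illustrate the regress with a (made-up) example: suppose the "real"
tests depend on a small comparison helper. The helper is itself
software, so by the premise it needs its own tests, which are
themselves software, and so on.

#include <assert.h>

/* test infrastructure: tolerant comparison used by the "real" tests */
static int approx_equal(long a, long b, long tol)
{
    long diff = (a > b) ? a - b : b - a;
    return diff <= tol;
}

/* a test for the test helper -- one level down the regress */
static void test_approx_equal(void)
{
    assert( approx_equal(100, 103, 5));
    assert(!approx_equal(100, 110, 5));
}

int main(void)
{
    test_approx_equal();
    /* and who writes the tests for test_approx_equal()? It has to
       stop somewhere -- that's the question above. */
    return 0;
}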


> The only difference is the order: traditionally, tests are written
> after the user functions have been implemented.

Again this "traditionally". You seem to know an awful lot about what is
traditional in the business. I don't seem to know even a small
percentage of it.

For example, IME tests are "traditionally" rarely written in an
organized, complete manner. Most complex software has some test
harnesses, some test cases, but rarely a complete unit test/overall
function test environment.

IMO it is more important that the tests are written by a different team
than the one writing the target than when exactly this happens. Think
about it... in both cases, what you need is a clear picture of the
requirements. Whether you get this by writing a test or by writing the
target doesn't really matter all that much. What you have to do in the
end is debug /both/ when you bring them together... there's nothing that
guarantees that your test application will be correct. (And believe me,
in all but the most trivial cases it won't be.)
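Here's the kind of test bug I mean (again a made-up example): the test
computes its expected value with the same expression as the target, so
both can be wrong in the same way and the test still passes.

#include <assert.h>
#include <stdint.h>

/* target: average of two readings; on a 16-bit int (many PIC
   compilers), a + b can wrap for large inputs */
uint16_t avg(uint16_t a, uint16_t b)
{
    return (uint16_t)((a + b) / 2);
}

static void test_avg(void)
{
    uint16_t a = 40000, b = 40000;
    /* test bug: same (possibly wrong) expression as the target,
       so the test passes whether the target is right or not */
    assert(avg(a, b) == (uint16_t)((a + b) / 2));
}

int main(void)
{
    test_avg();
    return 0;
}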


>> Also, think about this... I've been contracted to write a test
>> program -- only the test program. And I did write a test program for
>> the test program :)
>
> You mean, you wrote a program that the test program was testing? Kind
> of like creating test data?

No, I wrote a program to test some other program. I was the "other" guy;
they didn't want the test program to be written by the same guy who did
the target program. So even though my program was a test program, I
treated it like any other program: I wrote tests for parts of it. I
mention this, together with what I wrote above, to illustrate that test
programs are not any different from other programs -- so whether you
write them first or second doesn't really make much of a difference.


>>> What you have is the program under test (the part that the users
>>> use), and the tests.
>>
>> Where do the tests come from?
>
> They are based on the requirements.

So we do gather requirements before we write code? Aren't we now getting
dangerously close to a (modified) waterfall model: gather requirements
before writing tests, then write tests before writing target code, then
write target code?


> Simple tests don't cost much, and can be useful.

Of course. This is a perfect application of the "it depends" principle
:)

Gerhard