Differences between revisions 1 and 2
Revision 1 as of 2004-01-12 19:53:12
Size: 76
Editor: anonymous
Comment:
Revision 2 as of 2004-01-12 19:56:00
Size: 1631
Editor: anonymous
Comment:
This discussion is meant for new ideas
on how to optimize the Debian development process.

I think it is control theory that states
you optimize a chaotic process through fast feedback loops.

So I think it would be best if we identified areas
where we can speed up the feedback.

=== Coverage test ===
At the time of writing, the top-ranking
[http://all-technology.com/eigenpolls/bpfsd/ Best Practise for Software Development] is
[http://all-technology.com/eigenpolls/bpfsd/index.php?it=10 100% test coverage]

Therefore I suggest that every deb should be able to run a coverage test
on itself and return its test coverage percentage.

In this way we could automatically ensure that debs are well tested
before they enter unstable.

I have been wondering if one could hack something up with
valgrind to make a general coverage tester.
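Whatever tool produces the raw numbers, the per-file figures have to be combined into one percentage for the whole deb. As a sketch only: gcov (the standard GCC coverage tool) prints summary lines like `Lines executed:87.50% of 200`, and a small helper could aggregate those into the single number the archive would check. The function name is made up for illustration.

```python
import re

def aggregate_gcov_coverage(gcov_output):
    """Aggregate the 'Lines executed' summary lines from gcov output
    into one coverage percentage for the whole package."""
    total_lines = 0
    executed_lines = 0.0
    # gcov prints summary lines such as:
    #   Lines executed:87.50% of 200
    for match in re.finditer(r"Lines executed:([\d.]+)% of (\d+)", gcov_output):
        percent = float(match.group(1))
        lines = int(match.group(2))
        total_lines += lines
        executed_lines += percent / 100.0 * lines
    if total_lines == 0:
        return 0.0
    return 100.0 * executed_lines / total_lines
```

The weighting by line count matters: averaging the raw percentages would let a tiny, fully covered file mask a large, untested one.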

Then we could have an automatic build machine which builds the package
and runs the coverage test. If the coverage percentage
is below a threshold, the package is rejected;
otherwise it is forwarded to machines with different architectures,
where the deb is built and tested. If that succeeds, it enters unstable.
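The decision flow above can be sketched in a few lines. This is only an illustration of the proposal, not existing archive software; the queue names and the 80% threshold are invented for the example.

```python
def process_upload(coverage_percent, arch_build_results, threshold=80.0):
    """Sketch of the build machine's decision flow.

    coverage_percent: result of the package's coverage self-test.
    arch_build_results: dict mapping architecture name to whether the
        package built and passed its tests there.
    Returns the queue the package ends up in.
    """
    if coverage_percent < threshold:
        return "rejected"          # coverage too low: bounce the upload
    if arch_build_results and all(arch_build_results.values()):
        return "unstable"          # built and tested everywhere: accept
    return "failed-architectures"  # at least one architecture failed
```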

 
=== Statistical Debugging ===
When the debs enter unstable more fun will happen:
there they should be subject to [http://www.cs.berkeley.edu/~liblit/nips-2003/ statistical debugging]. I have
been wondering whether it is possible to make a kernel module
which monitors the running program and dumps
to a log which is emailed to a debugging server once a day.
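The user-space half of that daily report is easy to picture. As a sketch, assuming the log boils down to success/failure counts per program: the report could be a plain mail composed like this (the server address is made up, and nothing here actually sends the message).

```python
import json
from email.message import EmailMessage

def build_daily_report(hostname, run_log):
    """Compose the once-a-day mail that a monitoring host could send
    to a central debugging server.

    run_log maps a program name to its observed [successes, failures]
    counts collected since the last report.
    """
    msg = EmailMessage()
    msg["From"] = f"stat-debug@{hostname}"
    msg["To"] = "reports@debug-server.example.org"  # hypothetical server
    msg["Subject"] = f"statistical-debugging report from {hostname}"
    # Ship the counts as JSON so the server side is trivial to parse.
    msg.set_content(json.dumps(run_log, indent=2, sort_keys=True))
    return msg
```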

Then, when a package is over a threshold of successful runs,
it is migrated into testing.
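That migration rule could look like the following sketch. Both thresholds (a minimum number of observed runs, and a maximum tolerated failure rate) are invented numbers; the point is only that raw success counts need a minimum-sample guard before they mean anything.

```python
def ready_for_testing(successes, failures, min_runs=1000, max_failure_rate=0.001):
    """Decide from aggregated statistical-debugging reports whether a
    package has seen enough successful runs to migrate from unstable
    into testing. Threshold values are invented for illustration."""
    total = successes + failures
    if total < min_runs:
        return False  # not enough data yet to judge the package
    return failures / total <= max_failure_rate
```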
