This discussion is meant for new ideas on how to optimize the Debian development process.

I think it is control theory that says you optimize a chaotic process with fast feedback loops.

So, I think it would be best if we identified areas where we can speed up the feedback.

Coverage test

At the time of writing, the top-ranking Best Practice for Software Development is [http://all-technology.com/eigenpolls/bpfsd/index.php?it=10 100% test coverage]

Therefore I suggest that every deb should be able to run a coverage test on itself and return the test coverage percentage.

In this way we could automatically ensure that a deb is well tested before it enters unstable.

I have been wondering if one could hack something up with valgrind to make a general coverage tester.
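
As far as I know valgrind does not ship a coverage tool out of the box, so one would have to be written as a new valgrind tool. In the meantime gcc's own gcov instrumentation already reports exactly the number we need, so here is a minimal sketch of the "return the coverage percentage" hook built on top of it (the script and its command-line interface are my own invention):

{{{
#!/usr/bin/env python3
"""Print the line-coverage percentage for one source file.

Assumes the package was compiled with gcc's coverage instrumentation
(-fprofile-arcs -ftest-coverage) and its test suite has already run,
so the .gcda counter files exist next to the sources."""

import re
import subprocess
import sys

def coverage_percent(source_file: str) -> float:
    # "gcov -n" prints a summary like "Lines executed:87.50% of 16"
    # without writing .gcov files to disk.
    out = subprocess.run(["gcov", "-n", source_file],
                         capture_output=True, text=True, check=True)
    match = re.search(r"Lines executed:\s*([\d.]+)% of \d+", out.stdout)
    if match is None:
        raise RuntimeError(f"no coverage data for {source_file}")
    return float(match.group(1))

if __name__ == "__main__":
    print(f"{coverage_percent(sys.argv[1]):.2f}")
}}}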

Then we could have an automatic build machine which builds the package and runs the coverage test. If the coverage percentage is below a threshold, the package is rejected; otherwise it is forwarded to machines with different architectures, where the deb is built and tested. If that succeeds, it is entered into unstable.
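
To make that gatekeeping concrete, here is a sketch of the decision logic. Every function in it is a stub standing in for real build infrastructure, the `debian/rules coverage` target is hypothetical, and the threshold and architecture list are just illustrative:

{{{
#!/usr/bin/env python3
"""Hypothetical gatekeeper for uploads to unstable."""

import subprocess

COVERAGE_THRESHOLD = 100.0                    # the "100% coverage" best practice
ARCHITECTURES = ["i386", "powerpc", "sparc"]  # illustrative subset

def run_coverage_test(package_dir: str) -> float:
    """Invoke the package's own coverage hook (a hypothetical
    debian/rules target) and read the percentage it prints."""
    out = subprocess.run(["debian/rules", "coverage"], cwd=package_dir,
                         capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def build_and_test_on(package_dir: str, arch: str) -> bool:
    """Stub: would hand the package to a build machine for `arch`
    and report whether build and test both succeeded."""
    return True

def gate(package_dir: str) -> bool:
    percent = run_coverage_test(package_dir)
    if percent < COVERAGE_THRESHOLD:
        print(f"rejected: coverage {percent:.1f}% is below the threshold")
        return False
    for arch in ARCHITECTURES:
        if not build_and_test_on(package_dir, arch):
            print(f"rejected: build or test failed on {arch}")
            return False
    print("accepted into unstable")
    return True
}}}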

Statistical Debugging

When a deb enters unstable, more fun will happen: there it should be subject to statistical debugging. I have been wondering if it is possible to make a kernel module which monitors the running program and dumps to a log, which is emailed to a debugging server once a day.
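
The kernel module itself would have to be written in C, but the once-a-day reporting side could be as simple as a cron script like this sketch (the log path and server address are made up):

{{{
#!/usr/bin/env python3
"""Daily reporter: mail the monitoring log to the debugging server
and truncate it for the next day. Meant to be run from cron."""

import smtplib
from email.message import EmailMessage
from pathlib import Path

LOG_PATH = Path("/var/log/deb-monitor.log")  # hypothetical log file
SERVER = "reports@debug-server.example.org"  # hypothetical address

def send_daily_report() -> None:
    msg = EmailMessage()
    msg["Subject"] = "daily run statistics"
    msg["From"] = "deb-monitor@localhost"
    msg["To"] = SERVER
    msg.set_content(LOG_PATH.read_text())
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
        smtp.send_message(msg)
    LOG_PATH.write_text("")                  # start a fresh log

if __name__ == "__main__":
    send_daily_report()
}}}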

Then, when a package is over a threshold of successful runs, it is migrated into testing.
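
On the debugging server, that migration decision could be a simple counter per package. The report fields, the reset-on-crash rule, and the threshold below are all assumptions of mine:

{{{
from collections import Counter

SUCCESS_THRESHOLD = 10_000  # illustrative number of clean runs

successful_runs: Counter = Counter()

def record_report(package: str, runs: int, crashes: int) -> None:
    """Fold one daily report into the tally; any crash resets the
    count, so only an unbroken streak of good runs promotes a deb."""
    if crashes == 0:
        successful_runs[package] += runs
    else:
        successful_runs[package] = 0

def ready_for_testing(package: str) -> bool:
    return successful_runs[package] >= SUCCESS_THRESHOLD
}}}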