This discussion is meant for new ideas on how to optimize the Debian development process.
I think it is control theory that states you optimize a chaotic process with fast feedback loops.
So I think it would be best if we identified areas where we can speed up the feedback.
At the moment of writing, the top-ranking [[http://all-technology.com/eigenpolls/bpfsd/|Best Practise for Software Development]] is [[http://all-technology.com/eigenpolls/bpfsd/index.php?it=10|100% test coverage]].
Therefore I suggest that every deb should be able to run a coverage test on itself and return the test coverage percent.
In this way we could automatically ensure that debs are well tested before they enter unstable.
I have been wondering if one could hack something up with valgrind to make a general coverage tester.
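As a very rough sketch of how that might work: valgrind's callgrind tool records a cost for every source line it executes, so lines with a nonzero cost can be counted as "covered" and compared against the total line count of each source file. Everything below is an assumption for illustration: it only handles the plain, uncompressed callgrind output format (real output also uses compressed and relative positions), and it uses raw file length as the denominator where a real tool would count only executable lines from the debug info.

{{{#!python
#!/usr/bin/env python3
# Sketch: estimate line coverage from a callgrind run.
# Assumptions (not part of the original proposal): the simple
# uncompressed callgrind output format, and "every line in the
# source file" as the denominator -- a real tool would use the
# compiler's debug info to count only executable lines.
import glob
import os
import re
import subprocess
import sys

def run_under_callgrind(cmd):
    """Run cmd under callgrind and return the newest output file."""
    subprocess.run(["valgrind", "--tool=callgrind"] + cmd, check=True)
    return max(glob.glob("callgrind.out.*"), key=os.path.getmtime)

def executed_lines(outfile):
    """Collect (source file, line number) pairs with nonzero cost."""
    hit, current = set(), None
    for raw in open(outfile):
        if raw.startswith(("fl=", "fi=", "fe=")):
            current = raw.split("=", 1)[1].strip()
        elif current:
            m = re.match(r"(\d+)\s+\d+", raw)  # "<line> <cost>" record
            if m:
                hit.add((current, int(m.group(1))))
    return hit

def coverage_percent(hit):
    """Very rough percent: executed lines over total lines per file."""
    by_file = {}
    for f, line in hit:
        by_file.setdefault(f, set()).add(line)
    total = executed = 0
    for f, lines in by_file.items():
        if os.path.exists(f):           # skip libraries without source
            total += sum(1 for _ in open(f))
            executed += len(lines)
    return 100.0 * executed / total if total else 0.0

if __name__ == "__main__":
    out = run_under_callgrind(sys.argv[1:])   # e.g. ./run-testsuite
    hit = executed_lines(out)
    print("%.1f%% of source lines executed" % coverage_percent(hit))
}}}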
Then we could have an automatic build machine which builds and runs the coverage test. If the coverage percent is below a threshold, the package is rejected; otherwise it is forwarded to machines with different architectures, where the deb is built and tested, and if it succeeds it is entered into unstable.
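A sketch of the gate on such a build machine might look like the script below. The {{{./debian/rules coverage-test}}} hook, the {{{queue-for-buildd}}} hand-off command, the architecture list, and the 80% threshold are all invented placeholders; only the control flow (build, measure coverage, reject or fan out) is the point.

{{{#!python
#!/usr/bin/env python3
# Sketch of the proposed gate on the build machine. The hook target,
# the hand-off command, the hosts, and the threshold are hypothetical.
import subprocess
import sys

COVERAGE_THRESHOLD = 80.0                      # assumed policy value
ARCH_BUILDERS = ["i386", "powerpc", "sparc"]   # example architectures

def build_and_measure(source_dir):
    """Build the package, then run its coverage self-test."""
    subprocess.run(["dpkg-buildpackage", "-us", "-uc"],
                   cwd=source_dir, check=True)
    # 'coverage-test' is the hypothetical per-package hook proposed
    # above; it is assumed to print the coverage percent on stdout.
    out = subprocess.run(["./debian/rules", "coverage-test"],
                         cwd=source_dir, check=True,
                         capture_output=True, text=True)
    return float(out.stdout.strip())

def gate(source_dir):
    percent = build_and_measure(source_dir)
    if percent < COVERAGE_THRESHOLD:
        print("REJECT: only %.1f%% coverage" % percent)
        return False
    for arch in ARCH_BUILDERS:
        # Hand off to the per-architecture build daemons; if they all
        # succeed, the package is entered into unstable.
        subprocess.run(["queue-for-buildd", "--arch", arch, source_dir],
                       check=True)
    return True

if __name__ == "__main__":
    sys.exit(0 if gate(sys.argv[1]) else 1)
}}}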
When a deb enters unstable, more fun will happen: here it should be subject to [[http://www.cs.berkeley.edu/~liblit/nips-2003/|statistical debugging]]. I have been wondering if it is possible to make a kernel module which monitors the running program and dumps to a log which is emailed to a debugging server once a day.
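The kernel module itself is beyond a quick sketch, but the userspace half (collect the run log, mail it to the debugging server once a day) could be a small cron job like this one. The log path, the one-record-per-run log format, and the server address are assumptions for illustration, and a local MTA is assumed for delivery.

{{{#!python
#!/usr/bin/env python3
# Sketch: daily reporter for the statistical-debugging idea. Assumes
# a hypothetical monitor appends one "package exit-status" line per
# observed run to LOG; run this once a day from cron.
import smtplib
from email.message import EmailMessage
from pathlib import Path

LOG = Path("/var/log/run-monitor.log")         # hypothetical log file
SERVER = "debug-reports@stats.example.org"     # placeholder address

def send_report():
    body = LOG.read_text() if LOG.exists() else ""
    if not body:
        return                                 # nothing observed today
    msg = EmailMessage()
    msg["From"] = "run-monitor@localhost"
    msg["To"] = SERVER
    msg["Subject"] = "daily run report"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:    # local MTA assumed
        smtp.send_message(msg)
    LOG.write_text("")                         # start a fresh day

if __name__ == "__main__":
    send_report()
}}}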
Then, when a package is over a threshold of successful runs, it is migrated into testing.
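On the debugging server side, the migration rule could start out as simple counting; the log format is carried over from the sketch above and the 10,000-run threshold is invented. A real rule would presumably also weight failure reports, which is where the statistical-debugging analysis would come in.

{{{#!python
#!/usr/bin/env python3
# Sketch: decide which packages have enough successful runs to be
# migrated into testing. The "package exit-status" log format and
# the threshold are assumptions carried over from the sketch above.
from collections import Counter
import sys

SUCCESS_THRESHOLD = 10000   # invented policy value

def ready_to_migrate(report_lines):
    """Return packages with many successes and no reported failures."""
    ok, bad = Counter(), Counter()
    for line in report_lines:
        if not line.strip():
            continue
        pkg, status = line.split()
        (ok if status == "0" else bad)[pkg] += 1
    return sorted(p for p, n in ok.items()
                  if n >= SUCCESS_THRESHOLD and bad[p] == 0)

if __name__ == "__main__":
    for pkg in ready_to_migrate(sys.stdin):
        print("migrate to testing:", pkg)
}}}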