This is based on HighQualityStableWithFixedReleaseDate, combined with a division into separate trees by architecture.
Consider the "distribution" as a set of autonomous distributions, one per architecture. Thus, there would in fact be "unstable/i386", "unstable/hppa", "testing/i386", "testing/s390", etc.
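To make the layout concrete, here is a minimal sketch in Python (purely illustrative; all names and version numbers are hypothetical) of an archive keyed by (suite, architecture) pairs instead of by suite alone:

    # Each (suite, architecture) pair is its own autonomous tree,
    # mapping package name -> version present in that tree.
    # All data here is made up, for illustration only.
    trees = {
        ("unstable", "i386"): {"gcc": "4.0-2", "kdebase": "3.4.1-1"},
        ("unstable", "hppa"): {"gcc": "4.0-2", "kdebase": "3.4.0-3"},
        ("testing", "i386"):  {"gcc": "3.3-7", "kdebase": "3.3.2-1"},
        ("testing", "s390"):  {"gcc": "3.3-7", "kdebase": "3.3.2-1"},
    }

    # A package can therefore sit at different versions per architecture:
    print(trees[("unstable", "i386")]["kdebase"])  # 3.4.1-1
    print(trees[("unstable", "hppa")]["kdebase"])  # 3.4.0-3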
Unstable would be seen as a pool of semi-stable packages, not as a distribution.
Testing would start as a branch of the previous stable, immediately after stable has been released. Packages and subsystems would "land" on testing when a testing maintainer agrees to the package maintainer's request for landing.
If an RC bug cropped up in testing, testing would be considered broken and no further landings could happen until the RC bug was fixed or the breaking change was rolled back. This would be combined with dinstalls every hour, some automated testing, and an assignment of who is to blame in big *bold* letters.
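A rough sketch of the gating rule, with hypothetical function names and assumed semantics (nothing here is existing Debian infrastructure): the hourly dinstall would simply refuse further landings into any testing/arch tree that has open RC bugs.

    def can_land(tree, open_rc_bugs):
        # Landings into a testing/<arch> tree are allowed only while it
        # has zero open RC bugs; otherwise everything is blocked until
        # the bug is fixed or the breaking change is rolled back.
        return len(open_rc_bugs.get(tree, [])) == 0

    # Hypothetical state: one open RC bug on testing/i386.
    open_rc_bugs = {("testing", "i386"): ["hypothetical RC bug in kdebase"]}
    print(can_land(("testing", "i386"), open_rc_bugs))  # False: landings blocked
    print(can_land(("testing", "s390"), open_rc_bugs))  # True: landings allowed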
The release would be time-based. We would freeze testing in late June, meaning no more landings. Then in August we would "copy" testing into stable, and unfreeze testing to start landing stuff again.
A developer gets a new version into the tree separately for each architecture, although technically this could be done with a single request (by selecting several architectures, or even all of them). This allows separating the functional and non-functional builds by architecture. E.g., if a package is functional on all architectures except -whatever-, the developer would simply get the package into all architectures except -whatever-. The -whatever- build could get in later, after the architecture-specific problem is fixed.
If a developer failed to get a new version of a package into shape, we would simply release the old version for the affected architecture(s).
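Both rules could be sketched like this (hypothetical names again): a single landing request may select several architectures at once, and at release time each architecture falls back to the old stable version of any package that never landed there.

    def land(testing, package, version, archs):
        # One request can land the package on several architectures at once.
        for arch in archs:
            testing[arch][package] = version

    def release(testing, old_stable):
        # "Copy" testing into stable; where nothing landed for an
        # architecture, simply release the old stable version instead.
        stable = {}
        for arch, old_pkgs in old_stable.items():
            stable[arch] = dict(old_pkgs)               # start from previous stable
            stable[arch].update(testing.get(arch, {}))  # overlay what landed
        return stable

    # Hypothetical example: foo 2.0 works everywhere except hppa.
    old_stable = {"i386": {"foo": "1.0"}, "hppa": {"foo": "1.0"}}
    testing = {"i386": {}, "hppa": {}}
    land(testing, "foo", "2.0", ["i386"])  # hppa left out until its bug is fixed
    print(release(testing, old_stable))
    # {'i386': {'foo': '2.0'}, 'hppa': {'foo': '1.0'}}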
Large toolchain or library changes likely to break testing entirely would only happen at the beginning of the new cycle. Any package broken by such a change would leave testing broken until the issues were fixed (since the large landing would not get rolled back).
Each cycle would start with the landing of large changes that will most likely break testing (like GCC 4.x or SELinux). Then the rest of the cycle would be fixing packages and making landings that will (hopefully) not break testing. Any large change would thus get almost a full year of testing.
We could land non-critical and clean subsystems (like KDE or GNOME) mid-cycle, but if testing went bust, the landing would quickly be rolled back.
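How such a quick rollback would actually work is an open question (see the cons below); one assumed mechanism, sketched here with hypothetical names, is to snapshot the tree before each risky landing and restore it if the automated tests report breakage:

    import copy

    def land_with_rollback(testing, arch, package, version, tests_pass):
        # Snapshot-based rollback: an assumed mechanism for this proposal,
        # not an existing Debian facility.
        snapshot = copy.deepcopy(testing[arch])
        testing[arch][package] = version
        if not tests_pass(testing[arch]):
            testing[arch] = snapshot  # testing went bust: roll the landing back
            return False
        return True

    # Hypothetical example: the landing fails its automated tests.
    testing = {"i386": {"kdebase": "3.3.2-1"}}
    ok = land_with_rollback(testing, "i386", "kdebase", "3.4.1-1", lambda pkgs: False)
    print(ok, testing)  # False {'i386': {'kdebase': '3.3.2-1'}}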
- Testing should contain no RC bugs and hence be releasable at any given point (after the toolchain/library landings at the beginning of the cycle).
- No accidental migrations from unstable to testing, since the package maintainer needs to request a landing and a release manager needs to agree.
- High quality stable releases.
- Development could be planned, since the release date is fixed.
- We wouldn't need to completely drop packages if the maintainer is lazy.
- We wouldn't need to completely drop packages or block all architectures if an author is unable to resolve an architecture-related bug in a timely manner.
- Architecture-related bugs wouldn't block packages for other architectures from getting in.
This creates more breathing space and a time reserve to test the packages, which wouldn't be possible if they were blocked for all architectures because of an architecture-related bug in a single package they depend on, directly or transitively.
- Ultimately, it is the author's fault if the package doesn't work on some architecture, not Debian's.
- Architecture-related problems would show up more transparently.
- More popular architectures would see faster turnover.
- Thus, the more popular an architecture is, the newer and better-tested packages its Debian Stable would contain.
- Thus, the great majority of people would be happy with Debian's stability as well as its freshness.
- Even if the Debian Stable release for a less popular architecture contained some packages older than on other architectures, those packages would still be stable and well-tested.
- Possibly a lot of work for the release managers since they would need to approve each package before it could enter testing.
- Package maintainers would risk public humiliation when their package breaks testing.
- Risk of having package maintainers push their packages at the last minute, failing, and being "shit out of luck", stuck with an old version for a year (not sure if this is a bad thing).
- Rolling stuff back, i.e. downgrading, is something we aren't all that familiar with.
- Stable might have older packages if maintainers aren't active (this is transparent and logical, so not all that bad).
- More complicated package management, since the archive must handle the whole dependency tree separately for each architecture. This is already done to some degree today, however.
- If some package contains an architecture-dependent bug that blocks it from getting into stable, and the problem is not resolved before the freeze, the stable release for the affected architecture would contain a different package version than other architectures. This is transparent, and IMO much better than the current state of "all architectures waiting for a package because of a -whatever- architecture-related bug". If Debian 2006/i386 contains a different version of a package than Debian 2006/-whatever-, so what? Nobody should feel doomed, since in today's model nobody would get the newer package anyway. This way, the package will at least get into the architectures where it is functional.
In other words, if the package is important, somebody will eventually fix it. If it isn't, and/or the architecture is niche and nobody cares, then why should everyone suffer because of that?
- Theoretically, more complicated work for the security team when some package gets into Stable in different versions for different architectures. However, if the package is really important, this is not very likely. And if it is not so important, it might not be so important security-wise either.
If the package is likely to be highly security-exposed, the maintainer always has the freedom to stay with the older (functional) version for all architectures, especially if this is obviously less evil than a few simultaneous versions.