You can deploy OpenVZ containers inside an existing Xen guest using Lenny's stock OpenVZ kernel. Two notes on this, though:
1. The initial ramdisk specified in your Xen guest's config file (e.g. /etc/xen/domU.cfg) must be updated so that it loads xen_blkfront on boot. Do this (from whichever guest/host has the OpenVZ kernel you're using) by updating /etc/initramfs-tools/modules so that it includes the line xen_blkfront.
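For example, the relevant addition to the modules file is a single line (the comment is just for context, not part of the stock file):

```shell
# /etc/initramfs-tools/modules
# Modules listed here, one per line, are included in the initramfs
# and loaded at boot. The Xen block frontend driver:
xen_blkfront
```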
Then run update-initramfs -u -k 2.6.26-2-openvz-686, where 2.6.26-2-openvz-686 is the version of the OpenVZ kernel you're using.
2. Your Xen guest's config file (and the corresponding /etc/fstab) must use block device names like xvda or xvda1 instead of hda1, since xen_blkfront doesn't seem to rename devices from xvda to hda for you. (OpenVZ's RHEL 5 kernel, which can also be run as a Debian Xen guest, uses the older 2.6.18 kernel and didn't have this problem. Maybe the renaming was handled by a persistent storage device rule in udev?)
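As a sketch of what this means in practice (the volume group, guest name, and filesystem type here are hypothetical), the relevant lines might look like:

```shell
# /etc/xen/domU.cfg -- export the disk to the guest as xvda1, not hda1
disk = ['phy:/dev/vg0/domU-root,xvda1,w']
root = '/dev/xvda1 ro'

# Matching root entry in the guest's /etc/fstab:
#   /dev/xvda1  /  ext3  errors=remount-ro  0  1
```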
- Alternatively, you can compile a custom kernel yourself, which may save some of the initrd headache.