UbuntuHelp:OpenVZ

Introduction

This page describes the installation of OpenVZ on Ubuntu Server as a host. In the Hardy release of Ubuntu, the OpenVZ packages are in the "universe" component, which carries no guarantee of support. Note that KVM is the main virtualization technology supported in Ubuntu. To follow the practical steps in this guide, the reader should be comfortable with command-line applications, the Bourne Again SHell (bash) environment, and editing system configuration files with a text editor.

About OpenVZ

OpenVZ is a server virtualization solution for Linux. It enables one to create multiple virtual Linux servers which are isolated from the host and from each other, based on a technique called "Operating System Virtualization". Similar techniques are used in Solaris Zones, Linux-VServer and FreeBSD jails. This technique does not use hardware virtualization like KVM, Xen or VMware. The so-called "Virtual Servers" or VPSs behave like stand-alone servers. They consume fewer resources than their hardware-virtualized counterparts, but must use the same kernel as the host. Therefore you can only have Linux VPSs on a Linux host. The original documentation can be found here: [1]

Alternatives to OpenVZ

LXC will eventually replace OpenVZ.

Installing OpenVZ

OpenVZ is supported on Ubuntu 8.04, but not on 9.10 (Karmic) or 10.04 (Lucid). If you are looking for a host node more recent than Ubuntu 8.04, try Proxmox (which is Debian-based), Debian, or CentOS. In general, OpenVZ support is best on .rpm systems, with Debian second. If you are interested in seeing OpenVZ on .deb systems, please consider working with the OpenVZ project, as the OpenVZ kernel patch is not maintained by the Ubuntu developers.

8.04 Hardy

  • Install the kernel and tools
$ sudo apt-get install linux-openvz vzctl

Important! Please make sure that you are using at least the linux-image-2.6.24-19-openvz kernel, which is the first really stable kernel without basic usability issues.
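
After rebooting into the OpenVZ kernel (the next step), you can verify that it is the running kernel; `uname -r` should report a version ending in `-openvz` (the exact version number will depend on what is installed):
$ uname -r
2.6.24-19-openvz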

  • Reboot into the openvz kernel
  • Remove the `-server` kernel (or the `-generic` one if you are on a desktop machine)
$ sudo apt-get remove --purge --auto-remove linux-image-.*server
  • Change the sysctl variables in `/etc/sysctl.conf`

This step may become unnecessary once the vzctl package is updated

 # On Hardware Node we generally need
 # packet forwarding enabled and proxy arp disabled
 
 net.ipv4.conf.default.forwarding=1
 net.ipv4.conf.default.proxy_arp=0
 net.ipv4.ip_forward=1
 
 # Enables source route verification
 net.ipv4.conf.all.rp_filter = 1
 
 # Enables the magic-sysrq key
 kernel.sysrq = 1
 
 # TCP Explicit Congestion Notification
 #net.ipv4.tcp_ecn = 0
 
 # we do not want all our interfaces to send redirects
 net.ipv4.conf.default.send_redirects = 1
 net.ipv4.conf.all.send_redirects = 0
  • Apply the sysctl changes
$ sudo sysctl -p
  • Create a symlink to /vz, because most of the vz tools expect the OpenVZ folders to reside there. This step is not strictly necessary, but can prevent problems later when other vz-related components are installed.
$ sudo ln -s /var/lib/vz /vz
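
To double-check that the sysctl settings are active, query one of the variables:
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1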

Download Template(s)

Before we can create a new Virtual Private Server, we first have to either download or create a template of the distro we want to use. OpenVZ uses "templates" and "cached templates". The difference is that a "template" is a sort of recipe from which a "cached template" is built: a package manager is used to download the packages and build the cached template of the chosen distribution. Because cached versions of most popular distros are already available and not that big, it is easiest to download the cached version and place it in the "/var/lib/vz/template/cache" directory (or the path you have chosen in the "/etc/vz/vz.conf" file).

  • "Official" cached templates can be found here [2]
  • "Community" or "contrib" templates can be found here [3]
  • BodhiZazen's OpenVZ templates - These templates were submitted to the OpenVZ "contrib" set of templates.
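
For example, a cached template can be fetched straight into the cache directory (the URL and filename here are illustrative; browse the links above for the files actually available):
$ cd /var/lib/vz/template/cache
$ sudo wget http://download.openvz.org/template/precreated/ubuntu-8.04-i386-minimal.tar.gz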

Once you have downloaded a template (for example ubuntu-8.04-i386-minimal.tar.gz) and placed it in "/var/lib/vz/template/cache" you can install it using the following command:

sudo vzctl create 777 --ostemplate ubuntu-8.04-i386-minimal

In the example above a CT ID of 777 is used; of course any other non-allocated ID could be used. The section below explains how to create your own cached template. If you installed a downloaded one as explained above, skip ahead to the Administration section to learn how to start and enter your new node.

Create Template

For more up-to-date instructions on Ubuntu OpenVZ template creation, see bodhi.zazen's blog, Ubuntu 10.04 OpenVZ Template Creation (previous blog entries cover Ubuntu 9.10). This section describes how to create an Ubuntu 8.04 Hardy minimal template. The information is somewhat dated and is based on the OpenVZ wiki's Debian template creation guide. Documentation format:

  • Run the command on the OpenVZ host system
[HW] $ command
  • Run the command on the OpenVZ container
[VPS] $ command

Prerequisites

  • debootstrap
[HW] $ sudo apt-get install debootstrap

Creating template

Running debootstrap

  • Create a working directory:
[HW] $ mkdir hardy-chroot
  • Run debootstrap to install a minimal Hardy Heron system into that directory:
[HW] $ sudo debootstrap [--arch ARCH] hardy hardy-chroot

If the ARCH of the host machine is the same as that of the container, you can skip the --arch option; but if you need to build an OS template for another ARCH, specify it explicitly:

  • for AMD64/x86_64, use `amd64`
  • for i386, use `i386`
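
For example, to build an i386 template on an amd64 host:
[HW] $ sudo debootstrap --arch i386 hardy hardy-chroot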

Preparing/starting a container

Now that you have an installation created by `debootstrap`, you can run it as a container. In the example below a CT ID of 777 is used; of course any other non-allocated ID could be used.

  • Moving installation to container private area
[HW] $ sudo mv hardy-chroot /vz/private/777
  • All files need to be owned by root
[HW] $ sudo chown -R root /vz/private/777
  • Setting initial container configuration
[HW] $ sudo vzctl set 777 --applyconfig vps.basic --save
  • Setting container's `OSTEMPLATE`
[HW] $ echo "OSTEMPLATE=ubuntu-8.04" | sudo tee -a /etc/vz/conf/777.conf >/dev/null
  • Setting container's IP address. (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --ipadd x.x.x.x --save
  • Setting DNS server for the container (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --nameserver x.x.x.x --save
  • Removing `udev` from the `/etc/rcS.d` and `klogd` from the `/etc/rc2.d` folders

If udev were left in place, the container might not start; it could get stuck so that even `vzctl enter` could not reach the container's command line. If klogd were left in place, it might prevent the change to runlevel 2 from finishing.

[HW] $ sudo rm /vz/private/777/etc/rcS.d/S10udev /vz/private/777/etc/rc2.d/S11klogd
  • Starting the container
[HW] $ sudo vzctl start 777
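
You can confirm that the container came up; the status line should report `exist mounted running` (newer vzctl releases print CTID instead of VEID):
[HW] $ sudo vzctl status 777
VEID 777 exist mounted running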

Modify the installation

  • Enter a container:
[HW] $ sudo vzctl enter 777

Warning!!! Do not run the commands below on the hardware node; they are only to be run within the container! Note: you will not need `sudo` within the container, since `vzctl enter` gives you a root shell.

  • Remove unnecessary packages:
[VPS] $ apt-get remove --purge busybox-initramfs console-setup dmidecode eject \
ethtool initramfs-tools klibc-utils laptop-detect libiw29 libklibc \
libvolume-id0 mii-diag module-init-tools ntpdate pciutils pcmciautils ubuntu-minimal \
udev usbutils wireless-tools wpasupplicant xkb-data tasksel tasksel-data

Note: If you want to use the `tasksel` tool, do not remove it, but then you must also keep laptop-detect. Note: if you remove the `module-init-tools` package, a fake modprobe is needed for IPv6 addresses; see below.

  • The DHCP client can also be removed if you know that you will not need it.
[VPS] $ apt-get remove --purge --auto-remove dhcp3-client dhcp3-common
  • Clean up after udev
[VPS] $ rm -fr /lib/udev
  • Disable getty

On a usual Linux system, getty runs on the virtual terminals, which a container does not have. So having getty running makes no sense; worse, it complains that it cannot open a terminal device, which clutters the logs.

[VPS] $ initctl stop tty1
[VPS] $ initctl stop tty2
[VPS] $ initctl stop tty3
[VPS] $ initctl stop tty4
[VPS] $ initctl stop tty5
[VPS] $ initctl stop tty6
[VPS] $ rm -f /etc/event.d/tty*
[VPS] $ rm -f /etc/init/tty*
  • Set sane permissions for /root directory
[VPS] $ chmod 700 /root
  • Disable root login
[VPS] $ usermod -p '!' root
  • "fake-modprobe" needed for IPv6 addresses
[VPS] $ ln -s /bin/true /sbin/modprobe

During IPv6 setup, the command "modprobe -Q IPv6" is called; without the fake modprobe this call would fail.
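
Since `/sbin/modprobe` now points at `/bin/true`, any module load request succeeds silently, which you can verify:
[VPS] $ modprobe ipv6 && echo "fake modprobe works"
fake modprobe works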

  • Set the default repositories for Hardy

Make sure that you replace <YOURCOUNTRY> with your country code (for example `us`); note the trailing dot in the assignment below, which makes the mirror expand to e.g. us.archive.ubuntu.com.

[VPS] $ COUNTRY=<YOURCOUNTRY>.
[VPS] $ cat >/etc/apt/sources.list <<EOF
# Binary
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

# Binary Canonical
# deb http://archive.canonical.com/ubuntu hardy partner

# Binary backport
# deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse

# Source
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

# Source backport
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse

# Source Canonical
# deb-src http://archive.canonical.com/ubuntu hardy partner
EOF

Note: Only the "main restricted universe multiverse" binary repositories are enabled. Change it if you need more.

  • Apply new security updates
[VPS] $ apt-get update && apt-get upgrade
  • Install some more packages
[VPS] $ apt-get install ssh quota
  • Fix SSH host keys

This is only useful if you installed SSH above. Each individual container should have its own pair of SSH host keys. The code below will wipe out the existing SSH keys and instruct the newly-created container to create new SSH keys on first boot.

[VPS] $ rm -f /etc/ssh/ssh_host_*
[VPS] $ cat << EOF > /etc/rc2.d/S15ssh_gen_host_keys
#!/bin/sh
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -N ''
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -N ''
rm -f \$0
EOF
[VPS] $ chmod a+x /etc/rc2.d/S15ssh_gen_host_keys
  • Link `/etc/mtab` to `/proc/mounts`, so `df` and friends will work:
[VPS] $ rm -f /etc/mtab
[VPS] $ ln -s /proc/mounts /etc/mtab
  • After that, it makes sense to disable the `mtab.sh` init script, which would otherwise overwrite `/etc/mtab`
[VPS] $ update-rc.d -f mtab.sh remove
  • Disable some services

In most cases you don't want klogd to run (the only exception is if you configure iptables to log some events), so you can disable it.

[VPS] $ update-rc.d -f klogd remove
  • Set default hostname
[VPS] $ echo "localhost" > /etc/hostname
  • Set `/etc/hosts`
[VPS] $ echo "127.0.0.1 localhost.localdomain localhost" > /etc/hosts
  • Add `ptys` to `/dev`

This is needed in case `/dev/pts` is not mounted after the container starts. If the `/dev/ttyp*` and `/dev/ptyp*` files are present and LEGACY_PTYS support is enabled in the kernel, vzctl will still be able to enter the container.

[VPS] $ cd /dev && /sbin/MAKEDEV ptyp
  • Remove nameserver(s)

[VPS] $ > /etc/resolv.conf

  • Clean the apt cache
[VPS] $ apt-get clean
  • Cleaning up log files
[VPS] $ > /var/log/messages; > /var/log/auth.log; > /var/log/kern.log; > /var/log/bootstrap.log; \
> /var/log/dpkg.log; > /var/log/syslog; > /var/log/daemon.log; > /var/log/apt/term.log; rm -f /var/log/*.0 /var/log/*.1
  • Exit the container
[VPS] $ exit

Preparing for and packing template cache

The following commands should be run on the host system (i.e. not inside a container).

  • We don't need an IP for the container anymore, and we definitely do not need it in the template cache, so remove it
[HW] $ sudo vzctl set 777 --ipdel all --save
  • Stop the container
[HW] $ sudo vzctl stop 777
  • Change directory to the container's private area
[HW] $ cd /vz/private/777
  • Now create a cached OS tarball. In the command below, you'll want to replace <arch> with your architecture (i386, amd64).

Note the space and the dot at the end of the command.

[HW] $ sudo tar czf /vz/template/cache/ubuntu-8.04-<arch>-minimal.tar.gz .
  • Cleanup
[HW] $ sudo vzctl destroy 777
[HW] $ sudo rm -f /etc/vz/conf/777.conf.destroyed

Testing template cache

  • We can now create a container based on the just-created template cache. Be sure to change `arch` to your architecture just like you did when you named the tarball above.
[HW] $ sudo vzctl create 123456 --ostemplate ubuntu-8.04-<arch>-minimal
  • Now make sure that your new container works
[HW] $ sudo vzctl start 123456
[HW] $ sudo vzctl exec 123456 ps axf

You should see that a few processes are running.
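
The output will look something like this (a minimal container runs only a handful of processes; the PIDs shown are illustrative):
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 init [2]
 1054 ?        Ss     0:00 /usr/sbin/sshd
 1087 ?        Ss     0:00 /usr/sbin/cron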

  • Cleanup
[HW] $ sudo vzctl stop 123456
[HW] $ sudo vzctl destroy 123456
[HW] $ sudo rm -f /etc/vz/conf/123456.conf.destroyed

Administration

When we create a VPS, we must give it a number. This number must be unique and is used to control the VPS throughout its existence. A good guideline is to use the last three digits of the IP address you are going to use for the VPS, e.g. 10.0.0.101 would be VPS 101.
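
Following that convention, a container that will live at 10.0.0.101 would be created like this (the template name is an example; use whichever template you downloaded or built):
[HW] $ sudo vzctl create 101 --ostemplate ubuntu-8.04-i386-minimal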

Creating a container from OS template

  • Create a container
[HW] $ sudo vzctl create <VEID> --ostemplate <the name of your template>
  • Set the IP, nameserver and hostname, and start the container as described below
  • Enter into the container (equivalent to chroot)
[HW] $ sudo vzctl enter [VEID]
  • Install language support: language-pack-[LANGUAGE]-base; for English use language-pack-en-base
[VPS] $ apt-get install language-pack-en-base

You might need to run `apt-get update` first.

  • Set timezone
[VPS] $ dpkg-reconfigure tzdata
  • Exit the container
[VPS] $ exit

Configuring a container

  • Adding IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipadd [IP_ADDRESS] --save
  • Deleting IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipdel [IP_ADDRESS] --save
  • Setting hostname
[HW] $ sudo vzctl set [VEID|VENAME] --hostname [HOSTNAME] --save
  • Setting nameserver
[HW] $ sudo vzctl set [VEID|VENAME] --nameserver [NAMESERVER_IP] --save
  • Setting virtual name
[HW] $ sudo vzctl set [VEID] --name [VENAME] --save
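
Putting these together, a hypothetical container 101 at 10.0.0.101 could be configured like this (the IP, nameserver, hostname and name are illustrative):
[HW] $ sudo vzctl set 101 --ipadd 10.0.0.101 --save
[HW] $ sudo vzctl set 101 --nameserver 10.0.0.1 --save
[HW] $ sudo vzctl set 101 --hostname vps101 --save
[HW] $ sudo vzctl set 101 --name web01 --save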
Start, stop, take snapshot or revert to snapshot
  • Start
[HW] $ sudo vzctl start [VEID|VENAME]

Important! As of 2008-06-04 there is a bug in the linux-image-2.6.24-18-openvz and earlier kernels. It prevents the network settings from being copied into the VE, and you cannot use the cp and mv commands inside your VE.

  • Stop
[HW] $ sudo vzctl stop [VEID|VENAME]
  • Take snapshot
[HW] $ sudo vzctl chkpnt [VEID|VENAME] [--dumpfile <name>]
  • Revert to snapshot
[HW] $ sudo vzctl restore [VEID|VENAME] [--dumpfile <name>]
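
For example, to checkpoint container 101 to a dump file and later restore it from that file (the dump path is illustrative):
[HW] $ sudo vzctl chkpnt 101 --dumpfile /vz/dump/vps101.dump
[HW] $ sudo vzctl restore 101 --dumpfile /vz/dump/vps101.dump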
Destroying a container
[HW] $ sudo vzctl destroy [VEID|VENAME]
Monitoring
  • List running VPS
[HW] $ sudo vzlist
  • List all VPS
[HW] $ sudo vzlist -a
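
Typical output looks like this (older vzctl releases label the first column VEID rather than CTID):
[HW] $ sudo vzlist
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
       101         10 running 10.0.0.101      vps101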
Networking
Networking, IPv6 with venet0 device
  • Set the NET_ADMIN:on capability.
[HW] $ sudo vzctl set [VEID] --capability net_admin:on --save
  • Edit the `/etc/vz/dists/scripts/debian-add_ip.sh` script and add the `up route --inet6 add ::/0 venet0` line under the venet0 IPv6 configuration:

iface venet0 inet6 static
        address ::1
        netmask 128
        up route --inet6 add ::/0 venet0
  • Change the proxy_ndp and the ipv6 forwarding state to `1`.
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/eth0/proxy_ndp
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/eth0/forwarding
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/venet0/forwarding

(Note: `sudo echo "1" > file` would not work here, because the redirection is performed by the unprivileged shell; `sudo tee` writes the file with root privileges.)
  • Change the sysctl variables in `/etc/sysctl.conf`
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding     = 1
net.ipv6.conf.eth0.proxy_ndp     = 1
  • Add the IPv6 address for the virtual node.
[HW] $ sudo vzctl set VEID --ipadd fc00::01 --save
  • Restart the virtual node.
[HW] $ sudo vzctl restart VEID
  • Test Your configuration.
[HW] $ sudo vzctl enter VEID
[VPS] $ ifconfig
venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: ::1/128 Scope:Host
          inet6 addr: fc00::01/128 Scope:Global
[VPS] $ ping6 -n www.6bone.net
PING www.6bone.net(2001:5c0:0:2::24) 56 data bytes
64 bytes from 2001:5c0:0:2::24: icmp_seq=1 ttl=52 time=203 ms

Ubuntu 9.10 (Karmic) VPS

Create `openvz.conf` in `/etc/init` and fix the init sequence so that OpenVZ works with upstart. Original reference.

[VPS] # cat << 'EOF' > /etc/init/openvz.conf
description "Fix OpenVZ"
start on startup

task
pre-start script
mount -t proc proc /proc
mount -t devpts devpts /dev/pts
mount -t sysfs sys /sys
mount -t tmpfs varrun /var/run
mount -t tmpfs varlock /var/lock
mkdir -p /var/run/network
touch /var/run/utmp
chmod 664 /var/run/utmp
chown root.utmp /var/run/utmp
if [ "$(find /etc/network/ -name upstart -type f)" ]; then
chmod -x /etc/network/*/upstart || true
fi
end script

script
start networking
initctl emit filesystem --no-wait
initctl emit local-filesystems --no-wait
initctl emit virtual-filesystems --no-wait
init 2
end script
EOF

Check that `/bin/sh` is symlinked to `bash`:

# file /bin/sh
/bin/sh: symbolic link to `bash'

Fix the `"init: tty1 main process ended, respawning"` syslog message

[VPS] # find /etc/init/ -maxdepth 1 -type f -name tty\* -print0 | /usr/bin/xargs -r0 -i -t sed -i 's/respawn/#respawn/g' {}

Ubuntu 10.04 Lucid VPS

To run a 10.04 VPS (a VE in OpenVZ-speak) you need to make several adjustments inside the VPS to make it boot. The steps are outlined at http://blog.bodhizazen.net/linux/ubuntu-10-04-openvz-templates/ .

See also