Virtualization using LXC on EC2
posted on October 17, 2011
(Note: this is mirrored from the Phenona blog; Phenona was acquired by ActiveState, so the original post is no longer accessible. Update 10/27: now also available at the ActiveState blog.)
Update 6/2014: Just use Docker. (http://www.docker.com/)
EC2 is already a (para)virtualized environment, which means you can’t run your own virtualization (KVM/VirtualBox/qemu) on top of it. As an alternative, Linux recently gained a kernel feature called cgroups, which provides a way to isolate process groups from each other in the kernel. A project soon formed around this new technology that allows for very thin, fast, and secure quasi-virtualization. It’s called LXC. And it works in EC2 perfectly.
You’ll want a recent Linux AMI (preferably kernel 2.6.35 or higher). I use Ubuntu 11.10, and the following instructions are meant for that OS. I can’t vouch for other distros, but the instructions should be easily portable. Ubuntu is excellent for the LXC + EC2 combination because they already have pre-made AMI images, the kernel supports LXC out of the box, and they have software repositories hosted in the EC2 cloud, which makes for extremely fast system updates. Also, any instance type works; even a t1.micro will suffice (my weapon of choice for testing purposes).
Start by SSH-ing into your EC2 server. You’ll need to run almost all of the following instructions as root, so let’s do:
sudo -i
to become root. Otherwise, you can prepend ‘sudo’ to the beginning of every command from now on (unless specified otherwise).
Now, we need to install a few packages:
apt-get update && apt-get install lxc debootstrap bridge-utils dnsmasq
Run lxc-checkconfig and make sure that the tests pass (all of them should if you’re using the Ubuntu AMI).
Keep in mind that the effects of most of the commands from here on out (specifically iptables, sysctl, mount, brctl and any edits to /etc/resolv.conf) will not persist over a reboot, even on an EBS-backed instance. These are in-memory changes which will go away as soon as you shut down the machine. If you bring the instance back up, you’ll need to run them again; otherwise things will be broken. There are several ways to get around this: iptables rules and /etc/resolv.conf can be set by an init script, sysctl settings can go in sysctl.conf, mounts can be specified in /etc/fstab, and the bridge can be defined in /etc/network/interfaces (add the br0 interface). However, for the purposes of this guide (I don’t use EBS-backed instances, personally), we’ll assume instance storage (config is lost on reboot).
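If you do use an EBS-backed instance, the persistent equivalents could look something like this (a sketch only; the file contents are shown as comments, and the netmask and bridge options are my assumptions rather than part of the original setup):

```shell
# /etc/fstab -- make the cgroup mount permanent:
#   none  /cgroup  cgroup  defaults  0  0

# /etc/sysctl.conf -- make IP forwarding permanent:
#   net.ipv4.ip_forward=1

# /etc/network/interfaces -- bring up br0 at boot (instead of the manual
# brctl/ifconfig commands below):
#   auto br0
#   iface br0 inet static
#       address 192.168.3.1
#       netmask 255.255.255.0
#       bridge_ports none
#       bridge_fd 0
```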
We’ll need to create a place on the system to hold cgroup information (required for LXC to work). I use /cgroup. Let’s mount a cgroup environment there.
mkdir /cgroup
mount -t cgroup none /cgroup
Now, let’s create a network bridge for the containers to be able to connect to the network/Internet. Simply run:
brctl addbr br0
brctl setfd br0 0
ifconfig br0 192.168.3.1 up
Now we need to set up a few system rules for the containers to be able to reach the Internet:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
Let’s set up DHCP/DNS on our new bridge. Open up /etc/dnsmasq.conf for editing (vim/nano/ed/cat, your choice). Uncomment the necessary lines so that the conf file looks like the following:
domain-needed
bogus-priv
interface=br0
listen-address=127.0.0.1
listen-address=192.168.3.1
expand-hosts
domain=containers
dhcp-range=192.168.3.50,192.168.3.200,1h
Now, you’ll need to edit /etc/dhcp/dhclient.conf for DNS to properly resolve locally. Add the following lines to the beginning:
prepend domain-name-servers 127.0.0.1;
prepend domain-search "containers.";
(Don’t forget the dot after containers, that’s not a typo!)
Now we need to renew our DHCP lease so that dhclient will regenerate /etc/resolv.conf.
dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclient.eth0.leases eth0
Now, let’s restart dnsmasq so it’ll re-read the new configuration.
service dnsmasq restart
Next, we need to create the environment inside the container. There’s a script that comes with lxc called lxc-ubuntu, which will set up the container. However, it’ll require a bit of tweaking for the environment to work. I’ve done the tweaking for you, and put the new script up, so simply run: (updated 4/30/11 for Ubuntu Server 11.04)
wget -O lxc-ubuntu http://bit.ly/ec2ubuntulxc
chmod +x lxc-ubuntu
Now, let’s create a new container:
./lxc-ubuntu -p /mnt/vm0 -n vm0
Wait a while for the script to finish, and your container is set up in /mnt/vm0. Let’s try it out!
lxc-start -n vm0
Type in root for the username and root for the password. Try pinging Google:
ping google.com
If it works, your Internet is set up! Now let’s try another thing (make sure you run this from the VM, not from the host!!):
poweroff (this shuts down the VM, and puts you in the host again)
lxc-start -n vm0 -d (this runs the VM in daemon mode)
To check if a VM is running, type:
lxc-info -n vm0
(it should say RUNNING). To test the network, try pinging the VM (this might not work right away; you might have to wait up to a minute):
ping vm0
Then try SSH-ing into it:
ssh root@vm0
If those two work, the VM is now in your DNS and you can address it by its hostname. Cool, huh?
Creating a new VM
Creating another VM is as simple as:
./lxc-ubuntu -n vm1 -p /mnt/vm1
The packages won’t be redownloaded, and the command should complete quickly.
Clone existing VM
If you want to clone your existing VM, you’ll need to do a few things:
cp -r /mnt/vm0 /mnt/vm1
Now edit /mnt/vm1/config and replace all references to vm0 with vm1. Do the same with /mnt/vm1/fstab. Then edit /mnt/vm1/rootfs/etc/hostname and replace the hostname with vm1. Finally, run:
lxc-create -n vm1 -f /mnt/vm1/config
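The manual clone steps above can be wrapped in a small script. This is a sketch; clone_container is a hypothetical helper that assumes the layout this guide creates (config, fstab, and rootfs/etc/hostname under each container directory):

```shell
#!/bin/sh
# Clone an LXC container directory, rewriting the old name to the new one.
clone_container() {
    src="$1"; dst="$2"
    srcname=$(basename "$src"); dstname=$(basename "$dst")
    cp -r "$src" "$dst"
    # Replace every reference to the old name in the config and fstab
    sed -i "s/$srcname/$dstname/g" "$dst/config" "$dst/fstab"
    # Set the new hostname inside the container's root filesystem
    echo "$dstname" > "$dst/rootfs/etc/hostname"
}
```

Usage would be `clone_container /mnt/vm0 /mnt/vm1`, followed by `lxc-create -n vm1 -f /mnt/vm1/config` as before.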
Upon starting the VM, you should be able to ping it/ssh to it:
ping vm1
ssh root@vm1
If not, lxc-console into the VM and check your connection. Keep in mind you only need one br0 for all your containers, but you can create many, if you so desire.
Running services inside the container
At Phenona, we run Perl web servers and the like inside these containers. You may want them to be accessible from outside the VM (from the rest of EC2, or outside EC2). To do this, you’ll need to port forward from the host to the VM. Simply run:
iptables -t nat -A PREROUTING -p tcp --dport <host port> -j DNAT --to-destination <container IP>:<container port>
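For example (illustrative values; 192.168.3.50 stands in for whatever address your VM actually leased from dnsmasq), to expose a web server listening on the VM’s port 80 through the host’s port 8080:

```shell
# Forward TCP connections hitting the host on port 8080 to the container's port 80
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.3.50:80
```

Remember to also open the host port in your EC2 security group if you want it reachable from outside EC2.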
Hibernating a container
To ‘hibernate’ a container (pause the VM’s running processes in place, for instant resuming later; note this does not survive a host reboot), do:
lxc-freeze -n vm0
and to wake it back up:
lxc-unfreeze -n vm0
Installing additional packages into the container
Your container is just like any other Ubuntu system. Therefore, you can install packages from inside it the usual way:
apt-get update
apt-get install <package name>
Setting resource limits
One of the benefits of LXC is that you can limit resource usage per-container. Let’s delve into the various resources you can limit:
There are two ways of limiting CPU in LXC. On a multi-core system, you can assign different CPUs to different containers, as such (add this line to your container config file, /mnt/vm0/config or similar):
lxc.cgroup.cpuset.cpus = 0 (assigns the first CPU to the container)
lxc.cgroup.cpuset.cpus = 0,2,3 (assigns the first, third, and fourth CPU to the container)
The alternative (this one makes more sense to me) is to use the scheduler. You can use relative values to say ‘I want this container to get 3 times the CPU of that container’. For example, add:
lxc.cgroup.cpu.shares = 2048
to the config to give a container double the default (1024).
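To realize the ‘3 times the CPU’ example concretely (a sketch; the container names are just illustrations, shown as config-file comments):

```shell
# In /mnt/vm0/config -- the default weight:
#   lxc.cgroup.cpu.shares = 1024
# In /mnt/vm1/config -- three times vm0's weight:
#   lxc.cgroup.cpu.shares = 3072
```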
To limit RAM, simply set:
lxc.cgroup.memory.limit_in_bytes = 256M
(replacing 256M with however much RAM you want to allow).
To limit swap, set (note that this value is the combined memory + swap limit):
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
There’s no official way to limit disk space; it’s up to you. You can use LVM (in EC2? Good luck.), or you can create a filesystem in a file and mount it over the container’s root to limit space, something like:
dd if=/dev/zero of=somefile.img bs=1M count=4096
mkfs.ext3 -F somefile.img
mount -o loop somefile.img /mnt/vm0/rootfs
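To avoid actually writing 4 GB of zeros with dd, you can create the image as a sparse file instead, by seeking past the end rather than writing (assuming your filesystem supports sparse files, which ext3/ext4 do):

```shell
# Create a sparse 4 GiB image: no data blocks are written, only the size is set
dd if=/dev/zero of=somefile.img bs=1M count=0 seek=4096

# The file reports 4 GiB but occupies almost no disk space yet
ls -lh somefile.img
du -h somefile.img
```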
To limit network bandwidth per container, do some reading on the tc utility. Keep in mind you’ll need to use separate bridges (br0, br1) for each container if you go this route. Don’t forget to edit the config of each VM to match your new bridge if you do so.
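As a starting point (a sketch with illustrative numbers, not a tuned configuration), a simple token-bucket filter capping a container’s bridge at roughly 1 Mbit/s might look like:

```shell
# Cap egress on br1 at about 1 Mbit/s using a token bucket filter
tc qdisc add dev br1 root tbf rate 1mbit burst 32kbit latency 400ms

# Inspect the queueing discipline and its statistics
tc -s qdisc show dev br1

# Remove the limit again
tc qdisc del dev br1 root
```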
NOTE: When following other guides on LXC, be very careful about messing with the network in the EC2 environment (restarting networking services or altering /etc/network/interfaces on the host): one wrong command and the connection between you and your instance will drop (you’ll lose SSH), and you’ll lose access to your instance completely. I did that many, many times while exploring LXC. The instructions I’ve provided here have been tested and will not drop your EC2 connection, but I can’t vouch for other methods.