Information for Linux System Administration 

VPS: Xen vs. OpenVZ


This is a short overview of the key differences between OpenVZ and Xen that you might consider when choosing a VPS. Note that this article is based on my opinions and that you must do your own research to determine which, if either, technology is best for you and your application.

First, some terminology. OpenVZ isn't fully virtualized and could be more properly referred to as a 'container', not a VPS. That shouldn't affect your choice. It's the technical differences that matter.
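If you're unsure which technology a vendor has actually handed you, a quick check from inside the guest usually settles it. This is a rough heuristic, not a guarantee -- the paths below are common but not present on every kernel build:

```shell
# Rough heuristic, run inside the VPS. OpenVZ containers typically expose
# /proc/user_beancounters; Xen guests typically expose /sys/hypervisor/type
# containing "xen". Neither file is guaranteed on every kernel.
if [ -f /proc/user_beancounters ]; then
    virt_type="OpenVZ container"
elif [ -r /sys/hypervisor/type ] && grep -qi xen /sys/hypervisor/type 2>/dev/null; then
    virt_type="Xen guest"
else
    virt_type="unknown (possibly bare metal or another hypervisor)"
fi
echo "$virt_type"
```

Tools like virt-what automate the same idea with many more checks, if your distribution packages it.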

Cheap VPS offers are everywhere lately, it seems. However, upon closer inspection I saw that almost all of the low-priced offers were for OpenVZ. While both Xen and OpenVZ offer their advantages, I chose Xen. So, there's my first bias, right up front. :)

OpenVZ advantages:

  • Efficient (fast)
OpenVZ disadvantages:
  • Shared kernel (no custom kernel)
  • Shared memory with other users
  • Vendor can easily oversell, killing performance
Xen advantages:
  • Dedicated memory
  • Fully virtualized (can run other kernels or even other OS's)
  • Vendor more limited in overselling
Xen disadvantages:
  • Less efficient (more overhead due to a kernel-per-VPS)

You'll notice I left price out of the above comparison. In theory, there should be a small price advantage for OpenVZ. I don't know how big it should be but it pertains to two things: 1) Xen uses more memory due to each VPS having its own kernel, and 2) Xen uses more CPU, also due to the additional software layer required to virtualize the kernel.

In practice, however, the price gap appears larger than the above technical differences suggest it should be. I think the remainder of OpenVZ's price advantage is based on 1) the ability for a vendor to easily oversell OpenVZ, and 2) The price competition that results from some vendors overselling OpenVZ.

OpenVZ doesn't encapsulate its containers into a fixed amount of memory, so it runs processes in the host environment to monitor memory usage and kill processes as a container allocates more than its assigned amount.

As a result of this difference, loading down an OpenVZ container is problematic. To partially offset this disadvantage, most OpenVZ vendors offer 'burst' memory in addition to 'dedicated' memory. That is, the monitor process is set to allow the container to use more than its allocated memory -- for a short period of time. This messy situation results in a potentially unreliable environment as some of your processes may be arbitrarily killed -- at the busiest times.
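The monitor's bookkeeping is visible from inside a container, so you can see whether it has been refusing or killing allocations. A sketch of how you might check (the sample data below is hypothetical so the snippet is self-contained; on a real container you would read /proc/user_beancounters itself):

```shell
# A nonzero 'failcnt' in /proc/user_beancounters means the OpenVZ monitor
# has refused allocations for that resource. Hypothetical sample data:
beancounters='
       uid  resource         held  maxheld  barrier    limit  failcnt
       101: kmemsize      2097152  4194304  8388608  8388608        0
            privvmpages     40000    60000    65536    65536       12
            numproc            45       60      240      240        0
'
# Count resources whose failcnt (last field) is a nonzero number,
# skipping the header line, whose last field is the word "failcnt".
failures=$(printf '%s\n' "$beancounters" |
    awk 'NF>=6 && $NF ~ /^[0-9]+$/ && $NF > 0 {n++} END {print n+0}')
echo "resources with nonzero failcnt: $failures"
```

On a real container, replace the sample string with the file: any failcnt that keeps climbing means the "arbitrarily killed at the busiest times" scenario is already happening to you.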

Xen, on the other hand, allows the use of a swap space and excess memory allocation results in (hopefully) idle segments being rolled out to the swap area. While this is good for the memory-hungry VPS user, it can consume significant I/O capacity when memory is overallocated to the point of busy segments getting swapped out. This is bad for everyone sharing the underlying hardware.
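On a Xen guest you can watch for exactly this symptom. A minimal sketch, assuming a Linux guest with /proc/vmstat available (the counters are cumulative since boot):

```shell
# Lifetime swap activity from /proc/vmstat. Sustained growth in these
# counters under steady load suggests busy pages are being swapped out --
# the I/O-burning situation described above.
swapped_in=$(awk '$1=="pswpin" {print $2}' /proc/vmstat 2>/dev/null)
swapped_out=$(awk '$1=="pswpout" {print $2}' /proc/vmstat 2>/dev/null)
echo "pages swapped in: ${swapped_in:-0}, pages swapped out: ${swapped_out:-0}"
```

Sampling the two numbers a minute apart tells you more than their absolute values: a large but static count is history, while a steadily rising one is thrashing now.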

I see Xen as clearly the superior technology. A Xen VPS feels and behaves more like a dedicated server. Still, I would have purchased OpenVZ given a large enough price difference. After a bit of research, however, I found Xen VPS's at practically the same price as the cheapest OpenVZ containers. That made my decision easy.

With that said, keep in mind that a bad hosting vendor can ruin either technology through various means. Both technologies share the disk drives and I/O paths as well as the processor cores. Hardware can be poorly configured and managed in any case. A reputable vendor is probably the single most important consideration in choosing a virtual server.

Lastly, carefully check the 'allowed use' policy. Make sure your application is allowed on the server you intend to purchase. Note that due to their different characteristics, the allowed use policy may differ between OpenVZ and Xen for the same host. Also, it's good to understand the memory usage characteristics of your applications. If you know how much memory/swap they require on a physical system, it'll probably work with that same amount of memory/swap on Xen.
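One way to get that baseline before you buy is simply to measure the physical system your applications run on now. A sketch using /proc/meminfo (MemAvailable requires kernel 3.14 or later; on older kernels substitute MemFree for a rougher figure):

```shell
# How much memory this system is actually using, from /proc/meminfo.
# Compare the "in use" figure, plus headroom, against a VPS plan's
# guaranteed memory allocation.
total_kb=$(awk '$1=="MemTotal:" {print $2}' /proc/meminfo 2>/dev/null)
avail_kb=$(awk '$1=="MemAvailable:" {print $2}' /proc/meminfo 2>/dev/null)
echo "in use: $(( (${total_kb:-0} - ${avail_kb:-0}) / 1024 )) MB of $(( ${total_kb:-0} / 1024 )) MB total"
```

Take the measurement at your busiest time of day, not at idle -- it's the peak that the OpenVZ monitor or the Xen swap area will have to absorb.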

[I'll post a review shortly of my current VPS vendor and I will then add a link to that article here.]

mail this link | permapage | score:9589 | -Ray, June 13, 2011

Set up Ubuntu PV DomU via xen-image-create at Xen 3.3 Ubuntu Dom0 with Novell kernel 2.6.27


This post could also be titled 'xen-image-create & debootstrap vs. python-vm-builder' with regard to pre-building Xen guests on Ubuntu Intrepid Server. See Bug #311943 for details. Install Intrepid Server Dom0 with Novell's Xen-ified kernel, enabling Dom0 and DomU support at the same time. Tune the xen-tools scripts per [1] and create an Intrepid PV DomU. Images created via xen-create-image can then be upgraded to a real Intrepid Server PV DomU.
That is, in the originally loaded DomU, run:

# apt-get upgrade
# apt-get install linux-image-server

then switch the DomU's kernel to vmlinuz-2.6.27-9-server.
Afterward the images may be scp'ed to any 64-bit Xen 3.3.x Linux Dom0 and the corresponding Intrepid Server PV DomU loaded there, for instance on a Xen 3.3.1-RC4 CentOS 5.2 Dom0 (64-bit). So xen-image-create, rather than the most recent release of python-vm-builder, appears to be the tool of choice for pre-building Xen guests.

mail this link | permapage | score:9452 | -Boris Derzhavets, January 6, 2009

Set up Oneiric PVHVM at Xen 4.1.2 Ubuntu 11.10 Dom0


This post is a sample of utilizing the optimized paravirtualized PVHVM drivers (also called PV-on-HVM drivers) with Xen fully virtualized HVM guests running Ubuntu 3.1 kernels on a Xen 4.1.2 Dom0. The Xen PVHVM drivers completely bypass the QEMU emulation layer and provide much faster disk and network I/O performance. The first thing I had to do was rebuild the recent Ubuntu Precise kernel (Ubuntu-3.1.0-3) with CONFIG_XEN_PLATFORM_PCI=y; as a result, the following Debian packages get created. read more...
permapage | score:9214 | -Boris Derzhavets, November 3, 2011

Set up Oneiric PV DomU at Xen 4.1.2 Ubuntu Oneiric Dom0 (3.1.0-030100-generic)


The procedure is a standard Debian network PV install. Download the configuration file from the following location. Debian, and consequently Ubuntu, still regard libvirt and the virtinst tools -- virt-manager and the command-line utility virt-install -- as the way to manage RH's Xen domains (F15, F16, CentOS 6), or else to be used with the QEMU-KVM hypervisor.

# wget
permapage | score:9124 | -Boris Derzhavets, October 31, 2011

Install HVM Solaris 08/07 DomU (32-bit) at Xen 3.2.1 CentOS 5.1 Dom0 (64-bit)


Solaris 08/07 (10U4), as usual, hangs on Xen 3.2 (64-bit) and Xen 3.1 Linux Dom0s. This Xen build was done with VMXASSIST disabled, as advised for FreeBSD HVM DomUs on Xen 3.2 Dom0s. Clone Xen 3.2.1 from Mercurial on a Xen-disabled CentOS 5.1 64-bit instance as follows:

# cd /usr/src/
# hg clone
# cd xen-3.2-testing.hg
# make world vmxassist=n
# make install
permapage | score:9016 | -Boris Derzhavets, April 29, 2008

Tutorial: Xen on CentOS 6.2


This tutorial provides step-by-step instructions on how to install Xen (version 4.1.2) on a CentOS 6.2 (x86_64) system. Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other, but still use the same hardware. read more...
permapage | score:8987 | -falko, January 31, 2012

Tutorial: Xen on CentOS 6.3 (x86_64)


This tutorial provides step-by-step instructions on how to install Xen (version 4.1.x) on a CentOS 6.3 (x86_64) system. Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other. read more...
permapage | score:8928 | -falko, August 29, 2012

Tutorial: Xen, CentOS 5.3 Paravirtualization


This tutorial provides step-by-step instructions on how to install Xen (version 3.0.3) on a CentOS 5.3 (x86_64) system. Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and, even more importantly, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one. read more...
mail this link | permapage | score:8921 | -falko, May 18, 2009

FireFox 3.0.1 and Java Web Start on Xen 3.3 and CentOS 5.2 Dom0


This posting follows up the recent one, "Vncserver on SNV97 DomU at Xen 3.3 CentOS 5.2 Dom0 (64-bit)", and targets enabling Sun's Java Web Start in the Firefox 3.0.1 web browser with the 32-bit JRE 6.0 plugin installed on 64-bit Linux.

First of all, install the 32-bit Sun JRE 6.0 for Linux by running

Then install Firefox 3.0.1 into a folder, /usr/tmp/firefox for instance. Create the following symlink to make Firefox aware of the JRE. read more...
permapage | score:8873 | -Boris Derzhavets, September 13, 2008

Tutorial: Upgrade Debian 5 to 6 on Xen VPS


This tutorial shows how to upgrade a Debian Lenny (Debian 5.0) installation on a Xen-based Virtual Private Server (VPS) to Squeeze (Debian 6.0), including the kernel update, dependency-based boot sequencing, and conversion to UUIDs. If you do it the usual Debian way, just with apt-get dist-upgrade, you will most likely end up with an unbootable system. This is mainly because the update of GRUB fails. read more...
permapage | score:8824 | -falko, March 9, 2011

Pygrub and loading Ubuntu 8.10 PV DomU via serial console at Xen 3.3 CentOS 5.2 Dom0


To load an Ubuntu Intrepid Server PV DomU via serial console, the files vmlinuz-2.6.27-7-server and initrd.img-2.6.27-7-server usually get copied to the Xen 3.3 Dom0, and the parameters root="/dev/xvda1 ro" and extra="2 hvc0" are included in the startup profile. Alternatively, "root" and "extra" may be specified via a new entry in /boot/grub/menu.lst located in the DomU.
The file /etc/event.d/tty1 should also be modified to work with the Xen console instead of the vfb: its exec line has to reference hvc0 instead of tty1. With all of the above changes made in the DomU, it can be loaded via pygrub and the serial console, avoiding the virtual frame buffer. read more...
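For reference, a sketch of the kind of Dom0 startup profile the summary describes. The kernel, initrd, root, and extra values are the ones mentioned above; the domain name, memory size, and disk device are hypothetical placeholders:

```
# /etc/xen/intrepid.cfg -- illustrative DomU profile for serial-console boot
kernel  = "/boot/vmlinuz-2.6.27-7-server"
ramdisk = "/boot/initrd.img-2.6.27-7-server"
name    = "intrepid"                    # hypothetical domain name
memory  = 512                           # hypothetical size, in MB
disk    = [ "phy:/dev/vg0/intrepid,xvda,w" ]   # hypothetical backing device
root    = "/dev/xvda1 ro"
extra   = "2 hvc0"                      # runlevel 2, console on hvc0
```

With the pygrub approach instead, the kernel/ramdisk lines are replaced by a bootloader entry and the kernel choice moves into the DomU's own /boot/grub/menu.lst.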
mail this link | permapage | score:8821 | -Boris Derzhavets, November 1, 2008

Convert Debian systems and Xen VMs into OpenVZ containers


This guide explains how you can convert physical systems (running Debian Etch) or Xen domUs (also running Debian Etch) into an OpenVZ container. This procedure should also work for converting VMware VMs, VirtualBox VMs, or KVM VMs into OpenVZ containers, but I haven't tried this. It should work for other Linux distributions as well, with minor modifications (for example, the network configuration is not located in /etc/network/interfaces if you're not on Debian/Ubuntu). read more...
permapage | score:8750 | -falko, January 16, 2009

Virt-install Debian Squeeze PV DomU at Xen 4.1.2 Oneiric Dom0


ISO images of RH's open-source distros (Fedora, CentOS) have a nice feature: loop-mounted on an Apache folder such as /var/www/domain, they provide a local mirror from which to virt-install the corresponding paravirtual guest. Uncustomized Debian ISOs don't have this feature; however, virt-install still works for Debian Squeeze using a remote HTTP source. Virt-installing a Debian PV DomU is possible via the remote official HTTP source. Begin with the virt-install command line. read more...
permapage | score:8695 | -Boris Derzhavets, November 21, 2011

Tutorial: Two-server, load-balanced, high-availability Xen/Ubuntu cluster


In this howto we will build a load-balanced, high-availability web cluster on two real Ubuntu 8.04 servers with Xen, heartbeat and ldirectord. The cluster will serve HTTP, mail, DNS, and a MySQL database, and will be completely monitored. This is currently used on a production server with a couple of websites. The goal of this tutorial is to achieve load balancing and high availability with as few real servers as possible and, of course, with open-source software. More servers mean more hardware and hosting cost. read more...
mail this link | permapage | score:8611 | -falko, October 13, 2008

Xen live migration of LVM virtual machines with iSCSI on Debian


This guide explains how you can do a live migration of an LVM-based virtual machine (domU) from one Xen host to the other. I will use iSCSI to provide shared storage for the virtual machines in this tutorial. Both Xen hosts and the iSCSI target are running on Debian Lenny in this article. read more...
permapage | score:8539 | -falko, May 1, 2009

Install attempt: Xen-Unstable Dom0 via 2.6.29-rc3 pv_ops kernel on Intel SATA(AHCI)


It seems 2.6.29 will be the first vanilla kernel supporting “pv_ops” in Dom0. The base platform for the test was Ubuntu Intrepid Server (64-bit) with the Ubuntu desktop installed via tasksel. The packages required by Xen were installed: openssl, x11, gettext, python-devel.

# cd /usr/src
# hg clone
# cd xen-unstable.hg
# make xen
# make install-xen
# make tools
# make install-tools
permapage | score:8534 | -Boris Derzhavets, February 7, 2009

Set up OSOL PV Guests via virsh on Xen 3.4.3 Dom0 on Ubuntu 9.10


This posting actually responds to a recent entry in Martin’s blog, "OpenSolaris 2009.06 domU on opensuse 11.2 dom0".
Martin states:

Then I tried out a number of current linux distributions, but except for openSuSE none had a dom0 kernel out of the box which really is a shame. Seems I need to look more closely into KVM with virtio support.

The article below tries to show that, thanks to the efforts of Jan Beulich and Andy Lyon, the Xenified (aka SUSE) kernel may be built on any Linux and, along with a port of the most recent stable Xen hypervisor (3.4.2 and higher), provides a Xen environment supporting OpenSolaris PV guests, including the most recent unstable builds such as 129, 130, and 131 (I mean the vncserver behavior on an OSOL PV DomU and the GDM/VNC setup). I also chose Ubuntu Karmic Koala Server with libvirt 0.7.0 capabilities -- virsh capabilities, really -- connected to the Xen 3.4.3 hypervisor, to once more reproduce John Levon's nice scheme for installing an OSOL PV guest on a Linux Dom0. read more...
mail this link | permapage | score:8431 | -Boris Derzhavets, February 10, 2010

Xen Tutorial: Create CentOS 5.2 Domu on Ubuntu Dom0


This tutorial provides step-by-step instructions on how to install prebuilt Xen images on an Ubuntu Hardy Heron (Ubuntu 8.04) server system (i386) -- Linux distributions that can run as Xen guests out of the box, obviating the need to create your own custom filesystems. The filesystems have already been tweaked to deal with Xen's idiosyncrasies, and are also designed to be lightweight and minimally divergent from the original distribution. read more...
permapage | score:8377 | -falko, October 8, 2008 (Updated: October 21, 2008)

Virt-install Fedora 16 PV guest at Xen 4.1.2 Ubuntu 11.10 Dom0


Install the Xen hypervisor from the PPA build of Xen 4.1.2 with pygrub GPT support for Ubuntu Oneiric, not the regular one. The pygrub GPT support patches, published on xen-devel, have been backported to Xen 4.1.2 for the build in the PPA mentioned above. Next, loop-mount Fedora-16-TC1-x86_64-DVD.iso on /var/www/f16 and run virt-install in VNC mode:

virt-install --connect xen:/// --debug -n VF16 \
    --vnc -p -r 2048 --vcpus=2 \
    -f /dev/sda7 -l

Series of screen-shots below...
mail this link | permapage | score:8350 | -Boris Derzhavets, October 24, 2011

Installation guide for DRBD, OpenAIS, Pacemaker + Xen


The following will install and configure DRBD, OpenAIS, Pacemaker and Xen on OpenSUSE 11.1 to provide highly-available virtual machines. This setup does not utilize Xen's live migration capabilities. Instead, VMs will be started on the secondary node as soon as failure of the primary is detected. Xen virtual disk images are replicated between nodes using DRBD, and all services on the cluster are managed by OpenAIS and Pacemaker. This setup utilizes DRBD 8.3.2 and Pacemaker 1.0.4. It is important to note that DRBD 8.3.2 has come a long way since previous versions in terms of compatibility with Pacemaker; in particular, it adds a new DRBD OCF resource agent script and new DRBD-level resource fencing features. This configuration will not work with older releases of DRBD. read more...
mail this link | permapage | score:8336 | -falko, August 19, 2009
More articles...

Selected articles

The Supreme Court is wrong on Copyright Case

A simple directory shadowing script for Linux

Graffiti Server Download Page

Microsoft to push unlicensed users to Linux

The short life and hard times of a Linux virus

Testing the Digital Ocean $5 Cloud Servers with an MMORPG

How to install Ubuntu Linux on the decTOP SFF computer

Why software sucks

The Network Computer: An opportunity for Linux

No, RMS, Linux is not GNU/Linux

Download: Linux 3D Client for Starship Traders

The life cycle of a programmer

Apple DIY Repair

Why Programmers are not Software Engineers

Scripting: A parallel Linux backup script

Linux dominates Windows

Mono-culture and the .NETwork effect

The Real Microsoft Monopoly

Space Tyrant: A threaded game server project in C

Apple to Intel move no threat to Linux

Hacker Haiku

MiniLesson: An introduction to Linux in ten commands

Missing the point of the Mac Mini

Programming Language Tradeoffs: 3GL vs 4GL

Closed Source Linux Distribution Launched

Beneficial Computer Viruses

Space Tyrant: A threaded C game project: First Code

VPS: Xen vs. OpenVZ

Linux vs. Windows: Why Linux will win

Space Tyrant: Multithreading lessons learned on SMP hardware

Space Tyrant: A multiplayer network game for Linux

Tutorial: Introduction to Linux files



Articles are owned by their authors.   © 2000-2012 Ray Yeargin