Testing the Digital Ocean $5 Cloud Servers with an MMORPG
I've been working on a space-based MMORPG for a while now, and I finally reached a point where the server was largely complete and ready for some load testing. I had done quite a bit of testing on my quad-core Xeon development system with great results. Renting a dedicated box like that one, with a decent Internet connection, would not be cheap, however.
I was curious about Digital Ocean's offerings, so I decided to take a couple of their smallest virtual machines for a test run. These $5/month cloud servers have 512 MB of memory, a single virtual CPU, and 20 GB of SSD disk storage. I created the first one, Maelstrom, in one of their New York data centers and the second one, Paradise, in their San Francisco facility. As claimed, each server took just under 60 seconds to deploy.
I made a few customizations to the bare-bones CentOS 6.5 64-bit image that I had selected for both machines. I then ran my personal benchmark package on both. On my 256 MB benchmark that emulates a relational database instance, I got about 25% of the performance of one of my Ivy Bridge Xeon cores. This benchmark stresses memory bandwidth, exercises the full range of caches, and loads the CPU with integer-only operations.
I consider a fourth of an Ivy Bridge core an excellent result for a $5 (US) server! I repeated the tests over the following days and got consistent results, varying within a 10 percent range. One thing of note: the SF server consistently performed about 10% faster than the NY server.
Next, I loaded up a million-sector instance of my MMORPG server engine on the NY server and a bundle of testing scripts on the SF machine. The test suite, which emulates 712 concurrent players, requires 1,424 processes to do so. In retrospect, it wasn't too surprising that I crashed the SF machine, as the testing scripts created an urgent demand for far more than 512 MB of memory! My mistake. I suspected, though, that I wouldn't be able to fully stress the game engine with far fewer emulated players.
But, always the optimist, I set up a gigabyte of swap space on Paradise and tried again. This time the tests ran just fine -- and the SSD-based swap space performed beautifully.
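For anyone repeating this, setting up a swap file on a Linux VPS takes only a few commands. A minimal sketch, assuming a 1 GB swap file; the path /tmp/swapfile and the size are just examples, and the final activation steps require root:

```shell
# Sketch: create a 1 GB swap file. Path and size are examples;
# the swapon activation steps require root.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=1024  # allocate 1 GB of zeros
chmod 600 /tmp/swapfile                            # swap must not be world-readable
mkswap /tmp/swapfile                               # write the swap signature
# As root, then:
#   swapon /tmp/swapfile
#   swapon -s   # confirm the new swap area is active
```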
I got an average of 2,500 game transactions per second over the duration of the test -- which should be more than enough for my target of 1,000 concurrent players per realm. The SF machine used half of its gigabyte(!) of swap space while running the test scripts and hummed along happily at about 75% busy.
The NY server was fully stressed at about 98% CPU utilization and remained very responsive for the duration of the test.
I'm still running both systems three weeks later and have experienced no reboots or outages so far.
The New York server continues to run my text-based space MMORPG server back-end for game-play testing as the graphical front-end client is developed.
-Ray, May 13, 2014
VPS: Xen vs. OpenVZ
This is a short overview of the key differences between OpenVZ and Xen that you might consider when choosing a VPS. Note that this article is based on my opinions and that you must do your own research to determine which, if either, technology is best for you and your application.
First, some terminology. OpenVZ isn't fully virtualized and could be more properly referred to as a 'container', not a VPS. That shouldn't affect your choice. It's the technical differences that matter.
Cheap VPS offers are everywhere lately, it seems. However, upon closer inspection I saw that almost all of the low-priced offers were for OpenVZ. While both Xen and OpenVZ offer their advantages, I chose Xen. So, there's my first bias, right up front. :)
OpenVZ advantages:
- More efficient (a shared kernel means little per-container overhead)
OpenVZ disadvantages:
- Shared kernel (no custom kernel)
- Shared memory with other users
- Vendor can easily oversell, killing performance
Xen advantages:
- Dedicated memory
- Fully virtualized (can run other kernels or even other OSes)
- Vendor more limited in overselling
Xen disadvantages:
- Less efficient (more overhead due to a kernel-per-VPS)
You'll notice I left price out of the above comparison. In theory, there should be a small price advantage for OpenVZ. I don't know how big it should be, but it stems from two things: 1) Xen uses more memory, due to each VPS having its own kernel, and 2) Xen uses more CPU, also due to the additional software layer required to virtualize the kernel.
In practice, however, the price gap appears larger than the above technical differences suggest it should be. I think the remainder of OpenVZ's price advantage is based on 1) the ability for a vendor to easily oversell OpenVZ, and 2) the price competition that results from some vendors overselling OpenVZ.
OpenVZ doesn't encapsulate its containers into a fixed amount of memory, so it runs processes in the host environment to monitor memory usage and kill processes as a container allocates more than its assigned amount.
As a result of this difference, loading down an OpenVZ container is problematic. To partially offset this disadvantage, most OpenVZ vendors offer 'burst' memory in addition to 'dedicated' memory. That is, the monitor process is set to allow the container to use more than its allocated memory -- for a short period of time. This messy situation results in a potentially unreliable environment as some of your processes may be arbitrarily killed -- at the busiest times.
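You can watch this accounting from inside a container: OpenVZ exposes per-container limits in /proc/user_beancounters, where the last column (failcnt) counts allocations denied at a limit. A quick check, sketched to degrade gracefully on non-OpenVZ systems:

```shell
# On OpenVZ, the failcnt column of /proc/user_beancounters counts
# allocations denied at the container's limits; nonzero failcnt on
# privvmpages usually means processes are being refused memory.
if [ -r /proc/user_beancounters ]; then
    awk '$NF > 0' /proc/user_beancounters    # show rows with any failures
else
    echo "not an OpenVZ container"
fi
```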
Xen, on the other hand, allows the use of a swap space and excess memory allocation results in (hopefully) idle segments being rolled out to the swap area. While this is good for the memory-hungry VPS user, it can consume significant I/O capacity when memory is overallocated to the point of busy segments getting swapped out. This is bad for everyone sharing the underlying hardware.
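If you're on a Xen VPS and want to watch for that kind of swap pressure yourself, the kernel's own counters are enough; nothing Xen-specific is assumed here:

```shell
# Compare configured vs. free swap; a steadily shrinking SwapFree
# under load means idle (or worse, busy) pages are being rolled out.
grep -E '^(SwapTotal|SwapFree)' /proc/meminfo
```

vmstat's si and so columns give a live pages-swapped-in/out rate if you need to see the churn as it happens.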
I see Xen as clearly the superior technology. A Xen VPS feels and behaves more like a dedicated server. Still, at a large enough price difference I would have purchased OpenVZ anyway. After a bit of research, however, I located Xen VPSes at practically the same price as the cheapest OpenVZ containers. That made my decision easy.
With that said, keep in mind that a bad hosting vendor can ruin either technology through various means. Both technologies share the disk drives and I/O paths as well as the processor cores. Hardware can be poorly configured and managed in any case. A reputable vendor is probably the single most important consideration in choosing a virtual server.
Lastly, carefully check the 'allowed use' policy. Make sure your application is allowed on the server you intend to purchase. Note that due to their different characteristics, the allowed use policy may differ between OpenVZ and Xen for the same host. Also, it's good to understand the memory usage characteristics of your applications. If you know how much memory/swap they require on a physical system, it'll probably work with that same amount of memory/swap on Xen.
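One rough way to measure that on an existing Linux system, using only /proc so it works even without procps installed (VmRSS is resident memory; note that shared pages are counted in every process, so summing across processes gives an upper bound):

```shell
# Resident memory (RSS) of the current shell, straight from /proc:
awk '/^VmRSS/ {print "shell RSS:", $2, $3}' /proc/$$/status
# And what the machine provides in RAM and swap:
awk '/^(MemTotal|SwapTotal)/' /proc/meminfo
```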
[I'll post a review shortly of my current VPS vendor and I will then add a link to that article here.]
-Ray, June 13, 2011
Apple DIY Repair
I won't be buying any more Apple products. Here's why:
I'm generally capable of repairing my own equipment and can recognize when self-repair has been deliberately undermined. I recently had to replace a hard drive in an early generation white Intel iMac. Innocently, I believed the interior was accessible and serviceable in the manner of the externally identical white PowerPC iMacs.
No such luck. Not only do you have to remove the LCD to get to the hard drive, but you must also remove shielding around the LCD -- mostly by tearing it to bits. No doubt it is attached this way so that an authorized Apple technician will be able to confidently void your warranty if you've ever worked on the system yourself.
You'll also need a #10 Torx magnetic screwdriver. And, no, #10 Torx bits just won't do, due to the narrow and deeply recessed screw holes. Also, since most Torx screwdrivers aren't magnetic, you'll probably need to tape the screws to the screwdriver to reattach the LCD. Good thing there's a hardware store near you.
Oh, and don't forget to pick up some rubber cement to 'properly' reattach the hard drive temperature sensor while you're out looking for magnetic Torx screwdrivers.
Considering the logical design of its predecessor and the tamper-evident shielding, I'm certain that this machine has been deliberately designed to prevent the owner from performing DIY upgrades and repairs.
While that is all quite annoying, at least working on the system is possible for someone with experience and determination.
Now, Apple has improved their anti-customer techniques with the 'Pentalobe' screw. It doesn't solve any problem but one: it'll keep customers from even being able to open the case.
If you're curious about Apple's evil new invention, you can read its rap sheet and view its mug shot here.
-Ray, January 25, 2011
How to install Ubuntu Linux on the decTOP SFF computer
I recently bought a decTOP small form factor (SFF) computer. My goal was to build a cheap, fanless, quiet, and low power consumption Linux server. For $99 plus the cheapest available shipping, $40, my machine arrived 11 days after I placed the order.
This is a tiny computer, about the size of a Mac Mini. But, because it has no fan, it runs a bit quieter and, with the help of a 1-watt, 366 MHz CPU, consumes only 8 watts. For comparison, the G4 Mac Mini consumes about 20-30 watts, depending on load.
The decTOP comes with 128 MB of RAM in its sole SO-DIMM slot and a 10 GB 3.5 inch hard drive. I understand that it's a simple matter to replace the drive and to upgrade the memory to a maximum of 512MB.
It also comes with no operating system and the ability to boot only from a USB drive. This article details the steps I used to build the USB boot/installation drive and install Ubuntu 6.06 on the decTOP.
There is another article -- with additional decTOP links -- here on installing Ubuntu 6.06 on the decTOP with the aid of a Windows system. Fortunately ;), I run Mac OS X and Linux (Ubuntu 7.04), so that article didn't work for me. I did the installation of the Ubuntu 6.06 LTS Server Edition using my Ubuntu Linux box and a 1 GB USB flash drive -- although a 512 MB USB drive should work as well.
- Download the Ubuntu 6.06 server ISO image from the Ubuntu download page. Depending on your plans for the decTOP, you might want to choose the desktop version. Unless you have already upgraded your decTOP's memory, however, you'll want to stick with the 6.06 releases.
- Install the mbr, mtools, and syslinux packages on the Linux system you'll be using to prepare the USB drive. If you run Ubuntu or some other Debian-derived system, the following command may do the work for you.
apt-get install mbr mtools syslinux
- Partition the USB drive with a single FAT-16 partition. I used the fdisk 'n' command to make the new primary partition 1; the 't' command changes the partition type to FAT-16 (type 6). My device name was /dev/sda -- verify yours with 'fdisk -l' before proceeding.
- Make the FAT-16 partition the active partition. I used the fdisk 'a' command.
- Install a master boot record on the USB drive, using the install-mbr tool from the mbr package.
install-mbr /dev/sda
- Install syslinux on the USB drive. Note that the USB drive should not be mounted when you do this.
syslinux -s /dev/sda1
- Create a mountpoint and mount the Ubuntu ISO image using the loopback device.
mkdir /iso
mount -o loop -t iso9660 ubuntu.iso /iso
- Create a mountpoint and mount the USB flash drive.
mkdir /usb
mount /dev/sda1 /usb
- Copy the contents of the ISO image to the USB drive. This will take some time.
cp -r /iso/. /usb/
- Copy the /usb/dists/dapper directory into a new /usb/dists/stable directory.
cp -r /usb/dists/dapper /usb/dists/stable
- Copy several files from /usb/install to the /usb root directory.
cp /usb/install/vmlinuz /usb/
cp /usb/install/mt86plus /usb/
cp /usb/install/initrd.gz /usb/
- Install the following text into a file named syslinux.cfg in the /usb root directory. (The 'default' line names the kernel copied in the previous step.)
default vmlinuz
append initrd=initrd.gz ramdisk_size=24000 root=/dev/ram rw
- Flush all writes, unmount, and remove the USB drive. After the sync step, wait for all of the data to be written to the USB drive.
sync
umount /iso /usb
- Connect the ethernet adapter to the decTOP and connect it to your network to allow automatic configuration of the network interface.
- Insert the USB drive into the decTOP and power it up. The decTOP should automatically boot from the USB drive and start the Ubuntu installation.
- Answer only the first two questions concerning language selection and go to the next step, below.
- Press Alt-F2 (hold down the Alt key and press the F2 function key) to open a shell. Then press enter to start the shell.
- Create a /cdrom and a /dev/cdroms directory in the installation ramdisk
mkdir /cdrom /dev/cdroms
- Go to the /dev/cdroms directory and build a symlink named cdrom0 pointing to /dev/sda1 (likely the device name of your USB boot partition).
cd /dev/cdroms
ln -s ../sda1 cdrom0
- While still in the shell, mount the USB drive to mimic an installation CD-ROM.
mount -t vfat /dev/cdroms/cdrom0 /cdrom
- Return to the installation program with Alt-F1 and continue the installation.
From this point, the process should be identical to a routine CD-ROM installation.
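The drive-preparation steps above can be condensed into a single sketch. This is an illustration under the article's assumptions (device /dev/sda, ISO named ubuntu.iso, running as root), not a drop-in script -- a wrong device name here will destroy data, so check 'fdisk -l' first:

```shell
#!/bin/sh
# Sketch of the USB-preparation steps above as one function.
# DEV and ISO are assumptions from the article; verify DEV before use.
DEV=/dev/sda
ISO=ubuntu.iso

prepare_usb() {
    install-mbr "$DEV"                     # master boot record
    syslinux -s "${DEV}1"                  # syslinux bootloader (partition unmounted)
    mkdir -p /iso /usb
    mount -o loop -t iso9660 "$ISO" /iso   # mount the installer ISO
    mount "${DEV}1" /usb                   # mount the flash drive
    cp -r /iso/. /usb/                     # copy the installer files
    cp -r /usb/dists/dapper /usb/dists/stable
    cp /usb/install/vmlinuz /usb/install/mt86plus /usb/install/initrd.gz /usb/
    sync                                   # flush writes before pulling the drive
    umount /iso /usb
}

# Invoke only after partitioning the drive and writing syslinux.cfg
# as described above:
# prepare_usb
```

If you do script it, keep the fdisk partitioning step interactive and manual; it's the one place where a typo wipes the wrong disk.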
For a grand total of $139 and 8 watts of power consumption, I now have a near-silent Linux server running 24/7. You can telnet to it here and marvel at its blinding speed running a 250,000-sector Space Tyrant game.
-Ray, August 16, 2007 (Updated: April 26, 2011)