In the recent crowd of interesting developments in the world of server virtualization, Amazon.com's Elastic Compute Cloud, or EC2, may be the most exciting. The service, which is currently in a limited beta period, lets companies or individuals rent compute time on Amazon.com's data centers for running Xen virtual machines.
The service, combined with the burgeoning numbers of template operating system images and software appliances, makes it very easy for a company to spawn one or more servers for testing or production purposes in just a few minutes and then to discard them back into the cloud just as easily.
We see a lot of potential for the compute cloud approach. The idea of compute power as a utility is not new, but, by enabling users to consume these resources in the form of arbitrary operating system instances, EC2 offers an extremely comfortable scalability arc.
eWEEK Labs was able to begin getting things done with EC2 within minutes, and, with careful crafting, a company could deploy AMIs (Amazon Machine Images) in clusters to respond dynamically to peaks in traffic or other usage. (AMIs differ little from any other Xen-targeted image.) A company could also deploy Linux-VServer or OpenVZ, each of which allows for creating separate operating system containers within a host Linux machine, and parlay a few EC2 instances into many more virtual servers.
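To make the dynamic-response idea concrete, here is a minimal sketch using boto, a third-party Python library for Amazon's Web services; the load check and AMI ID are hypothetical stand-ins for a site's real monitoring and image.

```python
# Hypothetical sketch: launch extra copies of an AMI when load spikes.
# current_load() and the AMI ID are placeholders, not part of EC2's tools.
from boto.ec2.connection import EC2Connection

def current_load():
    return 0.9  # placeholder: wire this to real monitoring

conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')
if current_load() > 0.8:
    # min_count/max_count let EC2 decide how many copies actually start
    conn.run_instances('ami-12345678', min_count=1, max_count=4)
```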
What's more, the EC2 approach opens the door to geographically oriented VM deployments, in which companies would not only spawn new instances to shoulder increased load but spawn those instances at data centers located close to their intended audiences, thereby minimizing network latency. The virtualization management console maker Enomaly is pursuing this strategy with its GeoElastic initiative.
The EC2 service costs 10 cents an hour, plus 20 cents per gigabyte of data transferred over the Internet. The VMs you run are the equivalent of servers with 1.7GHz x86 processors, 1.75GB of RAM, 160GB of local disk space and 250Mbps of network bandwidth. For now, EC2 works only with 32-bit Linux operating systems, although, eventually, any x86 or x86_64 operating system could run on EC2.
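At those rates, the costs are easy to pencil out. A quick back-of-envelope calculation for one always-on instance, with a made-up transfer figure for illustration:

```python
# Back-of-envelope monthly cost at the published rates; the 50GB
# transfer figure is a hypothetical example, not a measured number.
hours = 24 * 30                       # roughly one month, always on
instance_cost = hours * 0.10          # $0.10 per instance-hour = $72.00
transfer_cost = 50 * 0.20             # $0.20 per GB over the Internet = $10.00
print("monthly total: $%.2f" % (instance_cost + transfer_cost))  # $82.00
```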
Simple Storage Service
Users may create their own images to run on EC2, package them up and upload them to Amazon.com's servers using the command-line EC2 tools. For images you host yourself, you'll incur costs for Amazon.com's Simple Storage Service, or S3, to the tune of 15 cents per gigabyte per month of storage used and 20 cents per gigabyte of data transferred. Amazon's S3 has been in production for more than a year now, and we've noticed that some of the download locations we often visit are served up by S3.
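Amazon's bundling tools handle the actual image upload, but any S3 object can be pushed the same way; here is a minimal sketch with the boto Python library, with hypothetical bucket and file names:

```python
# Hypothetical sketch of pushing a file to S3 with boto; Amazon's
# ec2-upload-bundle tool performs this step for bundled image parts.
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')
bucket = conn.create_bucket('my-ami-bucket')   # hypothetical bucket name
k = Key(bucket)
k.key = 'image.manifest.xml'                   # hypothetical object name
k.set_contents_from_filename('image.manifest.xml')
```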
One of the elements of Amazon's S3 that impresses us is its support for offering access to stored files over the BitTorrent protocol, as well as over HTTP. For popular files, BitTorrent can really speed up download times while shifting some of the load, and the bandwidth bill, away from the host.
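Getting at the torrent is as simple as appending "?torrent" to a publicly readable object's URL. A quick sketch, with a hypothetical bucket and file:

```python
# S3 returns a .torrent for a public object when "?torrent" is appended
# to its URL; the bucket and object names here are hypothetical.
import urllib

url = 'http://s3.amazonaws.com/my-bucket/big-download.iso'
urllib.urlretrieve(url + '?torrent', 'big-download.iso.torrent')
```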
Among the S3-consuming sites we often visit is rPath's rBuilder Web site, which offers the public tools for assembling, packaging and hosting Linux software appliance images. Among the formats in which rBuilder can output its images is EC2's AMI format. For AMIs created with rPath's tools, you need only note the AMI identification number of the image you desire and issue a command to spawn an instance from that image.
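That spawn step can also be scripted. Here's a minimal boto sketch, equivalent to the ec2-run-instances command-line tool; the AMI ID and keypair name are placeholders:

```python
# Hypothetical sketch: spawn one instance from a noted AMI ID.
from boto.ec2.connection import EC2Connection

conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')
reservation = conn.run_instances('ami-12345678', key_name='my-keypair')
instance = reservation.instances[0]
instance.update()                    # refresh state from the service
print(instance.state)                # 'pending', then 'running'
print(instance.public_dns_name)      # populated once the instance is up
```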
Apart from the AMIs of rPath vintage, there are other template images available in S3. You can control access to AMIs you've uploaded to S3, and other users have uploaded base Debian and Fedora images, among others, which would make good foundations on which to build.
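Access control on an AMI comes down to its launch permissions. A hedged boto sketch, with a placeholder AMI ID, of what making an uploaded image publicly launchable might look like:

```python
# Hypothetical sketch: grant launch permission on an uploaded AMI.
from boto.ec2.connection import EC2Connection

conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')
conn.modify_image_attribute('ami-12345678',
                            attribute='launchPermission',
                            operation='add',
                            groups=['all'])  # 'all' == any EC2 user may launch
```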
For testing, we stepped through EC2's getting-started tutorial, which led us through the download and installation of a set of EC2 command-line tools. While the documentation that accompanies EC2 is well-written and straightforward enough to follow, initial setup was a bit of a chore. We had to fiddle with the executable path for our system and export a few environment variables, which tends to be fertile ground for roadblocks such as syntax errors. We'd love to see Amazon package up its EC2 tools into a couple of platform-specific installers to make it easier to get up and running. The tutorial also stepped us through the process of generating encryption keys and a certificate with which to communicate securely with the service. We also installed a Firefox plug-in that provided us with a simple GUI for manipulating the service.
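Until such installers appear, a few lines of Python can at least catch the most common setup slips; this check is our own convenience script, not part of Amazon's tools:

```python
# The EC2 command-line tools expect these environment variables; a quick
# sanity check saves chasing cryptic errors later.
import os

for var in ('EC2_HOME', 'EC2_PRIVATE_KEY', 'EC2_CERT'):
    if var not in os.environ:
        print('missing environment variable: ' + var)
```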
We opted to test out a MediaWiki appliance from rPath and issued a command with the AMI identifier of the image we sought. About 2 minutes later, our image was running on Amazon.com's servers. When we started our first EC2 machine, the instance was spawned, by default, with all its ports closed. Following the recommendations in the startup docs, we issued commands to open our default security group to port 22, for SSH (Secure Shell), and port 80, for Apache, after which any other instances we called for would share those security settings by default.
We could also, either from the command line or from the handy Firefox extension, create other security groups to which we could assign our instances. We found this to be a simple, effective way to control firewall rules for our instances.
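For those who would rather script these firewall steps, the same calls are exposed through boto; a sketch that opens the default group as we did and defines a hypothetical second group:

```python
# Hypothetical sketch of the security-group steps we performed.
from boto.ec2.connection import EC2Connection

conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')
for port in (22, 80):                # SSH and Apache, as in the docs
    conn.authorize_security_group(group_name='default',
                                  ip_protocol='tcp',
                                  from_port=port, to_port=port,
                                  cidr_ip='0.0.0.0/0')

web = conn.create_security_group('web', 'front-end instances')  # hypothetical
web.authorize(ip_protocol='tcp', from_port=80, to_port=80,
              cidr_ip='0.0.0.0/0')
```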
The MediaWiki appliance we were using comes with a nifty Web-based management interface, so we opened a port for it as well and ran through basic installation steps before configuring MediaWiki for use.
The appliance we were using also recently added a beefed-up backup feature, which saves the MediaWiki installation's uploaded files, configuration and database to a networked location and offers a bunch of scheduling options. It's a good thing, too, because EC2 instances are fundamentally ephemeral: they will reboot normally, but once they're shut down or otherwise terminated, they're gone.
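The principle generalizes beyond this appliance: anything worth keeping has to leave the instance before termination. A crude illustration of our own, not rPath's feature, with hypothetical names throughout:

```python
# Hypothetical sketch: dump a database and ship it to S3 before an
# instance goes away. All names are placeholders, not the appliance's own.
import os
from boto.s3.connection import S3Connection
from boto.s3.key import Key

os.system('mysqldump wikidb > /tmp/wikidb.sql')   # hypothetical database
bucket = S3Connection('ACCESS_KEY', 'SECRET_KEY').get_bucket('my-backups')
k = Key(bucket)
k.key = 'backups/wikidb.sql'
k.set_contents_from_filename('/tmp/wikidb.sql')
```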
EC2 and S3 don't directly support accessing S3 objects as persistent storage for instances running on EC2, but there's a product, Openfount's S3DFS, that bills itself as a distributed file system for EC2, backed by S3. We haven't tested S3DFS, but its approach looks solid, and we plan to investigate it further.
Advanced Technologies Analyst Jason Brooks can be reached at jason_brooks@ziffdavis.com.
Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.