Web interface roadmap

March 14th, 2007

ClonePanel has up to now been purely command-driven through SSH. This is ok for me – after spending time developing it – but hard for anyone coming to it fresh, particularly if they’re also new to the Linux shell prompt. So I’m very grateful to those who’ve struggled to give it a try(!) but I’m also aware that to gain wider acceptance there should be an easier way to drive it.

So, a web interface then, but how best to implement it for optimum performance and security? These are my thoughts:

I don’t want to lose the command-line functionality and I don’t want to maintain two sets of scripts, so rather than rewriting what’s there now I see the web interface as just that – an interface between the user’s browser and the existing shell scripts.

In terms of performance, the web interface should run on the same machine as the clonepanel scripts themselves so it can execute them directly. But considering security I’ve always believed that the machine running ClonePanel should NOT run other services (eg. a publicly-available webserver) because if this were exploited then the ClonePanel system and all remote hosts it controls could be at risk. Tricky…

One could also argue that even if the web interface ran on a different server, logging in to the ClonePanel system to execute scripts, this would be an almost equal security risk since any credentials (eg. SSH key) allowing the web interface to pass instructions to ClonePanel would also give access to an attacker.

I think the way to resolve this conflict may be to consider the needs of different user-groups and provide different solutions to each.

Admin (full access) – Run the webserver and interface on the ClonePanel server, but block public access. For a home / office-based server on a local network behind a firewall (or at least NAT) this would require no additional setup – the web interface would automatically be available only via the local network. For a remote server this could be achieved by NOT opening ports 80 / 443 in the firewall and instead accessing the server through a VPN or port-forwarding on an SSH connection.
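For the remote-server case, the SSH port-forwarding approach can be sketched as below. The hostname, username and ports are placeholders of my own, not anything ClonePanel defines – the point is simply that the web port stays closed in the firewall and is reached through the existing SSH access.

```shell
#!/bin/sh
# Sketch (hypothetical hostname/user): prepare an SSH port-forward so the
# admin web interface is reachable locally without opening ports 80/443.
CP_HOST="clonepanel.example.com"   # placeholder ClonePanel server
LOCAL_PORT=8080                    # local port to browse on
REMOTE_PORT=80                     # web port, kept closed in the firewall

# -N: forward only, run no remote command; -L: local port forward
TUNNEL_CMD="ssh -N -L ${LOCAL_PORT}:localhost:${REMOTE_PORT} admin@${CP_HOST}"
echo "$TUNNEL_CMD"
# Run the printed command, then browse to http://localhost:8080/
```

While the tunnel is up, anything listening on the server’s port 80 is available at http://localhost:8080/ on the admin’s own machine, and nobody else can reach it.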

Public (read access to certain information eg. monitor data) – ClonePanel server writes this data out to a hosting account at intervals. This is already in place.

User (limited access to own account eg. backup, restore, list snapshots) – This would be the most difficult, since we don’t want to impose a complicated setup (VPN / SSH tunnelling) on a regular user. My preferred solution is to copy relevant information (snapshot directory listings) to the user’s own account, run the web interface there, and have the ClonePanel server poll the remote servers at intervals to find and execute user requests (backup or restore). However this compromises responsiveness to a degree – the user can expect to see something like “Restore request received. Your files / directories will be restored within 30 minutes”. So, no instant gratification, but probably acceptable considering that most backup systems don’t give users any kind of access!
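The polling idea above can be sketched as a small shell function: the ClonePanel server periodically fetches the user’s request files, executes each pending request and marks it as done. The directory layout and file naming here are my own assumptions, not ClonePanel’s actual format.

```shell
#!/bin/sh
# Sketch of the polling pass: scan a directory of user request files
# (one file per request, containing the action name) and act on each.
# Layout and naming are hypothetical.
process_requests() {
    reqdir=$1       # e.g. requests copied back from the user's account
    logfile=$2      # record of what was executed
    mkdir -p "$reqdir/done"
    for req in "$reqdir"/request-*; do
        [ -f "$req" ] || continue
        action=$(cat "$req")     # e.g. "backup" or "restore"
        # A real implementation would invoke the matching ClonePanel
        # script here; this sketch just logs the action.
        echo "executed $action from $(basename "$req")" >> "$logfile"
        mv "$req" "$reqdir/done/"    # mark as processed
    done
}
```

Run from cron every 30 minutes, a pass like this gives exactly the “your request will be handled within 30 minutes” behaviour described above.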

Have I missed anything vital? Questions and comments welcome, as usual, here or on the forum.


High-availability web hosting at an affordable price

July 21st, 2006

ClonePanel is a software toolkit for backup and synchronization, monitoring and DNS control, with the ultimate aim of making it possible to operate a web site on two or more separate hosting accounts, giving you:

Fully redundant DNS service
Use the nameservers of both hosting services to provide DNS information about your domain.
Fully redundant mail service
Use one account as your primary mail server, the other as your secondary. Any mail collected on the secondary is transferred at regular intervals to the primary.

Your own multiple snapshot-type backups
Regularly downloading full backups from a hosting control panel is tedious and wasteful. Using the standard Linux system tools (notably rsync, ssh, gzip and hard links) you can:

  • Download over a secure connection with data compression
  • Transfer only what’s changed (only the modified part of changed files)
  • Keep multiple copies of your complete web site using disk space that’s only the total size of the site plus changed files.
Hot-spare server
A spare hosting account complete with your web sites, kept up to date and ready to go whenever it’s needed.
DNS Fail-over or round-robin load sharing
Fail-over means that your primary server will handle all requests under normal circumstances, but if it fails DNS is switched automatically to the backup. Round-robin uses multiple A records for the main web site so that visitors are shared between the servers at all times. You will need to decide which is suitable for your site, and what changes are needed to implement it.
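In zone-file terms the two approaches look like this (the names and IP addresses are placeholders):

```
; Round-robin: both A records published, visitors shared between servers
www   300   IN   A   192.0.2.10    ; host one
www   300   IN   A   203.0.113.20  ; host two

; Fail-over: only the primary is published; when monitoring detects a
; failure the record is rewritten to point at the backup
www   300   IN   A   192.0.2.10    ; normal operation
; www 300   IN   A   203.0.113.20  ; substituted after fail-over
```

The short TTL (300 seconds here) matters most for fail-over, since it limits how long resolvers keep serving the dead address.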

Single server or a cluster?

July 21st, 2006

Web hosting (I’m talking about virtual hosting, also called shared hosting) can be divided into two main categories – single server or clustered. Using a single server (eg. with the popular CPanel control panel) is a beautifully simple solution while everything is going well, but for a variety of reasons (badly written or exploited scripts, ddos attack, processing incoming spam…) the server can become overloaded. When this happens web pages load slowly or time out, databases become unavailable, mail may be rejected or lost. How often this occurs, how severe the effects are and how long the server takes to recover all depend on the quality of the hosting service.

Clustered solutions distribute the work among a cluster of servers, with separate machines handling the job of web server, database server, mail server etc. This makes it much less likely that the whole cluster will become overloaded and for simple static web sites the reliability should be significantly better. But for dynamic web sites that rely on database access to create their pages the story may be different. If the cluster has one database server and it becomes overloaded or starts refusing connections then all database-driven sites across the whole cluster are rendered useless (this is of course many more sites than would be affected by a failure in the single-server scenario). As before, the quality of management is critical.

Finally, with both the single-server and the clustered hosting systems there are factors completely beyond anyone’s control. Although rare it’s not unknown for datacenters to suffer problems – lightning strikes, power failure, cooling failure, cable cuts etc. Or there can be connection problems like bad routes that leave some people unable to connect to sites hosted in a certain location even though the server may be working perfectly.

Priorities for high-availability hosting

July 21st, 2006

So if you want your web site and e-mail to work all the time, what’s the solution? In my view, the priorities are these:

  1. Choose a reliable host. Shared hosting (whether single-server or clustered) is relatively cheap and finding a good quality hosting company doesn’t require spending a fortune. I expect uptime to be around 99.9% over a prolonged period – good hosts will be able to demonstrate a record of achieving this. Or if you’re looking for the best possible reliability and money is no object, get a fully-managed dedicated server (this eliminates one major source of problems – your fellow users!)
  2. Choose a second reliable host. One in a different datacenter a long distance away from the first (another continent is good!). Use both hosts to provide DNS and mail services for your domain – nameserver and mail-exchanger records allow for this, so that if one is unreachable the other will be used automatically.
  3. Plan for the worst-case scenario. What would you do if the server died and the backups were found to be corrupt? Or if the server got hacked and your site defaced, with backups being replaced by the modified pages? Sure it shouldn’t happen but it does. So keep your own backups (multiple copies), synchronize them to another host and check that they are good.
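The nameserver and mail-exchanger redundancy in step 2 looks like this in a zone file (hostnames are placeholders for your two hosting services):

```
; Two nameservers, one at each hosting service
example.com.   IN   NS   ns1.host-one.example.
example.com.   IN   NS   ns1.host-two.example.

; Two mail exchangers; the lower preference value is tried first,
; and mail falls back to host two automatically if host one is down
example.com.   IN   MX   10 mail.host-one.example.
example.com.   IN   MX   20 mail.host-two.example.
```

Both mechanisms are built into DNS and SMTP themselves, so no extra software is needed for this part of the redundancy.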

Will all that give you 100% uptime? No. The main limitation is DNS propagation time – using failover you can update your nameservers almost instantly to point to your spare server, but DNS servers around the world will not update until their cached data expires. And while you can set a short TTL (time to live) on your records, there’s no guarantee that all ISPs will respect it – in fact it’s pretty much guaranteed that some won’t!

So I regard DNS failover to a hot-spare server as a rather imperfect emergency measure in the event of disaster or prolonged outage. But in that situation, you will be glad you have it!


July 21st, 2006

ClonePanel aims to combine snapshot backups, DNS control and server monitoring to provide exceptional levels of web site reliability and security against data loss. It’s specifically intended for use with virtual hosting accounts rather than dedicated servers – no root access is needed.

This is a discussion of the background history to indicate why ClonePanel has developed these key features.

As a web developer, I want to provide a comprehensive service to clients – not just creating their site but providing ongoing support and maintenance. This is good for me, offering a continuing revenue stream, and also good for the clients who might otherwise have to look for support from others (who would be nowhere near as good, naturally!)

Hosting is an important part of this service, but I don’t want to become a host myself for several reasons:

I suspect it would be hard to break into a market already full of hosting businesses, some offering excellent service, some selling at improbably low prices and even some offering good service for a very reasonable price.

Providing 24/7/365 cover really requires a staff of at least 5 people. I prefer to continue as a one-man-band and I don’t want to be dragged out of bed in the middle of the night to fix a server!

Most of my clients are small businesses with small-business web-sites. They don’t use a lot of disk-space, bandwidth or processing power, and as a result I don’t need even one dedicated server. It’s simply more cost-effective to lease space on a server (or two, for redundancy).

So I prefer to be a “value-added reseller”, purchasing hosting from carefully-selected suppliers and reselling to my clients. Some clients prefer to purchase their own hosting plan, giving them more independence – that’s fine too; I can recommend suitable options and if they want me to continue maintaining their web site they can give me the login details. So I get to sleep at night and even take weekends away (much appreciated by my wife!) without worrying about web servers.

But should I be worried? How reliable is my carefully-chosen host? What uptime does their server really achieve? I believe uptime guarantees are utterly worthless – at best I may get a refund of one month’s hosting fees but the consequences of prolonged downtime could be devastating for my business. There are some good monitoring services available but they often just ping the server – what if the web-server’s down, or the database, or the server is overloaded and timing out on dynamic pages…?

DNS (domain name service) is another grey area. CPanel is a good and very popular control panel, but its standard setup with a single DNS server on two different IP addresses is woefully inadequate. There are many independent DNS services available, but whenever I’ve tried their free service I’ve found the servers are often slow to respond, giving a significant delay on the first page of a new site – unacceptable. (Of course the paid services should be better, but I’ve not been inspired to invest in them…) CPanel itself offers a clustering system, but this can only be set up with root access.

Finally, and hardest to evaluate, there’s the question of backups. There is nothing as valuable as data – mine and my clients’ – so how best to protect it? Most hosts back up their own servers but for liability reasons offer no guarantees. Is their backup system reliable? You’ll probably only find out after the disaster happens, so you’d better hope so! Personally, I want my own backups, but taking full backups manually from numerous different web sites is a tedious job, and simply automating the standard backup system can result in significant data transfer and server loads.

So in short, I wanted a system to do all of this:

Server monitoring
To include server load, web server and database connectivity, with data logging for analysis and summaries.
DNS clustering system
Making use of the DNS services included with CPanel but not restricted to a single server and without requiring root access. The CPanel automation utilities are usable here and only require standard reseller access to WHM.
Efficient backup transfers
File transfer using rsync overcomes the problems of creating and transferring huge backup files. The backup is synchronized to the original once or twice each day, and the only data transferred relates to files that have changed. To reduce data transfer even further, only the differing parts of each modified file are sent, and all transferred data is compressed with gzip.

Well, that was the start. But out of this other opportunities presented themselves:

Snapshot backups
Want to retrieve that file you deleted three weeks ago? How about that article in the database, not sure when it went missing… A snapshot-type backup system keeps “snapshots” of the whole web-server file system from as many different times as you want. Thanks to Linux hard links, large numbers of such snapshots can be stored using hardly any more disk space than the latest copy plus the changed files.
Hot-spare server
Once you have a process in place to synchronize from the web-server to a backup it’s a small step to sync back from the backup to another web-server. So you can keep a second account on another hosting service complete, updated and ready to go at any time – just switch over the DNS…
DNS failover
Hang on, we have control of the DNS servers too! So why not automate the process of switching? When the monitoring system indicates a server is in trouble we can just divert visitors over to the hot-spare!
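The decision itself is simple enough to sketch – the hard parts are the monitoring verdict feeding into it and the DNS update it triggers. The IP addresses here are placeholders:

```shell
#!/bin/sh
# Sketch of the fail-over decision: given the monitor's verdict on the
# primary server, pick the IP the www A record should point at.
PRIMARY_IP="192.0.2.10"     # placeholder: main hosting account
SPARE_IP="203.0.113.20"     # placeholder: hot-spare account
failover_target() {
    # $1 is "up" or "down", as reported by the monitoring system
    if [ "$1" = "up" ]; then
        echo "$PRIMARY_IP"
    else
        echo "$SPARE_IP"
    fi
}
```

In practice the chosen IP would be written into the zone on both nameservers with a short TTL, and the switch back to the primary made only once it has been verified healthy for a while (to avoid flapping between servers).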

Ok, I know – it’s not that simple. DNS record caching and propagation mean that for a time after switching the web site over, some visitors will continue to reach the original server. And for dynamic sites like this one there’s the important issue of which database gets updated – the one on the original server or the hot-spare? During transfers from one host to another this is often solved by having both servers connect to the same database, but obviously in this case that won’t work (no redundancy). But even if we need to make the site read-only until the problem can be sorted out, this is surely still better than having it offline altogether.

So with two reseller-type hosting accounts and a package combining snapshot backups, DNS control and monitoring, there’s an opportunity to provide exceptional levels of web site reliability and security against data loss. ClonePanel aims to be that package.