There once was a time when the only way to get your content published online was to work closely with your ISP (Internet Service Provider) – the same company that provided your personal or business Internet access.
Things have changed quite dramatically since then, and nowadays there are many more options for hosting your personal or corporate website. It therefore helps to understand your online presence needs and to know which of the most common hosting approaches would suit your situation best.
With modern broadband connections and impressive uplink speeds, self hosting is still a popular approach among IT enthusiasts and small businesses. You can use a small desktop or an entry-level server as a web server, or even run a virtual machine on one of your desktop systems.
Lots of online marketplaces offer complete web server environments freely available for download, and if you’re feeling a bit more adventurous you can even install and configure all the relevant software packages yourself.
There are a few typical issues associated with self hosting: you need a static IP from your Internet provider (or a dynamic DNS setup) for your website’s domain name to work; your website’s availability will be limited by that of your broadband connection; and sometimes (if you’re lucky enough to attract serious traffic) your uplink may get saturated enough to slow down access to your website.
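To get a feel for how quickly a home uplink saturates, a bit of back-of-the-envelope maths helps. The figures below (a 20Mbit/sec uplink and a 2MB average page weight) are illustrative assumptions, not measurements:

```python
# Rough estimate of how many page loads per second a home uplink can sustain.
# Assumed figures: 20 Mbit/sec upload speed, 2 MB average page weight.
uplink_mbit = 20            # upload bandwidth, megabits per second
page_mb = 2                 # average page size, megabytes

page_mbit = page_mb * 8     # megabytes -> megabits
pages_per_second = uplink_mbit / page_mbit

print(pages_per_second)     # 1.25 page loads per second, at best
```

At that rate, a modest spike of just a few visitors per second is already enough to saturate the link.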
Rest assured, your self-hosted solution will have no monitoring or backups unless you plan and implement them yourself.
One of the most popular ways to host a website, shared hosting is an approach where your website is served by a physical or virtual server that serves dozens of other websites for different customers at the same time. This type of hosting is a popular first step into hosting a website due to really low costs and a relatively simple setup – many providers offer automated installs for popular software like blogging platforms, discussion boards, support helpdesk systems and online office packages.
The concept behind shared hosting is that not all websites enjoy the same high levels of traffic, meaning that putting multiple websites onto the same physical server optimises hardware resource usage.
By sharing resources among multiple customers, hosting providers can offer really low prices while meeting basic hosting expectations. When using shared hosting, a customer rarely has any control over how hardware resources are shared, so there’s a risk that another customer’s website will consume all the network, CPU, memory or disk capacity, slowing down all the other websites hosted on the same physical server.
Another important thing to know is that most shared hosting solutions have dozens of websites share the same IP address of the physical server, so if any of these websites gets compromised and infected with on-page malware, other websites hosted on the same IP address will probably be blocked by search engines until the original issue is sorted.
Shared hosting solutions have stepped up in terms of security and reliability, but there’s still a possibility that one website with compromised security could be used to access or compromise other websites on the same server.
The next step up from shared hosting is a Virtual Private Server (VPS), a solution also referred to more recently as a Virtual Dedicated Server (VDS).
With VPS servers you’re still sharing hardware with other customers, but it’s done using industry grade virtualisation solutions (VMware, Xen, KVM), meaning that you’re given a virtual environment with pre-allocated resources that act as if they were a physical server. VPS servers are isolated from each other and the physical host by the virtualisation layer so processes or filesystems do not overlap.
Typically you should look at the following parameters of a VPS server:
- what the CPU on the physical server is (type/speed) and what share of it your VPS will get – usually measured in vCPUs or processor cores
- how much RAM your VPS will be allocated (usually this memory is reserved for your VPS only, so other VPS environments on the same server won’t be able to tap into it)
- what the disk space and type are going to be – 10–20GB is a typical entry point, and SSD disks are much better than traditional HDDs

A VPS is a great way to improve the stability and sometimes even the performance of your website. With greater control over allocated resources, it should be fairly easy for a Linux or Windows professional to set up and fine-tune your website hosting. When your website requires more resources, it should be possible to work with the hosting provider to upgrade your hosting plan and thus permanently increase the CPU, memory or disk space allocated to your VPS.
Some cheaper virtualisation solutions allow certain resources to be overcommitted. What this means is that an assumption is made that no hardware resource is going to be fully needed and utilised by all the virtual servers at once. Say each virtual machine needs 1GB of RAM. An overcommitted virtualisation server with 8GB of RAM might have 10 or 12 VPS servers hosted on it, because it’s rather unlikely that all of them would need their full 1GB of memory at the same time.
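The arithmetic behind overcommitment is straightforward; here is a minimal sketch using the figures from the example above:

```python
# Memory overcommitment on a virtualisation host, using the figures above.
host_ram_gb = 8        # physical RAM available on the host
vps_ram_gb = 1         # RAM promised to each VPS
vps_count = 12         # VPS instances placed on the host

promised_gb = vps_count * vps_ram_gb
overcommit_ratio = promised_gb / host_ram_gb

print(promised_gb)       # 12 GB promised against 8 GB of real RAM
print(overcommit_ratio)  # 1.5, i.e. memory oversubscribed by 50%
```

The bet pays off as long as the virtual machines never all demand their full allocation at once; when they do, performance degrades for everyone on that host.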
Dedicated servers have traditionally been the next logical step after VPS hosting, but in recent years the focus has been shifting towards cloud hosting. With dedicated servers you get full access to a physical server and all of its hardware. The hosting provider assists with the initial OS deployment and you are then responsible for further setup and configuration. There are lots of very affordable dedicated server solutions, starting from as low as €20/month. That’s usually because the hardware you’ll get is not an industry-grade server solution from a vendor like Dell or HP. Instead, such dedicated “servers” are custom-built solutions or sometimes even upgraded desktop systems, which are considerably cheaper to provision. Another reason for really low prices is the use of relatively slow processors like Intel Atom or entry-level AMD chips.
All sorts of hardware upgrades like additional memory, extra hard disks (and SSD ones instead of HDDs if you wish) and remote management consoles are usually available at an additional cost.
Benefits of dedicated hosting include flexibility of access to CPU, memory and local storage. Bare metal OS installs usually get you the best performance and multiple network interfaces allow for additional network throughput or physical isolation if needed.
Should you want to optimise hardware usage, it’s usually possible to install virtualisation software on a dedicated server so that multiple virtual machines can be configured as per exact requirements.
Overall, dedicated servers are a great hosting solution when configured and managed by a professional IT team.
A subset of dedicated hosting, co-location is an approach where you buy rack space and power/network ports from a hosting provider. You then need to supply your own server hardware to be installed in the allocated slots. These servers will be configured and dedicated for your use.
Leasing rack space and co-locating is not cheap, but this approach allows you to purchase any server hardware – and sometimes even networking or storage solutions – you want, giving you flexibility when it comes to the performance and density of your compute resources.
On Premises Hosting
This is a category whereby you allocate a specialised computer room or data centre in your own offices. Running this setup means building a data centre infrastructure from the ground up: identifying rooms, providing power and network links, purchasing cooling and safety systems, deploying server cabinets and purchasing your own networking, storage and compute hardware.
Long term this approach may still be economically viable, but with the mega-datacentres being built around the globe by companies like Google and Amazon, it’s increasingly reliable, safe and financially attractive to host, co-locate or even move your solutions into the cloud.
Certainly the most rapidly growing area of hosting, cloud hosting offers high flexibility and scalability to meet even the most demanding requirements.
By purchasing a number of virtual or dedicated servers hosted in a highly available and performant infrastructure, you benefit from lots of services that come as standard but would otherwise require quite an investment if your company decided to implement them all independently:
- a wide choice of virtual and physical hardware to rent
- automated snapshots for virtual machines, databases and storage backups
- highly available storage and networking
- automated virtual server migration in case of a hardware fault
- network and load balancing
- specialised monitoring
- an application programming interface (API) to create, configure and destroy combinations of cloud resources
- automated updates to software and security settings

Cloud providers like Amazon and Google provide solutions on such a scale that many incredibly complex technologies can be offered for very little added cost. Compared like-for-like with VPS and dedicated servers, virtual servers in cloud hosting are more expensive. But even if all the features listed above were not impressive enough, there’s one really cool thing about cloud infrastructures that may make a real difference when planning your hosting budget: your cloud servers don’t have to stay online 24×7, so your bills can be reduced by dynamically shutting down and starting up your virtual servers as needed.
A common practice is to round billable time up to an hour (or even less in the case of Google), which means that CPU- or memory-hungry solutions can be automated so that your servers (hundreds or even thousands of them, if necessary!) are started to run a specific task and then get shut down immediately afterwards, providing great savings compared to renting similar hardware using traditional 24×7 models like dedicated hosting.
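To illustrate the potential savings, here’s a minimal sketch comparing on-demand hourly billing against a traditional 24×7 rental. The $0.10/hour rate is an illustrative assumption, not a real price:

```python
# Savings from hourly billing: a batch-processing server that only needs
# to run 4 hours a day, versus renting the same server 24x7 for a month.
# Prices are kept in cents so the arithmetic stays exact.
hourly_rate_cents = 10       # assumed rate: $0.10 per hour
hours_needed_per_day = 4
days_in_month = 30

on_demand_cents = hourly_rate_cents * hours_needed_per_day * days_in_month
always_on_cents = hourly_rate_cents * 24 * days_in_month

print(on_demand_cents / 100)  # 12.0 dollars when started and stopped as needed
print(always_on_cents / 100)  # 72.0 dollars when left running 24x7
```

Even at this toy scale the on-demand model costs a sixth of the always-on one; multiplied across hundreds of task servers, the difference becomes substantial.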
For the ultimate flexibility and cost efficiency, there’s the hybrid hosting option: keep critical data on premises but leverage the elasticity of cloud hosting for added performance during peak times, or to automate just a select list of services.
You can get started by configuring your own VPN solution connecting your cloud and hosted infrastructures; this way you control the entry points and manage the security credentials for accessing your infrastructure.
When your services require large data transfers or real-time performance, you can enable AWS Direct Connect or a similar service from an alternative provider. Direct Connect allows you to arrange a dedicated network connection of the required speed (starting at 50Mbit/sec and scaling up to a 10GBit/sec port) going directly from your data centre to your cloud provider. You’ll need to research the various points of presence (POPs) of your cloud provider to identify the optimal network connection.
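To see why the port speed matters for large transfers, here’s a deliberately simplified estimate of how long it takes to move 1TB over the two ends of that speed range (assuming a fully utilised link and ignoring protocol overhead):

```python
# Time to transfer 1 TB of data at the two extremes of Direct Connect speeds.
# Simplifying assumptions: link fully utilised, protocol overhead ignored.
data_tb = 1
data_bits = data_tb * 8 * 10**12        # terabytes -> bits (decimal units)

def transfer_hours(link_bits_per_sec):
    """Hours needed to push data_bits through a link of the given speed."""
    return data_bits / link_bits_per_sec / 3600

slow = transfer_hours(50 * 10**6)       # 50 Mbit/sec port
fast = transfer_hours(10 * 10**9)       # 10 Gbit/sec port

print(round(slow, 1))   # 44.4 hours on the entry-level port
print(round(fast, 2))   # 0.22 hours (about 13 minutes) on the 10 Gbit port
```

Real-world throughput will be lower once TCP and encapsulation overheads are accounted for, but the two-orders-of-magnitude gap between the port sizes holds.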
With the advancement of cloud hosting offerings, it makes more and more sense to spin up and use only cloud instances with the precise functionality you need – for example, ones with large memory or high-performing CPUs.
Get in Touch if you Need Advice
We hope this post makes your hosting selection a more straightforward task. With today’s technology, most hosting solutions are easily interchanged or interconnected, so no solution is ever final. Whether it’s hosting a basic small business website or migrating a massive on-premises solution into the cloud – get in touch with the Tech Stack support team today to discuss how we can be of service.