Host websites with high availability and low latency for less than $1/month, SSL included

I’ve been deploying static websites the wrong way all of my life. My procedure used to look like this:

  • Choose a server (EC2, DigitalOcean) in a region (US, Europe, …)
  • Set up some nginx / Apache configuration
  • Maybe add SSL through Let’s Encrypt

I’ve always been aware that there are several drawbacks to this:

  • The smallest EC2 / DigitalOcean instances cost ~$5/month, which adds up quickly if you have many websites.
  • To make this cheaper, you start putting everything on one server (e.g. one EC2 small instance for $20/month), but if this instance crashes, ALL your websites are down. Your availability becomes very fragile.
  • By selecting a server in a specific region, users in all other regions get high latency.
  • Adding SSL is kind of a pain, even if it’s free through Let’s Encrypt.
  • It’s not scalable.
  • You need to mess with complicated nginx / Apache config files and SSH into your instance.

I’ve just never been aware that there’s a much better and easier solution!

The solution is called … (drumroll) … : AWS CloudFront.

Or to be less product specific, the solution is called: Put a CDN in front of some highly available data store.

Here’s what that means:

Your users will access the content through servers in the region closest to them.

This eliminates all of the previously mentioned drawbacks:

  • It’s cheap: AWS charges less than $1/month for this setup
  • It’s got high availability: S3 is designed for 99.999999999% durability and 99.99% availability (so the 9’s aren’t random after all)
  • It’s got low latency: Content is served through the data center closest to the user.
  • You get a free SSL certificate from AWS that you can attach easily, no config file hacking.
  • AWS takes care of scaling for you
  • It’s simple to set up & deploy!

How to set up AWS CloudFront

AWS CloudFront is pretty simple to set up, but there are a few pitfalls, so here’s a tutorial with pictures to guide you.

Step 1: Host website with AWS CloudFront

Go to https://console.aws.amazon.com/quickstart-website/home; you can also find that link on your dashboard.

Next, fill out the wizard:

and hit create. Your website needs to have an index.html file for this to work.

Tada. You have a working hosted website at some crazy URL.
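If you’d rather work from the terminal, the S3 half of this setup can also be done with the AWS CLI. A sketch with a placeholder bucket name; note that unlike the wizard, this creates neither the CloudFront distribution nor public-read permissions:

# Create the bucket that will hold the site
aws s3 mb s3://<bucket-name>

# Turn on static website hosting, serving index.html as the index document
aws s3 website s3://<bucket-name>/ --index-document index.html

# Upload your site
aws s3 cp dist s3://<bucket-name>/ --recursive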

Step 2: Set up domain

Hit “Buy domain” on the website dashboard. This is the right button even if you’ve already bought your domain through Route 53.

If you already have a domain, you can simply select it here:

Step 3: Set up SSL

Back on the website’s main page, select “Manage settings in Amazon CloudFront”:

There, click edit:

and select Custom SSL Certificate:

If you don’t have one yet, you can request a new one. It’s easy to set up, but you’ll need a working email address for the domain: e.g. to set up example.com, something like admin@example.com needs to be working. I’m not going into too much detail here, since that’s a topic of its own.
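For reference, you can also request the certificate from the CLI. A sketch, assuming a reasonably recent AWS CLI; note that CloudFront only accepts certificates from the us-east-1 region:

# Request a certificate with email validation; must live in us-east-1
# for CloudFront to be able to use it
aws acm request-certificate \
  --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names www.example.com \
  --validation-method EMAIL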

You can also forward HTTP to HTTPS in the CloudFront > Behaviors tab: edit the default behavior and switch the Viewer Protocol Policy to “Redirect HTTP to HTTPS”.

Once the certificate has been issued and connected, SSL works:

Step 4: Pretty URLs and Subdirectories

By default, the files are hosted as they are: example.com/articles/my-article.html. Now how can you get rid of the .html extension? You can either just remove the .html extension from the file or use a folder structure like example.com/articles/my-article/index.html.

Unfortunately, there’s also one setting that needs to be changed for this to work. Head over to S3 > website bucket > Properties > Static website hosting and copy the “Endpoint” URL:

Then in CloudFront > Origins

select the right website and hit edit, then replace the “Origin Domain Name” with what you’ve copied from the S3 bucket (without the http://).
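For reference, the S3 website endpoint you copy follows a fixed pattern (most regions use a dash after “s3-website”, some use a dot):

<bucket-name>.s3-website-<region>.amazonaws.com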

Congrats, you’re all set up!

Step 5: Deployment

It’s a bit annoying to always have to upload a zip if you’re more the command-line type. Fortunately, you can also upload from the command line with the AWS CLI:

aws s3 cp dist s3://<bucket-name>/ --recursive

You’ll need to set up IAM for this to work. Create a new user, give it S3 access, and run:

aws configure

once the AWS CLI is installed.
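If you haven’t done the IAM part before, here’s a minimal sketch with the CLI (the user name is a placeholder; AmazonS3FullAccess is broad, so scope it down for anything serious):

# Create a dedicated deployment user
aws iam create-user --user-name website-deployer

# Give it S3 access via the managed policy
aws iam attach-user-policy \
  --user-name website-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the access keys you'll feed into "aws configure"
aws iam create-access-key --user-name website-deployer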

Additional Tips & Tricks

There are a few gotchas you’ll have to be aware of when setting up CloudFront.

Forwarding the Naked (Apex) Domain to the WWW domain

In order to forward https://example.com to https://www.example.com you’ll have to use a really weird workaround: create a new S3 bucket with the name example.com. It HAS to be example.com; you can’t give the bucket any other name. Then, in Properties > Static website hosting, redirect all requests:

Finally, in Route 53, resolve example.com (or here tsmean.com) to this S3 bucket:

If the S3 bucket name IS EQUAL TO THE RECORD NAME, the S3 website bucket should show up when you select Alias Target. That’s it, that way requests to the naked (apex) domain can be forwarded to a www domain.
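The same redirect bucket can also be created from the CLI; a sketch, with example.com standing in for your apex domain:

# Create the redirect bucket; its name MUST equal the apex domain
aws s3 mb s3://example.com

# Redirect every request to the www domain
aws s3api put-bucket-website --bucket example.com --website-configuration \
  '{"RedirectAllRequestsTo": {"HostName": "www.example.com", "Protocol": "https"}}'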

Cache Invalidation

If you’ve followed the “one server deployment paradigm” so far, you’re probably used to seeing your changes immediately after deployment. This is not the case when using AWS CloudFront. Since it’s a distributed system with servers at edge locations that cache your assets, those caches have to be invalidated first. AWS has two articles on this, first http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html and second http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ReplacingObjectsSameName.html.

They advise you against using their built-in cache invalidation mechanism since it becomes expensive above a certain number of requests. However, the first 1000 invalidation paths per month are free as of August 2017. That’s more than enough for most people, since a wildcard path counts as a single path even if it invalidates thousands of files! So you can just specify “/*” as the invalidation path and your whole cache is cleared with one request:

I personally prefer this to versioned object / directory names since I find it easier.

You can also easily include this in a deploy script, which could look like this:

#!/usr/bin/env bash

# Prerequisites:
# 1) AWS CLI installation (pip install awscli --upgrade --user)
# 2) AWS CLI Login ("aws configure")
# 3) aws configure set preview.cloudfront true

CDN_DISTRIBUTION_ID="<replace with your cloudfront ID>"
S3_DIST="s3://<replace with your bucket name>/"

aws s3 cp <replace with your dist folder location> $S3_DIST --recursive
aws cloudfront create-invalidation --distribution-id $CDN_DISTRIBUTION_ID --paths "/*"
echo "Done!"

Single Page Applications

If you’re hosting a SPA (Single Page Application), you’ll run into the problem that upon reloading a non-root page (e.g. bla.com/team), you’ll get an error. To fix this, you have two options.

  1. In CloudFront:
    – Go to the Error Pages tab and click Create Custom Error Response:
    – HTTP Error Code: 403: Forbidden (404: Not Found, in case of an S3 Static Website)
    – Customize Error Response: Yes
    – Response Page Path: /index.html
    – HTTP Response Code: 200: OK
  2. https://stackoverflow.com/a/16877231/3022127
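If you’re serving straight from the S3 website endpoint (no CloudFront in front), a blunt variant of the same trick is to make index.html the error document as well. A sketch, assuming the AWS CLI is configured; be aware the app is then delivered with a 404 status code:

# Serve index.html both as the index and as the "error" page
aws s3 website s3://<bucket-name>/ --index-document index.html --error-document index.html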

Changing the URL (in case you need to)

If you ever want to migrate, e.g. from dev.bla.com to test.bla.com, you’ll need to:

  1. In CloudFront, add a CNAME. To do so, edit the General settings; it’s the first tab.
  2. Update the corresponding record in Route 53.
  3. Update AWS_WEBSITE_DOMAIN_NAME under S3 > website bucket > properties > tags (for the sake of completeness).

Compressing Assets / Gzip

In order to speed up your page, it’s a good idea to compress your assets. To do so:

  1. Go to the CloudFront > Behaviors tab.
  2. Select the Default Behavior and click Edit.
  3. Select “Compress Objects Automatically”.

Checklist

Here’s a short checklist to verify you’ve completed all the important steps:

  • http://www.your-domain.com is working
  • https://www.your-domain.com is working (SSL)
  • http://your-domain.com is working (naked)
  • https://your-domain.com is working (naked, SSL)
  • https://your-domain.com/subdirectory is working
  • Compression is enabled

Conclusion

Compared to AWS CloudFront, deploying to single instances just feels hacky and wrong now. With CloudFront it’s simpler, cheaper and safer. It’s the solution I’ve always wanted but didn’t know existed. Hope this tutorial helps someone!

Forwarding Mail with EC2 (Ubuntu) and an Elastic IP

Let’s assume you have already bought a domain – in this example we’re using tsmean.com – and you want to forward mail; you can replace every tsmean.com in this tutorial with yourdomain.com. So, for example, mail sent to info@tsmean.com should be forwarded to bersling@gmail.com. How can we achieve that? We can set up a mail forwarding server using an EC2 instance and Postfix. To get started, create your EC2 instance; in this tutorial we’ll use Ubuntu. You can also reuse an existing instance, for example one where you host some websites.

Step 1 – Open ports

You need to have port 25 open for emails to arrive at the server. Head over to the security group of the instance and open port 25:
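If you prefer the CLI over the console, the same rule can be added like this (the security group ID is a placeholder):

# Allow inbound SMTP from anywhere on the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id <security-group-id> \
  --protocol tcp --port 25 --cidr 0.0.0.0/0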

Step 2 – DNS

We need two DNS records. The first is the MX record:

Now we need to create the entry for the mail server, mail.tsmean.com. This is an A record pointing to the Elastic IP:

So what this means is that:

mail sent to info@tsmean.com -> mail.tsmean.com -> elastic IP

where -> denotes “is resolved to”.
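In zone-file notation the two records would look roughly like this (TTL and priority are placeholder values):

tsmean.com.       3600 IN MX 10 mail.tsmean.com.
mail.tsmean.com.  3600 IN A     <elastic-ip>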

Step 3 – Install Postfix

On your EC2 instance, run

sudo apt-get install postfix

During the installation, choose “Internet Site”. For the mail name, choose yourdomain.com, not mail.yourdomain.com:

Step 4 – Set Postfix up

Append the following lines to /etc/postfix/main.cf:

...
virtual_alias_domains = tsmean.com
virtual_alias_maps = hash:/etc/postfix/virtual

And in /etc/postfix/virtual (you’ll have to create it), insert:

@tsmean.com bersling@gmail.com

if you want to forward all mail, or use

info@tsmean.com bersling@gmail.com

to forward mail for a specific address only.

Step 5 – Apply the mapping

Run

postmap /etc/postfix/virtual

in the terminal.

Step 6 – Restart Postfix

sudo /etc/init.d/postfix reload

The result should look like this:
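If you want to double-check, Postfix can also validate its own configuration and report whether it’s running:

# Check the configuration for errors and confirm postfix is running
sudo postfix check
sudo postfix status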

Step 7 – Test

Send an email to info@tsmean.com. Also check your Spam & Junk mail folders!
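While the test mail is in flight, it helps to watch the mail log on the instance (on Ubuntu it’s /var/log/mail.log):

# Watch postfix handle the incoming message and the forward
sudo tail -f /var/log/mail.log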

AWS (Amazon Web Services) is definitely the right choice for your production environment

When you’re doing your own blog or your own webpage, you can easily choose a simple setup like the ones provided by DigitalOcean. However, when it comes to a scalable setup, AWS is simply unbeatable. Why, you may ask? Well, because it offers everything your heart desires. This may be daunting at first, but in the long run it’s definitely easier to manage everything on AWS. And by everything, I mean everything. Most people only think about servers “and a bunch of confusing stuff” when they think about AWS, but once you start looking at it for a bit, you’ll start enjoying the other parts just as much.

Registrar: Route 53

You won’t need GoDaddy or Gandi or any other registrar anymore. There’s Route 53 for that. So you can also handle your DNS with AWS!

Certificates (SSL): ACM

Obtaining SSL certificates used to be expensive. Nowadays it’s free with tools like Let’s Encrypt, but it can be even easier: with ACM (AWS Certificate Manager) you can create certificates within minutes, ready to attach to anything on AWS with just a few clicks.

Servers (obviously): EC2

The beginner’s choice for servers is EC2 instances. Just spin them up, configure some settings (security groups etc.) and do whatever you like with them. There isn’t much benefit here over other server farms, except that it’s a bit cheaper, more flexible and integrates well with the other AWS services. Next, an example of such an integration.

Load Balancer (also EC2)

From the first server on, it’s a smart idea to put the server behind a load balancer. This ensures that you can easily attach and detach servers when the load changes, as opposed to pointing your DNS at just one server, which in a production environment kind of creeps me out. E.g. what happens when you want to reboot the server to update the kernel? No chance without a load balancer.

File Storage: S3

You could store your files in your database, but it’s also possible, and perhaps even more flexible, to store them in S3 buckets. You could even build your entire database system on top of S3, e.g. by storing JSON documents in it. S3 has versioning, so you could even build on that!

Security

AWS doesn’t leave security up to chance. Their security is world-leading in all aspects.

Availability

AWS has a ridiculously high availability, so you don’t need to worry as much anymore about accidental data loss through server breakage or the like. Furthermore, you could use versioned S3 buckets to give you a backup history.

Accessibility

With IAM users and security groups, AWS has a logical and easy-to-use interface to manage access to your instances and to your AWS controls. You can even enable MFA (Multi-Factor Authentication) to make sure your precious production environment, maybe worth millions, isn’t accessible with just a leaked password.

Conclusion

That’s just the tip of the iceberg and what we’re using at the moment, but for just about every need you might encounter with your web setup, AWS has the perfect solution for you.


Migrating from GoDaddy to AWS Route 53

TL;DR: 1) Make sure the new name servers have propagated everywhere before you shut down the old system (wait 48h). 2) Check that your MX records are correct (is mail still working?).

Migrating name servers and DNS is always tricky. It’s not instant, so it could look fine on your computer but be completely broken somewhere else. So how can you do it safely? Here are a few easy steps to follow to minimise your risks during the migration, illustrated for the case of

GoDaddy => AWS Route 53

This means we’ll assume you currently have a production app registered with GoDaddy, but you want to migrate to AWS Route 53, e.g. because you already have your servers there. We’ll also assume our domain name is “examples.com” (because with example.com I couldn’t do all the steps).

0) Dummy setup

Depending on how important it is to you that everything runs 100% smoothly, you might first want to do the entire process with a dummy domain. You’d spend $12 on any domain your heart desires, set up some DNS, and then do all the following steps and see if everything runs smoothly. This is a very time-consuming process, so I’d only recommend it if it’s the end of the world should something go wrong during the migration.

1) Setup the system on the target (AWS)

This step you can always do without impacting anything in production. Make sure you can access your system directly via its IP and that it’s running smoothly.
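A quick way to do that check is a plain curl against the server’s IP (placeholder below):

# Fetch only the headers to confirm the new system responds
curl -I http://<your-server-ip>/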

SSL

If you’re running behind an AWS load balancer, you almost have to set up SSL with AWS Certificate Manager, since it’s free and easy. But how can you check whether it’s working? Since you’d like to check yourproductiondomain.com, which is still with the other registrar, this is going to be hard.
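One trick that makes it checkable is curl’s --resolve flag: you pin the production domain to the new system’s IP for a single request, as if DNS had already switched. A sketch with placeholder names; since a load balancer has no fixed IP, look one up first:

# Get a current IP of the load balancer (ELB IPs can change over time)
dig +short <your-load-balancer>.elb.amazonaws.com

# Pin the domain to that IP for this request and inspect the certificate
curl -v --resolve yourproductiondomain.com:443:<ip-from-above> https://yourproductiondomain.com/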

HTTP vs. HTTPS

Make sure that the page is accessible via both http and https. On some systems you might get forwarded automatically to https, on others not. You can use http://downforeveryoneorjustme.com/ to check if a page is down on http. NOTE that the default AWS load balancer security settings don’t open port 80!!! You need to set that manually.

2) Download the Zone Info from GoDaddy


There’s an option to download the zone information in GoDaddy.

3) Import the Zone Info into AWS Route 53

There’s a function to import the zone file in AWS Route 53.


However,

BE EXTREMELY CAREFUL HERE

The import somehow messes up the MX entries! The MX entries in my zone file were:

@ 3600 IN MX 5 ALT1.ASPMX.L.GOOGLE.COM
@ 3600 IN MX 5 ALT2.ASPMX.L.GOOGLE.COM
@ 3600 IN MX 10 ALT3.ASPMX.L.GOOGLE.COM
@ 3600 IN MX 10 ALT4.ASPMX.L.GOOGLE.COM

But AWS decided to import them with my domain appended at the end (e.g. ALT1.ASPMX.L.GOOGLE.COM.examples.com).

HOW NICE OF THEM, THEY ADD RANDOM STUFF AT THE END… Seriously guys, wtf? (The reason: in a zone file, a name without a trailing dot is treated as relative to the zone origin, so Route 53 appends your domain to anything lacking the dot.)

You need to correct the MX records if you wish to receive your mail after the migration!!!

In case you don’t use an external provider (i.e. you’re using GoDaddy email), make sure you set up the new MX records first!

Anyway, check your records entry by entry to make sure they are set correctly.

The only one you might want to change is the examples.com. root-level entry, because you might want to point it to an ALIAS of your AWS load balancer. The load balancer doesn’t offer a fixed IP in the first place, so this might even be a necessary switch (and perhaps why you migrated away from GoDaddy in the first place?).

Anyway, now we’re entering the risky part.

4) Set the new Name Servers in GoDaddy

AWS Route 53 will tell you what the new name servers are:


Delete the old ones from GoDaddy and insert the new ones. But before you do, ask yourself again:

  1. Is my new system (if any) up and running?
  2. Are the MX Records correct?
  3. Did I set up the SSL correctly?

After changing, traffic will slowly start to go through the new name servers.

BUT BE AWARE: Even though traffic goes through the new name servers on your machine, that doesn’t mean it goes through the new name servers everywhere!!!

The only way to make sure all traffic goes through the new name servers is to wait 48 hours.

So that’s what you do: wait 48 hours. What you can do meanwhile is:

CHECK YOUR EMAILS. ARE THEY STILL WORKING?

Run, for example, an MX check at http://mxtoolbox.com/ to verify that email via the new DNS is working.
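You can also query the new Route 53 name servers directly, before the rest of the world sees them (the name server below is a placeholder from your hosted zone):

# Ask one of the new name servers for the MX records
dig MX examples.com @ns-123.awsdns-45.com +short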

5) Do the transfer (48h later, or with the old system still running)

To do so, first unlock your domain name at GoDaddy. They’ll provide you with an authorization code to enter into AWS Route 53. Now all you have to do is request the transfer and accept it through the email they send you.

And that’s it. That’s how you migrate domains.