Hey there,

I’m guessing you’ve heard about Amazon Web Services (AWS for short) and you’re curious to know how things work there. You’ve probably tried it out already by hosting a simple website, but you want better control over your setup by learning some of the platform’s fundamental concepts. Well, lucky for you, I’ve taken some time to explain a few things that’ll help you get started. In this post, we’ll take a look at the Amazon Web Services platform and how to create a simple infrastructure to suit your personal or business needs.

Let’s begin, shall we?

Amazon Web Services is a cloud platform that offers on-demand access to a variety of computing services via the Internet. Essentially, that’s what “the cloud” means: using computing resources, such as storage and CPU, provided by a company (or, less commonly, an individual) over the Internet. This eliminates the need for the user to actively manage the underlying hardware.

AWS is one of many public cloud platforms that offer cloud services to both individuals and businesses.

To ensure your applications are scalable, reliable, and well optimized, your infrastructure needs to be well built. AWS provides a default setup when an account is created, but it’s important to design your own so you have ultimate control.

Now, let’s explain the concepts required to achieve this level of control. We’ll start with networking concepts.

Virtual Private Cloud

A virtual private cloud creates a private network for you within AWS’s infrastructure, logically segmenting your resources from the rest of the world, much like an on-premises network. In essence, you create an IP-based network with a specific CIDR block in which your computing resources reside.
It consists of:

Subnets: These can be used to achieve further separation within the same VPC. A VPC is broken down into subnets whose CIDRs fall within the CIDR of the VPC itself. For example, a VPC with a CIDR of 10.10.0.0/16 can have four subnets with CIDRs 10.10.1.0/24, 10.10.2.0/24, 10.10.3.0/24 and 10.10.4.0/24.

Internet gateways: These are essentially routers that route traffic between the subnets in your VPC and the Internet, enabling communication between your VPC and the outside world.

Route tables: These define how traffic is routed using rules, much like the routing table of a typical router. A route table contains rules that control how traffic flows within a subnet, between two or more subnets, or between a subnet and the Internet.

Elastic IPs: These are static public IPs attached to instances (servers). A static IP is an IP address that doesn’t change; it remains permanently associated with whichever instance it is attached to until you release it.

Security groups: These act as firewalls that allow or block traffic to one or more instances based on defined rules. For example, a security group can block all incoming traffic to your web server except on ports 80 and 443. A sketch after this list shows how these networking pieces can be created programmatically.
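To make these pieces more concrete, here’s a minimal sketch of how they could be created with boto3, the AWS SDK for Python. It assumes your AWS credentials are already configured; the region, resource names, and CIDR blocks (borrowed from the subnet example above) are placeholders, not a prescription.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# 1. Create the VPC with the /16 CIDR from the example above.
vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]

# 2. Carve a subnet out of the VPC's CIDR.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.10.1.0/24")["Subnet"]["SubnetId"]

# 3. Create an Internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. Create a route table, add a default route to the Internet gateway,
#    and associate it with the subnet (this is what makes the subnet "public").
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

# 5. Allocate an Elastic IP that can later be associated with an instance.
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]

# 6. Create a security group that only lets web traffic (ports 80 and 443) in.
sg_id = ec2.create_security_group(
    GroupName="web-sg", Description="Allow HTTP/HTTPS only", VpcId=vpc_id
)["GroupId"]
for port in (80, 443):
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )
```

Security groups allow all outbound traffic by default, so only the inbound rules need to be opened here.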

Alright, we’ve covered the fundamental networking concepts on AWS. Let’s see how they all fit together.

In the diagram above, you can see a VPC within the general AWS network, acting as a private network separated from the rest of AWS. There’s an Internet gateway that provides Internet reachability to and from our VPC, two public subnets (public because they are reachable from the Internet, as shown by the arrows coming from the Internet gateway) and two private subnets, all within our VPC. You can also see that the CIDRs of the subnets fall within the CIDR of the VPC.

Great! We’ve discussed networking concepts and created our VPC. Let’s talk about some compute concepts that make up our infrastructure.

Elastic Compute Cloud (EC2)

Elastic Compute Cloud is an AWS service that lets you provision virtual machines, called instances, which are used to host custom services and applications. Instances are booted from an Amazon Machine Image (AMI) and can be managed from the AWS web console.
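As a quick example of what provisioning looks like in code, here’s how a single instance could be launched with boto3. The AMI ID, key pair, subnet and security group IDs are placeholders for your own values, and the region is just an assumption.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch one small instance from an AMI (all IDs below are placeholders).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # the AMI to boot from
    InstanceType="t3.micro",                    # instance type/size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # SSH key pair name
    SubnetId="subnet-0123456789abcdef0",        # a subnet inside your VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],  # security group from earlier
)
print("Launched:", response["Instances"][0]["InstanceId"])
```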

The service provides the following:

Instances – Virtual machines running a particular operating system, booted from an AMI. Instances are servers for hosting applications, which may or may not be publicly accessible. EC2 offers a range of instance types, each optimized for a different kind of workload and available in several sizes.

Load balancers – These distribute traffic across a number of instances. Load balancing is especially important in a production system, so that large volumes of traffic to your servers can be managed effectively. A load balancer sits in front of one or more instances, so traffic destined for those instances hits the load balancer first. The load balancer then distributes this traffic across the instances, ensuring that no single instance is overwhelmed.

Target Groups – A target group defines the set of instances that a particular load balancer sends incoming traffic to. How does a load balancer know which instances to send the traffic it receives to? The answer: target groups. A target group is created with one or more instances in it and is then assigned to a particular load balancer on a particular port. For example, if I have a target group called “web-servers” on port 80 and assign it to a load balancer called “ELB-101”, traffic received by the load balancer is forwarded to the instances in web-servers on port 80.

Auto Scaling Groups – A group of instances that scales in or out based on the amount of traffic it receives. The ability of an infrastructure to scale and accommodate changing circumstances is one of the attributes of any good setup. Auto Scaling automatically increases the number of instances in your environment to accommodate large volumes of traffic, and reduces it when traffic is low. This way you always have the right number of instances to handle the current load. Auto Scaling can watch various metrics, such as CPU and memory utilization, and use them to determine how many instances are required. A sketch after this list shows how a target group, load balancer and Auto Scaling group can be wired together programmatically.
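Here’s a rough boto3 sketch of how these pieces can be wired together: a target group, an application load balancer with a listener forwarding to it, and an Auto Scaling group that registers its instances in that target group and scales on CPU utilization. The subnet IDs, security group ID, and launch template name are placeholders, and this is only one of several possible ways to set it up.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")              # assumed region
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

vpc_id = "vpc-0123456789abcdef0"                    # placeholder VPC ID
subnet_ids = ["subnet-aaa11111", "subnet-bbb22222"]  # placeholder public subnets
security_group_id = "sg-0123456789abcdef0"          # placeholder security group

# 1. A target group the load balancer will forward traffic to on port 80.
tg_arn = elbv2.create_target_group(
    Name="web-servers", Protocol="HTTP", Port=80,
    VpcId=vpc_id, TargetType="instance",
)["TargetGroups"][0]["TargetGroupArn"]

# 2. An internet-facing application load balancer spanning two subnets.
lb_arn = elbv2.create_load_balancer(
    Name="ELB-101", Subnets=subnet_ids,
    SecurityGroups=[security_group_id],
    Scheme="internet-facing", Type="application",
)["LoadBalancers"][0]["LoadBalancerArn"]

# 3. A listener on port 80 that forwards everything to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn, Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# 4. An Auto Scaling group that launches instances from an (assumed) launch
#    template and keeps them registered in the target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2, MaxSize=6, DesiredCapacity=2,
    VPCZoneIdentifier=",".join(subnet_ids),
    TargetGroupARNs=[tg_arn],
)

# 5. Scale in/out automatically to keep average CPU utilization around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```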

Let’s take a look at a diagram that shows how EC2 instances fit into the VPC we created earlier.

OK, enough talk for now. Time to try these out yourself.

If you already have an AWS account, create a VPC, create two subnets, add an Internet gateway, and define how traffic is routed using a route table. Don’t worry; a video that shows how to do this will be up shortly. If you don’t have an AWS account, you can create one for free here. Now, let’s look at something that should be particularly interesting to software developers. DevOps! Yay!

DevOps (short for development and operations) refers to a set of practices and tools for managing the entire software development and delivery process. One advantage of this is that it speeds up software releases while maintaining best practices. Thankfully, AWS has some tools that help with this. Let’s take a brief look at two of them: AWS CodeDeploy and AWS CodePipeline.

AWS CodeDeploy is a tool that automatically deploys applications to instances and gets them running. It uses tags or Auto Scaling groups to identify the instances targeted by a particular deployment. With AWS CodeDeploy, you create an application, create a deployment group that contains one or more instances, and then deploy your application to them. You can get started with it here.
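As a small illustration, here’s what driving CodeDeploy from boto3 might look like. The application name, deployment group, IAM role ARN, tag values, and S3 bucket/key are all hypothetical, and the instances are assumed to be running the CodeDeploy agent.

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")  # assumed region

# 1. Register an application with CodeDeploy.
codedeploy.create_application(applicationName="my-web-app", computePlatform="Server")

# 2. A deployment group that targets instances tagged Role=web.
codedeploy.create_deployment_group(
    applicationName="my-web-app",
    deploymentGroupName="web-servers",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",  # placeholder
    ec2TagFilters=[{"Key": "Role", "Value": "web", "Type": "KEY_AND_VALUE"}],
)

# 3. Deploy a revision that was previously uploaded to S3 as a zip bundle.
codedeploy.create_deployment(
    applicationName="my-web-app",
    deploymentGroupName="web-servers",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifacts-bucket",   # placeholder bucket
            "key": "my-web-app.zip",           # placeholder object key
            "bundleType": "zip",
        },
    },
)
```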

AWS CodePipeline automates the entire delivery process end to end: detecting changes to your codebase, building a new version of the application once changes are found, and deploying and running it. Your codebase can live in either a Git repository or an Amazon S3 bucket, builds can be handled by a tool such as Jenkins, and deployments by AWS CodeDeploy. AWS CodePipeline simply lets you combine such tools into a continuous delivery pipeline for your application. You can get started with it here.
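To give a feel for what such a pipeline looks like when defined programmatically, here’s a hedged boto3 sketch of a two-stage pipeline: an S3 source stage feeding a CodeDeploy deploy stage (a build stage with Jenkins or CodeBuild could sit in between). The pipeline name, role ARN, bucket, object key, and application/deployment group names are placeholders.

```python
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")  # assumed region

codepipeline.create_pipeline(pipeline={
    "name": "my-web-app-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",  # placeholder
    "artifactStore": {"type": "S3", "location": "my-artifacts-bucket"},   # placeholder
    "stages": [
        {
            # Watch an S3 object for new revisions of the application bundle.
            "name": "Source",
            "actions": [{
                "name": "FetchSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "S3", "version": "1"},
                "configuration": {"S3Bucket": "my-artifacts-bucket",
                                  "S3ObjectKey": "my-web-app.zip"},
                "outputArtifacts": [{"name": "AppBundle"}],
            }],
        },
        {
            # Hand the bundle to the CodeDeploy application/group from earlier.
            "name": "Deploy",
            "actions": [{
                "name": "DeployToInstances",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "my-web-app",
                                  "DeploymentGroupName": "web-servers"},
                "inputArtifacts": [{"name": "AppBundle"}],
            }],
        },
    ],
})
```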

The diagram below shows what a simple continuous delivery pipeline looks like: