Tag: AWS

Deploy AWS EC2 instances using Ansible

We have seen in the past how to use Terraform to deploy an AWS EC2 instance, but this is also possible using Ansible. In this blog post, we will focus on deploying an AWS EC2 instance using Ansible. I assume that you have already installed Ansible on your machine and know the basics of Ansible playbook creation. There are some articles on tunnelix.com about Ansible; please consider visiting them if you have any doubt.

Setup the AWS IAM Account

1. Start by creating an AWS user account through AWS IAM. Go to IAM, click on USERS, then click on ADD USER:

2. Once you have clicked on ADD USER, enter a name, tick PROGRAMMATIC ACCESS, and click on NEXT: PERMISSIONS.

3. On the following page, create a group. I have created one called ‘Ansible’ and attached the user to it. After that, click on ATTACH EXISTING POLICIES DIRECTLY, search for AMAZONEC2FULLACCESS, tick it, and click on NEXT: TAGS.

4. Add the tags, click on NEXT, and then click on CREATE USER.

5. Consider downloading the credentials.csv file by clicking on DOWNLOAD .CSV.

6. Consider also creating a key pair, as it will be referenced later in the playbook.
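If you prefer the command line, the key pair can also be created with the AWS CLI; a minimal sketch, where the key name ansible matches the key_name used in the playbook later in this post:

aws ec2 create-key-pair --key-name ansible --query 'KeyMaterial' --output text > ansible.pem
chmod 400 ansible.pem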

Some installations and configurations on the Ansible controller

7. Now, on your Linux controller, we will need some Python modules to interact with AWS. Assuming you have already installed Ansible, consider installing python-pip:

yum install python-pip

8. Let’s now install the AWS CLI:

yum install awscli

9. Sync the clock of the VM to prevent authentication errors caused by clock skew:

hwclock -s

10. Configure your AWS CLI

aws configure

It will prompt you to enter the AWS Access Key ID, secret key, etc. Just enter the information. Example:

# aws configure

AWS Access Key ID [****************GYGY]: AKIA5xxxxxxxxxx
AWS Secret Access Key [****************458q]: EvEd55xxxxxxxxxx
Default region name [us-east-1]:
Default output format [json]: 

11. Create a .boto file in the home directory of the user running Ansible, i.e. ~/.boto, with the following content:

[Credentials]
aws_access_key_id = AKIAxxxxxxxxxxxxx
aws_secret_access_key = xc3xxxxxxxxxxx
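Since this file holds credentials in plain text, it is a good idea to restrict its permissions to the owner:

chmod 600 ~/.boto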

12. The following command should test your AWS credentials

aws sts get-caller-identity
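If the credentials are valid, it returns the identity of the IAM user; the values below are placeholders:

{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/ansible"
}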

13. Install the boto Python module. Boto is the AWS SDK for Python; Ansible’s ec2 module uses it to authenticate and talk to the AWS API.

pip install boto 
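A quick way to confirm the module is importable by your Python interpreter:

python -c "import boto; print(boto.__version__)"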

Creating the Playbook and Deploying the AWS EC2 Instance

14. Now, let’s create a playbook as follows in /home/AWSTask.yml

- name: EC2 Instance creation
  hosts: localhost
  connection: local
  tasks:
  - name: Launching the EC2 instance
    ec2:
      instance_type: t2.nano          # smallest instance size
      key_name: ansible               # the key pair created earlier
      image: ami-0b69ea66ff7391e80    # Amazon Linux AMI in us-east-1
      region: us-east-1
      group: default                  # security group
      count: 1
      vpc_subnet_id: subnet-ef9179a4
      wait: yes                       # wait until the instance is running
      assign_public_ip: yes

You can also access it on my Ansible Github repository.

15. Simply launch the playbook:

ansible-playbook AWSTask.yml
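Once the play completes, you can also verify the instance from the controller with the AWS CLI; a minimal sketch matching the region used in the playbook:

aws ec2 describe-instances --region us-east-1 --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress]" --output table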

16. As you can see on the AWS EC2 console, the instance has been created.

Getting started with Terraform

Terraform is an open-source tool created by HashiCorp and written in the Go programming language. It allows us to define our infrastructure as code using a declarative language. It is important to understand that the Terraform language is declarative: it describes an intended goal rather than the steps to reach it. Once you define your infrastructure, Terraform figures out how to create it. Terraform also supports a variety of cloud providers and virtualization platforms such as AWS, Azure, VMware, and OpenStack. This is pretty cool as it eliminates several repetitive tasks, for example when creating several AWS instances.

Photo credits: terraform.io

Installation of Terraform

1. This is pretty simple. You just have to go to the official website and download the package. In my case, I am on a Linux machine, so I am choosing the Linux 64-bit package.

To download and unzip it, use the following command:

wget https://releases.hashicorp.com/terraform/0.12.10/terraform_0.12.10_linux_amd64.zip && unzip terraform*.zip

2. I moved the binary to /usr/local/bin. Make sure this directory is in the PATH environment variable.

mv terraform /usr/local/bin

3. By this time, the binary should be in place and you should be able to check the version:

terraform version

Setting up API call for Terraform on AWS

4. We also need to allow Terraform to make API calls on our behalf; here, that means calling the AWS API. For that, you will need to create a user in AWS IAM and assign it the appropriate rights and policies. Assuming that you have already created the user and have the credentials, use the following commands:

export AWS_ACCESS_KEY_ID="AKIA***************"
export AWS_SECRET_ACCESS_KEY="mVTwU6JtC***************"
export AWS_DEFAULT_REGION="us-east-1"
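Before writing any code, you can confirm that the exported credentials are picked up, since the AWS CLI reads the same environment variables:

aws sts get-caller-identity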

Writing the code

5. Once you are done exporting the credentials, you can start building your Terraform code. The whole code is on my Github and you can download it for free.

The first thing is to configure the provider and the region.

provider "aws" {

 region = "us-east-1"

}

6. Each provider supports different kinds of resources such as load balancers, servers, databases, etc. In this example, we are creating a single EC2 instance. I have chosen the Amazon Linux OS and the smallest nano server. The tags are just identifiers in AWS.

resource "aws_instance" "web" {

  ami           = "ami-0b69ea66ff7391e80"

  instance_type = "t2.nano"

} 

7. Then launch terraform init to initialize the Terraform working directory. By that, I mean that it will download the AWS provider plugin. You should see similar output on your screen.

8. Before performing the actual change, you can use terraform plan to understand what changes will be made. The plus sign marks resources that are going to be added and the minus sign those that are going to be removed.

9. To create the instance, run terraform apply. It will prompt you to type ‘yes’ to continue with the creation.
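For reference, the whole workflow from steps 7 to 9 boils down to three commands, run from the directory containing your .tf files:

terraform init
terraform plan
terraform apply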

10. If you go to the AWS EC2 console, you will notice that the resource has been created successfully.

11. Hey, it’s not over yet! There are more things to add, for example the name of the instance. Let’s call it Nginx-Web by adding the tags below inside the aws_instance resource, then launch terraform apply again.

  tags = {
    Name = "Nginx-Web"
  }

Adding User Data and Security groups

12. At this stage, I believe you understand what Terraform is and how it works. To have Nginx installed at boot time, add the following block of lines:

  user_data = <<-EOF
    #!/bin/bash
    yum install nginx -y
    systemctl start nginx
    systemctl enable nginx
  EOF

13. To add the security group, enter this code:

resource "aws_security_group" "allow_http" {

  name        = "allow_http"

  description = "Allow HTTP inbound traffic"

  ingress {

    from_port   = 80

    to_port     = 80

    protocol    = "tcp"

    cidr_blocks = ["0.0.0.0/0"]

  }

14. In part 6, under instance_type, I have added the line below. What does it mean? aws_security_group is the resource type, allow_http is the name of the security group resource defined in part 13, and id is the attribute we are referencing.

  vpc_security_group_ids = ["${aws_security_group.allow_http.id}"]

15. Note that when launching terraform apply after these changes, Terraform will destroy the old machine and build a new one, which implies that there will be downtime.
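Once the new instance is up and the security group from part 13 is attached, you can check that Nginx answers on port 80. The address below is a placeholder; use the public IP shown in the apply output or in the EC2 console:

curl -I http://<public-ip>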

16. You can also view your code as a graph by launching the command terraform graph. The output can be made more human-readable with Graphviz, which you have to install, or you can paste it into webgraphviz.com to view it online.
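For example, assuming Graphviz is installed, the graph can be rendered to an image in one pipeline:

terraform graph | dot -Tpng > graph.png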

It is very interesting to understand the dependencies when using Terraform’s declarative language. The full code can be viewed here on my Github repository.


Attending AWSome day online conference 2019

The AWSome Day was a free online conference and training event sponsored by Intel, providing a step-by-step introduction to the core AWS (Amazon Web Services) services. It was free and everyone could attend. It was held online on 26 March 2019. The agenda covered broad topics such as AWS Cloud Concepts, AWS Core Services, AWS Security, AWS Architecting, and AWS Pricing and Support. It is pretty interesting for IT managers, system engineers, system administrators, and architects who are eager to learn more about cloud computing and how to get started on the AWS cloud. I do have some experience managing AWS servers and even host my own server. However, I registered for the free training to refresh my knowledge and gain more exposure in areas such as AWS pricing, which I was not aware of at all. Another interesting thing is that you receive a certificate of attendance as well as 25 USD of AWS credits. Pretty cool, right?

Right from the beginning, I knew this was something interesting. I encountered a minor problem whilst signing in; I had to send a mail to support and it was resolved immediately. Once connected to the lobby, it was pretty easy to attend and follow the online conference. After some minutes, Steven Bryen of AWS delivered the keynote speech.

There was also an online challenge, and I scored 25,821 on the Trivia Leaderboard.

On the “Ask an Expert” tab, I was mostly interested in the Man-on-the-Side attack (MOTS). They referred me to the WAF section on AWS. Another interesting link is the AWS Overview of Security Processes whitepaper. AWS also offers comprehensive security across all the layers: SSL, DDoS protection, firewalls, HSMs, and networking. I also asked some questions on metrics and monitoring at the application level, for example on MariaDB, and discovered RDS Performance Insights. For applications on EC2, containers, and Lambda, X-Ray looks very promising. Apart from virtualization, it is good to note that AWS also provides containerization services.

The event was pretty enriching; the panel in the question area knew their subject well. I discovered a lot by participating in the AWSome Day, and I’m looking forward to AWS certifications in the near future.

Changing Instance Type on Amazon Web Service – AWS

Some days back, I had to downgrade an AWS (Amazon Web Service) EC2 instance type. I thought it would be quite complex, but it is not. Since the disk is a separate entity in the AWS service, downgrading CPU or RAM does not affect the data of the server, compared to a VPS service where data needs to be migrated. Here are the steps. I have blurred the name and ID of the instance for security purposes.

1. Once you locate your instance, start by stopping it: right-click on the instance and choose ‘Stop’.

2. It will ask whether you are sure that you want to stop the instance. Just click on ‘Yes, Stop’.

3. You will notice that it takes some time; the ‘Instance State’ column shows ‘stopping’.

4. Once your instance is stopped, as indicated below, move to the next part.

5. Now right-click on the instance and click on ‘Change Instance Type’.

6. Now, look for the instance type you want and click on ‘Apply’.

7. Now start the instance again and you are done.
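The same procedure can also be scripted with the AWS CLI; a minimal sketch, using a placeholder instance ID and t2.nano as the target type:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=t2.nano
aws ec2 start-instances --instance-ids i-0123456789abcdef0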

To have an idea of the instance type specifications, click here. Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
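The specifications can also be queried from the command line; for example, to inspect the t2.nano type used earlier in this post:

aws ec2 describe-instance-types --instance-types t2.nano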