Main AWS Services You Need To Get Started

Enrico Portolan
8 min read · Jul 3, 2022


Hello Cloud people, this blog post is intended for people who are playing with AWS for the first time.

Everyone feels the same at the beginning of their cloud journey. Where do I start? Which services do I need to know? (AWS has more than 200 services and counting.)

Keep reading, and I’ll give you a summary of the most commonly used AWS services. The best way to start exploring a cloud provider is to ask ourselves: what do we need to deploy?

In this example, we are going to deploy a web application with static content to host, APIs, databases, monitoring, analytics and so on. The twist is that I will do it in two paradigms: Serverless and Traditional.

If you want to follow the tutorial on YouTube, you can find the link here:

Basic Web App Diagram

Above we have our starting point. A client (a laptop, mobile phone, etc.) visits a web page hosted in the cloud. Let’s start with the Serverless journey.

The Serverless Way

Host static files 🌎

Firstly, we need a service to host the static files of our website. The website is made of HTML, CSS and JS files (it could be built with Angular, React, or any other framework), so we need a way to host these files and make them accessible from the web. The first AWS service I will introduce is Amazon S3, which was one of the first services launched by AWS.

Amazon S3

S3 is an object storage service organised in buckets. Buckets can scale virtually without limits, have a name that is unique across all AWS accounts, and are the perfect place to put our static files. Once we have put our static files into S3, we want our users to be able to visit a URL and navigate through our website. S3 has a built-in static website hosting feature: the website is available at the Region-specific website endpoint of the bucket. Depending on the Region, the URL looks something like http://bucket-name.s3-website-region.amazonaws.com (some Regions use a dot instead of a dash before the Region name).
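To get a feel for it, here is a minimal sketch that uploads one file to a bucket using the AWS SDK for JavaScript (v3); the bucket name and file path are placeholders, not real resources:

```typescript
// Sketch: upload one static file to an S3 bucket with the AWS SDK for JavaScript v3.
// "my-website-bucket" and the file path are placeholders for illustration only.
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-west-1" });

async function uploadFile(key: string, path: string, contentType: string) {
  const body = await readFile(path);
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-website-bucket", // bucket names are globally unique
      Key: key,                    // e.g. "index.html"
      Body: body,
      ContentType: contentType,    // so browsers render the file instead of downloading it
    })
  );
}

await uploadFile("index.html", "./build/index.html", "text/html");
```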

It sounds great, but we can do better. We want our website to use HTTPS and to be served from locations close to our users. To do that, we can leverage AWS Edge Locations.

The AWS cloud infrastructure is composed of Regions, Availability Zones and Edge Locations. Edge Locations sit in major cities around the world and act as the points of presence of a CDN. The service we’re going to use for this is Amazon CloudFront.

Amazon CloudFront is a CDN service that distributes our content through AWS Edge Locations. CloudFront can be configured with an S3 bucket as its origin, meaning it will serve the files contained in that bucket.

S3 and CloudFront

At this point, when a user visits our website, the content is served by a CloudFront Edge Location. CloudFront checks whether the files are in its cache: if so, they are served directly to the user; if not, the request is forwarded to the origin (the S3 bucket) and the response is returned to the user.

CloudFront serves the files under a generated domain such as abc123456.cloudfront.net. It’s possible to configure a custom domain using the AWS DNS service, Amazon Route 53: we add an alias A record that points to the CloudFront distribution.
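As a rough sketch of that last step (the hosted zone ID and domain names below are placeholders), the alias record could be created with the SDK like this; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS documents for CloudFront alias targets:

```typescript
// Sketch: point an alias A record at a CloudFront distribution with Route 53.
// The hosted zone ID and domain names are placeholders.
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({});

await route53.send(
  new ChangeResourceRecordSetsCommand({
    HostedZoneId: "ZXXXXXXXXXXXXX", // our domain's hosted zone (placeholder)
    ChangeBatch: {
      Changes: [
        {
          Action: "UPSERT",
          ResourceRecordSet: {
            Name: "www.example.com",
            Type: "A",
            AliasTarget: {
              DNSName: "abc123456.cloudfront.net",
              HostedZoneId: "Z2FDTNDATAQYW2", // fixed zone ID for CloudFront targets
              EvaluateTargetHealth: false,
            },
          },
        },
      ],
    },
  })
);
```

Note that the custom domain also needs to be added as an alternate domain name on the CloudFront distribution, together with an ACM certificate, for HTTPS to work.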

For more details, I’ve made a video about how to deploy a React app with S3 and CloudFront:

Add APIs 🛠

Alright, the frontend layer is set up; now we need to add the backend layer to our web app: the APIs. Since we’re on the Serverless journey, it’s time to introduce our main actors: AWS Lambda and Amazon API Gateway.

Lambda is the Serverless compute service that lets us upload our code and execute it without worrying about servers, uptime and availability. Lambda can be triggered by many AWS services: S3 events, Amazon SQS, Amazon SNS and Amazon API Gateway, among others. In our use case, since we are exposing APIs to users, we will use API Gateway. Amazon API Gateway sits between the client and our backend, acting as a reverse proxy. We can create multiple endpoints and stages (such as dev, staging and prod) and manage our APIs in one place. API Gateway forwards each request to Lambda and returns the response to the client.
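To make this concrete, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration (the query parameter and response body are illustrative only):

```typescript
// Sketch: a minimal Lambda handler invoked through API Gateway (Node.js runtime).
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // API Gateway passes the HTTP request details in the event object.
  const name = event.queryStringParameters?.name ?? "world";

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```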

Databases 🚚

Where do we store our data? We need a database! There are different options depending on whether we choose SQL or NoSQL. Since we are using API Gateway + Lambda, the ideal choice is DynamoDB.

DynamoDB is a NoSQL key-value database; it’s Serverless and its interface is API based. DynamoDB has a long list of features, such as secondary indexes, caching and DynamoDB Streams, which deserve a blog post of their own. Luckily, I made a playlist about the main features and how to design a table with DynamoDB:
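In the meantime, here is a minimal sketch of writing and reading an item with the Document client; the Users table and its userId partition key are hypothetical:

```typescript
// Sketch: write and read an item with the DynamoDB Document client.
// "Users" is a hypothetical table whose partition key is "userId".
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  PutCommand,
  GetCommand,
} from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Store an item as a plain JavaScript object.
await ddb.send(
  new PutCommand({
    TableName: "Users",
    Item: { userId: "u-123", email: "jane@example.com" },
  })
);

// Read it back by its key.
const { Item } = await ddb.send(
  new GetCommand({ TableName: "Users", Key: { userId: "u-123" } })
);
console.log(Item);
```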

If we prefer to stay in the SQL world, Amazon Aurora Serverless is our go-to option. Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora, a MySQL- and PostgreSQL-compatible database built by AWS.

The picture above also lists Amazon ElastiCache and Amazon Redshift. The former provides a caching layer that can be used to offload requests from the database or the API layer; for example, the most frequent database queries can be cached in ElastiCache.

The latter, Amazon Redshift, uses SQL to analyse structured and semi-structured data across data warehouses, databases and data lakes.

Add Monitoring 🔎

Now let’s move to the observability and monitoring services for our web application. The main services are CloudWatch and CloudTrail.

Amazon CloudWatch collects logs from AWS services, creates metrics and lets us set alarms based on thresholds. In our use case, the Lambda function logs end up in CloudWatch. CloudWatch is a very powerful tool that helps with the monitoring and observability of our application.
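Anything a Lambda function writes with console.log lands in CloudWatch Logs automatically; on top of that, we can publish custom metrics ourselves. A minimal sketch (the namespace and metric name are made up for this example):

```typescript
// Sketch: publish a custom CloudWatch metric from our backend code.
// "MyWebApp" and "SignupCount" are hypothetical names.
import {
  CloudWatchClient,
  PutMetricDataCommand,
} from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({});

await cloudwatch.send(
  new PutMetricDataCommand({
    Namespace: "MyWebApp",
    MetricData: [
      {
        MetricName: "SignupCount",
        Value: 1,
        Unit: "Count",
      },
    ],
  })
);
```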

The other service, AWS CloudTrail, monitors and records account activity across our AWS infrastructure, giving us control over storage, analysis and remediation actions. It is usually used to enable governance, auditing and compliance in our AWS account.

Decoupling Services

When we design a Serverless application, we need a way for services to communicate with each other. To achieve this, we have different options: Amazon SQS, Amazon SNS and AWS Step Functions. Amazon SQS is a fully managed message queuing service that enables us to decouple and scale microservices: producers put messages in the queue, and consumers read messages from the queue.
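A minimal producer sketch (the queue URL and message shape are placeholders):

```typescript
// Sketch: a producer putting a message on an SQS queue.
// The queue URL is a placeholder.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

await sqs.send(
  new SendMessageCommand({
    QueueUrl: "https://sqs.eu-west-1.amazonaws.com/123456789012/orders-queue",
    MessageBody: JSON.stringify({ orderId: "o-42", status: "CREATED" }),
  })
);
```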

Additionally, Amazon SNS is a notification service that enables us to push notifications to users, AWS services and third-party services through Topics. Subscribers of a Topic receive every message published to it.
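And a matching sketch for publishing to a Topic (the topic ARN is a placeholder):

```typescript
// Sketch: publish a notification to an SNS Topic; every subscriber receives it.
// The topic ARN is a placeholder.
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

await sns.send(
  new PublishCommand({
    TopicArn: "arn:aws:sns:eu-west-1:123456789012:order-events",
    Subject: "Order created",
    Message: JSON.stringify({ orderId: "o-42" }),
  })
);
```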

The last service mentioned is AWS Step Functions, which coordinates and orchestrates other services using state machines.

The Traditional Way

Let’s take a step back and explore how we can deploy the same app in a more traditional way, without Serverless services. For static hosting, we can keep the S3 + CloudFront solution.

Add APIs 🛠

For the API layer, the first option is EC2 (Elastic Compute Cloud), one of the oldest AWS services. EC2 lets us rent virtual machines in the cloud, choosing among different instance families, storage types (SSD or HDD) and many other options. On an EC2 instance, we can install our application server (PHP, Node.js, etc.) and run it in the cloud. To improve scalability and availability, we can put an Application Load Balancer in front of our instances, which receives the requests and forwards them to healthy instances. When we add a Load Balancer, it’s common to also create an Auto Scaling group, a set of EC2 instances that scales up and down based on load or other metrics.
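As a rough sketch, launching a single instance with a user-data bootstrap script could look like this (the AMI ID is a placeholder and the user-data script is just a stub):

```typescript
// Sketch: launch one EC2 instance with a user-data script that bootstraps our app.
// The AMI ID is a placeholder; real values depend on the Region and OS.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

const userData = `#!/bin/bash
yum update -y
# install our application server (Node.js, PHP, ...) and start the app here
`;

await ec2.send(
  new RunInstancesCommand({
    ImageId: "ami-0123456789abcdef0", // placeholder AMI ID
    InstanceType: "t3.micro",
    MinCount: 1,
    MaxCount: 1,
    UserData: Buffer.from(userData).toString("base64"), // user data must be base64 encoded
  })
);
```

In practice, we would not launch instances by hand like this: they would sit in the Auto Scaling group behind the Load Balancer.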

The second option is Amazon ECS (Elastic Container Service). With ECS, we package our application as a Docker image, push it to the service, and AWS spins containers up and down for us. It’s possible to configure the CPU and memory for each task.

Databases 🚚

Similar to the Serverless use case, we can use a managed AWS service. Amazon RDS is a fully managed database service with support for MySQL, PostgreSQL, MariaDB, SQL Server and more. The main advantage is that the database is managed by AWS, so we don’t have to worry about uptime, patches, security and so on. Amazon RDS also offers Multi-AZ deployments for disaster recovery and Read Replicas to improve read performance.
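Once the instance is running, our backend connects to it like any other database. Here is a sketch using the node-postgres client against a hypothetical RDS PostgreSQL endpoint (host, credentials and database name are placeholders):

```typescript
// Sketch: connect to an RDS PostgreSQL instance from Node.js with node-postgres.
// The endpoint, credentials and database name are placeholders.
import { Client } from "pg";

const client = new Client({
  host: "mydb.abcdefghijkl.eu-west-1.rds.amazonaws.com", // RDS endpoint (placeholder)
  port: 5432,
  user: "app_user",
  password: process.env.DB_PASSWORD,
  database: "webapp",
  ssl: { rejectUnauthorized: false }, // simplified TLS setting for this sketch
});

await client.connect();
const { rows } = await client.query("SELECT now()");
console.log(rows[0]);
await client.end();
```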

Our three-tier application is now complete: the frontend uses S3 + CloudFront, the backend uses API Gateway + Lambda, EC2 or ECS, and the database is DynamoDB or RDS. Let’s now explore the options for Analytics, Big Data, AI and ML.

Analytics and Big Data 📈

The main analytics services are Athena, EMR and QuickSight. Amazon Athena is used to query data stored in S3 using standard SQL. Amazon EMR (Elastic MapReduce) is used when we have big data workloads based on MapReduce-style frameworks. Finally, Amazon QuickSight is used to create custom dashboards over multiple data sources and power our BI analysis.
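For example, here is a sketch of starting an Athena query over a hypothetical access_logs table stored in S3 (the database, table and results bucket are placeholders):

```typescript
// Sketch: run a SQL query on data in S3 with Athena.
// Database, table and output bucket are placeholders.
import {
  AthenaClient,
  StartQueryExecutionCommand,
} from "@aws-sdk/client-athena";

const athena = new AthenaClient({});

const { QueryExecutionId } = await athena.send(
  new StartQueryExecutionCommand({
    QueryString: "SELECT status, count(*) FROM access_logs GROUP BY status",
    QueryExecutionContext: { Database: "webapp_logs" },
    ResultConfiguration: {
      OutputLocation: "s3://my-athena-results-bucket/", // where Athena writes results
    },
  })
);

console.log("Query started:", QueryExecutionId);
```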

AI & ML 🤖

On the AI and ML side, we have the following:

  • Amazon SageMaker: fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale
  • Amazon Polly: turns text into lifelike speech, allowing us to create applications that talk to our customers
  • Amazon Transcribe: an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capabilities to their applications
  • Amazon Translate: a neural machine translation service that delivers fast, high-quality, affordable and customizable language translation (a minimal sketch follows below)
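As promised above, a minimal Amazon Translate sketch (the input string and language codes are arbitrary):

```typescript
// Sketch: translate a string with Amazon Translate.
import {
  TranslateClient,
  TranslateTextCommand,
} from "@aws-sdk/client-translate";

const translate = new TranslateClient({});

const { TranslatedText } = await translate.send(
  new TranslateTextCommand({
    Text: "Hello Cloud people!",
    SourceLanguageCode: "en",
    TargetLanguageCode: "it",
  })
);

console.log(TranslatedText);
```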

Conclusion

I hope you will find this blog post useful to start your cloud journey with AWS☁️

Let me know what you think in the comments. Follow me on Twitter and YouTube for more!

Written by Enrico Portolan

Passionate about cloud, startups and new technologies. Full-stack web engineer
