<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Amitabh Soni]]></title><description><![CDATA[Amitabh Soni]]></description><link>https://blog.amitabh.cloud</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 13:23:33 GMT</lastBuildDate><atom:link href="https://blog.amitabh.cloud/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[No Keys, No Risk - Secure Secrets with AWS Secrets Manager & EC2 IAM Roles]]></title><description><![CDATA[Securely Managing Application Secrets Using AWS Secrets Manager and IAM Roles (No Access Keys)
Learn how real companies securely fetch secrets from AWS Secrets Manager without ever storing AWS access ]]></description><link>https://blog.amitabh.cloud/securely-managing-application-secrets-using-aws-secrets-manager</link><guid isPermaLink="true">https://blog.amitabh.cloud/securely-managing-application-secrets-using-aws-secrets-manager</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[secrets management]]></category><category><![CDATA[IAM]]></category><category><![CDATA[Security]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Fri, 27 Mar 2026 12:01:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66c0a025a59087618d8f7715/0f56794d-a6de-4b76-a320-a7be7acfb7ee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h2>Securely Managing Application Secrets Using AWS Secrets Manager and IAM Roles (No Access Keys)</h2>
<p><strong>Learn how real companies securely fetch secrets from AWS Secrets Manager without ever storing AWS access keys on the instance.</strong></p>
<p>In this hands-on guide you will:</p>
<ul>
<li><p>Launch an EC2 instance with an IAM role (no credentials)</p>
</li>
<li><p>Create a secret in AWS Secrets Manager</p>
</li>
<li><p>Fetch it using AWS CLI, Bash script, and Node.js</p>
</li>
<li><p>Perform a live secret update (zero code change, zero redeploy)</p>
</li>
</ul>
<p>This is exactly how production applications handle secrets in 2026.</p>
<hr />
<h3>Why This Matters</h3>
<p>Hardcoding secrets or storing AWS access keys on EC2 is one of the biggest security risks in the cloud.<br /><strong>The correct way:</strong> Use AWS Secrets Manager + IAM Roles for EC2.</p>
<ul>
<li><p>No access keys in code or environment variables</p>
</li>
<li><p>Automatic temporary credentials via IAM</p>
</li>
<li><p>Secrets encrypted at rest with AWS KMS</p>
</li>
<li><p>Live updates without redeploying your app</p>
</li>
</ul>
<hr />
<h3>Prerequisites</h3>
<ul>
<li><p>An AWS account</p>
</li>
<li><p>One Ubuntu EC2 instance (t3.medium or larger recommended)</p>
</li>
<li><p>IAM permissions to create roles and secrets</p>
</li>
</ul>
<hr />
<h3>Step 0: EC2 Setup (User Data)</h3>
<p>When launching your Ubuntu EC2 instance, add this <strong>User Data</strong> script so AWS CLI is ready:</p>
<pre><code class="language-shell">#!/bin/bash

sudo apt update -y
sudo snap install aws-cli --classic
</code></pre>
<hr />
<h3>1. Create IAM Role for EC2 (Console)</h3>
<ol>
<li><p>Go to <strong>IAM → Roles → Create role</strong></p>
</li>
<li><p>Trusted entity type: <strong>AWS service → EC2</strong></p>
</li>
<li><p>Attach permission policies: <strong>SecretsManagerReadWrite</strong> (managed policy)</p>
</li>
<li><p>Role name: <code>SecretsManagerEC2Role</code></p>
</li>
<li><p>Create the role</p>
</li>
</ol>
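<p>For reference, choosing <strong>AWS service → EC2</strong> in step 2 generates a trust policy behind the scenes. This is the standard EC2 trust relationship - a sketch of what the console creates for you:</p>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
</code></pre>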
<hr />
<h3>2. Attach the IAM Role to Your EC2 Instance</h3>
<ol>
<li><p>Go to <strong>EC2 → Instances</strong></p>
</li>
<li><p>Select your instance → <strong>Actions → Security → Modify IAM role</strong></p>
</li>
<li><p>Choose <code>SecretsManagerEC2Role</code></p>
</li>
<li><p>Save</p>
</li>
</ol>
<hr />
<h3>3. Verify IAM Role (No Credentials Needed!)</h3>
<p>SSH into your EC2 instance and run:</p>
<pre><code class="language-bash">aws sts get-caller-identity
</code></pre>
<p>You should see your Account ID and the <strong>Role ARN</strong> (<code>arn:aws:iam::...:role/SecretsManagerEC2Role</code>).</p>
<p><strong>Important point to remember:</strong><br />You never ran <code>aws configure</code>. The IAM role automatically provides temporary credentials.</p>
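<p>Under the hood, the CLI fetches short-lived credentials from the EC2 Instance Metadata Service (IMDSv2). You can inspect them yourself - this sketch only works <em>on the instance</em>, and assumes the role name from step 1:</p>
<pre><code class="language-bash"># Get an IMDSv2 session token, then read the role's temporary credentials
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/SecretsManagerEC2Role"
</code></pre>
<p>The response contains a temporary <code>AccessKeyId</code>, <code>SecretAccessKey</code>, and <code>Token</code> with an expiration time - these are what the CLI and SDK use automatically.</p>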
<hr />
<h3>4. Create a Secret in AWS Secrets Manager</h3>
<pre><code class="language-bash">aws secretsmanager create-secret \
  --name db-secret-1 \
  --secret-string '{"username":"admin","password":"admin123"}' \
  --region us-east-1
</code></pre>
<hr />
<h3>5. Fetch the Secret (CLI Demo)</h3>
<pre><code class="language-bash">aws secretsmanager get-secret-value \
  --secret-id db-secret-1 \
  --query SecretString \
  --output text \
  --region us-east-1
</code></pre>
<p>You will see:</p>
<pre><code class="language-json">{"username":"admin","password":"admin123"}
</code></pre>
<hr />
<h3>6. Bash Script Demo</h3>
<p>Create the script:</p>
<pre><code class="language-bash">vim app.sh
</code></pre>
<p>Paste:</p>
<pre><code class="language-bash">#!/bin/bash

SECRET=$(aws secretsmanager get-secret-value \
  --secret-id db-secret-1 \
  --query SecretString \
  --output text \
  --region us-east-1)

echo "Fetched Secret:"
echo "$SECRET"   # quote the variable so the JSON is printed exactly as fetched
</code></pre>
<p>Make it executable and run:</p>
<pre><code class="language-bash">chmod +x app.sh
./app.sh
</code></pre>
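<p>In a real script you usually need individual fields rather than the raw JSON. With <code>jq</code> installed (<code>sudo apt install -y jq</code>) you would run <code>jq -r .username</code>; a dependency-free (but fragile) fallback with <code>sed</code> looks like this sketch:</p>
<pre><code class="language-bash">SECRET='{"username":"admin","password":"admin123"}'
# Extract one flat field; this breaks on nested or escaped JSON - prefer jq in production
DB_USER=$(printf '%s' "$SECRET" | sed -n 's/.*"username":"\([^"]*\)".*/\1/p')
echo "user=$DB_USER"
</code></pre>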
<hr />
<h3>7. Node.js Application Demo (Real App Use Case)</h3>
<p>Install Node.js and the AWS SDK (this demo uses the v2 <code>aws-sdk</code> package for brevity; note that SDK v2 is in maintenance mode, so new projects should prefer v3's <code>@aws-sdk/client-secrets-manager</code>):</p>
<pre><code class="language-bash">sudo apt install -y nodejs npm
npm init -y
npm install aws-sdk
</code></pre>
<p>Create the app:</p>
<pre><code class="language-bash">vim app.js
</code></pre>
<p>Paste:</p>
<pre><code class="language-javascript">const AWS = require("aws-sdk");

const client = new AWS.SecretsManager({
  region: "us-east-1"   // ← your region
});

async function getSecret() {
  const data = await client
    .getSecretValue({ SecretId: "db-secret-1" })
    .promise();

  const secret = JSON.parse(data.SecretString);

  console.log("Username:", secret.username);
  console.log("Password:", secret.password); // demo only - never log real secrets
}

getSecret();
</code></pre>
<p>Run it:</p>
<pre><code class="language-bash">node app.js
</code></pre>
<hr />
<h3>8. Live Update Demo (The Best Part)</h3>
<p>Update the secret in AWS:</p>
<pre><code class="language-bash">aws secretsmanager update-secret \
  --secret-id db-secret-1 \
  --secret-string '{"username":"admin","password":"secure123"}' \
  --region us-east-1
</code></pre>
<p>Run the Node.js app <strong>again</strong> (no code change!):</p>
<pre><code class="language-bash">node app.js
</code></pre>
<p>You will now see the new password: <code>secure123</code></p>
<p>This is the payoff: secrets are dynamic, so rotating a password requires no code change and no redeploy.</p>
<hr />
<h3>9. Security Best Practices (What Real Companies Do)</h3>
<ul>
<li><p><strong>Never</strong> use <code>SecretsManagerReadWrite</code> in production (too broad)</p>
</li>
<li><p>Use least-privilege policy instead:</p>
</li>
</ul>
<pre><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:YOUR_ACCOUNT_ID:secret:db-secret-1*"
    }
  ]
}
</code></pre>
<ul>
<li><p>Secrets are automatically encrypted with AWS KMS</p>
</li>
<li><p>The instance's temporary credentials are short-lived and rotated automatically (via STS and the instance metadata service)</p>
</li>
</ul>
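<p>To apply a least-privilege policy like the one above from the CLI, one option is an inline role policy - a sketch that assumes the JSON is saved as <code>policy.json</code> and uses the role from earlier:</p>
<pre><code class="language-bash"># Attach the scoped policy as an inline policy on the instance role
aws iam put-role-policy \
  --role-name SecretsManagerEC2Role \
  --policy-name db-secret-read-only \
  --policy-document file://policy.json
</code></pre>
<p>After this, you can detach the broad <code>SecretsManagerReadWrite</code> managed policy from the role.</p>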
<hr />
<h3>Cleanup (Optional)</h3>
<ul>
<li><p>Delete the secret in AWS Secrets Manager console</p>
</li>
<li><p>Terminate the EC2 instance</p>
</li>
</ul>
<hr />
<h3>Final Takeaways</h3>
<ul>
<li><p>No AWS access keys stored anywhere</p>
</li>
<li><p>IAM roles provide secure, temporary credentials</p>
</li>
<li><p>Secrets are fully dynamic and encrypted</p>
</li>
<li><p>This is exactly how production workloads on EC2, ECS, EKS, and Lambda handle secrets</p>
</li>
</ul>
<p><strong>Watch the full video</strong> to see every step live: <a href="https://youtu.be/wHunyApig30">YouTube Video</a></p>
]]></content:encoded></item><item><title><![CDATA[My Journey to Becoming an AWS Community Builder (Containers Category)]]></title><description><![CDATA[Introduction
Woke up to an exciting email from AWS - I’ve been selected as an AWS Community Builder in the Containers category.
This moment feels incredibly special for me, especially because my appli]]></description><link>https://blog.amitabh.cloud/my-journey-to-becoming-an-aws-community-builder-containers-category</link><guid isPermaLink="true">https://blog.amitabh.cloud/my-journey-to-becoming-an-aws-community-builder-containers-category</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[containers]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS Community Builder]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Thu, 05 Mar 2026 09:45:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66c0a025a59087618d8f7715/7cd92e87-8c9f-4e8d-ad7e-cdf200b96b6d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Introduction</h2>
<p>Woke up to an exciting email from AWS - I’ve been selected as an <strong>AWS Community Builder in the Containers category</strong>.</p>
<p>This moment feels incredibly special for me, especially because my application was <strong>rejected last year</strong>. Getting selected this year, even as a <strong>college student</strong>, makes the achievement even more meaningful.</p>
<p>In this blog, I want to share:</p>
<ul>
<li><p>What the AWS Community Builders program is</p>
</li>
<li><p>My journey and experience applying</p>
</li>
<li><p>Tips for anyone who wants to apply in the future</p>
</li>
</ul>
<hr />
<h2>What is the AWS Community Builders Program?</h2>
<p>The AWS Community Builders program is an initiative by <strong>Amazon Web Services (AWS)</strong> that recognizes individuals who actively share knowledge about AWS with the community.</p>
<p>Community Builders contribute by creating technical content such as:</p>
<ul>
<li><p>Blog posts</p>
</li>
<li><p>Tutorials</p>
</li>
<li><p>Videos</p>
</li>
<li><p>Talks</p>
</li>
<li><p>Workshops</p>
</li>
</ul>
<p>The program includes multiple categories such as:</p>
<ul>
<li><p>Containers</p>
</li>
<li><p>Serverless</p>
</li>
<li><p>DevTools</p>
</li>
<li><p>Machine Learning</p>
</li>
<li><p>Security</p>
</li>
<li><p>Data</p>
</li>
</ul>
<p>I was selected in the <strong>Containers category</strong>, which focuses on cloud-native technologies and container-based architectures.</p>
<hr />
<h2>My Journey</h2>
<p>My journey into cloud and DevOps started during my college years when I began exploring technologies like:</p>
<ul>
<li><p>Linux</p>
</li>
<li><p>Docker</p>
</li>
<li><p>Kubernetes</p>
</li>
<li><p>CI/CD</p>
</li>
<li><p>Cloud platforms</p>
</li>
</ul>
<p>Currently, I’m also working as a <strong>DevOps Engineer Intern</strong>, where I get to work with tools like Kubernetes, CI/CD pipelines, and cloud infrastructure.</p>
<p>Last year, I applied to the AWS Community Builders program but <strong>my application was rejected</strong>. Instead of getting discouraged, I focused on improving my skills and continuing to share my learning with the community.</p>
<p>This year, I applied again - and waking up to the acceptance email was an unforgettable moment.</p>
<hr />
<h2>Why This Program Matters</h2>
<p>Being part of the AWS Community Builders program provides several opportunities, including:</p>
<ul>
<li><p>Connecting with AWS product teams</p>
</li>
<li><p>Early access to upcoming AWS features</p>
</li>
<li><p>Joining a global network of cloud professionals</p>
</li>
<li><p>Learning and collaborating with other builders</p>
</li>
</ul>
<p>It also encourages members to continue sharing knowledge and helping others learn cloud technologies.</p>
<hr />
<h2>Tips for Future Applicants</h2>
<p>If you are interested in becoming an AWS Community Builder, here are a few tips:</p>
<h3>1. Share Your Knowledge</h3>
<p>Create technical content such as blog posts, tutorials, or videos related to AWS.</p>
<h3>2. Build Real Projects</h3>
<p>Hands-on projects using AWS services help demonstrate practical experience.</p>
<h3>3. Stay Consistent</h3>
<p>Consistency in learning and sharing your knowledge with the community is important.</p>
<h3>4. Don’t Give Up</h3>
<p>If your application is rejected, treat it as motivation to improve and try again.</p>
<hr />
<h2>Final Thoughts</h2>
<p>Becoming an AWS Community Builder is an exciting milestone in my cloud journey. I’m looking forward to contributing to the community by sharing knowledge around:</p>
<ul>
<li><p>Cloud-native technologies</p>
</li>
<li><p>Containers</p>
</li>
<li><p>Kubernetes</p>
</li>
<li><p>DevOps practices on AWS</p>
</li>
</ul>
<p>A big thank you to <strong>Amazon Web Services</strong>, <strong>Paxton L. Hall</strong>, and <strong>Ridhima Kapoor</strong> for this opportunity.</p>
<p>I’m excited for the journey ahead.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Private IP vs Public IP vs Elastic IP]]></title><description><![CDATA[When you start working with AWS - especially with EC2 instances - one of the most confusing topics is IP addressing. You launch a server, and suddenly you see Private IP, Public IP, and sometimes something called an Elastic IP.
Understanding the diff...]]></description><link>https://blog.amitabh.cloud/aws-private-ip-vs-public-ip-vs-elastic-ip</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-private-ip-vs-public-ip-vs-elastic-ip</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ip address]]></category><category><![CDATA[AWS Private IP]]></category><category><![CDATA[aws elastic ip]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Fri, 13 Feb 2026 16:16:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770999357075/adbec799-d9d1-4d42-bdba-4dbed6f8f8f9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you start working with AWS - especially with EC2 instances - one of the most confusing topics is IP addressing. You launch a server, and suddenly you see <strong>Private IP</strong>, <strong>Public IP</strong>, and sometimes something called an <strong>Elastic IP</strong>.</p>
<p>Understanding the difference between these three is critical if you’re preparing for AWS certifications, working in DevOps, or building production infrastructure.</p>
<p>Private IP is used for internal communication inside a VPC, Public IP allows internet access, and Elastic IP is a static public IP you control and can remap.</p>
<p>Let’s break everything down simply and practically.</p>
<hr />
<h2 id="heading-1-what-is-a-private-ip-in-aws">1️⃣ What is a Private IP in AWS?</h2>
<p>A <strong>Private IP address</strong> is assigned to an EC2 instance within a VPC (Virtual Private Cloud). It is used for <strong>internal communication</strong> between resources inside the AWS network.</p>
<p>Private IPs:</p>
<ul>
<li><p>Are assigned from your VPC CIDR block (e.g., 10.0.0.0/16)</p>
</li>
<li><p>Cannot be accessed directly from the internet</p>
</li>
<li><p>Remain with the instance for its lifetime</p>
</li>
<li><p>Are used for backend communication (e.g., app server → database)</p>
</li>
</ul>
<h3 id="heading-example">Example</h3>
<p>If you launch two EC2 instances in the same VPC:</p>
<ul>
<li><p>Instance A: 10.0.1.10</p>
</li>
<li><p>Instance B: 10.0.1.20</p>
</li>
</ul>
<p>They can communicate using these private IPs without going over the internet.</p>
<h3 id="heading-when-to-use-private-ip">When to Use Private IP</h3>
<ul>
<li><p>Connecting application servers to databases</p>
</li>
<li><p>Internal microservices communication</p>
</li>
<li><p>Backend-only systems</p>
</li>
<li><p>Secure internal networking</p>
</li>
</ul>
<p>In real-world production setups, databases like RDS are accessed only via private IPs for security.</p>
<hr />
<h2 id="heading-2-what-is-a-public-ip-in-aws">2️⃣ What is a Public IP in AWS?</h2>
<p>A <strong>Public IP address</strong> allows your EC2 instance to communicate with the internet.</p>
<p>Public IPs:</p>
<ul>
<li><p>Are assigned automatically (if enabled)</p>
</li>
<li><p>Change when you stop and start the instance</p>
</li>
<li><p>Allow inbound/outbound internet traffic</p>
</li>
<li><p>Are mapped to the instance’s private IP</p>
</li>
</ul>
<p>If your EC2 instance is in a public subnet and has an Internet Gateway attached to the VPC, it can receive a public IP.</p>
<h3 id="heading-example-1">Example</h3>
<p>You launch a web server:</p>
<ul>
<li><p>Private IP: 10.0.1.15</p>
</li>
<li><p>Public IP: 3.110.45.123</p>
</li>
</ul>
<p>Users access your website via the public IP.</p>
<h3 id="heading-important-limitation">Important Limitation</h3>
<p>If you:</p>
<ul>
<li><p>Stop the instance</p>
</li>
<li><p>Start it again</p>
</li>
</ul>
<p>The public IP changes.</p>
<p>This is a big problem for production systems.</p>
<hr />
<h2 id="heading-3-what-is-an-elastic-ip-eip">3️⃣ What is an Elastic IP (EIP)?</h2>
<p>An <strong>Elastic IP</strong> is a <strong>static public IP address</strong> that you allocate manually and attach to your EC2 instance.</p>
<p>Unlike regular public IPs:</p>
<ul>
<li><p>It does NOT change when you stop/start the instance</p>
</li>
<li><p>It belongs to your AWS account</p>
</li>
<li><p>You can remap it to another instance</p>
</li>
</ul>
<p>Elastic IP solves the “changing public IP” problem.</p>
<h3 id="heading-why-elastic">Why “Elastic”?</h3>
<p>Because you can:</p>
<ul>
<li><p>Detach it from one instance</p>
</li>
<li><p>Attach it to another instance instantly</p>
</li>
</ul>
<p>This is useful in:</p>
<ul>
<li><p>Disaster recovery</p>
</li>
<li><p>Failover setups</p>
</li>
<li><p>Production environments</p>
</li>
</ul>
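<p>From the CLI, allocating and attaching an Elastic IP takes two commands - a sketch where the instance ID is a placeholder and AWS credentials are assumed:</p>
<pre><code class="language-bash"># Allocate a new Elastic IP and capture its allocation ID
ALLOC_ID=$(aws ec2 allocate-address --query AllocationId --output text)

# Associate it with your instance (replace the placeholder instance ID)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id "$ALLOC_ID"
</code></pre>
<p>To move the address during a failover, run <code>associate-address</code> again with a different instance ID - the remap is nearly instant.</p>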
<hr />
<h2 id="heading-quick-comparison-table">Quick Comparison Table</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Feature</th><th>Private IP</th><th>Public IP</th><th>Elastic IP</th></tr>
</thead>
<tbody>
<tr>
<td>Internet Accessible</td><td>❌ No</td><td>✅ Yes</td><td>✅ Yes</td></tr>
<tr>
<td>Static</td><td>✅ Yes</td><td>❌ No</td><td>✅ Yes</td></tr>
<tr>
<td>Used For</td><td>Internal communication</td><td>Basic internet access</td><td>Production-grade public access</td></tr>
<tr>
<td>Changes on Restart</td><td>❌ No</td><td>✅ Yes</td><td>❌ No</td></tr>
<tr>
<td>Extra Cost</td><td>❌ No</td><td>⚠️ Yes (public IPv4 charge since Feb 2024)</td><td>⚠️ Yes (public IPv4 charge, also billed when idle)</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-real-world-use-case-example">Real-World Use Case Example</h2>
<p>Let’s say you're deploying a production application:</p>
<ul>
<li><p>Load Balancer → Public access</p>
</li>
<li><p>EC2 App Servers → Private IP only</p>
</li>
<li><p>RDS Database → Private IP only</p>
</li>
</ul>
<p>In some cases:</p>
<ul>
<li><p>You attach an Elastic IP to a Bastion Host</p>
</li>
<li><p>Or attach an Elastic IP to a production EC2 server</p>
</li>
</ul>
<p>This setup improves both security and reliability.</p>
<hr />
<h2 id="heading-cost-considerations-important">Cost Considerations (Important)</h2>
<p>Historically, Elastic IPs were free while attached to a running instance. Since February 2024, however, AWS charges a small hourly fee for <strong>every public IPv4 address</strong> - including in-use Elastic IPs and auto-assigned public IPs.</p>
<p>You pay for an Elastic IP when:</p>
<ul>
<li><p>You allocate it but leave it unattached to any instance</p>
</li>
<li><p>It is attached and in use (the standard public IPv4 charge applies)</p>
</li>
</ul>
<p>Always release unused Elastic IPs so you aren’t billed for addresses you don’t need.</p>
<hr />
<h2 id="heading-security-perspective">Security Perspective</h2>
<p>Best practice in AWS architecture:</p>
<ul>
<li><p>❌ Never expose databases with a public IP</p>
</li>
<li><p>❌ Avoid unnecessary public IP assignments</p>
</li>
<li><p>✅ Use private subnets for backend services</p>
</li>
<li><p>✅ Use Elastic IP only when you truly need static public access</p>
</li>
</ul>
<p>Security Groups and NACLs still control traffic regardless of IP type.</p>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Understanding Private, Public, and Elastic IPs is foundational for AWS networking.</p>
<p>If you remember just one thing:</p>
<ul>
<li><p>Private IP → Internal communication</p>
</li>
<li><p>Public IP → Temporary internet access</p>
</li>
<li><p>Elastic IP → Permanent public identity</p>
</li>
</ul>
<p>Once you master this, VPC architecture becomes much easier to design and troubleshoot.</p>
<hr />
<p>Happy Learning 🚀</p>
]]></content:encoded></item><item><title><![CDATA[AWS STS: Secure Temporary Credentials]]></title><description><![CDATA[The "Hall Pass" of the Cloud: Understanding AWS STS
If you’ve spent any time in AWS, you’ve seen the term STS (Security Token Service). It sounds like a boring background process, but it is actually the secret sauce that keeps professional AWS enviro...]]></description><link>https://blog.amitabh.cloud/aws-sts-secure-temporary-credentials</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-sts-secure-temporary-credentials</guid><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[sts]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[tech ]]></category><category><![CDATA[technology]]></category><category><![CDATA[Session]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Tue, 03 Feb 2026 12:53:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770123077745/13e74f29-e100-45b2-af06-4b92632da087.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-the-hall-pass-of-the-cloud-understanding-aws-sts"><strong>The "Hall Pass" of the Cloud: Understanding AWS STS</strong></h1>
<p>If you’ve spent any time in AWS, you’ve seen the term <strong>STS (Security Token Service)</strong>. It sounds like a boring background process, but it is actually the secret sauce that keeps professional AWS environments secure.</p>
<p>Many people get confused: <em>"If I already have my IAM Access Keys, why do I need STS?"</em></p>
<p>The answer lies in the difference between <strong>who you are</strong> and <strong>what you are allowed to do right now.</strong></p>
<hr />
<h3 id="heading-the-analogy-the-atm-card-vs-the-transaction"><strong>The Analogy: The ATM Card vs. The Transaction</strong></h3>
<p>Think of your <strong>IAM Access Keys</strong> (Long-term) as your <strong>ATM Card</strong>. You keep it in your wallet for years. It represents your identity at the bank.</p>
<p><strong>AWS STS</strong> is the <strong>Transaction Receipt/PIN check</strong>. Even if you have the card, the bank doesn’t let you walk into the vault. Instead, they give you a temporary "session" to perform one specific task.</p>
<p>In AWS, your long-term keys are used to ask STS for a <strong>Temporary Session</strong>. This session is like a "Hall Pass" - it has a countdown timer (usually 1 hour). When the timer hits zero, the pass becomes useless paper.</p>
<hr />
<h3 id="heading-why-temporary-is-better-than-permanent"><strong>Why "Temporary" is Better Than "Permanent"</strong></h3>
<p>You might think, <em>"If my session expires, I just use my keys to get a new one. What’s the point?"</em> The point is <strong>Force-Multiplying Security</strong>:</p>
<ol>
<li><p><strong>The MFA Checkpoint:</strong> You can set a rule that says: <em>"To get an STS session, you must provide an MFA code."</em> Now, if a hacker steals your permanent keys, they are stuck. They can’t start a session because they don't have your phone.</p>
</li>
<li><p><strong>The "Kill Switch":</strong> If an admin suspects your account is compromised, they can revoke all active sessions in one click. Even if you have the permanent keys, STS will refuse to issue you a new session.</p>
</li>
<li><p><strong>Cross-Account Access:</strong> STS allows you to jump from a "Dev" account to a "Prod" account without needing a separate username and password for both. You "Assume a Role," do the work, and the access automatically vanishes after an hour.</p>
</li>
<li><p><strong>Least Privilege:</strong> Your permanent user can have <strong>zero</strong> permissions. You only gain power when you request an STS session for a specific <strong>IAM Role</strong>. If your laptop is stolen while you're at lunch, the thief is left with a "powerless" user once the session expires.</p>
</li>
</ol>
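<p>Point 3 (cross-account access) looks like this in practice - a sketch where the account ID and role name are placeholders:</p>
<pre><code class="language-bash"># Ask STS for a one-hour session in the target role; the response contains
# temporary AccessKeyId / SecretAccessKey / SessionToken values
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ProdAccessRole \
  --role-session-name my-temporary-session \
  --duration-seconds 3600
</code></pre>
<p>Export the three returned values as environment variables and every subsequent CLI call runs as the assumed role until the session expires.</p>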
<hr />
<h3 id="heading-the-bottom-line"><strong>The Bottom Line</strong></h3>
<p>STS turns your static, dangerous permanent keys into dynamic, short-lived permissions. It ensures that every hour (or however long the admin defines), AWS stops and asks: <strong>"Are you still who you say you are? And are you still allowed to do this?"</strong></p>
<p>It’s not an inconvenience; it’s a heartbeat check for your cloud security.</p>
<p>Happy Learning!<br />Amitabh Soni</p>
]]></content:encoded></item><item><title><![CDATA[AWS IAM Policy Simulator]]></title><description><![CDATA[Test Permissions Without Risking Your AWS Account
When working with AWS IAM, one of the most common questions engineers face is:

“Is this permission enough?”“Did I accidentally give more access than required?”

Testing IAM permissions directly in a ...]]></description><link>https://blog.amitabh.cloud/aws-iam-policy-simulator</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-iam-policy-simulator</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[AWS IAM Policy]]></category><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Wed, 14 Jan 2026 14:21:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768400741409/97f29f01-e0bd-4e1c-87d6-9c5328550ec1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-test-permissions-without-risking-your-aws-account">Test Permissions Without Risking Your AWS Account</h1>
<p>When working with AWS IAM, one of the most common questions engineers face is:</p>
<blockquote>
<p><em>“Is this permission enough?”</em><br /><em>“Did I accidentally give more access than required?”</em></p>
</blockquote>
<p>Testing IAM permissions directly in a real AWS account can be risky, especially in production environments. Creating users, attaching policies, and performing actions just to verify access can lead to unintended security issues.</p>
<p>This is where <strong>AWS IAM Policy Simulator</strong> becomes extremely useful.</p>
<hr />
<h2 id="heading-what-is-aws-iam-policy-simulator">What Is AWS IAM Policy Simulator?</h2>
<p>AWS IAM Policy Simulator is a built-in AWS tool that allows you to <strong>test and validate IAM permissions without actually creating or modifying resources</strong>.</p>
<p>It simulates how AWS evaluates policies and tells you whether a specific action would be <strong>allowed or denied</strong> for a given IAM user, role, or policy.</p>
<p>In simple terms, it answers:</p>
<ul>
<li><p><em>Can this user perform this action?</em></p>
</li>
<li><p><em>Which policy allows or denies it?</em></p>
</li>
</ul>
<p>All without touching real infrastructure.</p>
<hr />
<h2 id="heading-why-you-should-use-iam-policy-simulator">Why You Should Use IAM Policy Simulator</h2>
<h3 id="heading-1-avoid-testing-in-real-aws-accounts">1. Avoid Testing in Real AWS Accounts</h3>
<p>Testing permissions manually often means:</p>
<ul>
<li><p>Creating resources</p>
</li>
<li><p>Triggering API calls</p>
</li>
<li><p>Risking security or unexpected costs</p>
</li>
</ul>
<p>The Policy Simulator removes this risk entirely.</p>
<hr />
<h3 id="heading-2-validate-least-privilege-access">2. Validate Least Privilege Access</h3>
<p>IAM best practice recommends granting <strong>only the permissions required</strong>.</p>
<p>With the simulator, you can:</p>
<ul>
<li><p>Check if permissions are insufficient</p>
</li>
<li><p>Detect over-permissioned policies</p>
</li>
<li><p>Fine-tune policies before deployment</p>
</li>
</ul>
<hr />
<h3 id="heading-3-debug-permission-issues-faster">3. Debug Permission Issues Faster</h3>
<p>Instead of guessing why an action is failing:</p>
<ul>
<li><p>Simulate the action</p>
</li>
<li><p>Identify the exact policy causing the denial</p>
</li>
<li><p>Fix the issue quickly</p>
</li>
</ul>
<p>This is especially helpful in complex environments with multiple attached policies.</p>
<hr />
<h2 id="heading-how-iam-policy-simulator-works">How IAM Policy Simulator Works</h2>
<p>At a high level, the simulator follows these steps:</p>
<ol>
<li><p>Select an IAM user, role, or policy</p>
</li>
<li><p>Choose AWS services and actions (for example: <code>s3:PutObject</code>)</p>
</li>
<li><p>Simulate the request</p>
</li>
<li><p>Review the result (Allowed or Denied)</p>
</li>
<li><p>See which policy affected the decision</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768400311594/daf6ac08-9afb-47d2-8319-ef6a7106b80f.png" alt class="image--center mx-auto" /></p>
<p>AWS evaluates permissions exactly as it would during a real API call - just without executing it.</p>
<hr />
<h2 id="heading-how-to-access-iam-policy-simulator">How to Access IAM Policy Simulator</h2>
<p>You can access the IAM Policy Simulator using the link below:</p>
<p><a target="_blank" href="https://policysim.aws.amazon.com/home/index.jsp">https://policysim.aws.amazon.com/home/index.jsp</a></p>
<p>Steps:</p>
<ol>
<li><p>Log in to your AWS account</p>
</li>
<li><p>Open the IAM Policy Simulator</p>
</li>
<li><p>Select a user, role, or group</p>
</li>
<li><p>Choose actions to simulate</p>
</li>
<li><p>Review the results</p>
</li>
</ol>
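<p>The same simulation is also available from the CLI via <code>simulate-principal-policy</code> - a sketch where the user ARN is a placeholder:</p>
<pre><code class="language-bash"># Returns an evaluation decision (allowed / implicitDeny / explicitDeny)
# for each action, without executing any real API call
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/dev-user \
  --action-names s3:PutObject s3:GetObject
</code></pre>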
<hr />
<h2 id="heading-real-world-use-cases">Real-World Use Cases</h2>
<h3 id="heading-use-case-1-before-assigning-permissions">Use Case 1: Before Assigning Permissions</h3>
<p>Before attaching a policy to a user or role:</p>
<ul>
<li><p>Simulate required actions</p>
</li>
<li><p>Confirm permissions are sufficient</p>
</li>
<li><p>Avoid granting unnecessary access</p>
</li>
</ul>
<hr />
<h3 id="heading-use-case-2-troubleshooting-access-denied-errors">Use Case 2: Troubleshooting Access Denied Errors</h3>
<p>When an application fails due to permission issues:</p>
<ul>
<li><p>Simulate the failing action</p>
</li>
<li><p>Identify missing permissions</p>
</li>
<li><p>Update policies confidently</p>
</li>
</ul>
<hr />
<h3 id="heading-use-case-3-security-reviews-and-audits">Use Case 3: Security Reviews and Audits</h3>
<p>During audits:</p>
<ul>
<li><p>Validate access paths</p>
</li>
<li><p>Ensure least privilege</p>
</li>
<li><p>Demonstrate compliance without modifying infrastructure</p>
</li>
</ul>
<hr />
<h2 id="heading-limitations-to-keep-in-mind">Limitations to Keep in Mind</h2>
<p>While powerful, the IAM Policy Simulator:</p>
<ul>
<li><p>Does not simulate resource-based policies perfectly in all scenarios</p>
</li>
<li><p>Does not execute real AWS operations</p>
</li>
<li><p>Should be used alongside logging tools like AWS CloudTrail</p>
</li>
</ul>
<p>It is best used as a <strong>pre-deployment and debugging tool</strong>, not a replacement for monitoring.</p>
<hr />
<h2 id="heading-best-practices-when-using-iam-policy-simulator">Best Practices When Using IAM Policy Simulator</h2>
<ul>
<li><p>Always simulate permissions before production deployment</p>
</li>
<li><p>Use it to refine least-privilege policies</p>
</li>
<li><p>Combine it with IAM Access Analyzer and CloudTrail</p>
</li>
<li><p>Regularly review policies as services evolve</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>AWS IAM Policy Simulator is an essential tool for anyone working with AWS security and access management.</p>
<p>It allows you to:</p>
<ul>
<li><p>Test permissions safely</p>
</li>
<li><p>Reduce security risks</p>
</li>
<li><p>Debug faster</p>
</li>
<li><p>Follow IAM best practices</p>
</li>
</ul>
<p>If you’re working with IAM and not using the Policy Simulator yet, you’re missing a powerful safety net.</p>
<hr />
<p><strong>Happy Learning,</strong><br /><strong>Amitabh Soni</strong></p>
]]></content:encoded></item><item><title><![CDATA[Setting Up Self-hosted Runners on AWS EC2 (Ubuntu)]]></title><description><![CDATA[Introduction
Self-hosted runners provide greater flexibility and customization for your GitHub Actions workflows. This guide will walk you through the process of setting up and managing your own runners on AWS EC2 using Ubuntu.
Why Use Self-hosted Ru...]]></description><link>https://blog.amitabh.cloud/setting-up-self-hosted-runners-on-aws-ec2-ubuntu</link><guid isPermaLink="true">https://blog.amitabh.cloud/setting-up-self-hosted-runners-on-aws-ec2-ubuntu</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[#github_actions_runners]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Wed, 23 Jul 2025 13:44:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753278157919/d7c59fea-584a-4c07-b12c-d95d58f5cefe.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Self-hosted runners provide greater flexibility and customization for your GitHub Actions workflows. This guide will walk you through the process of setting up and managing your own runners on AWS EC2 using Ubuntu.</p>
<h2 id="heading-why-use-self-hosted-runners-on-ec2">Why Use Self-hosted Runners on EC2?</h2>
<p><strong>Benefits of EC2-based Runners</strong>:</p>
<ul>
<li><p><strong>Cost Control</strong>: Optimize instance types for your specific workloads</p>
</li>
<li><p><strong>Network Access</strong>: Direct access to your AWS resources and VPCs</p>
</li>
<li><p><strong>Customization</strong>: Install specific software and dependencies</p>
</li>
<li><p><strong>Scalability</strong>: Easily scale up or down based on workflow demands</p>
</li>
<li><p><strong>Persistence</strong>: Maintain state between workflow runs if needed</p>
</li>
<li><p><strong>Resource Control</strong>: Choose instance types with the CPU/RAM/storage you need</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin, make sure you have:</p>
<ul>
<li><p>An AWS account with permissions to create EC2 instances</p>
</li>
<li><p>A GitHub repository where you want to use self-hosted runners</p>
</li>
<li><p>Basic knowledge of AWS EC2 and SSH</p>
</li>
<li><p>Admin access to the GitHub repository or organization</p>
</li>
</ul>
<h2 id="heading-step-1-launch-an-ec2-instance">Step 1: Launch an EC2 Instance</h2>
<h3 id="heading-1-log-in-to-aws-console-and-launch-an-instance">1. Log in to AWS Console and Launch an Instance</h3>
<ol>
<li><p>Go to the AWS Management Console</p>
</li>
<li><p>Navigate to EC2 Dashboard</p>
</li>
<li><p>Click "Launch Instance"</p>
</li>
</ol>
<h3 id="heading-2-choose-an-ubuntu-ami">2. Choose an Ubuntu AMI</h3>
<ol>
<li><p>Select "Ubuntu Server 22.04 LTS (HVM)"</p>
</li>
<li><p>This provides a stable, long-term supported environment for your runner</p>
</li>
</ol>
<h3 id="heading-3-select-instance-type">3. Select Instance Type</h3>
<ol>
<li><p>For basic workflows: <code>t3.small</code> (2 vCPU, 2 GB RAM)</p>
</li>
<li><p>For moderate workloads: <code>t3.medium</code> (2 vCPU, 4 GB RAM)</p>
</li>
<li><p>For resource-intensive tasks: <code>m5.large</code> or higher</p>
</li>
</ol>
<h3 id="heading-4-configure-instance-details">4. Configure Instance Details</h3>
<ol>
<li><p>Network: Select your VPC</p>
</li>
<li><p>Subnet: Choose a subnet with internet access</p>
</li>
<li><p>Auto-assign Public IP: Enable</p>
</li>
<li><p>IAM role: Attach a role with necessary permissions if your workflows need AWS services</p>
</li>
</ol>
<h3 id="heading-5-add-storage">5. Add Storage</h3>
<ol>
<li><p>Root volume: At least 20 GB (more if you'll be building large projects)</p>
</li>
<li><p>Volume type: gp3 (general purpose SSD)</p>
</li>
</ol>
<h3 id="heading-6-configure-security-group">6. Configure Security Group</h3>
<p>Create a security group with these rules:</p>
<ul>
<li><p>SSH (port 22): restrict inbound access to your IP address</p>
</li>
<li><p>HTTPS (port 443): the runner needs outbound HTTPS to reach GitHub; security groups allow all outbound traffic by default, so no inbound 443 rule is required for the runner itself</p>
</li>
<li><p>HTTP (port 80) / HTTPS (port 443) inbound: open only if the instance will also serve web traffic</p>
</li>
</ul>
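<p>If you prefer scripting the setup, a minimal version of this security group can be sketched with the AWS CLI. The group name, VPC ID, and source CIDR below are placeholders; security groups allow all outbound traffic by default, which covers the runner's HTTPS connection to GitHub:</p>
<pre><code class="lang-bash"># Create the security group (placeholder VPC ID).
aws ec2 create-security-group \
  --group-name gh-runner-sg \
  --description "GitHub Actions self-hosted runner" \
  --vpc-id vpc-0123456789abcdef0

# Allow inbound SSH only from your own IP (placeholder CIDR).
aws ec2 authorize-security-group-ingress \
  --group-name gh-runner-sg \
  --protocol tcp --port 22 \
  --cidr 203.0.113.10/32
</code></pre>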
<h3 id="heading-7-launch-instance-and-createselect-key-pair">7. Launch Instance and Create/Select Key Pair</h3>
<ol>
<li><p>Launch the instance</p>
</li>
<li><p>Create or select an existing key pair</p>
</li>
<li><p>Download the key pair if creating new</p>
</li>
<li><p>Set proper permissions: <code>chmod 400 your-key.pem</code></p>
</li>
</ol>
<h2 id="heading-step-2-connect-to-your-ec2-instance">Step 2: Connect to Your EC2 Instance</h2>
<pre><code class="lang-bash">ssh -i your-key.pem ubuntu@your-ec2-public-dns
</code></pre>
<h2 id="heading-step-3-prepare-the-ec2-instance">Step 3: Prepare the EC2 Instance</h2>
<h3 id="heading-1-update-system-packages">1. Update System Packages</h3>
<pre><code class="lang-bash">sudo apt update
sudo apt upgrade -y
</code></pre>
<h3 id="heading-2-install-required-dependencies">2. Install Required Dependencies</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Install basic tools</span>
sudo apt install -y curl wget git jq build-essential

<span class="hljs-comment"># Install Docker (optional, if you need Docker for your workflows)</span>
sudo apt install -y docker.io
sudo systemctl <span class="hljs-built_in">enable</span> docker
sudo systemctl start docker
sudo usermod -aG docker ubuntu
</code></pre>
<h3 id="heading-3-create-a-dedicated-user-for-the-runner-optional-but-recommended">3. Create a Dedicated User for the Runner (Optional but Recommended)</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Create a user</span>
sudo adduser github-runner

<span class="hljs-comment"># Add to necessary groups</span>
sudo usermod -aG docker github-runner

<span class="hljs-comment"># Switch to the new user</span>
sudo su - github-runner
</code></pre>
<h2 id="heading-step-4-set-up-the-github-runner">Step 4: Set Up the GitHub Runner</h2>
<h3 id="heading-1-get-runner-registration-token">1. Get Runner Registration Token</h3>
<h4 id="heading-for-repository-level-runner">For Repository-level Runner:</h4>
<ol>
<li><p>Go to your GitHub repository</p>
</li>
<li><p>Navigate to Settings → Actions → Runners</p>
</li>
<li><p>Click "New self-hosted runner"</p>
</li>
<li><p>Select <code>Linux</code> as the runner image</p>
</li>
<li><p>Select <code>x64</code> as the architecture</p>
</li>
<li><p>Open a terminal on your EC2 instance and run the commands GitHub provides, step by step</p>
</li>
</ol>
<p>During configuration you'll be prompted for the following; press Enter at each prompt to accept the default:</p>
<ol>
<li><p>Enter a runner group name (default)</p>
</li>
<li><p>Enter a runner name (default is the hostname)</p>
</li>
<li><p>Enter additional labels (e.g., <code>ubuntu-ec2</code>, <code>production</code>)</p>
</li>
<li><p>Enter a work folder (default is <code>_work</code>)</p>
</li>
</ol>
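<p>The commands GitHub shows you follow this general pattern. The version and registration token below are placeholders: always copy the exact commands from the GitHub UI, because the registration token is short-lived:</p>
<pre><code class="lang-bash"># Placeholder version: use the release shown in the GitHub UI.
RUNNER_VERSION="2.3xx.x"

# Download and extract the runner agent.
mkdir actions-runner &amp;&amp; cd actions-runner
curl -o actions-runner.tar.gz -L \
  "https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz"
tar xzf actions-runner.tar.gz

# Register the runner (placeholder repository URL and token).
./config.sh --url "https://github.com/OWNER/REPO" --token "${RUNNER_TOKEN}"

# Start listening for jobs in the foreground.
./run.sh
</code></pre>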
<h2 id="heading-step-5-verify-runner-registration">Step 5: Verify Runner Registration</h2>
<ol>
<li><p>Go back to your repository settings</p>
</li>
<li><p>Navigate to Settings → Actions → Runners</p>
</li>
<li><p>You should see your new runner listed as "Idle"</p>
</li>
<li><p>The status will change to "Active" when running a job</p>
</li>
</ol>
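<p>Running <code>./run.sh</code> ties the runner to your terminal session. For production use, install it as a systemd service with the <code>svc.sh</code> helper that ships in the runner's installation directory, so it survives reboots and SSH disconnects:</p>
<pre><code class="lang-bash"># Run from the runner's installation directory (e.g. ~/actions-runner).
sudo ./svc.sh install   # registers a systemd unit for the runner
sudo ./svc.sh start     # starts the service now
sudo ./svc.sh status    # confirms it is running

# To remove it later:
# sudo ./svc.sh stop &amp;&amp; sudo ./svc.sh uninstall
</code></pre>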
<h2 id="heading-step-6-create-a-hello-world-workflow-to-test-your-self-hosted-runner">Step 6: Create a Hello World Workflow to Test Your Self-hosted Runner</h2>
<p>Let's create a simple "Hello World" workflow to verify your self-hosted runner is working correctly.</p>
<p>Create a <code>.github/workflows/hello-world-runner.yml</code> file in your repository:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># ===================================================</span>
<span class="hljs-comment"># HELLO WORLD SELF-HOSTED RUNNER TEST - MODULE 3</span>
<span class="hljs-comment"># ===================================================</span>
<span class="hljs-comment"># LEARNING OBJECTIVES:</span>
<span class="hljs-comment"># - Understand how to target self-hosted runners</span>
<span class="hljs-comment"># - Learn how to verify runner functionality</span>
<span class="hljs-comment"># - See basic environment information retrieval</span>
<span class="hljs-comment"># - Experience the difference between GitHub-hosted and self-hosted runners</span>
<span class="hljs-comment"># ===================================================</span>

<span class="hljs-attr">name:</span> <span class="hljs-string">Hello</span> <span class="hljs-string">World</span> <span class="hljs-string">Self-Hosted</span> <span class="hljs-string">Runner</span>

<span class="hljs-comment"># LEARNING POINT: Multiple trigger types for flexibility</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-comment"># Manual trigger from GitHub UI</span>
  <span class="hljs-attr">workflow_dispatch:</span>

  <span class="hljs-comment"># Automatic trigger on push to main branch</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">main</span> ]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-comment"># LEARNING POINT: Simple job targeting self-hosted runner</span>
  <span class="hljs-attr">hello-world:</span>
    <span class="hljs-comment"># LEARNING POINT: This is how you specify a self-hosted runner</span>
    <span class="hljs-comment"># Instead of ubuntu-latest, windows-latest, etc.</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">self-hosted</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-comment"># LEARNING POINT: Standard checkout action works on self-hosted runners too</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">code</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-comment"># LEARNING POINT: Basic hello world with runner information</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Hello</span> <span class="hljs-string">from</span> <span class="hljs-string">self-hosted</span> <span class="hljs-string">runner</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "========================================"
          echo "👋 Hello, World! I'm a self-hosted runner!"
          echo "========================================"
          echo "🏷️ Runner name: ${{ runner.name }}"
          echo "💻 Runner OS: ${{ runner.os }}"
          echo "📂 Working directory: $(pwd)"
          echo "📦 Repository: ${{ github.repository }}"
          echo "🔄 Workflow: ${{ github.workflow }}"
</span>
      <span class="hljs-comment"># LEARNING POINT: Simple system information for verification</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Basic</span> <span class="hljs-string">system</span> <span class="hljs-string">info</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          echo "========================================"
          echo "📊 System Information"
          echo "========================================"
          echo "🖥️ Hostname: $(hostname)"
          echo "💾 Disk space:"
          df -h / | grep -v Filesystem
</span>
          <span class="hljs-comment"># Show that we're running on the self-hosted machine</span>
          <span class="hljs-string">echo</span> <span class="hljs-string">"🌐 IP Address:"</span>
          <span class="hljs-string">hostname</span> <span class="hljs-string">-I</span> <span class="hljs-string">||</span> <span class="hljs-string">echo</span> <span class="hljs-string">"IP address command not available"</span>

<span class="hljs-comment"># ===================================================</span>
<span class="hljs-comment"># LEARNING NOTES:</span>
<span class="hljs-comment"># ===================================================</span>
<span class="hljs-comment"># 1. Self-hosted runners are specified with "runs-on: self-hosted"</span>
<span class="hljs-comment"># 2. You can add labels to runners and target them with:</span>
<span class="hljs-comment">#    runs-on: [self-hosted, linux, production]</span>
<span class="hljs-comment"># 3. Self-hosted runners have access to their host environment</span>
<span class="hljs-comment"># 4. You can install custom software on self-hosted runners</span>
<span class="hljs-comment"># 5. Self-hosted runners can access internal networks</span>
<span class="hljs-comment"># ===================================================</span>
</code></pre>
<h3 id="heading-how-to-use-this-test-workflow">How to Use This Test Workflow</h3>
<ol>
<li><p><strong>Commit and push</strong> the workflow file to your repository</p>
</li>
<li><p><strong>Go to the Actions tab</strong> in your GitHub repository</p>
</li>
<li><p><strong>Select "Hello World Self-Hosted Runner"</strong> from the workflows list</p>
</li>
<li><p><strong>Click "Run workflow"</strong> and select the branch to run on</p>
</li>
<li><p><strong>Watch the workflow run</strong> on your self-hosted runner</p>
</li>
</ol>
<h3 id="heading-what-this-test-verifies">What This Test Verifies</h3>
<p>This simple workflow confirms:</p>
<ul>
<li><p>Your self-hosted runner is properly connected to GitHub</p>
</li>
<li><p>The runner can check out code from your repository</p>
</li>
<li><p>Basic system information about your EC2 instance</p>
</li>
</ul>
<h3 id="heading-learning-points">Learning Points</h3>
<ol>
<li><p>Self-hosted runners are specified with <code>runs-on: self-hosted</code></p>
</li>
<li><p>You can add labels to runners and target them with: <code>runs-on: [self-hosted, linux, production]</code></p>
</li>
<li><p>Self-hosted runners have access to their host environment</p>
</li>
<li><p>You can install custom software on self-hosted runners</p>
</li>
<li><p>Self-hosted runners can access internal networks</p>
</li>
</ol>
<p>If you see output from this workflow, congratulations! Your self-hosted runner on EC2 is working correctly and ready to run your GitHub Actions workflows.</p>
<blockquote>
<p>For Sample Output you can visit: <a target="_blank" href="https://github.com/Amitabh-DevOps/GitHub_Actions_Workflow_text/blob/main/Output_images/self_hosted_runner_output.md">Sample Output</a></p>
</blockquote>
<h3 id="heading-ec2-instance-maintenance">EC2 Instance Maintenance</h3>
<p><strong>Regular Updates</strong>:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt upgrade -y
</code></pre>
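<p>Build artifacts and Docker layers also accumulate over time, so periodic cleanup keeps the disk from filling. The Docker step below assumes Docker was installed earlier and skips itself if it is absent:</p>
<pre><code class="lang-bash"># Remove packages left behind after upgrades.
sudo apt autoremove -y

# Reclaim space from unused Docker images, containers, and networks
# (runs only if Docker is installed).
if command -v docker &gt;/dev/null; then
  docker system prune -af
fi

# Check remaining space on the root volume.
df -h /
</code></pre>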
<h2 id="heading-troubleshooting">Troubleshooting</h2>
<p><strong>Common issues</strong>:</p>
<ul>
<li><p>Incorrect permissions</p>
</li>
<li><p>Network connectivity problems</p>
</li>
<li><p>GitHub token expired</p>
</li>
</ul>
<h3 id="heading-jobs-not-being-assigned">Jobs Not Being Assigned</h3>
<p><strong>Check</strong>:</p>
<ol>
<li><p>Runner is online in GitHub UI</p>
</li>
<li><p>Job's <code>runs-on</code> label matches your runner's labels</p>
</li>
<li><p>Repository has access to the runner group</p>
</li>
</ol>
<h3 id="heading-network-connectivity-issues">Network Connectivity Issues</h3>
<p><strong>Test GitHub connectivity</strong>:</p>
<pre><code class="lang-bash">curl -v https://github.com
</code></pre>
<p><strong>Check outbound access</strong>:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Test HTTPS connectivity</span>
curl -v https://api.github.com

<span class="hljs-comment"># Test network configuration</span>
ip addr
route -n
</code></pre>
<h2 id="heading-cost-optimization">Cost Optimization</h2>
<h3 id="heading-1-choose-the-right-instance-type">1. Choose the Right Instance Type</h3>
<p>Match the instance type to your workload:</p>
<ul>
<li><p>CPU-intensive: Compute-optimized instances (c5, c6g)</p>
</li>
<li><p>Memory-intensive: Memory-optimized instances (r5, r6g)</p>
</li>
<li><p>Balanced: General purpose instances (t3, t2, m5)</p>
</li>
</ul>
<h3 id="heading-2-use-spot-instances">2. Use Spot Instances</h3>
<p>For non-critical workflows, consider spot instances:</p>
<ul>
<li><p>Up to 90% cheaper than on-demand</p>
</li>
<li><p>Configure fallback to GitHub-hosted runners if spot instances are terminated</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You now have a self-hosted GitHub Actions runner running on an AWS EC2 Ubuntu instance. This setup gives you full control over your CI/CD environment while leveraging the flexibility of AWS infrastructure.</p>
<p>By using EC2 for your self-hosted runners, you can:</p>
<ul>
<li><p>Customize the environment to your exact needs</p>
</li>
<li><p>Access AWS resources directly with low latency</p>
</li>
<li><p>Control costs by choosing appropriate instance types</p>
</li>
<li><p>Scale your CI/CD capacity as your needs grow</p>
</li>
</ul>
<h2 id="heading-additional-resources">Additional Resources</h2>
<ul>
<li><p><a target="_blank" href="https://docs.github.com/en/actions/hosting-your-own-runners">GitHub Actions Self-hosted Runners Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html">AWS EC2 User Guide</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/actions/runner">GitHub Actions Runner Repository</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html">AWS Auto Scaling Documentation</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Networking Essentials for Beginners]]></title><description><![CDATA[Introduction:
Networking is the backbone of modern communication, enabling billions of devices to connect and exchange information across the globe. Whether you're browsing a website, sending an email, or streaming a video, networking concepts are wo...]]></description><link>https://blog.amitabh.cloud/networking-essentials-for-beginners</link><guid isPermaLink="true">https://blog.amitabh.cloud/networking-essentials-for-beginners</guid><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Mon, 23 Jun 2025 05:05:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752591351621/56dbedff-48d0-41b1-b7e5-eabb40e05ccb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction:</h2>
<p>Networking is the backbone of modern communication, enabling billions of devices to connect and exchange information across the globe. Whether you're browsing a website, sending an email, or streaming a video, networking concepts are working behind the scenes to make it possible.</p>
<p>In this blog, I’ve summarized key networking fundamentals including how the Internet works, types of networks (LAN, WAN, MAN, PAN), the OSI and TCP/IP models, IP &amp; MAC addressing, network devices like routers and switches, essential protocols and ports, firewall basics, and the client-server architecture.</p>
<p>If you're starting your journey in tech, preparing for DevOps roles, or aiming to strengthen your fundamentals, this blog will serve as a comprehensive guide to networking essentials.</p>
<hr />
<h2 id="heading-how-does-the-internet-work">How does the Internet work?</h2>
<ol>
<li><p>The Internet's backbone is a network of optical fiber cables laid on the ocean floor (submarine cables).</p>
<ol>
<li>To see a map of these cables, visit: <a target="_blank" href="https://www.submarinecablemap.com/">https://www.submarinecablemap.com/</a></li>
</ol>
</li>
<li><p>Internet service providers are organized into a three-tier architecture that delivers connectivity to end users:</p>
<ol>
<li><p>Tier 1: Companies that own and operate the submarine optical fiber cables, such as AT&amp;T, NTT, and Verizon</p>
</li>
<li><p>Tier 2: Companies such as Jio, Airtel, and BSNL that lease capacity from Tier 1 providers and sell connectivity to their end users</p>
</li>
<li><p>Tier 3: Smaller providers that buy access from Tier 2 companies and serve a limited area</p>
<ol>
<li>for example, Jio Fiber, Air Fiber, Sway Broadband, and You Broadband</li>
</ol>
</li>
</ol>
</li>
</ol>
<h3 id="heading-types-of-network">Types of Network:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652842760/38d0cff3-e838-445f-9e22-82b298a45f1a.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>WAN (Wide Area Network):</p>
<ul>
<li><p><strong>Scope:</strong> Covers a very large geographic area, potentially spanning across cities, countries, or even continents.</p>
</li>
<li><p><strong>Characteristics:</strong> Often involves complex infrastructure and can be a combination of public and private networks. Generally offers lower speeds and higher latency compared to LANs and MANs.</p>
</li>
<li><p><strong>Examples:</strong></p>
<p>  The Internet itself, or a multinational corporation's private network connecting offices across different countries.</p>
</li>
</ul>
</li>
<li><p>MAN (Metropolitan Area Network):</p>
<ul>
<li><p><strong>Scope:</strong> Covers a larger geographic area than a LAN, typically a city or a large campus</p>
</li>
<li><p><strong>Characteristics:</strong> Can be owned by a single entity or multiple organizations. Offers moderate speed and is suitable for connecting multiple LANs within a city.</p>
</li>
<li><p><strong>Examples:</strong></p>
<p>  A city-wide network connecting different government agencies, or a network connecting multiple university campuses within a metropolitan area.</p>
</li>
</ul>
</li>
<li><p>LAN (Local Area Network):</p>
<ul>
<li><p><strong>Scope:</strong> Limited geographic area, such as a home, office building, or school.</p>
</li>
<li><p><strong>Characteristics:</strong> Typically privately owned and managed. Offers high-speed data transmission and low latency (delay).</p>
</li>
<li><p><strong>Examples:</strong> Your home Wi-Fi network, a computer lab in a school, or the network within a single office building.</p>
</li>
</ul>
</li>
<li><p>PAN (Personal Area Network):</p>
<ol>
<li>Connects devices in very close proximity, typically a few meters, for an individual's personal use. Examples include a Bluetooth connection between a phone and headphones, or a wireless mouse connected to a computer.</li>
</ol>
</li>
</ol>
<hr />
<h2 id="heading-osi-model-amp-tcpip-model">OSI Model &amp; TCP/IP Model</h2>
<h3 id="heading-osiopen-systems-interconnection-model">OSI(Open Systems Interconnection) Model:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652875100/9b12193f-0f5e-484a-b831-afae4dff32de.png" alt class="image--center mx-auto" /></p>
<p>The <strong>OSI (Open Systems Interconnection)</strong> Model is a set of rules that explains how different computer systems communicate over a network. The OSI Model was developed by the <strong>International Organization for Standardization (ISO)</strong>. The OSI Model consists of 7 layers, and each layer has specific functions and responsibilities.</p>
<p>7 Layers are:</p>
<ol>
<li><p><strong>Application Layer (Layer 7)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Provides network services directly to user applications, like web browsers or email clients. It’s the layer you interact with when you use the internet.</p>
</li>
<li><p><strong>Key Tasks</strong>: Handles protocols like HTTP (for web browsing), SMTP (for email), or FTP (for file transfers). It ensures apps can communicate over the network.</p>
</li>
<li><p><strong>Example</strong>: When you type a URL into your browser, the Application Layer formats the request to fetch the webpage.</p>
</li>
<li><p><strong>Analogy</strong>: The front desk of a hotel, where guests (users) make requests for services like room bookings or dining.</p>
</li>
</ul>
</li>
<li><p><strong>Presentation Layer (Layer 6)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Translates data between the application and the network, ensuring it’s in a usable format. It handles encryption, compression, and data formatting.</p>
</li>
<li><p><strong>Key Tasks</strong>: Converts data (e.g., text, images) into a standard format, encrypts sensitive data (like passwords), and compresses data to save bandwidth.</p>
</li>
<li><p><strong>Example</strong>: When you send an encrypted email, the Presentation Layer encrypts the message before it’s sent.</p>
</li>
<li><p><strong>Analogy</strong>: A translator who converts your words into a language the recipient understands.</p>
</li>
</ul>
</li>
<li><p><strong>Session Layer (Layer 5)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Manages communication sessions between devices, ensuring connections are established, maintained, and terminated properly.</p>
</li>
<li><p><strong>Key Tasks</strong>: Sets up, coordinates, and ends conversations between applications. It handles session recovery if a connection drops.</p>
</li>
<li><p><strong>Example</strong>: During a video call, the Session Layer keeps the connection active and reconnects if the call briefly drops.</p>
</li>
<li><p><strong>Analogy</strong>: A meeting coordinator who schedules, starts, and ends a conference call.</p>
</li>
</ul>
</li>
<li><p><strong>Transport Layer (Layer 4)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Ensures reliable data transfer between devices, managing flow control, error checking, and data segmentation.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses protocols like TCP (reliable, ordered delivery) or UDP (fast, less reliable). It breaks data into packets and reassembles them at the destination.</p>
</li>
<li><p><strong>Example</strong>: When downloading a file, TCP ensures all packets arrive correctly and in order.</p>
</li>
<li><p><strong>Analogy</strong>: A courier service that ensures packages are delivered intact and in the right sequence.</p>
</li>
</ul>
</li>
<li><p><strong>Network Layer (Layer 3)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Routes data packets between different networks, determining the best path to the destination.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses IP addresses to route packets. Protocols like IP (IPv4 or IPv6) operate here.</p>
</li>
<li><p><strong>Example</strong>: When you visit a website, the Network Layer routes your request from your home network to the website’s server across the internet.</p>
</li>
<li><p><strong>Analogy</strong>: A GPS system that finds the best route for your package to travel across cities.</p>
</li>
</ul>
</li>
<li><p><strong>Data Link Layer (Layer 2)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Handles communication between devices on the same network, ensuring error-free data transfer.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses MAC addresses to identify devices. It detects and corrects errors from the Physical Layer. Protocols include Ethernet.</p>
</li>
<li><p><strong>Example</strong>: In your home Wi-Fi network, the Data Link Layer ensures your laptop and router exchange data correctly.</p>
</li>
<li><p><strong>Analogy</strong>: A local mailroom sorting letters to ensure they reach the right office within a building.</p>
</li>
</ul>
</li>
<li><p><strong>Physical Layer (Layer 1)</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Manages the physical connection between devices, transmitting raw bits over hardware like cables or Wi-Fi.</p>
</li>
<li><p><strong>Key Tasks</strong>: Defines hardware standards (e.g., cables, connectors, signal voltages). It converts data into electrical or optical signals.</p>
</li>
<li><p><strong>Example</strong>: When you connect to Wi-Fi, the Physical Layer sends radio signals between your device and the router.</p>
</li>
<li><p><strong>Analogy</strong>: The physical roads or wires that carry packages from one place to another.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Real-World Example</strong>: Sending an email involves all layers. The Application Layer formats the email, the Presentation Layer encrypts it, the Session Layer maintains the connection, the Transport Layer ensures reliable delivery, the Network Layer routes it to the recipient’s server, the Data Link Layer handles local network communication, and the Physical Layer sends the data over cables or Wi-Fi.</p>
<h3 id="heading-tcpip-model">TCP/IP Model:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652905886/3373639d-f140-4d18-a22e-aceab00b91c5.png" alt class="image--center mx-auto" /></p>
<p>The TCP/IP model (Transmission Control Protocol/Internet Protocol) is a four-layer networking framework that enables reliable communication between devices over interconnected networks. It provides a standardized set of protocols for transmitting data across interconnected networks, ensuring efficient and error-free delivery. Each layer has specific functions that help manage different aspects of network communication, making it essential for understanding and working with modern networks.</p>
<p>TCP/IP was designed and developed by the Department of Defense (DoD) in the 1970s and is based on standard protocols. The TCP/IP model is a concise version of the OSI model. It contains four layers, unlike the seven layers in the OSI model.</p>
<p>4 Layers are:</p>
<ol>
<li><p><strong>Application Layer</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Combines the OSI’s Application, Presentation, and Session layers. It handles user-facing services, data formatting, and session management.</p>
</li>
<li><p><strong>Key Tasks</strong>: Supports protocols like HTTP (web), SMTP (email), FTP (file transfer), and DNS (domain name resolution). It ensures apps can communicate over the network.</p>
</li>
<li><p><strong>Example</strong>: When you visit a website, the Application Layer sends an HTTP request to the server and displays the webpage.</p>
</li>
<li><p><strong>Analogy</strong>: A one-stop service desk handling customer requests, translations, and session coordination.</p>
</li>
</ul>
</li>
<li><p><strong>Transport Layer</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Manages end-to-end data transfer, ensuring reliability and flow control. It’s identical to the OSI Transport Layer.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses TCP for reliable, ordered delivery (e.g., for emails) or UDP for faster, less reliable delivery (e.g., for streaming). It segments data into packets.</p>
</li>
<li><p><strong>Example</strong>: When streaming a video, UDP sends packets quickly, prioritizing speed over perfection.</p>
</li>
<li><p><strong>Analogy</strong>: A shipping company that decides whether to use guaranteed delivery (TCP) or express shipping with some risk of loss (UDP).</p>
</li>
</ul>
</li>
<li><p><strong>Network Layer</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Routes data packets across different networks, finding the best path to the destination. Matches the OSI Network Layer.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses IP (IPv4 or IPv6) to assign addresses and route packets. Works with routing protocols to navigate the internet.</p>
</li>
<li><p><strong>Example</strong>: When you send a message via WhatsApp, the Network Layer routes it from your phone to the recipient’s server.</p>
</li>
<li><p><strong>Analogy</strong>: A postal service that forwards packages across cities using addresses.</p>
</li>
</ul>
</li>
<li><p><strong>Network Access Layer</strong></p>
<ul>
<li><p><strong>What It Does</strong>: Combines the OSI’s Data Link and Physical layers. It handles data transfer within a single network and the physical transmission of bits.</p>
</li>
<li><p><strong>Key Tasks</strong>: Uses MAC addresses for local communication and manages hardware (e.g., Ethernet, Wi-Fi). Converts data into signals for cables or wireless.</p>
</li>
<li><p><strong>Example</strong>: In your home network, the Network Access Layer ensures your laptop’s data reaches your router via Wi-Fi.</p>
</li>
<li><p><strong>Analogy</strong>: A local delivery truck that picks up packages and sends them over physical roads or airwaves.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Real-World Example</strong>: When you stream a movie, the Application Layer handles the streaming app’s interface and protocols (e.g., HTTPS), the Transport Layer uses UDP to deliver video packets quickly, the Network Layer routes packets to the streaming server, and the Network Access Layer sends signals over your Wi-Fi or Ethernet cable.</p>
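<p>The TCP-versus-UDP trade-off in the Transport Layer can be seen with Python's standard <code>socket</code> module. The sketch below sends a single UDP datagram over the loopback interface: note there is no connection setup and, on a real network, no delivery guarantee.</p>

```python
import socket

# UDP is connectionless: no handshake, just send a datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"video frame #1", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)     # best-effort delivery on real networks
print(data.decode())                     # video frame #1

sender.close()
receiver.close()
```

<p>A TCP socket (<code>SOCK_STREAM</code>) would instead perform a three-way handshake first and retransmit lost segments, which is exactly the reliability-versus-speed trade-off described above.</p>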
<hr />
<h2 id="heading-what-is-ip-amp-mac-address">What is IP &amp; MAC Address?</h2>
<h3 id="heading-ip">IP:</h3>
<p><strong>1. Internet Protocol (IP):</strong></p>
<ul>
<li><p>It's a fundamental protocol that defines how data is transmitted across networks.</p>
</li>
<li><p>It ensures that data packets (small chunks of data) are correctly routed to their destination.</p>
</li>
<li><p>Essentially, it's the language that devices use to communicate with each other on the internet.</p>
</li>
</ul>
<p><strong>2. IP Address:</strong></p>
<ul>
<li><p>An IP address is a unique numerical identifier assigned to each device connected to a network that uses the Internet Protocol.</p>
</li>
<li><p>It acts like a postal address for your device on the internet, enabling other devices to find and communicate with it.</p>
</li>
<li><p>There are two main versions of IP addresses, IPv4 and IPv6:</p>
<ul>
<li><p><strong>IPv4</strong>: addresses are 32-bit numbers, typically written in dotted-decimal notation (e.g., 192.168.1.1).</p>
</li>
<li><p><strong>IPv6</strong>: addresses are 128-bit numbers, designed to address the shortage of IPv4 addresses.</p>
</li>
</ul>
</li>
<li><p>IP addresses can be public (visible to the outside world, used for internet communication) or private (used within a local network).</p>
</li>
</ul>
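<p>These address formats can be explored with Python's standard <code>ipaddress</code> module; a minimal sketch:</p>

```python
import ipaddress

ipv4 = ipaddress.ip_address("192.168.1.1")   # 32-bit IPv4 address
ipv6 = ipaddress.ip_address("2001:db8::1")   # 128-bit IPv6 address

print(ipv4.version, ipv6.version)            # 4 6
print(ipv4.max_prefixlen, ipv6.max_prefixlen)  # 32 128 -> the bit widths
print(ipv4.is_private)                       # True: 192.168.0.0/16 is a private range
print(ipaddress.ip_address("8.8.8.8").is_private)  # False: publicly routable
```

<p>The <code>is_private</code> check mirrors the public/private distinction above: private ranges such as <code>192.168.0.0/16</code> are reserved for local networks and are not routed on the public internet.</p>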
<h3 id="heading-mac-address">MAC Address:</h3>
<ul>
<li><p><strong>Physical Address:</strong> MAC addresses are unique, hardware-level identifiers permanently assigned to a network interface card (NIC).</p>
</li>
<li><p><strong>Local Network Communication:</strong> They are used for communication within a local network, like a home or office network, to ensure data packets reach the correct device.</p>
</li>
<li><p><strong>Unchanging:</strong> MAC addresses are assigned by the hardware manufacturer and typically do not change, even when the device moves between networks.</p>
</li>
</ul>
<hr />
<h2 id="heading-routers-amp-switches">Routers &amp; Switches</h2>
<h3 id="heading-routers">Routers:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652938016/79b9cc55-5ec5-4039-978c-35c2427ee3d2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Function:</strong> A router connects multiple networks, allowing devices on different networks to communicate, including connecting your local network to the internet. It acts as a traffic director between different networks.</p>
</li>
<li><p><strong>Layer:</strong> Routers operate at the Network Layer (Layer 3) of the OSI model.</p>
</li>
<li><p><strong>Connectivity:</strong> Connects LANs to other LANs or to the Internet.</p>
</li>
<li><p><strong>Example:</strong> A router connects your home network (LAN) to your internet service provider's network, enabling you to access the internet.</p>
</li>
</ul>
<h3 id="heading-switches">Switches:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652962059/ac7fa440-108b-4d21-a730-b96b70d43a52.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Function:</strong> A switch connects devices within the same network, allowing them to communicate with each other. Think of it as a traffic controller within a single office building.</p>
</li>
<li><p><strong>Layer:</strong> Switches operate at the Data Link Layer (Layer 2) of the OSI model</p>
</li>
<li><p><strong>Connectivity:</strong> Connects devices like computers, printers, and servers within a LAN.</p>
</li>
<li><p><strong>Example:</strong> In a home network, a switch might connect your computers, gaming consoles, and smart TVs to your router.</p>
</li>
</ul>
<hr />
<h2 id="heading-firewall-ports-protocols">Firewall, Ports, Protocols</h2>
<h3 id="heading-protocols">Protocols:</h3>
<p>A protocol is <strong>a set of rules that govern how data is transmitted and received between devices on a network.</strong></p>
<ol>
<li><p><strong>HTTP (Hypertext Transfer Protocol)(Port: 80):</strong> The foundation of data communication for the World Wide Web. It's used to transmit hypertext (like web pages) between web browsers and servers.</p>
</li>
<li><p><strong>HTTPS (Hypertext Transfer Protocol Secure)(Port: 443):</strong> An encrypted version of HTTP. It uses TLS/SSL to provide a secure connection, protecting sensitive data during transmission.</p>
</li>
<li><p><strong>FTP (File Transfer Protocol)(Port:</strong> <code>20</code> (Data), <code>21</code> (Control)<strong>):</strong> Used to transfer files between computers over a network. It allows for uploading and downloading files.</p>
</li>
<li><p><strong>SMTP (Simple Mail Transfer Protocol)(Port: 25, 465, 587):</strong> The standard protocol for sending emails. It handles the transmission of email messages from a client to a mail server and between mail servers.</p>
</li>
<li><p><strong>TCP (Transmission Control Protocol):</strong> A connection-oriented protocol that provides reliable, ordered, and error-checked delivery of data. It's used for applications where accuracy is crucial.</p>
</li>
<li><p><strong>UDP (User Datagram Protocol):</strong> A connectionless protocol that offers a faster but less reliable way to transmit data. It's suitable for applications where speed is prioritized over absolute accuracy.</p>
</li>
<li><p><strong>IP (Internet Protocol):</strong> The core protocol responsible for routing data packets across networks, ensuring that information reaches its intended destination.</p>
</li>
<li><p><strong>ARP (Address Resolution Protocol)</strong>: A protocol that translates IP addresses into MAC addresses, enabling devices to locate each other on a local network segment.</p>
</li>
<li><p><strong>SSH (Secure Shell)(Port: 22):</strong> A network protocol that provides <strong>a secure way to access and manage remote computers</strong>. It uses encryption to protect data transmitted between a client and a server, making it safe to use over unsecured networks. Essentially, SSH lets you securely log in to another computer, execute commands, and transfer files.</p>
</li>
</ol>
<h3 id="heading-ports">Ports:</h3>
<ul>
<li><p>Ports are virtual locations on a device that are used to identify specific services or applications.</p>
</li>
<li><p>They are identified by numbers, ranging from 0 to 65535.</p>
</li>
<li><p>Each port is associated with a specific protocol and service. For example, port 80 is typically used for HTTP (web traffic), and port 443 is used for HTTPS (secure web traffic).</p>
</li>
<li><p>Ports allow a device to handle multiple network connections simultaneously, directing incoming data to the appropriate application or service.</p>
</li>
</ul>
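<p>These well-known port assignments are recorded in the operating system's services database, which Python's standard <code>socket</code> module can query (this assumes a standard <code>/etc/services</code> file, as found on most Linux and macOS systems):</p>

```python
import socket

# Look up the well-known (IANA-registered) TCP port for each service name.
for service in ("http", "https", "ssh"):
    print(f"{service:6} -> port {socket.getservbyname(service, 'tcp')}")
# http   -> port 80
# https  -> port 443
# ssh    -> port 22
```
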
<h3 id="heading-firewall">Firewall:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750652996190/bcda8dd2-0a8d-4089-ad93-8cbdc4bad375.png" alt class="image--center mx-auto" /></p>
<p>Firewalls act as security checkpoints, inspecting network traffic and enforcing rules to protect networks from unauthorized access and malicious activity. They use protocols and port numbers to identify and filter traffic, allowing or blocking connections based on pre-defined rules. For example, a firewall can be configured to block all traffic on port 21 (FTP) while allowing traffic on port 80 (HTTP) and 443 (HTTPS).</p>
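<p>At its core, that allow/block decision is a rule lookup keyed on attributes like the destination port. The sketch below illustrates the logic in Python with hypothetical rules matching the example above; it is not how any real firewall is implemented:</p>

```python
# Illustrative packet filter: block FTP control (21), allow web traffic.
RULES = [
    {"port": 21,  "action": "block"},   # FTP control channel
    {"port": 80,  "action": "allow"},   # HTTP
    {"port": 443, "action": "allow"},   # HTTPS
]
DEFAULT_ACTION = "block"                # deny anything not explicitly allowed

def filter_packet(dest_port: int) -> str:
    """Return 'allow' or 'block' by matching rules in order, like a firewall chain."""
    for rule in RULES:
        if rule["port"] == dest_port:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet(443))   # allow
print(filter_packet(21))    # block
print(filter_packet(8080))  # block (default deny)
```

<p>The default-deny fallback reflects a common real-world policy: traffic is blocked unless a rule explicitly permits it.</p>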
<hr />
<h2 id="heading-client-server-architecture">Client-Server Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750653021464/be9787b5-751e-4d5b-8f51-023e69774829.png" alt class="image--center mx-auto" /></p>
<p>The client-server model is <strong>a network architecture where clients request resources or services from a central server</strong>. The server then provides these resources or services to the client. This model is fundamental to how many applications and services operate on the internet and within local networks.</p>
<p><strong>Here's a more detailed breakdown:</strong></p>
<ul>
<li><p><strong>Clients:</strong> These are devices or software applications that initiate requests for data or services. Examples include web browsers, email clients, and mobile apps.</p>
</li>
<li><p><strong>Servers:</strong> These are powerful computers or software that provide resources and services to clients. They store data, run applications, and handle requests.</p>
</li>
<li><p><strong>Communication:</strong> Clients and servers communicate over a network using protocols like HTTP, TCP/IP, and others.</p>
</li>
<li><p><strong>Request-Response:</strong> The core of the model is the request-response pattern. A client sends a request, and the server processes it and sends back a response.</p>
</li>
<li><p><strong>Separation of Concerns:</strong> The client-server model promotes separation of concerns. Clients handle user interfaces and some processing, while servers handle data storage and complex operations.</p>
</li>
</ul>
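<p>The request-response pattern above can be sketched with Python's standard <code>socket</code> module: a tiny TCP server on the loopback interface accepts one client, reads its request, and returns a response. This is an illustration of the pattern, not a production server.</p>

```python
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one client, read its request, and send back a response."""
    conn, _addr = server_sock.accept()
    request = conn.recv(1024)
    conn.sendall(b"response to: " + request)
    conn.close()

# Server side: listen on an OS-assigned loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Client side: connect, send a request, read the response.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html")
reply = client.recv(1024)
print(reply.decode())   # response to: GET /index.html
client.close()
server.close()
```
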
<p><strong>Examples:</strong></p>
<ul>
<li><p>When you visit a website, your web browser (client) sends a request to the web server, which then sends back the website's content.</p>
</li>
<li><p>When you send an email, your email client (client) connects to an email server, which then relays the message to the recipient's email server.</p>
</li>
<li><p>Online banking involves a client (web browser or app) making requests to a bank's server to access accounts, transfer funds, etc.</p>
</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li><p><strong>Centralized Management:</strong> Servers can be managed and maintained centrally, making it easier to update and secure resources</p>
</li>
<li><p><strong>Scalability:</strong> The model can easily scale by adding more clients or servers as needed.</p>
</li>
<li><p><strong>Resource Sharing:</strong> Servers can share resources like databases, applications, and storage with multiple clients.</p>
</li>
<li><p><strong>Improved Performance:</strong> By offloading complex tasks to the server, clients can operate more efficiently</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Understanding networking is crucial for anyone entering fields like DevOps, cybersecurity, system administration, or cloud computing. These core concepts, from layered communication models to addressing, routing, and protocols, form the foundation upon which modern computing relies.</p>
<p>As I continue my journey in tech, diving deep into these topics has helped me understand not just <em>how</em> the internet works, but <em>why</em> it works the way it does. I hope this blog helps you the same way it helped me consolidate my learning.</p>
<p>Stay curious, keep exploring, and happy learning!</p>
]]></content:encoded></item><item><title><![CDATA[AWS Zero to Hero Day - 06]]></title><description><![CDATA[Task for Day 6
What is Elastic Container Service?

Amazon Elastic Container Services (Amazon ECS) is a fully managed container orchestration service that helps organizations easily deploy, manage, and scale containerized applications.

Learn more abo...]]></description><link>https://blog.amitabh.cloud/aws-zero-to-hero-day-06</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-zero-to-hero-day-06</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[software development]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><category><![CDATA[ECS]]></category><category><![CDATA[ecr]]></category><category><![CDATA[route53]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[cache]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Fri, 02 May 2025 09:59:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179854103/a59e0508-0549-45a8-b789-64ee0eb8a0a3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-task-for-day-6">Task for Day 6</h1>
<h3 id="heading-what-is-elastic-container-service">What is Elastic Container Service?</h3>
<ul>
<li><p>Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps organizations easily deploy, manage, and scale containerized applications.</p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/ecs/features/">Learn more about ECS</a></p>
</li>
</ul>
<h3 id="heading-what-is-elastic-container-registry">What is Elastic Container Registry?</h3>
<ul>
<li><p>Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry service provided by Amazon Web Services (AWS). In simple terms, it's a place where you can store, manage, and deploy Docker container images, making it easier for you to run applications in the cloud using containers.</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html">Learn more about ECR</a></p>
</li>
</ul>
<h3 id="heading-what-is-route-53">What is Route 53?</h3>
<ul>
<li><p>Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service provided by Amazon Web Services (AWS). It is named after the TCP/IP port 53, which is used for DNS services. Route 53 is designed to provide reliable and cost-effective domain registration, DNS routing, and health checking of resources within your AWS infrastructure.</p>
</li>
<li><p>How Does DNS Route Traffic To Your Web Application? See the diagram below:</p>
<p>  <img src="https://github.com/LondheShubham153/aws-zero-to-hero/assets/121779953/aac36a26-e48e-4444-bff5-27a15568040a" alt="image" /></p>
</li>
</ul>
<h2 id="heading-tasks">Tasks:</h2>
<h4 id="heading-1-deploy-a-two-tier-application-on-elastic-container-service-ecs-and-configure-elastic-container-registry-ecr-to-push-docker-images">1) Deploy a two-tier application on Elastic Container Service (ECS) and configure Elastic Container Registry (ECR) to push Docker images.</h4>
<blockquote>
<p><code>Note:</code> The Docker image must be fetched from ECR.</p>
</blockquote>
<h3 id="heading-ans"><mark>ANS:</mark></h3>
<ol>
<li><p>Task 1 complete. Screenshot of the running application:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176202988/06f963b8-15e7-4884-a919-5dccb35fdccf.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>EC2 instance used to build and push an image</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176260441/35b459b5-8ac5-4f8f-a31d-5ca6a6915bcb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Repository where the image is pushed.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176295078/62eb4852-70ba-480b-ac67-0c1700236c03.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>The Docker image:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176318073/da370120-857f-41b0-8517-94cfda01bcd5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Task Definition:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176351694/65769c55-33c8-4c84-9a0f-3b49881e2c76.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Task inside cluster:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176389859/41bc84b6-cb71-4db5-81ef-73774bf37dd4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>That task configuration:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176438815/2ed21a9e-26a2-4125-8398-7fa10778ef96.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Accessed the app at the task's public IP on port 8000:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746176515861/3d14dda7-c35e-4e92-ac92-5bdae5848fae.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h4 id="heading-2-understand-the-concept-of-cloudfront-and-try-to-perform-below-sub-tasks">2) Understand the concept of CloudFront and try to perform below sub-tasks:</h4>
<ul>
<li><p>What is caching in CloudFront?</p>
</li>
<li><p>Create an EC2 instance with an Apache web server.</p>
</li>
<li><p>Create a CloudFront distribution and attach it to the EC2 instance to access the Apache webpage.</p>
</li>
</ul>
<h3 id="heading-ans-1"><mark>ANS:</mark></h3>
<p>To learn about AWS CloudFront, <a target="_blank" href="https://amitabhdevops.hashnode.dev/amazon-cloudfront">read this blog</a></p>
<h5 id="heading-1-what-is-caching-in-cloudfront">1. What is Caching in CloudFront?</h5>
<p>Caching in AWS CloudFront involves storing copies of your content at over 600 edge locations globally, as noted in the <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html">AWS CloudFront Documentation</a>. When a user requests content, CloudFront checks the nearest edge location for a cached copy. If available, it delivers the content immediately, reducing latency. If not, it fetches the content from the origin (e.g., an EC2 instance), caches it, and serves it. Key aspects include:</p>
<ul>
<li><p><strong>Cache Key</strong>: Determines what’s cached, based on headers, cookies, or query strings.</p>
</li>
<li><p><strong>TTL Settings</strong>: Controls how long content stays cached (default or custom via cache policies).</p>
</li>
<li><p><strong>Cache Hit Ratio</strong>: The proportion of requests served from the cache, which you can optimize using tools like Origin Shield.</p>
</li>
</ul>
<p>This caching mechanism improves performance, reduces origin server load, and enhances reliability by distributing content closer to users.</p>
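<p>That check-cache-then-fetch flow can be sketched in a few lines of Python. This illustrates the caching logic only; the <code>fetch_from_origin</code> function and TTL value are hypothetical stand-ins, not CloudFront's actual implementation:</p>

```python
import time

origin_fetches = 0

def fetch_from_origin(path: str) -> str:
    """Simulated origin server (e.g., an EC2 instance behind CloudFront)."""
    global origin_fetches
    origin_fetches += 1
    return f"content of {path}"

cache = {}   # cache key -> (content, expiry time)
TTL = 86400  # seconds an object stays cached (CloudFront's default TTL)

def edge_get(path: str) -> str:
    """Serve from the edge cache on a hit; fetch from origin and cache on a miss."""
    entry = cache.get(path)
    if entry and time.monotonic() < entry[1]:
        return entry[0]                      # cache hit: served immediately
    content = fetch_from_origin(path)        # cache miss: go back to the origin
    cache[path] = (content, time.monotonic() + TTL)
    return content

edge_get("/index.html")   # miss: fetched from origin
edge_get("/index.html")   # hit: served from cache
print(origin_fetches)     # 1
```

<p>A higher cache hit ratio means fewer of these origin fetches, which is exactly what tuning the cache key and TTL settings aims to achieve.</p>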
<h5 id="heading-2-creating-an-ec2-instance-with-apache-web-server">2. Creating an EC2 Instance with Apache Web Server</h5>
<p>Follow these steps to launch an EC2 instance and install Apache:</p>
<ol>
<li><p><strong>Log into AWS Management Console</strong>:</p>
<ul>
<li>Access the AWS Console at <a target="_blank" href="https://aws.amazon.com/console/">AWS Management Console</a> and navigate to the EC2 dashboard.</li>
</ul>
</li>
<li><p><strong>Launch an EC2 Instance</strong>:</p>
<ul>
<li><p>Click “Launch Instance” and select an AMI, such as Amazon Linux 2 or Ubuntu Server 20.04.</p>
</li>
<li><p>Choose the t2.micro instance type (free tier eligible).</p>
</li>
<li><p>Configure instance details (use defaults unless specific needs arise).</p>
</li>
<li><p>Add storage (8 GB default is sufficient).</p>
</li>
<li><p>Add optional tags (e.g., Name: Apache-Server).</p>
</li>
<li><p>Configure a security group:</p>
<ul>
<li><p>Create a new security group.</p>
</li>
<li><p>Add rules for:</p>
<ul>
<li><p>SSH (port 22) from your IP for secure access.</p>
</li>
<li><p>HTTP (port 80) from 0.0.0.0/0 to allow web traffic.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Review and launch, selecting or creating a key pair (e.g., my-key.pem) for SSH access.</p>
</li>
</ul>
</li>
<li><p><strong>Connect to the Instance</strong>:</p>
<ul>
<li><p>Once the instance is running, note its public DNS (e.g., ec2-xx-xx-xx-xx.compute-1.amazonaws.com).</p>
</li>
<li><p>Use SSH to connect: <code>ssh -i /path/to/my-key.pem ec2-user@public-dns-name</code></p>
<ul>
<li>For Amazon Linux, use ec2-user; for Ubuntu, use ubuntu.</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Install and Configure Apache</strong>:</p>
<ul>
<li><p>For Amazon Linux:</p>
<pre><code class="lang-bash">  sudo yum update -y 
  sudo yum install httpd -y 
  sudo systemctl start httpd 
  sudo systemctl <span class="hljs-built_in">enable</span> httpd
</code></pre>
</li>
<li><p>For Ubuntu:</p>
<pre><code class="lang-bash">  sudo apt update 
  sudo apt install apache2 -y 
  sudo systemctl start apache2 
  sudo systemctl <span class="hljs-built_in">enable</span> apache2
</code></pre>
</li>
<li><p>Verify Apache is running by accessing the instance’s public DNS in a browser (e.g., http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com). You should see the Apache default page.</p>
</li>
</ul>
</li>
<li><p><strong>Optional: Customize Webpage</strong>:</p>
<ul>
<li><p>Edit the default webpage (e.g., /var/www/html/index.html) to add custom content, ensuring it’s accessible via the browser.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179035307/30c15746-a9c8-4e90-a5e1-2f592cdf8dc1.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<h5 id="heading-3-creating-a-cloudfront-distribution-for-ec2">3. Creating a CloudFront Distribution for EC2</h5>
<p>To serve the Apache webpage through CloudFront, create a distribution with the EC2 instance as the origin:</p>
<ol>
<li><p><strong>Access CloudFront Console</strong>:</p>
<ul>
<li>Navigate to CloudFront in the AWS Console at <a target="_blank" href="https://console.aws.amazon.com/cloudfront/">CloudFront Console</a>.</li>
</ul>
</li>
<li><p><strong>Create a Distribution</strong>:</p>
<ul>
<li>Click “Create Distribution” and select “Web” delivery method.</li>
</ul>
</li>
<li><p><strong>Configure Origin Settings</strong>:</p>
<ul>
<li><p><strong>Origin Domain Name</strong>: Enter the EC2 instance’s public DNS (e.g., ec2-xx-xx-xx-xx.compute-1.amazonaws.com).</p>
</li>
<li><p><strong>Origin Path</strong>: Leave blank.</p>
</li>
<li><p><strong>Origin Protocol Policy</strong>: Select “HTTP Only” (unless your EC2 supports HTTPS).</p>
</li>
<li><p><strong>Minimum Origin SSL Protocol</strong>: Use default settings.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Default Cache Behavior</strong>:</p>
<ul>
<li><p><strong>Viewer Protocol Policy</strong>: Choose “Redirect HTTP to HTTPS” for security or “HTTP and HTTPS”.</p>
</li>
<li><p><strong>Allowed HTTP Methods</strong>: Select “GET, HEAD” for static content.</p>
</li>
<li><p><strong>Cache Policy</strong>: Use “Managed-CachingOptimized” for typical static content caching, or customize TTL settings (e.g., Minimum TTL: 0, Default TTL: 86400 seconds).</p>
</li>
<li><p><strong>Origin Request Policy</strong>: Select “Managed-AllViewer” to forward all viewer headers.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Distribution Settings</strong>:</p>
<ul>
<li><p><strong>Price Class</strong>: Select “Use All Edge Locations” or restrict based on your audience’s location.</p>
</li>
<li><p><strong>Alternate Domain Names (CNAMEs)</strong>: Leave blank (no custom domain specified).</p>
</li>
<li><p><strong>SSL Certificate</strong>: Use the default CloudFront certificate.</p>
</li>
<li><p>Leave other settings as default unless specific requirements apply.</p>
</li>
</ul>
</li>
<li><p><strong>Create and Deploy</strong>:</p>
<ul>
<li><p>Click “Create Distribution”. Deployment may take 5–20 minutes.</p>
</li>
<li><p>Once the status changes to “Deployed”, note the distribution’s domain name (e.g., d1234567890.cloudfront.net).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179112020/13547d0e-fd21-492d-b02b-e5290c971611.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Test the Distribution</strong>:</p>
<ul>
<li><p>Open a browser and enter the CloudFront domain name (e.g., http://d1234567890.cloudfront.net).</p>
</li>
<li><p>Verify that the Apache webpage displays, matching the EC2 instance’s content.</p>
</li>
</ul>
</li>
</ol>
<ol>
<li><p>EC2 DNS URL:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179159012/f5712084-eb77-42c1-94f7-79267fd8cc7d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>CloudFront Domain Name:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179179439/7b825a6b-575b-4afa-a71b-83d20644a966.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>You can compare the URLs in both screenshots above.</p>
</li>
</ol>
<hr />
<h4 id="heading-3-learn-about-awss-fully-managed-dns-service-route53-and-write-a-detailed-blog-post-and-post-it-on-linkedin">3) Learn about AWS's fully managed DNS Service (Route53) and write a detailed blog post and post it on LinkedIn.</h4>
<h3 id="heading-ans-2"><mark>ANS:</mark></h3>
<p>AWS Route 53 is a highly available and scalable Domain Name System (DNS) web service, as detailed in the AWS Route 53 Documentation. It translates domain names (e.g., example.com) into IP addresses, enabling users to access websites and applications. Key features include:</p>
<ul>
<li><p><strong>Domain Registration</strong>: Register or transfer domains directly through Route 53.</p>
</li>
<li><p><strong>DNS Management</strong>: Create hosted zones to manage DNS records (e.g., A, CNAME, MX).</p>
</li>
<li><p><strong>Traffic Routing</strong>: Supports policies like simple, weighted, latency-based, geolocation, and failover routing.</p>
</li>
<li><p><strong>Health Checking</strong>: Monitors resource health and reroutes traffic from unhealthy endpoints.</p>
</li>
<li><p><strong>Integration</strong>: Works seamlessly with AWS services like EC2, S3, and CloudFront.</p>
</li>
</ul>
<p>Route 53’s global network of DNS servers ensures reliability, and its pay-as-you-go pricing makes it cost-effective for businesses of all sizes.</p>
<p>To Learn more about AWS Route 53, <a target="_blank" href="https://amitabhdevops.hashnode.dev/unlocking-the-power-of-aws-route-53-your-guide-to-scalable-dns">read this blog</a></p>
]]></content:encoded></item><item><title><![CDATA[Amazon CloudFront]]></title><description><![CDATA[Amazon CloudFront is a content delivery network (CDN) service that delivers data, applications, and other web content to customers globally with low latency and high transfer speeds. It's essentially a globally distributed network of servers that cac...]]></description><link>https://blog.amitabh.cloud/amazon-cloudfront</link><guid isPermaLink="true">https://blog.amitabh.cloud/amazon-cloudfront</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[CDN]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Fri, 02 May 2025 09:54:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179630687/577c32d0-30fc-4bf2-bd50-1b2ad19f1e87.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon CloudFront is <strong><mark>a content delivery network (CDN) service that delivers data, applications, and other web content to customers globally with low latency and high transfer speeds</mark></strong>. It's essentially a globally distributed network of servers that caches content in locations closer to users, allowing for faster access. </p>
<p>Here's a more detailed explanation:</p>
<p><strong>Key Features and Benefits:</strong></p>
<ul>
<li><p><strong>Global Reach:</strong></p>
<p>  CloudFront has a network of edge locations (data centers) around the world, ensuring content is delivered quickly to users regardless of their location. </p>
</li>
<li><p><strong>Low Latency:</strong></p>
<p>  By caching content in edge locations, CloudFront reduces the distance and time it takes for data to reach users, resulting in faster loading times and a better user experience. </p>
</li>
<li><p><strong>High Data Transfer Speeds:</strong></p>
<p>  CloudFront's optimized network infrastructure enables high-speed data transfer, allowing for efficient delivery of large files like videos or software updates. </p>
</li>
<li><p><strong>Scalability:</strong></p>
<p>  CloudFront can automatically scale to handle large volumes of traffic, ensuring consistent performance even during peak times. </p>
</li>
<li><p><strong>Security:</strong></p>
<p>  CloudFront provides security features like HTTPS support and DDoS protection to safeguard your content and users. </p>
</li>
<li><p><strong>Cost-Effective:</strong></p>
<p>  CloudFront is a pay-as-you-go service, meaning you only pay for the resources you use, and there are no minimum fees or long-term commitments. </p>
</li>
<li><p><strong>Flexible Integration:</strong></p>
<p>  CloudFront can be integrated with various AWS services and can also be used with custom origins, allowing you to tailor your content delivery strategy. </p>
</li>
</ul>
<p><strong>How it Works:</strong></p>
<ol>
<li><p><strong>Content Origin:</strong></p>
<p> Your content (images, videos, web pages, etc.) is stored on a server known as the "origin". </p>
</li>
<li><p><strong>Content Delivery Network (CDN):</strong></p>
<p> CloudFront's edge locations act as a globally distributed network of servers that cache copies of your content. </p>
</li>
<li><p><strong>User Request:</strong></p>
<p> When a user requests your content, CloudFront's DNS routing system directs the request to the nearest edge location. </p>
</li>
<li><p><strong>Content Retrieval:</strong></p>
<p> If the content is already cached in the edge location, it's delivered to the user immediately. </p>
</li>
<li><p><strong>Origin Retrieval:</strong></p>
<p> If the content is not cached in the edge location, CloudFront retrieves it from the origin and stores a copy in the edge location for future requests. </p>
</li>
</ol>
<p>In essence, CloudFront acts as a proxy, caching your content at the edge of the network and delivering it to users with minimal delay, resulting in faster website loading times, quicker file downloads, and improved overall user experience.</p>
]]></content:encoded></item><item><title><![CDATA[Unlocking the Power of AWS Route 53: Your Guide to Scalable DNS]]></title><description><![CDATA[Introduction to DNS
The Domain Name System (DNS) is the internet’s phonebook, translating user-friendly domain names like example.com into IP addresses (e.g., 192.0.2.1) that computers use to communicate. Without DNS, accessing websites would require...]]></description><link>https://blog.amitabh.cloud/unlocking-the-power-of-aws-route-53-your-guide-to-scalable-dns</link><guid isPermaLink="true">https://blog.amitabh.cloud/unlocking-the-power-of-aws-route-53-your-guide-to-scalable-dns</guid><category><![CDATA[Doamin]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[AWS]]></category><category><![CDATA[dns]]></category><category><![CDATA[aws-route53]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Fri, 02 May 2025 09:48:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746179330274/921f7a3d-61ad-483b-bdd6-5f1bbee552ed.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-to-dns">Introduction to DNS</h2>
<p>The Domain Name System (DNS) is the internet’s phonebook, translating user-friendly domain names like <code>example.com</code> into IP addresses (e.g., <code>192.0.2.1</code>) that computers use to communicate. Without DNS, accessing websites would require memorizing complex numerical addresses. DNS ensures seamless navigation, making it a critical component of the internet.</p>
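<p>You can watch this phonebook lookup happen with Python's standard <code>socket</code> module. Resolving <code>localhost</code> works even offline; swapping in a real domain performs a live DNS query (assuming network access):</p>

```python
import socket

def resolve(hostname: str) -> list:
    """Resolve a hostname to its IPv4 address(es) via the system's DNS resolver."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))       # ['127.0.0.1']
# print(resolve("example.com"))   # a live DNS lookup over the network
```
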
<h2 id="heading-what-is-aws-route-53">What is AWS Route 53?</h2>
<p>AWS Route 53 is Amazon Web Services’ highly available and scalable DNS web service, named after the standard DNS port 53. It provides a robust platform for domain registration, DNS management, traffic routing, and health checking. Route 53 connects user requests to AWS resources (like EC2 instances or S3 buckets) or external infrastructure, ensuring reliable and efficient access.</p>
<h2 id="heading-key-features-of-route-53">Key Features of Route 53</h2>
<p>Route 53 offers a comprehensive set of tools to manage DNS and optimize traffic flow:</p>
<ul>
<li><p><strong>Domain Registration</strong>: Register new domains or transfer existing ones through Route 53’s user-friendly console. For example, you can purchase <code>mywebsite.com</code> directly.</p>
</li>
<li><p><strong>DNS Management</strong>: Create hosted zones to store DNS records, such as A (for IP addresses), CNAME (for aliases), or MX (for email servers).</p>
</li>
<li><p><strong>Traffic Routing</strong>: Route 53 supports multiple routing policies:</p>
<ul>
<li><p><em>Simple Routing</em>: Directs traffic to a single resource.</p>
</li>
<li><p><em>Weighted Routing</em>: Distributes traffic across resources based on assigned weights.</p>
</li>
<li><p><em>Latency-Based Routing</em>: Routes to the resource with the lowest latency.</p>
</li>
<li><p><em>Geolocation Routing</em>: Directs traffic based on user location.</p>
</li>
<li><p><em>Failover Routing</em>: Reroutes traffic to backup resources during failures.</p>
</li>
</ul>
</li>
<li><p><strong>Health Checking</strong>: Monitors resource health by sending automated requests and reroutes traffic if a resource becomes unavailable.</p>
</li>
<li><p><strong>Integration with AWS</strong>: Seamlessly connects to services like CloudFront, S3, and EC2 for streamlined DNS management.</p>
</li>
</ul>
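<p>The weighted routing policy above can be illustrated with a short, self-contained Python sketch. This is not Route 53’s implementation — just the proportional-selection idea: each record gets a weight, and the chance a record is returned equals its weight divided by the sum of all weights. The endpoint names are placeholders.</p>

```python
import random
from collections import Counter

# Hypothetical weighted record set: endpoint -> weight, as in a
# Route 53 weighted routing policy (selection probability is
# weight / total_weight).
records = {"server-a.example.com": 3, "server-b.example.com": 1}

def pick_endpoint(records, rng):
    """Choose one endpoint with probability proportional to its weight."""
    endpoints = list(records)
    weights = [records[e] for e in endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the sketch is repeatable
counts = Counter(pick_endpoint(records, rng) for _ in range(10_000))
# server-a should receive roughly three times the traffic of server-b
print(counts["server-a.example.com"], counts["server-b.example.com"])
```

<p>With weights 3 and 1, roughly 75% of answers go to the first endpoint — the same proportion Route 53 would aim for across DNS responses.</p>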
<h2 id="heading-how-to-use-route-53-a-practical-example">How to Use Route 53: A Practical Example</h2>
<p>Let’s walk through setting up Route 53 to point a domain to a CloudFront distribution, a common use case for hosting a website.</p>
<ol>
<li><p><strong>Register a Domain</strong>:</p>
<ul>
<li><p>In the Route 53 console, search for an available domain (e.g., <code>mywebsite.com</code>).</p>
</li>
<li><p>Complete the registration process, providing contact details and payment information.</p>
</li>
</ul>
</li>
<li><p><strong>Create a Hosted Zone</strong>:</p>
<ul>
<li><p>Navigate to “Hosted Zones” and click “Create Hosted Zone”.</p>
</li>
<li><p>Enter your domain name (<code>mywebsite.com</code>) and select “Public Hosted Zone”.</p>
</li>
<li><p>Note the assigned name servers (e.g., <code>ns-123.awsdns-45.com</code>).</p>
</li>
</ul>
</li>
<li><p><strong>Add DNS Records</strong>:</p>
<ul>
<li><p>In the hosted zone, create an A record:</p>
<ul>
<li><p>Name: <code>mywebsite.com</code> (or <code>www.mywebsite.com</code> for a subdomain).</p>
</li>
<li><p>Type: A – IPv4 address.</p>
</li>
<li><p>Alias: Yes.</p>
</li>
<li><p>Alias Target: Select your CloudFront distribution (e.g., <code>d1234567890.cloudfront.net</code>).</p>
</li>
</ul>
</li>
<li><p>Optionally, add a CNAME record for <code>www.mywebsite.com</code> pointing to <code>mywebsite.com</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Update Domain Name Servers</strong>:</p>
<ul>
<li>If the domain was registered elsewhere, update its name servers to those provided by Route 53.</li>
</ul>
</li>
<li><p><strong>Test the Setup</strong>:</p>
<ul>
<li>After DNS propagation (up to 48 hours), access <code>mywebsite.com</code> in a browser to verify it loads the CloudFront-hosted content.</li>
</ul>
</li>
</ol>
<p>This setup ensures users access your website via a custom domain, leveraging CloudFront’s performance benefits.</p>
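<p>The alias A record from step 3 can also be expressed as a Route 53 <em>change batch</em> — the JSON shape accepted by the <code>aws route53 change-resource-record-sets</code> CLI and API. A minimal Python sketch follows; the distribution domain <code>d1234567890.cloudfront.net</code> is a placeholder, while <code>Z2FDTNDATAQYW2</code> is the fixed hosted zone ID that CloudFront alias targets use.</p>

```python
import json

# Sketch of the change batch for the alias A record from step 3.
# "d1234567890.cloudfront.net" is a placeholder distribution domain;
# Z2FDTNDATAQYW2 is the fixed hosted zone ID for CloudFront aliases.
change_batch = {
    "Comment": "Point mywebsite.com at the CloudFront distribution",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "mywebsite.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d1234567890.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ],
}

print(json.dumps(change_batch, indent=2))
```

<p>With real values saved to a file, this could be submitted as <code>aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://change.json</code>.</p>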
<h2 id="heading-advanced-features">Advanced Features</h2>
<p>Route 53 offers advanced capabilities for complex use cases:</p>
<ul>
<li><p><strong>Traffic Policies</strong>: Use the visual Traffic Flow editor to create sophisticated routing configurations.</p>
</li>
<li><p><strong>Route 53 Resolver</strong>: Enables DNS resolution for hybrid cloud environments, connecting on-premises networks with AWS VPCs.</p>
</li>
<li><p><strong>DNS Firewall</strong>: Filters outbound DNS traffic to protect against malicious domains.</p>
</li>
</ul>
<h2 id="heading-benefits-of-route-53">Benefits of Route 53</h2>
<p>Route 53 stands out for its:</p>
<ul>
<li><p><strong>High Availability</strong>: Leverages AWS’s global DNS server network for reliable query resolution.</p>
</li>
<li><p><strong>Scalability</strong>: Automatically handles large query volumes without performance degradation.</p>
</li>
<li><p><strong>AWS Integration</strong>: Simplifies DNS management for AWS resources like CloudFront and EC2.</p>
</li>
<li><p><strong>Advanced Routing</strong>: Optimizes performance and reliability with policies like latency-based routing.</p>
</li>
<li><p><strong>Cost-Effectiveness</strong>: Offers pay-as-you-go pricing, with no upfront costs.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>AWS Route 53 is a powerful DNS service that simplifies domain management, enhances traffic routing, and ensures application reliability. Whether you’re hosting a simple website or managing a global application, Route 53’s features and AWS integration make it an essential tool. Explore Route 53 in the AWS Console to streamline your DNS needs and boost your application’s performance.</p>
<p><em>Ready to dive into Route 53? Share your experiences or questions in the comments below!</em></p>
]]></content:encoded></item><item><title><![CDATA[Beginner's Guide to AWS VPC, Subnets, IGWs, and Route Table Management]]></title><description><![CDATA[VPC:
A Virtual Private Cloud (VPC) is a secure, isolated portion of a public cloud infrastructure that allows users to create their own virtual network, similar to a private cloud. It enables organizations to host and manage their resources within a ...]]></description><link>https://blog.amitabh.cloud/beginners-guide-to-aws-vpc-subnets-igws-and-route-table-management</link><guid isPermaLink="true">https://blog.amitabh.cloud/beginners-guide-to-aws-vpc-subnets-igws-and-route-table-management</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[vpc]]></category><category><![CDATA[vpc peering]]></category><category><![CDATA[subnet]]></category><category><![CDATA[Internet Gateway]]></category><category><![CDATA[route table]]></category><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Thu, 01 May 2025 11:29:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746098869372/c39d7a85-1303-4612-b851-83a3106ba8ee.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-vpc"><strong>VPC:</strong></h3>
<p>A Virtual Private Cloud (VPC) is <strong><mark>a secure, isolated portion of a public cloud infrastructure that allows users to create their own virtual network, similar to a private cloud</mark></strong>. It enables organizations to host and manage their resources within a specific, controlled environment, providing security and flexibility on a public cloud platform.</p>
<p><strong>Here's a more detailed explanation:</strong></p>
<ul>
<li><p><strong>Isolation:</strong></p>
<p>  VPCs offer logical isolation, separating resources from other users and tenants on the same public cloud.</p>
</li>
<li><p><strong>Control:</strong></p>
<p>  Users can configure and manage their VPC, including setting up subnets, assigning IP addresses, and managing security groups.</p>
</li>
<li><p><strong>Benefits:</strong></p>
<p>  VPCs offer benefits such as increased security, flexibility, and scalability, allowing users to adapt their infrastructure to changing needs.</p>
</li>
<li><p><strong>Use Cases:</strong></p>
<p>  VPCs are commonly used for hosting web applications, migrating workloads to the cloud, and building secure environments for sensitive data and applications.</p>
</li>
<li><p><strong>Cloud Providers:</strong></p>
<p>  Major cloud providers like AWS, Google Cloud, and IBM offer VPC services, allowing users to leverage the advantages of public cloud infrastructure while maintaining control over their virtual network.</p>
</li>
</ul>
<hr />
<h3 id="heading-subnet"><strong>Subnet:</strong></h3>
<p>A subnet, or subnetwork, is <strong><mark>a logical division of a larger IP network</mark></strong>. It allows for the efficient management of network traffic and resource allocation by breaking down a large network into smaller, more manageable segments. Each subnet has its own unique IP address range.</p>
<p><strong>Key Concepts:</strong></p>
<ul>
<li><p><strong>Logical Partition:</strong></p>
<p>  Subnets are not physical divisions of a network but rather logical groupings of devices based on their IP addresses.</p>
</li>
<li><p><strong>IP Address Ranges:</strong></p>
<p>  Each subnet is assigned a specific range of IP addresses, and devices within that range can communicate directly without needing to go through a router.</p>
</li>
<li><p><strong>Subnet Mask:</strong></p>
<p>  A subnet mask is used to identify the network portion and host portion of an IP address, determining which IP addresses belong to a specific subnet.</p>
</li>
<li><p><strong>Benefits:</strong></p>
<ul>
<li><p><strong>Improved Network Efficiency:</strong> Subnets reduce the amount of traffic on the main network by isolating communication within each subnet.</p>
</li>
<li><p><strong>Simplified Management:</strong> Subnets make it easier to manage and troubleshoot network issues by isolating problems within specific subnet segments.</p>
</li>
<li><p><strong>Enhanced Security:</strong> Subnets can be used to create separate security zones, limiting access and protecting sensitive data.</p>
</li>
<li><p><strong>Better Scalability:</strong> Subnets allow for a more flexible and scalable network design, making it easier to add new devices or expand network segments.</p>
</li>
</ul>
</li>
<li><p><strong>Types of Subnets</strong></p>
<p>  <strong>1. Public Subnets:</strong></p>
<ul>
<li><p>These subnets have a direct route to an internet gateway, allowing resources within the subnet to access the public internet.</p>
</li>
<li><p>Instances in public subnets can be assigned public IP addresses.</p>
</li>
<li><p>They are commonly used for web servers, applications, or services that need to be accessed from the internet.</p>
</li>
</ul>
<p>  <strong>2. Private Subnets:</strong></p>
<ul>
<li><p>These subnets do not have a direct route to an internet gateway.</p>
</li>
<li><p>Resources in a private subnet typically require a NAT device (Network Address Translation) or VPN connection to access the internet.</p>
</li>
<li><p>Private subnets are often used for internal applications, databases, or other services that do not need to be publicly accessible.</p>
</li>
</ul>
</li>
</ul>
<ul>
<li><p><strong>Example:</strong></p>
<p>  Imagine a company with multiple departments on different floors. Each department could be assigned a subnet, allowing devices within each department to communicate directly, while routing between departments would occur through the company's main router.</p>
</li>
</ul>
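<p>The subnetting ideas above — CIDR ranges, subnet masks, and membership — can be tried directly with Python’s standard <code>ipaddress</code> module. The <code>10.0.0.0/16</code> range below is just an illustrative choice, like a typical VPC CIDR:</p>

```python
import ipaddress

# Illustrative VPC range carved into four equal /18 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=18))
print([str(s) for s in subnets])
# ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18', '10.0.192.0/18']

# The subnet mask separates the network portion from the host portion:
public_subnet = subnets[0]
print(public_subnet.netmask)  # 255.255.192.0

# Membership check: which addresses belong to this subnet?
print(ipaddress.ip_address("10.0.10.5") in public_subnet)   # True
print(ipaddress.ip_address("10.0.200.5") in public_subnet)  # False
```

<p>Devices whose addresses fall in the same subnet range can talk directly; traffic between the two addresses above would have to go through a router.</p>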
<hr />
<h3 id="heading-internet-gateway"><strong>Internet Gateway:</strong></h3>
<p>An internet gateway is <strong><mark>a virtual component that facilitates communication between a virtual private cloud (VPC) and the internet</mark></strong>. It acts as a bridge, enabling resources within public subnets of a VPC, such as EC2 instances, to connect to the internet and vice versa. Essentially, it allows your VPC to interact with the wider internet.</p>
<p>Here's a more detailed explanation:</p>
<p><strong>Functionality:</strong></p>
<ul>
<li><p><strong>Two-way communication:</strong></p>
<p>  Internet gateways enable both outbound (from VPC to internet) and inbound (from internet to VPC) traffic.</p>
</li>
<li><p><strong>Public IP addresses:</strong></p>
<p>  Resources within public subnets that have public IP addresses can use the internet gateway to connect to the internet.</p>
</li>
<li><p><strong>Network Address Translation (NAT):</strong></p>
<p>  For IPv4 traffic, internet gateways perform one-to-one NAT, translating an instance’s private IP address within the VPC to its associated public IP address when communicating with the internet, and back again for replies.</p>
</li>
<li><p><strong>Routing:</strong></p>
<p>  They serve as the target in route tables within the VPC for internet-routable traffic, ensuring that traffic destined for the internet is routed correctly.</p>
</li>
<li><p><strong>Highly available and redundant:</strong></p>
<p>  Internet gateways are designed to be highly available and redundant, ensuring reliable connectivity.</p>
</li>
</ul>
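<p>The one-to-one translation described above can be sketched as a simple mapping — a toy model, not AWS’s implementation, with placeholder addresses: outbound packets have their private source address rewritten to the instance’s public address, and inbound replies are rewritten back.</p>

```python
# Toy one-to-one NAT table: private instance IP -> its public IP.
# All addresses here are illustrative placeholders.
nat_table = {"10.0.1.25": "203.0.113.10"}
reverse_table = {pub: priv for priv, pub in nat_table.items()}

def translate_outbound(packet):
    """Rewrite the private source IP to its mapped public IP."""
    return {**packet, "src": nat_table[packet["src"]]}

def translate_inbound(packet):
    """Rewrite the public destination IP back to the private IP."""
    return {**packet, "dst": reverse_table[packet["dst"]]}

out_pkt = translate_outbound({"src": "10.0.1.25", "dst": "198.51.100.7"})
in_pkt = translate_inbound({"src": "198.51.100.7", "dst": "203.0.113.10"})
print(out_pkt["src"])  # 203.0.113.10
print(in_pkt["dst"])   # 10.0.1.25
```

<p>Because the mapping is one-to-one, the remote host only ever sees the public address, while the instance keeps its private address inside the VPC.</p>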
<p><strong>Key Use Cases:</strong></p>
<ul>
<li><p><strong>Connecting EC2 instances to the internet:</strong></p>
<p>  You can use internet gateways to allow EC2 instances in your VPC to access the internet for tasks like software updates, data transfer, or accessing web services.</p>
</li>
<li><p><strong>Providing public access to web applications:</strong></p>
<p>  You can use internet gateways to make web applications running in your VPC accessible to users on the internet.</p>
</li>
<li><p><strong>Receiving inbound connections from the internet:</strong></p>
<p>  Internet gateways enable your VPC resources to accept connections from the internet, which is crucial for services that need to receive data or handle requests from external sources.</p>
</li>
</ul>
<p>In essence, an internet gateway is a vital component for enabling your VPC to interact with the internet, allowing your resources to connect to and be accessed from the wider web.</p>
<hr />
<h3 id="heading-routing-table"><strong>Routing Table:</strong></h3>
<p>A routing table is <strong><mark>a database that helps determine the best path for data packets to travel across a network</mark></strong>. It's like a map that tells devices (like routers) where to send network traffic based on the destination IP address.</p>
<p><strong>Key aspects of a routing table:</strong></p>
<ul>
<li><p><strong>Routing Decisions:</strong></p>
<p>  Routers use routing tables to make decisions about where to forward data packets, ensuring efficient and reliable network traffic flow.</p>
</li>
<li><p><strong>Database of Network Paths:</strong></p>
<p>  The table contains information about the network, including the destination IP address, the next hop IP address, and the interface to use for forwarding.</p>
</li>
<li><p><strong>Stored in RAM:</strong></p>
<p>  Routing tables are typically stored in the random access memory (RAM) of routers or network switches.</p>
</li>
<li><p><strong>Dynamic Updates:</strong></p>
<p>  Routing tables can be updated dynamically, for example, when a network link goes down or a new network is discovered.</p>
</li>
</ul>
<p><strong>How it works:</strong></p>
<p>When a router receives a data packet, it looks up the destination IP address in its routing table. Based on the table's information, the router determines the best next hop IP address and the interface to use for forwarding the packet towards its destination.</p>
<p><strong>Types of Routing Tables:</strong></p>
<ul>
<li><p><strong>Static Routing Tables:</strong> These are manually configured and do not dynamically update.</p>
</li>
<li><p><strong>Dynamic Routing Tables:</strong> These automatically update based on network changes, using routing protocols like Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP).</p>
</li>
</ul>
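<p>The lookup described above — match the destination IP against the table, then forward toward the best next hop — can be sketched with longest-prefix matching, which is how routers break ties between overlapping entries. The routes and hop names below are illustrative, in the spirit of a VPC route table:</p>

```python
import ipaddress

# Toy routing table: (destination network, next hop). The more
# specific (longer) prefix wins; 0.0.0.0/0 is the default route.
routes = [
    ("0.0.0.0/0", "internet-gateway"),
    ("10.0.0.0/16", "local"),
    ("10.1.0.0/16", "vpc-peering"),
]

def next_hop(dst_ip):
    """Return the next hop for the longest matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [
        (ipaddress.ip_network(net), hop)
        for net, hop in routes
        if dst in ipaddress.ip_network(net)
    ]
    # The largest prefixlen is the most specific matching route.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.4.9"))       # local
print(next_hop("10.1.7.2"))       # vpc-peering
print(next_hop("93.184.216.34"))  # internet-gateway
```

<p>Every destination matches the default route, but traffic for the two /16 networks takes the more specific entries — the same rule a VPC route table applies when sending internet-bound traffic to an internet gateway.</p>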
]]></content:encoded></item><item><title><![CDATA[A Beginner's Overview of AWS ECS and ECR Services]]></title><description><![CDATA[AWS ECR:
AWS ECR stands for Amazon Elastic Container Registry. It's a fully managed container registry service provided by AWS that allows you to store, manage, and deploy Docker container images. Essentially, it's a place where you can keep your con...]]></description><link>https://blog.amitabh.cloud/a-beginners-overview-of-aws-ecs-and-ecr-services</link><guid isPermaLink="true">https://blog.amitabh.cloud/a-beginners-overview-of-aws-ecs-and-ecr-services</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ECS]]></category><category><![CDATA[ecr]]></category><category><![CDATA[ec2]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><category><![CDATA[overview]]></category><category><![CDATA[Blogging]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Thu, 01 May 2025 11:23:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746098524036/a6684e1a-e17f-4b62-a487-bd47e13d5dbe.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-aws-ecr"><strong>AWS ECR:</strong></h3>
<p>AWS ECR stands for <strong><mark>Amazon Elastic Container Registry</mark></strong>. It's a fully managed container registry service provided by AWS that allows you to store, manage, and deploy Docker container images. Essentially, it's a place where you can keep your container images, similar to how you would keep your code in a Git repository.</p>
<p><strong>Here's a more detailed breakdown:</strong></p>
<ul>
<li><p><strong>Amazon Elastic Container Registry (Amazon ECR):</strong></p>
<p>  The official name of the AWS service.</p>
</li>
<li><p><strong>Fully Managed:</strong></p>
<p>  AWS handles the infrastructure and maintenance of the registry, so you don't have to.</p>
</li>
<li><p><strong>Container Registry:</strong></p>
<p>  It's a service for storing and retrieving container images, which are like pre-packaged software environments.</p>
</li>
<li><p><strong>Docker Container Images:</strong></p>
<p>  ECR primarily works with Docker images, but it also supports other container formats like Open Container Initiative (OCI) images.</p>
</li>
<li><p><strong>Storage and Management:</strong></p>
<p>  You can use ECR to store your images, manage them, and then deploy them to other AWS services like ECS or EKS.</p>
</li>
<li><p><strong>Security:</strong></p>
<p>  ECR offers security features like access control through IAM (AWS Identity and Access Management).</p>
</li>
<li><p><strong>Integration:</strong></p>
<p>  ECR integrates well with other AWS services, making it easy to build, store, and deploy containerized applications.</p>
</li>
</ul>
<hr />
<h3 id="heading-aws-ecs"><strong>AWS ECS:</strong></h3>
<p>AWS ECS stands for <strong><mark>Amazon Elastic Container Service</mark></strong>. It is a fully managed container orchestration service offered by Amazon Web Services that simplifies the deployment, management, and scaling of containerized applications.</p>
<p><strong>Elaboration:</strong></p>
<ul>
<li><p><strong>Fully Managed:</strong></p>
<p>  Amazon ECS handles the underlying infrastructure, including provisioning and managing servers, clusters, and container instances, relieving users from these tasks.</p>
</li>
<li><p><strong>Container Orchestration:</strong></p>
<p>  It orchestrates and manages Docker containers, allowing for the deployment, scaling, and coordination of containerized applications.</p>
</li>
<li><p><strong>Scalability and Availability:</strong></p>
<p>  ECS ensures the application's availability by automatically managing container instances, scaling up or down as needed to meet demand.</p>
</li>
<li><p><strong>Integration with AWS:</strong></p>
<p>  ECS seamlessly integrates with other AWS services like Elastic Load Balancer, AWS Fargate, and Amazon RDS, simplifying application deployment and management.</p>
</li>
<li><p><strong>Flexibility:</strong></p>
<p>  ECS supports various deployment options, including running on Amazon EC2 instances, AWS Fargate (a serverless compute engine), or on-premises with Amazon ECS Anywhere.</p>
</li>
</ul>
<hr />
<h3 id="heading-aws-fargate"><strong>AWS Fargate:</strong></h3>
<p>AWS Fargate is <strong><mark>a serverless compute engine for containers that runs within Amazon ECS and EKS, allowing users to run containers without managing servers or clusters of virtual machines</mark></strong>. It eliminates the need for users to provision, configure, or scale servers, focusing instead on application development and deployment.</p>
<p>Here's a more detailed explanation:</p>
<ul>
<li><p><strong>Serverless Compute Engine:</strong></p>
<p>  Fargate is a serverless service, meaning AWS manages the underlying infrastructure and resources, including server provisioning, configuration, and scaling.</p>
</li>
<li><p><strong>Containers:</strong></p>
<p>  Fargate is designed to run Docker containers, making it ideal for cloud-native applications.</p>
</li>
<li><p><strong>ECS and EKS Integration:</strong></p>
<p>  Fargate seamlessly integrates with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).</p>
</li>
<li><p><strong>Simplified Management:</strong></p>
<p>  Fargate removes the need for users to manage the infrastructure, allowing them to focus on their applications and their deployment.</p>
</li>
<li><p><strong>Resource Management:</strong></p>
<p>  Users specify the required resources (CPU, memory) for their containers, and Fargate automatically allocates and manages those resources.</p>
</li>
<li><p><strong>Pay-as-you-go:</strong></p>
<p>  Fargate uses a pay-as-you-go pricing model, where users only pay for the resources they consume.</p>
</li>
<li><p><strong>Benefits:</strong></p>
<p>  Fargate provides benefits such as simplified infrastructure management, resource right-sizing, improved security through application isolation, and potentially lower costs.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS Zero to Hero Day - 5]]></title><description><![CDATA[Tasks:

Learn about the following to get started with VPC and post it on LinkedIn:
 ANS:

Virtual Private Cloud(VPC): A Virtual Private Cloud (VPC) is a secure, isolated portion of a public cloud infrastructure that allows users to create their own v...]]></description><link>https://blog.amitabh.cloud/aws-zero-to-hero-day-5</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-zero-to-hero-day-5</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[vpc]]></category><category><![CDATA[vpc peering]]></category><category><![CDATA[subnet]]></category><category><![CDATA[route table]]></category><category><![CDATA[TrainWithShubham]]></category><category><![CDATA[transit gateway]]></category><category><![CDATA[nginx]]></category><category><![CDATA[ec2]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><category><![CDATA[#CloudWatch]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Sat, 26 Apr 2025 16:30:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684903239/af9e798d-f157-4eae-8e41-408d6047b828.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tasks">Tasks:</h2>
<ol>
<li><p><strong>Learn about the following to get started with VPC and post it on LinkedIn:</strong></p>
<p> <mark>ANS:</mark></p>
<ol>
<li><p>Virtual Private Cloud(VPC): A Virtual Private Cloud (VPC) is <strong><mark>a secure, isolated portion of a public cloud infrastructure that allows users to create their own virtual network, similar to a private cloud</mark></strong>. It enables organizations to host and manage their resources within a specific, controlled environment, providing security and flexibility on a public cloud platform.</p>
</li>
<li><p>Subnet: A subnet, or subnetwork, is <strong><mark>a logical division of a larger IP network</mark></strong>. It allows for the efficient management of network traffic and resource allocation by breaking down a large network into smaller, more manageable segments. Each subnet has its own unique IP address range.</p>
</li>
<li><p>Internet Gateway: An internet gateway is <strong><mark>a virtual component that facilitates communication between a virtual private cloud (VPC) and the internet</mark></strong>. It acts as a bridge, enabling resources within public subnets of a VPC, such as EC2 instances, to connect to the internet and vice versa. Essentially, it allows your VPC to interact with the wider internet.</p>
</li>
<li><p>Route table: A routing table is <strong><mark>a database that helps determine the best path for data packets to travel across a network</mark></strong>. It's like a map that tells devices (like routers) where to send network traffic based on the destination IP address.</p>
</li>
<li><p>Peering connections: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account. The VPCs can be in different regions (also known as inter-region VPC peering connections).</p>
</li>
</ol>
</li>
</ol>
<hr />
<ol start="2">
<li><p><strong>Imagine you’re the cloud architect for a tech company, ByteConnect Inc. They’ve expanded rapidly, and each department operates in its own isolated cloud space(VPC). Now, the challenge is to establish a communication channel for their instances to communicate seamlessly using AWS Transit Gateway.</strong></p>
<ol>
<li>Reference: <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/extend-tgw.html">AWS Transit Gateways</a></li>
</ol>
</li>
</ol>
<p><mark>ANS:</mark></p>
<p>Read this blog for the Task/Project guide: <a target="_blank" href="https://amitabhdevops.hashnode.dev/aws-transit-gateway">AWS Transit Gateway</a></p>
<hr />
<ol start="3">
<li><p><strong>You are an AWS intern at XYZ company, and you have to implement the concept of CloudWatch for your AWS resource to monitor its logs.</strong></p>
<ol>
<li><p><strong>What needs to be done:</strong></p>
<ol>
<li><p><strong>Create an instance and deploy an nginx web server on that instance.</strong></p>
</li>
<li><p><strong>Set up CloudWatch and connect it to your nginx server for monitoring.</strong></p>
</li>
</ol>
</li>
</ol>
</li>
</ol>
<p><mark>ANS:</mark></p>
<p>Read this blog for the Task/Project guide: <a target="_blank" href="https://amitabhdevops.hashnode.dev/aws-cloudwatch-monitoring">Nginx CloudWatch Monitoring</a></p>
]]></content:encoded></item><item><title><![CDATA[AWS CloudWatch Monitoring]]></title><description><![CDATA[Implementing CloudWatch Monitoring for Nginx Web Server on EC2
Step 1: Launch an EC2 Instance

Go to AWS Console → EC2 → Launch Instance.

Configure the following:

Name: nginx-server

AMI: Ubuntu 22.04 LTS

Instance Type: t2.micro (Free Tier eligibl...]]></description><link>https://blog.amitabh.cloud/aws-cloudwatch-monitoring</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-cloudwatch-monitoring</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[#CloudWatch]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Sat, 26 Apr 2025 16:27:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684819415/770d5255-2dcb-43e6-ba3a-b0f54ce1427f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-implementing-cloudwatch-monitoring-for-nginx-web-server-on-ec2">Implementing CloudWatch Monitoring for Nginx Web Server on EC2</h1>
<h2 id="heading-step-1-launch-an-ec2-instance">Step 1: Launch an EC2 Instance</h2>
<ul>
<li><p>Go to AWS Console → EC2 → Launch Instance.</p>
</li>
<li><p>Configure the following:</p>
<ul>
<li><p>Name: <code>nginx-server</code></p>
</li>
<li><p>AMI: Ubuntu 22.04 LTS</p>
</li>
<li><p>Instance Type: t2.micro (Free Tier eligible)</p>
</li>
<li><p>Key pair: Create or select an existing one.</p>
</li>
<li><p>Security Group: Allow inbound rules for:</p>
<ul>
<li><p>Port 22 (SSH)</p>
</li>
<li><p>Port 80 (HTTP)</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Launch the instance.</p>
</li>
</ul>
<h2 id="heading-step-2-install-and-start-nginx">Step 2: Install and Start Nginx</h2>
<p>Connect to the EC2 instance using SSH:</p>
<pre><code class="lang-bash">ssh -i your-key.pem ubuntu@your-ec2-public-ip
</code></pre>
<p>Update and install nginx:</p>
<pre><code class="lang-bash">sudo apt update -y
sudo apt install nginx -y
</code></pre>
<p>Start and enable nginx service:</p>
<pre><code class="lang-bash">sudo systemctl start nginx
sudo systemctl <span class="hljs-built_in">enable</span> nginx
</code></pre>
<p>Verify by accessing the public IP address of the EC2 instance in a web browser. You should see the Nginx welcome page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684732895/36d9ef04-e043-44d8-839d-8ac7256c3b95.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-create-an-iam-role-for-cloudwatch">Step 3: Create an IAM Role for CloudWatch</h2>
<ul>
<li><p>Navigate to IAM → Roles → Create Role.</p>
</li>
<li><p>Trusted Entity: EC2.</p>
</li>
<li><p>Attach the following policies:</p>
<ul>
<li><p>CloudWatchAgentServerPolicy</p>
</li>
<li><p>AmazonSSMManagedInstanceCore (optional but recommended for easier management)</p>
</li>
</ul>
</li>
<li><p>Name the role appropriately, for example: <code>EC2CloudWatchAgentRole</code>.</p>
</li>
<li><p>Attach this role to the running EC2 instance (EC2 → Actions → Security → Modify IAM Role).</p>
</li>
</ul>
<h2 id="heading-step-4-install-the-cloudwatch-agent">Step 4: Install the CloudWatch Agent</h2>
<p>Since <code>amazon-cloudwatch-agent</code> is not available via <code>apt</code> on Ubuntu, install it manually:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /tmp
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
</code></pre>
<p>Verify the installation:</p>
<pre><code class="lang-bash">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a status
</code></pre>
<h2 id="heading-step-5-create-a-cloudwatch-agent-configuration-file">Step 5: Create a CloudWatch Agent Configuration File</h2>
<p>Create a configuration file to specify which logs to collect:</p>
<pre><code class="lang-bash">sudo vim /opt/aws/amazon-cloudwatch-agent/bin/config.json
</code></pre>
<p>Paste the following configuration:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"logs"</span>: {
    <span class="hljs-attr">"logs_collected"</span>: {
      <span class="hljs-attr">"files"</span>: {
        <span class="hljs-attr">"collect_list"</span>: [
          {
            <span class="hljs-attr">"file_path"</span>: <span class="hljs-string">"/var/log/nginx/access.log"</span>,
            <span class="hljs-attr">"log_group_name"</span>: <span class="hljs-string">"nginx-access-logs"</span>,
            <span class="hljs-attr">"log_stream_name"</span>: <span class="hljs-string">"{instance_id}-access"</span>,
            <span class="hljs-attr">"timezone"</span>: <span class="hljs-string">"UTC"</span>
          },
          {
            <span class="hljs-attr">"file_path"</span>: <span class="hljs-string">"/var/log/nginx/error.log"</span>,
            <span class="hljs-attr">"log_group_name"</span>: <span class="hljs-string">"nginx-error-logs"</span>,
            <span class="hljs-attr">"log_stream_name"</span>: <span class="hljs-string">"{instance_id}-error"</span>,
            <span class="hljs-attr">"timezone"</span>: <span class="hljs-string">"UTC"</span>
          }
        ]
      }
    }
  }
}
</code></pre>
<p>Save and exit the file.</p>
<h2 id="heading-step-6-start-the-cloudwatch-agent">Step 6: Start the CloudWatch Agent</h2>
<p>Start the CloudWatch Agent with the created configuration:</p>
<pre><code class="lang-bash">sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json \
  -s
</code></pre>
<p>This will start collecting the nginx logs and push them to CloudWatch.</p>
<h2 id="heading-step-7-verify-in-aws-console">Step 7: Verify in AWS Console</h2>
<ul>
<li><p>Go to AWS Console → CloudWatch → Log Groups.</p>
</li>
<li><p>You should see two log groups:</p>
<ul>
<li><p><code>nginx-access-logs</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684609697/2241638d-98d1-4d9d-9862-a22fd5c7a849.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684622458/5854d3ce-5144-45ef-a323-61712cacfe62.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745684635554/29f7c924-96ae-4f12-b393-02cd61a93b58.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><code>nginx-error-logs</code></p>
</li>
</ul>
</li>
<li><p>You can now monitor the nginx access and error logs directly from the CloudWatch console.</p>
</li>
</ul>
<hr />
<h1 id="heading-final-outcome">Final Outcome</h1>
<ul>
<li><p>A running EC2 instance with an nginx web server installed and active.</p>
</li>
<li><p>CloudWatch Agent installed and configured to collect nginx logs.</p>
</li>
<li><p>Log streams visible and accessible in the AWS CloudWatch service for monitoring.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS Transit Gateway]]></title><description><![CDATA[AWS Transit Gateway Setup Guide for ByteConnect Inc.
This guide provides detailed steps to establish a network architecture for ByteConnect Inc., where multiple departments operate in isolated Virtual Private Clouds (VPCs) within AWS. The goal is to ...]]></description><link>https://blog.amitabh.cloud/aws-transit-gateway</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-transit-gateway</guid><category><![CDATA[aws transit]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[vpc]]></category><category><![CDATA[subnet]]></category><category><![CDATA[routing]]></category><category><![CDATA[vpc peering]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Internet Gateway]]></category><category><![CDATA[ec2]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Sat, 26 Apr 2025 14:59:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679405269/9d6bcbb7-ea29-4d44-bf58-fec7a53bc024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-aws-transit-gateway-setup-guide-for-byteconnect-inc">AWS Transit Gateway Setup Guide for ByteConnect Inc.</h1>
<p>This guide provides detailed steps to establish a network architecture for ByteConnect Inc., where multiple departments operate in isolated Virtual Private Clouds (VPCs) within AWS. The goal is to enable seamless communication between instances in these VPCs using AWS Transit Gateway, as depicted in an architecture with three VPCs, each containing a public subnet and an EC2 instance, interconnected via a Transit Gateway.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before starting, ensure you have:</p>
<ul>
<li><p>An AWS account with permissions to create and manage VPCs, subnets, Internet Gateways, route tables, EC2 instances, and Transit Gateways (<a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html">AWS Identity and Access Management</a>).</p>
</li>
<li><p>Basic knowledge of navigating the AWS Management Console.</p>
</li>
<li><p>A key pair created in the EC2 console for SSH access to instances (<a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair">Create a key pair</a>).</p>
</li>
<li><p>All actions performed in the same AWS Region (e.g., us-east-1) for consistency.</p>
</li>
</ul>
<h2 id="heading-architecture-overview">Architecture Overview</h2>
<p>The architecture consists of:</p>
<ul>
<li><p><strong>Three VPCs:</strong></p>
<ul>
<li><p><code>test-vpc-1</code>: CIDR 12.0.0.0/16, with public subnet 12.0.0.0/24.</p>
</li>
<li><p><code>test-vpc-2</code>: CIDR 13.0.0.0/16, with public subnet 13.0.1.0/24.</p>
</li>
<li><p><code>test-vpc-3</code>: CIDR 14.0.0.0/16, with public subnet 14.0.1.0/24.</p>
</li>
</ul>
</li>
<li><p><strong>Public Subnets:</strong> Each hosts an EC2 instance with a public IP for accessibility.</p>
</li>
<li><p><strong>Internet Gateways:</strong> Enable internet access for public subnets.</p>
</li>
<li><p><strong>EC2 Instances:</strong> One per VPC for testing communication.</p>
</li>
<li><p><strong>AWS Transit Gateway:</strong> Central hub connecting all VPCs, facilitating inter-VPC communication.</p>
</li>
<li><p><strong>Route Tables:</strong> Configured to route traffic between VPCs via the Transit Gateway.</p>
</li>
<li><p><strong>Security Groups:</strong> Allow necessary traffic (e.g., ICMP, SSH) between VPCs.</p>
</li>
</ul>
<p>The Transit Gateway simplifies network management by acting as a hub, eliminating the need for complex VPC peering. Each VPC’s route table will include routes to the other VPCs’ CIDR blocks, pointing to the Transit Gateway.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745678544979/68935acb-5aee-4a1a-9955-0e59114a118d.png" alt class="image--center mx-auto" /></p>
<p>Image credit: Rahul Wagh on YouTube</p>
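<p>Because non-overlapping CIDR blocks are a hard requirement for this hub-and-spoke routing, it is worth confirming the addressing plan before creating anything. A small sketch using Python's standard <code>ipaddress</code> module, with the three CIDRs from this guide:</p>
<pre><code class="lang-python">import ipaddress
from itertools import combinations

# The three VPC CIDRs from the architecture; overlapping blocks would
# make the Transit Gateway routes ambiguous.
cidrs = ["12.0.0.0/16", "13.0.0.0/16", "14.0.0.0/16"]
networks = [ipaddress.ip_network(c) for c in cidrs]

# Check every pair of networks for overlap.
overlaps = [(a, b) for a, b in combinations(networks, 2) if a.overlaps(b)]
print(overlaps)  # an empty list means routing is unambiguous
</code></pre>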
<h2 id="heading-step-by-step-instructions">Step-by-Step Instructions</h2>
<h3 id="heading-step-1-create-three-vpcs">Step 1: Create Three VPCs</h3>
<ol>
<li><p>Log in to the <a target="_blank" href="https://aws.amazon.com/console/">AWS Management Console</a>.</p>
</li>
<li><p>Navigate to the VPC dashboard under “Networking &amp; Content Delivery.”</p>
</li>
<li><p>Click <strong>Create VPC</strong> and configure:</p>
<ul>
<li><p><strong>VPC 1:</strong></p>
<ul>
<li><p>Name tag: <code>test-vpc-1</code></p>
</li>
<li><p>IPv4 CIDR block: <code>12.0.0.0/16</code></p>
</li>
<li><p>Tenancy: Default</p>
</li>
</ul>
</li>
<li><p><strong>VPC 2:</strong></p>
<ul>
<li><p>Name tag: <code>test-vpc-2</code></p>
</li>
<li><p>IPv4 CIDR block: <code>13.0.0.0/16</code></p>
</li>
<li><p>Tenancy: Default</p>
</li>
</ul>
</li>
<li><p><strong>VPC 3:</strong></p>
<ul>
<li><p>Name tag: <code>test-vpc-3</code></p>
</li>
<li><p>IPv4 CIDR block: <code>14.0.0.0/16</code></p>
</li>
<li><p>Tenancy: Default</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Click <strong>Create VPC</strong> for each and verify creation in the VPC dashboard.</p>
</li>
</ol>
<h3 id="heading-step-2-create-public-subnets">Step 2: Create Public Subnets</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Subnets</strong> and click <strong>Create subnet</strong>.</p>
</li>
<li><p>Configure one subnet per VPC:</p>
<ul>
<li><p><strong>VPC 1:</strong></p>
<ul>
<li><p>VPC: <code>test-vpc-1</code></p>
</li>
<li><p>Subnet name: <code>public-subnet-1</code></p>
</li>
<li><p>Availability Zone: e.g., <code>us-east-1a</code></p>
</li>
<li><p>IPv4 CIDR block: <code>12.0.0.0/24</code></p>
</li>
</ul>
</li>
<li><p><strong>VPC 2:</strong></p>
<ul>
<li><p>VPC: <code>test-vpc-2</code></p>
</li>
<li><p>Subnet name: <code>public-subnet-2</code></p>
</li>
<li><p>Availability Zone: <code>us-east-1a</code></p>
</li>
<li><p>IPv4 CIDR block: <code>13.0.1.0/24</code></p>
</li>
</ul>
</li>
<li><p><strong>VPC 3:</strong></p>
<ul>
<li><p>VPC: <code>test-vpc-3</code></p>
</li>
<li><p>Subnet name: <code>public-subnet-3</code></p>
</li>
<li><p>Availability Zone: <code>us-east-1a</code></p>
</li>
<li><p>IPv4 CIDR block: <code>14.0.1.0/24</code></p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Click <strong>Create subnet</strong> for each and confirm in the Subnets list.</p>
</li>
</ol>
<h3 id="heading-step-3-create-and-attach-internet-gateways">Step 3: Create and Attach Internet Gateways</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Internet Gateways</strong> and click <strong>Create internet gateway</strong>.</p>
</li>
<li><p>Create and attach one per VPC:</p>
<ul>
<li><p><strong>VPC 1:</strong></p>
<ul>
<li><p>Name tag: <code>igw-test-vpc-1</code></p>
</li>
<li><p>After creation, select it, click <strong>Actions</strong> &gt; <strong>Attach to VPC</strong>, and choose <code>test-vpc-1</code>.</p>
</li>
</ul>
</li>
<li><p><strong>VPC 2:</strong></p>
<ul>
<li><p>Name tag: <code>igw-test-vpc-2</code></p>
</li>
<li><p>Attach to <code>test-vpc-2</code>.</p>
</li>
</ul>
</li>
<li><p><strong>VPC 3:</strong></p>
<ul>
<li><p>Name tag: <code>igw-test-vpc-3</code></p>
</li>
<li><p>Attach to <code>test-vpc-3</code>.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Verify attachments in the Internet Gateways section.</p>
</li>
</ol>
<h3 id="heading-step-4-create-and-configure-route-tables">Step 4: Create and Configure Route Tables</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Route Tables</strong> and click <strong>Create route table</strong>.</p>
</li>
<li><p>Create one per VPC:</p>
<ul>
<li><p><strong>VPC 1:</strong> Name: <code>rt-test-vpc-1</code>, VPC: <code>test-vpc-1</code></p>
</li>
<li><p><strong>VPC 2:</strong> Name: <code>rt-test-vpc-2</code>, VPC: <code>test-vpc-2</code></p>
</li>
<li><p><strong>VPC 3:</strong> Name: <code>rt-test-vpc-3</code>, VPC: <code>test-vpc-3</code></p>
</li>
</ul>
</li>
<li><p>For each route table:</p>
<ul>
<li><p>Select the route table, go to <strong>Routes</strong> tab, and click <strong>Edit routes</strong>.</p>
</li>
<li><p>Add a route:</p>
<ul>
<li><p>Destination: <code>0.0.0.0/0</code></p>
</li>
<li><p>Target: Internet Gateway (e.g., <code>igw-test-vpc-1</code> for <code>rt-test-vpc-1</code>)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Save routes</strong>.</p>
</li>
</ul>
</li>
<li><p>Associate each route table with its public subnet:</p>
<ul>
<li><p>Select the route table, go to <strong>Subnet associations</strong>, click <strong>Edit subnet associations</strong>.</p>
</li>
<li><p>Select the corresponding subnet (e.g., <code>public-subnet-1</code> for <code>rt-test-vpc-1</code>) and save.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-step-5-launch-ec2-instances">Step 5: Launch EC2 Instances</h3>
<ol>
<li><p>Navigate to the EC2 dashboard and click <strong>Launch instances</strong>.</p>
</li>
<li><p>Configure one instance per VPC:</p>
<ul>
<li><p><strong>AMI:</strong> Choose Amazon Linux 2 or similar.</p>
</li>
<li><p><strong>Instance type:</strong> <code>t2.micro</code> (free tier eligible).</p>
</li>
<li><p><strong>Network settings:</strong></p>
<ul>
<li><p><strong>VPC 1:</strong> VPC: <code>test-vpc-1</code>, Subnet: <code>public-subnet-1</code>, Auto-assign Public IP: Enable</p>
</li>
<li><p><strong>VPC 2:</strong> VPC: <code>test-vpc-2</code>, Subnet: <code>public-subnet-2</code>, Auto-assign Public IP: Enable</p>
</li>
<li><p><strong>VPC 3:</strong> VPC: <code>test-vpc-3</code>, Subnet: <code>public-subnet-3</code>, Auto-assign Public IP: Enable</p>
</li>
</ul>
</li>
<li><p><strong>Storage:</strong> Default settings.</p>
</li>
<li><p><strong>Tags:</strong> Add Name tag (e.g., <code>instance-vpc-1</code>).</p>
</li>
<li><p><strong>Security group:</strong> Create a new security group with:</p>
<ul>
<li><p>Inbound rule: SSH (port 22) from your IP (e.g., <code>203.0.113.0/32</code>).</p>
</li>
<li><p>Inbound rule: All ICMP - IPv4 from anywhere (<code>0.0.0.0/0</code>) for testing.</p>
</li>
</ul>
</li>
<li><p><strong>Key pair:</strong> Select an existing key pair or create a new one.</p>
</li>
</ul>
</li>
<li><p>Launch each instance and note their public and private IPs.</p>
</li>
</ol>
<h3 id="heading-step-6-create-aws-transit-gateway">Step 6: Create AWS Transit Gateway</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Transit Gateways</strong> and click <strong>Create Transit Gateway</strong>.</p>
</li>
<li><p>Configure:</p>
<ul>
<li><p>Name tag: <code>test-tgw</code></p>
</li>
<li><p>Description: Optional (e.g., “Transit Gateway for ByteConnect”)</p>
</li>
<li><p>Amazon side ASN: Default (64512-65534)</p>
</li>
<li><p>DNS support: Enable</p>
</li>
<li><p>Default route table association/propagation: Enable</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create Transit Gateway</strong> and wait for the status to become “Available” (may take a few minutes).</p>
</li>
</ol>
<h3 id="heading-step-7-attach-vpcs-to-transit-gateway">Step 7: Attach VPCs to Transit Gateway</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Transit Gateway Attachments</strong> and click <strong>Create Transit Gateway Attachment</strong>.</p>
</li>
<li><p>For each VPC:</p>
<ul>
<li><p>Transit Gateway ID: <code>test-tgw</code></p>
</li>
<li><p>Attachment type: VPC</p>
</li>
<li><p>VPC ID: Select <code>test-vpc-1</code>, <code>test-vpc-2</code>, or <code>test-vpc-3</code></p>
</li>
<li><p>Subnet IDs: Select the public subnet (e.g., <code>public-subnet-1</code> for <code>test-vpc-1</code>)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create attachment</strong> for each VPC and verify attachments in the list.</p>
</li>
</ol>
<h3 id="heading-step-8-configure-route-tables-for-inter-vpc-communication">Step 8: Configure Route Tables for Inter-VPC Communication</h3>
<ol>
<li><p>In the VPC dashboard, select <strong>Route Tables</strong>.</p>
</li>
<li><p>Update each route table to include routes to other VPCs’ CIDR blocks:</p>
<ul>
<li><p><strong>rt-test-vpc-1:</strong></p>
<ul>
<li><p>Destination: <code>13.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
<li><p>Destination: <code>14.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
</ul>
</li>
<li><p><strong>rt-test-vpc-2:</strong></p>
<ul>
<li><p>Destination: <code>12.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
<li><p>Destination: <code>14.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
</ul>
</li>
<li><p><strong>rt-test-vpc-3:</strong></p>
<ul>
<li><p>Destination: <code>12.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
<li><p>Destination: <code>13.0.0.0/16</code>, Target: Transit Gateway (<code>test-tgw</code>)</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>For each route table, click <strong>Edit routes</strong>, add the routes, and click <strong>Save routes</strong>.</p>
</li>
</ol>
<h3 id="heading-step-9-configure-security-groups-for-inter-vpc-traffic">Step 9: Configure Security Groups for Inter-VPC Traffic</h3>
<ol>
<li><p>In the EC2 dashboard, select <strong>Security Groups</strong>.</p>
</li>
<li><p>For each instance’s security group, add inbound rules to allow traffic from other VPCs:</p>
<ul>
<li><p><strong>Instance in test-vpc-1:</strong></p>
<ul>
<li><p>Type: All ICMP - IPv4, Source: <code>13.0.0.0/16</code></p>
</li>
<li><p>Type: All ICMP - IPv4, Source: <code>14.0.0.0/16</code></p>
</li>
</ul>
</li>
<li><p><strong>Instance in test-vpc-2:</strong></p>
<ul>
<li><p>Type: All ICMP - IPv4, Source: <code>12.0.0.0/16</code></p>
</li>
<li><p>Type: All ICMP - IPv4, Source: <code>14.0.0.0/16</code></p>
</li>
</ul>
</li>
<li><p><strong>Instance in test-vpc-3:</strong></p>
<ul>
<li><p>Type: All ICMP - IPv4, Source: <code>12.0.0.0/16</code></p>
</li>
<li><p>Type: All ICMP - IPv4, Source: <code>13.0.0.0/16</code></p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Optionally, add rules for other protocols (e.g., TCP port 80 for HTTP) based on application needs.</p>
</li>
<li><p>Save changes for each security group.</p>
</li>
</ol>
<h3 id="heading-step-10-test-connectivity">Step 10: Test Connectivity</h3>
<ol>
<li><p>SSH into an EC2 instance (e.g., <code>instance-vpc-1</code>) using its public IP:</p>
<pre><code class="lang-bash"> ssh -i your-key.pem ec2-user@&lt;public-ip&gt;
</code></pre>
</li>
<li><p>Ping the private IP of an instance in another VPC (e.g., <code>instance-vpc-2</code>):</p>
<pre><code class="lang-bash"> ping &lt;private-ip-of-instance-vpc-2&gt;
</code></pre>
</li>
<li><p>Curl the private IP of an instance in another VPC (e.g., <code>instance-vpc-2</code>); this succeeds only if a web server is running on the target:</p>
<pre><code class="lang-bash"> curl &lt;private-ip-of-instance-vpc-2&gt;
</code></pre>
</li>
<li><p>Repeat from other instances to confirm bidirectional communication.</p>
</li>
<li><p>If pings fail, verify:</p>
<ul>
<li><p>Route table entries are correct.</p>
</li>
<li><p>Transit Gateway attachments are in “Available” state.</p>
</li>
<li><p>Security groups allow ICMP traffic.</p>
</li>
<li><p>Network ACLs (if customized) permit the traffic.</p>
</li>
</ul>
</li>
</ol>
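<p>When a ping fails, the first question is usually which route-table entry the target's private IP should have matched. The following illustrative helper (not part of the AWS setup itself) maps a private IP back to its VPC using the CIDRs from this guide:</p>
<pre><code class="lang-python">import ipaddress

# VPC name to CIDR, as defined earlier in this guide.
vpc_cidrs = {
    "test-vpc-1": "12.0.0.0/16",
    "test-vpc-2": "13.0.0.0/16",
    "test-vpc-3": "14.0.0.0/16",
}

def vpc_for_ip(ip):
    """Return the VPC whose CIDR contains the given private IP, or None."""
    addr = ipaddress.ip_address(ip)
    for name, cidr in vpc_cidrs.items():
        if addr in ipaddress.ip_network(cidr):
            return name
    return None

print(vpc_for_ip("13.0.1.25"))  # test-vpc-2
</code></pre>
<p>If the IP falls in, say, <code>test-vpc-2</code>, the source VPC's route table must have a <code>13.0.0.0/16 → tgw</code> entry for the ping to work.</p>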
<h2 id="heading-additional-notes">Additional Notes</h2>
<ul>
<li><p><strong>CIDR Blocks:</strong> The CIDR blocks (12.0.0.0/16, 13.0.0.0/16, 14.0.0.0/16) are chosen to avoid overlap, which is critical for proper routing. Adjust as needed for your environment.</p>
</li>
<li><p><strong>Public Subnets:</strong> These subnets are configured for internet access, suitable for testing. For production, consider private subnets with NAT Gateways for enhanced security.</p>
</li>
<li><p><strong>Transit Gateway Route Table:</strong> By default, enabling route table propagation associates all attachments with the Transit Gateway’s default route table, allowing communication between VPCs. For isolation, create separate route tables (<a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html">Transit Gateway Route Tables</a>).</p>
</li>
<li><p><strong>Security Considerations:</strong> Restrict security group rules to specific IPs or CIDRs in production. Use network ACLs for additional control if needed.</p>
</li>
<li><p><strong>Scalability:</strong> Additional VPCs can be attached to the Transit Gateway without modifying existing configurations, making this architecture scalable.</p>
</li>
<li><p><strong>Cost:</strong> Transit Gateway incurs charges based on attachments and data transfer. Review <a target="_blank" href="https://aws.amazon.com/transit-gateway/pricing/">AWS Transit Gateway Pricing</a> for cost estimates.</p>
</li>
</ul>
<h2 id="heading-troubleshooting">Troubleshooting</h2>
<ul>
<li><p><strong>Ping Fails:</strong> Check security group rules, route tables, and Transit Gateway attachment status. Ensure instances are running and reachable.</p>
</li>
<li><p><strong>Attachment Issues:</strong> Verify subnets are correctly associated with Transit Gateway attachments. Attachments should be in “Available” state.</p>
</li>
<li><p><strong>Routing Errors:</strong> Confirm no overlapping CIDR blocks and that routes point to the correct Transit Gateway ID.</p>
</li>
</ul>
<h2 id="heading-example-configuration-summary">Example Configuration Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>VPC 1</td><td>VPC 2</td><td>VPC 3</td></tr>
</thead>
<tbody>
<tr>
<td><strong>VPC Name</strong></td><td>test-vpc-1</td><td>test-vpc-2</td><td>test-vpc-3</td></tr>
<tr>
<td><strong>CIDR Block</strong></td><td>12.0.0.0/16</td><td>13.0.0.0/16</td><td>14.0.0.0/16</td></tr>
<tr>
<td><strong>Subnet</strong></td><td>public-subnet-1 (12.0.0.0/24)</td><td>public-subnet-2 (13.0.1.0/24)</td><td>public-subnet-3 (14.0.1.0/24)</td></tr>
<tr>
<td><strong>Route Table Routes</strong></td><td>0.0.0.0/0 → igw<br />13.0.0.0/16 → tgw<br />14.0.0.0/16 → tgw</td><td>0.0.0.0/0 → igw<br />12.0.0.0/16 → tgw<br />14.0.0.0/16 → tgw</td><td>0.0.0.0/0 → igw<br />12.0.0.0/16 → tgw<br />13.0.0.0/16 → tgw</td></tr>
<tr>
<td><strong>EC2 Instance</strong></td><td>instance-vpc-1</td><td>instance-vpc-2</td><td>instance-vpc-3</td></tr>
</tbody>
</table>
</div><p>This setup ensures that instances in <code>test-vpc-1</code>, <code>test-vpc-2</code>, and <code>test-vpc-3</code> can communicate seamlessly, fulfilling ByteConnect Inc.’s requirement for inter-departmental VPC connectivity.</p>
<hr />
<h4 id="heading-key-citations">Key Citations</h4>
<ul>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html">Security group rules for different use cases</a></p>
</li>
<li><p><a target="_blank" href="https://repost.aws/knowledge-center/transit-gateway-fix-vpc-connection">Troubleshoot VPC-to-VPC connectivity through a transit gateway</a></p>
</li>
<li><p><a target="_blank" href="https://www.reddit.com/r/aws/comments/wlplqz/through_transit_gateway_i_cant_send_ping_request/">through transit gateway, I can't send ping request to instance in the other vpc</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html">Get started with using Amazon VPC Transit Gateways</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html">Security group rules</a></p>
</li>
<li><p><a target="_blank" href="https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/EC2/unrestricted-icmp-access.html">Unrestricted ICMP Access</a></p>
</li>
<li><p><a target="_blank" href="https://repost.aws/questions/QULtb_dXerQNeodQV03_s1ig/i-used-all-icmp-policy-in-security-group-to-make-ping-work-but-not-sure-why-how-does-icmp-work-with-network-filtering-aka-fw">I used All ICMP policy in Security Group to make ping work</a></p>
</li>
<li><p><a target="_blank" href="https://stackoverflow.com/questions/21981796/cannot-ping-aws-ec2-instance">Cannot ping AWS EC2 instance</a></p>
</li>
</ul>
<hr />
<p>Output Images:</p>
<ol>
<li><p>VPCs:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679756865/2060a5f8-24cb-492c-b0f7-19854825d166.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Subnets:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679783535/273aa621-b5a7-4e15-a928-427d725f4849.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Internet Gateways:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679822166/3cbd31d6-40ed-4a8e-86e3-290eb1969b94.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Route Tables:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679899890/74b67431-cdb3-46ba-bf71-1238593b0576.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>EC2 Instances:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679936448/2d727cd9-108d-4b67-ba31-231bdd220963.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>EC2 Security Group Inbound Rule:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745679995107/9c1a2878-c25d-43d9-b91b-b03686093ede.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Transit Gateway:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745680122255/4760f26d-370c-4202-a0b7-996b3adcdc90.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Transit Gateway Attachments:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745680154036/ba75dc80-85f2-4ede-94a0-01ff93ecab0c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Dept-A-Server:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745680044594/15f40b9b-dc3e-469a-8189-2d83307e8734.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Dept-B-Server:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745680051641/c5144c93-27e6-46c2-b1f5-a8674fcb532f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Dept-C-Server:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745680060043/bdde617d-f440-4ee0-9c3b-55f843df14c0.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[AWS Zero to Hero day 4 - part 4]]></title><description><![CDATA[Automating EC2 Instance Start/Stop with AWS Lambda and EventBridge

Introduction
In a cost-conscious cloud environment, optimizing resource usage is critical. One effective way to manage EC2 instance costs is to automate the process of starting and s...]]></description><link>https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-4</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-4</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[lambda]]></category><category><![CDATA[ec2]]></category><category><![CDATA[boto3]]></category><category><![CDATA[Cost Optimization]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><category><![CDATA[AWS EventBridge]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[email]]></category><category><![CDATA[notifications]]></category><category><![CDATA[AWS EventBridge Rules for Event-Driven Workflows]]></category><category><![CDATA[Cost efficiency]]></category><category><![CDATA[emailnotification]]></category><category><![CDATA[TrainWithShubham]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Thu, 24 Apr 2025 05:08:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745460637869/8b1c9e96-a52e-47a2-a0eb-cd38926db742.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-automating-ec2-instance-startstop-with-aws-lambda-and-eventbridge">Automating EC2 Instance Start/Stop with AWS Lambda and EventBridge</h1>
<hr />
<h2 id="heading-introduction">Introduction</h2>
<p>In a cost-conscious cloud environment, optimizing resource usage is critical. One effective way to manage EC2 instance costs is to automate the process of starting and stopping instances during business hours. In this blog, we’ll walk through a clean and efficient approach using AWS Lambda and EventBridge, with Python (Boto3) to control instances based on tags.</p>
<hr />
<h2 id="heading-objective">Objective</h2>
<p>To automatically start or stop specific EC2 instances during defined hours using a Lambda function. Instances are identified using a custom tag, enabling flexibility and control.</p>
<hr />
<h2 id="heading-scenario"><strong>Scenario</strong></h2>
<p>You're an AWS expert managing a budget-friendly project with EC2 instances. To save money, you're using AWS Lambda to automatically start and stop instances when they're not needed during non-business hours. 👇</p>
<h4 id="heading-what-needs-to-be-done"><strong>What needs to be done:</strong></h4>
<ul>
<li>Create an AWS Lambda function that will start/stop instances based on their instance tag.</li>
</ul>
<hr />
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before implementing the solution, ensure the following:</p>
<ol>
<li><p><strong>Tagged EC2 Instances</strong></p>
<ul>
<li><p>Tag Key: <code>AutoSchedule</code></p>
</li>
<li><p>Tag Value: <code>true</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745462113021/7c3b3b07-978d-4fb2-a709-2810bd14be6a.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>IAM Role for Lambda</strong><br /> Create and assign an IAM role to your Lambda function with the following least-privilege permissions (EC2 Full Access also works, but grants far more than needed):</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
   <span class="hljs-attr">"Statement"</span>: [
     {
       <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
       <span class="hljs-attr">"Action"</span>: [
         <span class="hljs-string">"ec2:DescribeInstances"</span>,
         <span class="hljs-string">"ec2:StartInstances"</span>,
         <span class="hljs-string">"ec2:StopInstances"</span>
       ],
       <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
     }
   ]
 }
</code></pre>
</li>
</ol>
<hr />
<h2 id="heading-lambda-function-code">Lambda Function Code</h2>
<p>Below is the Python code using Boto3 to start or stop EC2 instances based on the <code>AutoSchedule</code> tag.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3

ec2 = boto3.client(<span class="hljs-string">'ec2'</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    action = event.get(<span class="hljs-string">'action'</span>)  <span class="hljs-comment"># 'start' or 'stop'</span>
    <span class="hljs-keyword">if</span> action <span class="hljs-keyword">not</span> <span class="hljs-keyword">in</span> [<span class="hljs-string">'start'</span>, <span class="hljs-string">'stop'</span>]:
        <span class="hljs-keyword">return</span> {<span class="hljs-string">'statusCode'</span>: <span class="hljs-number">400</span>, <span class="hljs-string">'body'</span>: <span class="hljs-string">"Invalid action."</span>}

    filters = [
        {<span class="hljs-string">'Name'</span>: <span class="hljs-string">'tag:AutoSchedule'</span>, <span class="hljs-string">'Values'</span>: [<span class="hljs-string">'true'</span>]},
        {<span class="hljs-string">'Name'</span>: <span class="hljs-string">'instance-state-name'</span>, <span class="hljs-string">'Values'</span>: [<span class="hljs-string">'stopped'</span>] <span class="hljs-keyword">if</span> action == <span class="hljs-string">'start'</span> <span class="hljs-keyword">else</span> [<span class="hljs-string">'running'</span>]}
    ]

    instances = ec2.describe_instances(Filters=filters)
    instance_ids = [i[<span class="hljs-string">'InstanceId'</span>] <span class="hljs-keyword">for</span> r <span class="hljs-keyword">in</span> instances[<span class="hljs-string">'Reservations'</span>] <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> r[<span class="hljs-string">'Instances'</span>]]

    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> instance_ids:
        <span class="hljs-keyword">return</span> {<span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>, <span class="hljs-string">'body'</span>: <span class="hljs-string">f"No instances to <span class="hljs-subst">{action}</span>."</span>}

    <span class="hljs-keyword">if</span> action == <span class="hljs-string">'start'</span>:
        ec2.start_instances(InstanceIds=instance_ids)
    <span class="hljs-keyword">else</span>:
        ec2.stop_instances(InstanceIds=instance_ids)

    <span class="hljs-keyword">return</span> {<span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>, <span class="hljs-string">'body'</span>: <span class="hljs-string">f"<span class="hljs-subst">{action.capitalize()}</span>ed: <span class="hljs-subst">{instance_ids}</span>"</span>}
</code></pre>
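<p>The nested list comprehension is the least obvious line: <code>describe_instances</code> groups instances under reservations, so two loops are needed to flatten the response into a flat list of IDs. A standalone illustration with a trimmed sample response (the instance IDs are made-up placeholders):</p>
<pre><code class="lang-python"># A sample of the describe_instances response shape, reduced to the
# fields the handler uses, to show how Reservations/Instances flattens.
sample_response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc"}, {"InstanceId": "i-0def"}]},
        {"Instances": [{"InstanceId": "i-0123"}]},
    ]
}

# Same comprehension as in the handler: outer loop over reservations,
# inner loop over each reservation's instances.
instance_ids = [
    i["InstanceId"]
    for r in sample_response["Reservations"]
    for i in r["Instances"]
]
print(instance_ids)  # ['i-0abc', 'i-0def', 'i-0123']
</code></pre>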
<hr />
<h2 id="heading-manual-testing">Manual Testing</h2>
<p>Before scheduling, it's important to verify the Lambda function manually.</p>
<h3 id="heading-steps">Steps:</h3>
<ol>
<li><p>Navigate to your Lambda function in the AWS console.</p>
</li>
<li><p>Click <strong>Test</strong> &gt; <strong>Create new test event</strong>.</p>
</li>
<li><p>Use the following test input:</p>
<ul>
<li><p>For starting:</p>
<pre><code class="lang-json">  {
    <span class="hljs-attr">"action"</span>: <span class="hljs-string">"start"</span>
  }
</code></pre>
</li>
<li><p>For stopping:</p>
<pre><code class="lang-json">  {
    <span class="hljs-attr">"action"</span>: <span class="hljs-string">"stop"</span>
  }
</code></pre>
</li>
</ul>
</li>
<li><p>Execute the test and check the results in the output logs.</p>
</li>
</ol>
<hr />
<h2 id="heading-automating-with-eventbridge">Automating with EventBridge</h2>
<p>To run this function at specific times daily, use EventBridge (CloudWatch Events).</p>
<h3 id="heading-schedule-to-start-instances">Schedule to Start Instances</h3>
<ul>
<li><p>Go to <strong>EventBridge</strong> → <strong>Rules</strong> → <strong>Create Rule</strong></p>
</li>
<li><p>Name: <code>StartEC2Instances</code></p>
</li>
<li><p>Schedule pattern: <code>cron(0 9 * * ? *)</code> (Every day at 9 AM UTC)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745461784330/b88b83d9-46b6-421c-96fd-1b37b05fb7eb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Target: Your Lambda function</p>
</li>
<li><p>Input JSON:</p>
<pre><code class="lang-json">  {
    <span class="hljs-attr">"action"</span>: <span class="hljs-string">"start"</span>
  }
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745461926357/d070fc91-e672-4548-86b7-13f2db81cfed.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745462191801/3ccdf56f-debe-4d47-959d-c326d8506a6f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-schedule-to-stop-instances">Schedule to Stop Instances</h3>
<ul>
<li><p>Name: <code>StopEC2Instances</code></p>
</li>
<li><p>Schedule pattern: <code>cron(0 18 * * ? *)</code> (Every day at 6 PM UTC)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745462265683/31e22863-5490-491a-9369-427e22ceb031.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Input JSON:</p>
<pre><code class="lang-json">  {
    <span class="hljs-attr">"action"</span>: <span class="hljs-string">"stop"</span>
  }
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745462299961/46de4263-0992-4ec3-bbd6-c292720b5669.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745462429345/82de3183-dbee-4b76-80d7-c9ddbb2226f5.png" alt class="image--center mx-auto" /></p>
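<p>The two rules above can also be created programmatically. The sketch below only builds the <code>put_rule</code> / <code>put_targets</code> parameters; the commented-out <code>boto3</code> calls and the Lambda ARN are placeholders to substitute with your own values (granting EventBridge permission to invoke the function is also required and not shown):</p>
<pre><code class="lang-python">import json

# Placeholder ARN -- replace with your function's real ARN.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:ec2-scheduler"

SCHEDULES = [
    ("StartEC2Instances", "cron(0 9 * * ? *)", {"action": "start"}),
    ("StopEC2Instances", "cron(0 18 * * ? *)", {"action": "stop"}),
]

def rule_requests(name, cron, payload, lambda_arn=LAMBDA_ARN):
    """Build the EventBridge put_rule / put_targets parameters for one schedule."""
    put_rule = {"Name": name, "ScheduleExpression": cron, "State": "ENABLED"}
    put_targets = {
        "Rule": name,
        "Targets": [{"Id": "1", "Arn": lambda_arn, "Input": json.dumps(payload)}],
    }
    return put_rule, put_targets

for name, cron, payload in SCHEDULES:
    rule, targets = rule_requests(name, cron, payload)
    # events = boto3.client("events")
    # events.put_rule(**rule); events.put_targets(**targets)
    print(rule["Name"], rule["ScheduleExpression"])
</code></pre>
<p>The <code>Input</code> field is how the constant <code>{"action": ...}</code> JSON from the console steps gets attached to each rule's target.</p>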
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745466756448/c68743a6-0259-46cb-a2c4-14c5a818eff8.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-guide-to-set-up-sns-notifications-for-ec2-startstop-events">Setting Up SNS Notifications for EC2 Start/Stop Events</h2>
<hr />
<h3 id="heading-step-1-create-an-sns-topic">Step 1: Create an SNS Topic</h3>
<ol>
<li><p>Open the <a target="_blank" href="https://us-east-1.console.aws.amazon.com/sns/v3/home"><strong>SNS Console</strong></a>.</p>
</li>
<li><p>Click <strong>Create topic</strong>.</p>
</li>
<li><p>Select <strong>Standard</strong> as the topic type.</p>
</li>
<li><p>Enter the following:</p>
<ul>
<li><strong>Name</strong>: <code>ec2-notifications</code></li>
</ul>
</li>
<li><p>Leave the default settings for other options and click <strong>Create topic</strong>.</p>
</li>
</ol>
<h3 id="heading-step-2-subscribe-to-the-topic-email">Step 2: Subscribe to the Topic (Email)</h3>
<ol>
<li><p>After the topic is created, click on the topic name.</p>
</li>
<li><p>Click <strong>Create subscription</strong>.</p>
</li>
<li><p>Configure the subscription:</p>
<ul>
<li><p><strong>Protocol</strong>: Email</p>
</li>
<li><p><strong>Endpoint</strong>: Enter your email address (e.g., <a target="_blank" href="mailto:you@example.com"><code>you@example.com</code></a>).</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create subscription</strong>.</p>
</li>
<li><p>Check your email inbox and confirm the subscription by clicking the confirmation link in the email.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465383153/938c7c15-b685-4c80-bcbd-7e7d695a33ba.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465366485/76b0c056-131b-492c-a5ab-73ae53157130.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-step-3-create-cloudwatcheventbridge-rule-for-ec2-start-amp-stop-notifications">Step 3: Create CloudWatch/EventBridge Rule for EC2 Start &amp; Stop Notifications</h3>
<p>You need to create two rules: one for EC2 instance start notifications and another for EC2 instance stop notifications.</p>
<h4 id="heading-1-create-rule-for-ec2-instance-start-notification">1. Create Rule for EC2 Instance Start Notification</h4>
<ol>
<li><p>Go to the <strong>Amazon CloudWatch Console</strong>.</p>
</li>
<li><p>In the left sidebar, select <strong>Rules</strong> under <strong>Events/EventBridge</strong> (depending on your UI version). For older UIs, navigate to <strong>Events &gt; Rules</strong>.</p>
</li>
<li><p>Click <strong>Create Rule</strong>.</p>
</li>
<li><p>In the <strong>Rule details</strong> section:</p>
<ul>
<li><p><strong>Name</strong>: <code>EC2InstanceStartNotify</code></p>
</li>
<li><p><strong>Description</strong> (optional): <code>Notify when EC2 instance starts</code>.</p>
</li>
</ul>
</li>
<li><p>Under <strong>Event Pattern</strong>, select <strong>Event Pattern</strong> and paste the following:</p>
</li>
</ol>
<pre><code class="lang-json">{
  <span class="hljs-attr">"source"</span>: [<span class="hljs-string">"aws.ec2"</span>],
  <span class="hljs-attr">"detail-type"</span>: [<span class="hljs-string">"EC2 Instance State-change Notification"</span>],
  <span class="hljs-attr">"detail"</span>: {
    <span class="hljs-attr">"state"</span>: [<span class="hljs-string">"running"</span>]
  }
}
</code></pre>
<ol start="6">
<li><p>Add the target:</p>
<ul>
<li><p>Click <strong>Add target</strong>.</p>
</li>
<li><p>Select <strong>SNS topic</strong>.</p>
</li>
<li><p>Choose the <strong>ec2-notifications</strong> topic you created earlier.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465415081/2d7b14d9-0ebc-4bff-8d42-dfb120fa77c2.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong> to finalize the rule.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465429262/f976a29a-6874-4128-bea7-64e306772066.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
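<p>The pattern above reads as: every listed key must exist in the event, and the event's value must be one of the values in the pattern's list. A miniature matcher makes those semantics concrete (a sketch for flat patterns like this one only; real EventBridge matching also supports prefixes, numeric ranges, and other operators):</p>
<pre><code class="lang-python"># Minimal EventBridge-style pattern matching: each pattern key must be present
# in the event, and the event's value must appear in the pattern's list.
PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}

def matches(pattern, event):
    for key, expected in pattern.items():
        if isinstance(expected, dict):
            if not matches(expected, event.get(key, {})):
                return False
        elif event.get(key) not in expected:
            return False
    return True

sample = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "running", "instance-id": "i-0abc1234"},
}
print(matches(PATTERN, sample))  # True
print(matches(PATTERN, {**sample, "detail": {"state": "stopped"}}))  # False
</code></pre>
<p>Note that extra event fields (like <code>instance-id</code>) never prevent a match; only the keys named in the pattern are checked. That is why the same event feeds both the start rule and, with <code>"stopped"</code>, the stop rule below.</p>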
<h4 id="heading-2-create-rule-for-ec2-instance-stop-notification">2. Create Rule for EC2 Instance Stop Notification</h4>
<ol>
<li><p>Repeat steps 1–4 to create another rule, but with the following changes:</p>
<ul>
<li><p><strong>Name</strong>: <code>EC2InstanceStopNotify</code></p>
</li>
<li><p><strong>Description</strong> (optional): <code>Notify when EC2 instance stops</code>.</p>
</li>
</ul>
</li>
<li><p>Under <strong>Event Pattern</strong>, paste the following for EC2 stop notifications:</p>
</li>
</ol>
<pre><code class="lang-json">{
  <span class="hljs-attr">"source"</span>: [<span class="hljs-string">"aws.ec2"</span>],
  <span class="hljs-attr">"detail-type"</span>: [<span class="hljs-string">"EC2 Instance State-change Notification"</span>],
  <span class="hljs-attr">"detail"</span>: {
    <span class="hljs-attr">"state"</span>: [<span class="hljs-string">"stopped"</span>]
  }
}
</code></pre>
<ol start="3">
<li><p>Add the target:</p>
<ul>
<li><p>Click <strong>Add target</strong>.</p>
</li>
<li><p>Select <strong>SNS topic</strong>.</p>
</li>
<li><p>Choose the <strong>ec2-notifications</strong> topic you created earlier.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465457693/41c3199a-a297-4dac-bdf4-3cc2236f58d8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong> to finalize the rule.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465480136/367a27f2-095f-4d3d-9cc1-4c214a3179af.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>EC2 Start Email Notification:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465542751/4a5e9409-395e-4231-b5c1-be6267d6941e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>EC2 Stop Email Notification:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745465528690/67ebbfd0-7809-45d1-8fc1-9fb3c9bc22ac.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>By combining AWS Lambda, EventBridge scheduling, and SNS notifications, you now have a fully serverless, automated framework for managing your EC2 instance lifecycle:</p>
<ul>
<li><p><strong>Cost optimization</strong>: Instances only run during business hours, reducing your hourly compute charges.</p>
</li>
<li><p><strong>Operational visibility</strong>: SNS alerts notify you immediately whenever an instance starts or stops.</p>
</li>
<li><p><strong>Scalability &amp; maintainability</strong>: EventBridge cron rules and Lambda functions require no servers to manage, and you can adjust tags or schedules without changing infrastructure.</p>
</li>
</ul>
<p>Next steps and best practices:</p>
<ol>
<li><p>Review your CloudWatch Logs and SNS subscription metrics to validate that notifications are delivered as expected.</p>
</li>
<li><p>Implement IAM least-privilege for your Lambda execution role and refine tag-based permissions.</p>
</li>
<li><p>Consider adding CloudWatch alarms on Lambda errors or unexpected instance-state counts to proactively detect issues.</p>
</li>
</ol>
<p>With these measures in place, you’ll maintain tight cost control and real-time awareness of your EC2 fleet. If you’ve adopted a similar pattern or have other ideas for optimizing AWS resource usage, I’d welcome your insights—let’s continue sharing best practices.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Zero to Hero Day 4 - part 2]]></title><description><![CDATA[Task 3: Deploy a scalable web application. The application consists of a MySQL database managed by Amazon RDS and a flask based web application that automatically scale based on demand using an Auto Scaling group and an Elastic Load Balancer.

Deploy...]]></description><link>https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-2</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-2</guid><category><![CDATA[AWS]]></category><category><![CDATA[rds]]></category><category><![CDATA[asg]]></category><category><![CDATA[Load Balancing]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Tue, 22 Apr 2025 07:03:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745301257018/060c47b4-6545-4c7a-ba9c-9973f6cd492f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Task 3: <strong>Deploy a scalable web application. The application consists of a MySQL database managed by Amazon RDS and a Flask-based web application that automatically scales based on demand using an Auto Scaling group and an Elastic Load Balancer.</strong></p>
<ul>
<li>Deploy this: <a target="_blank" href="https://github.com/LondheShubham153/two-tier-flask-app">Two-tier application</a></li>
</ul>
<h3 id="heading-ans"><strong><mark>Ans:</mark></strong></h3>
<ol>
<li><p>Created AWS RDS with MySQL:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745301296596/b6308b4e-9535-4578-b83b-19f0ac32cfd1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Created an EC2 launch template with the following user data:</p>
<pre><code class="lang-bash"> <span class="hljs-comment">#!/bin/bash</span>

 <span class="hljs-comment"># Update system</span>
 sudo apt update -y

 <span class="hljs-comment"># Install Docker &amp; MySQL client</span>
 sudo apt install docker.io mysql-client -y

 <span class="hljs-comment"># Add 'ubuntu' user to Docker group</span>
 sudo usermod -aG docker ubuntu

 <span class="hljs-comment"># Enable Docker service</span>
 sudo systemctl <span class="hljs-built_in">enable</span> docker
 sudo systemctl start docker

 <span class="hljs-comment"># Wait until the RDS instance is ready to accept connections (optional but useful)</span>
 until mysql -u admin -h database-1.cdsswsgcuo0c.us-east-1.rds.amazonaws.com -P 3306 -pdevops2004 -e <span class="hljs-string">"SELECT 1;"</span> &amp;&gt;/dev/null; <span class="hljs-keyword">do</span>
   <span class="hljs-built_in">echo</span> <span class="hljs-string">"Waiting for MySQL..."</span>
   sleep 5
 <span class="hljs-keyword">done</span>

 <span class="hljs-comment"># Create the database if it doesn't exist</span>
 mysql -u admin -h database-1.cdsswsgcuo0c.us-east-1.rds.amazonaws.com -P 3306 -pdevops2004 -e <span class="hljs-string">"CREATE DATABASE IF NOT EXISTS flaskdb;"</span>

 <span class="hljs-comment"># Pull your Flask app Docker image</span>
 sudo docker pull amitabhdevops/aws-flask-app:latest

 <span class="hljs-comment"># Run the Flask app</span>
 sudo docker run -d \
   --name flaskapp \
   -e MYSQL_HOST=database-1.cdsswsgcuo0c.us-east-1.rds.amazonaws.com \
   -e MYSQL_USER=admin \
   -e MYSQL_PASSWORD=devops2004 \
   -e MYSQL_DB=flaskdb \
   -p 80:5000 \
   amitabhdevops/aws-flask-app:latest
</code></pre>
</li>
<li><p>Then created an AWS Auto Scaling group from this template</p>
</li>
<li><p>After that, I created an IAM role for the EC2 service with RDS and CloudWatch full access, and attached it to my EC2 instance</p>
</li>
<li><p>Modified the RDS instance's connectivity settings so my EC2 instance could connect to it</p>
</li>
<li><p>Connected to the first instance created by the ASG</p>
</li>
<li><p>Checked the database contents</p>
</li>
<li><p>output images:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745301775791/e04d4eb9-9b80-4334-beaa-bc2f0b3f22e6.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745301757585/9fc57c5d-dcb8-4385-814f-36840b6082b6.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745301764154/d6b7016b-668f-4084-a4f4-34c81afb5ab6.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
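<p>One detail in the user data worth calling out: the <code>until mysql ... "SELECT 1;"</code> loop simply retries a readiness probe until RDS accepts connections. The same idea in Python, with the probe injected so the loop itself can be exercised without a database (<code>wait_for</code> is an illustrative helper, not part of the deployed app):</p>
<pre><code class="lang-python">import time

def wait_for(probe, attempts=60, delay=5, sleep=time.sleep):
    """Retry `probe` until it returns True, like the bash `until` loop."""
    for _ in range(attempts):
        if probe():
            return True
        sleep(delay)
    return False

# Example: a probe that succeeds on the third try.
tries = iter([False, False, True])
print(wait_for(lambda: next(tries), delay=0))  # True
</code></pre>
<p>In the real script the probe would attempt a MySQL connection to the RDS endpoint; giving up after a bounded number of attempts (rather than looping forever, as the bash version does) makes failed launches easier to spot.</p>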
]]></content:encoded></item><item><title><![CDATA[AWS Zero to Hero Day 4 - part-1]]></title><description><![CDATA[Tasks for the day:

Read about AWS RDS, DynamoDB, and AWS Lambda, and write a post on LinkedIn with an example in your own words. Ans:

AWS RDS: Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate,...]]></description><link>https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-1</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-zero-to-hero-day-4-part-1</guid><category><![CDATA[AWS]]></category><category><![CDATA[rds]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[7daysofAWS]]></category><category><![CDATA[#AWSwithTWS  #7DaysOfAWS]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Mon, 21 Apr 2025 05:27:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745213162864/bf862fd4-292f-4115-b23f-23c9516b7b05.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tasks-for-the-day">Tasks for the day:</h2>
<ol>
<li><p><strong>Read about AWS RDS, DynamoDB, and AWS Lambda, and write a post on LinkedIn with an example in your own words.</strong><br /> <strong><mark>Ans:</mark></strong></p>
<ul>
<li><p><strong><em><mark>AWS RDS:</mark></em></strong> Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.</p>
</li>
<li><p><strong><em><mark>AWS DynamoDB:</mark></em></strong> Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance.</p>
</li>
<li><p><strong><em><mark>AWS Lambda:</mark></em></strong> With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there's no charge when your code isn't running. You can run code for virtually any type of application or backend service—all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.</p>
</li>
</ul>
</li>
</ol>
<hr />
<ol start="2">
<li><h4 id="heading-you-are-part-of-a-team-responsible-for-migrating-the-database-of-an-existing-e-commerce-platform-to-amazon-rds-the-goal-is-to-improve-scalability-performance-and-manageability-the-current-setup-uses-a-self-managed-mysql-database-on-an-on-premises-server">You are part of a team responsible for migrating the database of an existing e-commerce platform to Amazon RDS. The goal is to improve scalability, performance, and manageability. The current setup uses a self-managed MySQL database on an on-premises server. 👇</h4>
<h4 id="heading-what-needs-to-be-done">What needs to be done:</h4>
<ul>
<li><p>Set up and configure a MySQL database on AWS RDS, ensuring optimal performance.</p>
</li>
<li><p>Establish a connection between the RDS instance and your EC2 environment</p>
</li>
</ul>
</li>
</ol>
<p>    <strong><mark>Ans:</mark></strong></p>
<p>    1. Created an RDS instance and connected it to an EC2 instance</p>
<ol start="2">
<li><p>After the RDS instance was created, I connected to the EC2 instance attached to it and updated the system</p>
</li>
<li><p>After that, I created an IAM role for EC2 service with RDS and CloudWatch full access, and attached it to my EC2 Instance.</p>
</li>
<li><p>After that, I launched the EC2 and installed the MySQL client using the command <code>sudo apt install mysql-client -y</code></p>
</li>
<li><p>Then I ran the following command, entered the password, and connected to my RDS MySQL instance: <code>mysql -u admin -h database-1.c3w68q0ggwwn.eu-west-1.rds.amazonaws.com -P 3306 -p</code></p>
<p> where <a target="_blank" href="http://database-1.c3w68q0ggwwn.eu-west-1.rds.amazonaws.com/"><code>database-1.c3w68q0ggwwn.eu-west-1.rds.amazonaws.com</code></a> is the endpoint of my RDS database.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745211315043/387e86f2-8013-484c-bdbf-ac4f3c5818cb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Then I created a database ‘aws’ using the command <code>create database aws;</code></p>
</li>
<li><p>To check whether the database is created or not, I ran this to show all databases: <code>show databases;</code></p>
</li>
<li><p>Then I switched to that database with <code>use aws;</code></p>
</li>
<li><p>After that, I created the table using the command <code>CREATE TABLE learners (learner_id INT, learner_name VARCHAR(50));</code></p>
</li>
<li><p>Then I inserted two rows into the table:</p>
<ol>
<li><p><code>insert into learners (learner_id,learner_name) values (1,"shubham");</code></p>
</li>
<li><p><code>insert into learners (learner_id,learner_name) values (2,"Amitabh");</code></p>
</li>
<li><p>Output <code>select * from learners;</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745211451630/e52ddfbc-d61f-4eb7-9a3d-56d3ab06e0d8.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
</li>
</ol>
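<p>The SQL steps above (create table, insert, select) can be replayed locally with Python's built-in <code>sqlite3</code> as a stand-in when no RDS endpoint is handy. <code>CREATE DATABASE</code> has no SQLite equivalent, so this sketch starts at the table step:</p>
<pre><code class="lang-python">import sqlite3

# In-memory stand-in for the RDS MySQL endpoint; same table/insert/select flow.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE learners (learner_id INT, learner_name VARCHAR(50))")
cur.execute("INSERT INTO learners (learner_id, learner_name) VALUES (1, 'shubham')")
cur.execute("INSERT INTO learners (learner_id, learner_name) VALUES (2, 'Amitabh')")
conn.commit()
print(cur.execute("SELECT * FROM learners").fetchall())
# [(1, 'shubham'), (2, 'Amitabh')]
</code></pre>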
]]></content:encoded></item><item><title><![CDATA[AWS-RDS-DynamoDB-Lambda]]></title><description><![CDATA[AWS RDS:
Amazon Relational Database Service (RDS) is a managed service that simplifies the process of setting up, operating, and scaling relational databases in the cloud. It offers cost-efficient and scalable capacity while automating time-consuming...]]></description><link>https://blog.amitabh.cloud/aws-rds-dynamodb-lambda</link><guid isPermaLink="true">https://blog.amitabh.cloud/aws-rds-dynamodb-lambda</guid><category><![CDATA[AWS]]></category><category><![CDATA[rds]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[lambda]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Sun, 20 Apr 2025 02:50:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745117406904/87a6b523-6f7e-4518-b20b-a9090a1f83af.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-aws-rds">AWS RDS:</h2>
<p>Amazon Relational Database Service (RDS) is <strong><mark>a managed service that simplifies the process of setting up, operating, and scaling relational databases in the cloud</mark></strong>. It offers cost-efficient and scalable capacity while automating time-consuming database administration tasks, like backups, software patching, and monitoring. This allows developers to focus on their applications and business logic rather than managing the database infrastructure.</p>
<p><strong>Key features of Amazon RDS:</strong></p>
<ul>
<li><p><strong>Managed Service:</strong></p>
<p>  RDS handles many database administration tasks, including provisioning, backups, software updates, and patching.</p>
</li>
<li><p><strong>Scalable Capacity:</strong></p>
<p>  You can easily scale the compute resources and storage capacity of your database instances to meet your needs.</p>
</li>
<li><p><strong>High Availability:</strong></p>
<p>  RDS offers features like Multi-AZ deployments and read replicas to enhance database availability and improve performance.</p>
</li>
<li><p><strong>Security:</strong></p>
<p>  RDS provides robust security features, including encryption at rest and in transit, network isolation, and access control.</p>
</li>
<li><p><strong>Database Engines:</strong></p>
<p>  RDS supports various popular database engines, such as MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.</p>
</li>
<li><p><strong>Aurora:</strong></p>
<p>  RDS also includes Amazon Aurora, a fully managed database engine that is built for the cloud and is compatible with MySQL and PostgreSQL.</p>
</li>
<li><p><strong>Cost-Effective:</strong></p>
<p>  You pay only for the resources you use, with no upfront investments required.</p>
</li>
<li><p><strong>Automated Backup and Recovery:</strong></p>
<p>  RDS automatically backs up your database, allowing you to restore to a previous point in time.</p>
</li>
<li><p><strong>Compatibility:</strong></p>
<p>  You can use existing applications and tools with your RDS databases.</p>
</li>
</ul>
<p>In essence, Amazon RDS provides a convenient and reliable way to manage relational databases in the cloud, allowing you to focus on application development and business goals while minimizing the effort required for database administration.</p>
<hr />
<h2 id="heading-aws-dynamodb">AWS DynamoDB:</h2>
<p>AWS DynamoDB is <strong><mark>a serverless, NoSQL database service provided by Amazon Web Services (AWS)</mark></strong>. It supports key-value and document data structures, enabling developers to build modern, scalable applications. DynamoDB is designed for high performance and can handle virtually any size of data.</p>
<p><strong>Here's a more detailed breakdown:</strong></p>
<ul>
<li><p><strong>NoSQL Database:</strong></p>
<p>  DynamoDB is a NoSQL database, meaning it doesn't rely on traditional relational database models like SQL. Instead, it uses a key-value or document-based data structure.</p>
</li>
<li><p><strong>Serverless:</strong></p>
<p>  As a serverless service, AWS manages the underlying infrastructure, so developers don't need to worry about server provisioning, patching, or maintenance.</p>
</li>
<li><p><strong>Key-Value and Document Data Models:</strong></p>
<p>  DynamoDB supports both key-value and document data models, allowing you to store data in a structured format.</p>
</li>
<li><p><strong>Scalable:</strong></p>
<p>  DynamoDB can scale to handle a wide range of workloads, from small, single-user applications to large, globally distributed systems. It automatically scales horizontally to accommodate growing data volumes and traffic.</p>
</li>
<li><p><strong>High Performance:</strong></p>
<p>  DynamoDB is designed for fast data access, with single-digit millisecond response times.</p>
</li>
<li><p><strong>Managed Service:</strong></p>
<p>  AWS manages the database, so developers can focus on building applications instead of managing the database itself.</p>
</li>
<li><p><strong>Global Tables:</strong></p>
<p>  DynamoDB supports Global Tables, allowing for multi-Region replication of data, ensuring high availability and low latency for globally distributed applications.</p>
</li>
<li><p><strong>Integration with Other AWS Services:</strong></p>
<p>  DynamoDB integrates well with other AWS services like AWS Lambda, Kinesis, and S3, enabling developers to build complete solutions.</p>
</li>
</ul>
<p>In essence, DynamoDB is a powerful, scalable, and managed NoSQL database service that is ideal for building modern, serverless applications on AWS.</p>
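<p>The key-value and document models mentioned above show up directly in DynamoDB's low-level wire format, which types every attribute: strings as <code>{"S": ...}</code>, numbers as <code>{"N": "..."}</code>, and nested documents as <code>{"M": {...}}</code>. Here is a minimal serializer sketch for just those three types (boto3's <code>Table</code> resource normally does this conversion for you):</p>
<pre><code class="lang-python">def to_dynamo(value):
    """Serialize a Python value into DynamoDB's typed attribute format."""
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # DynamoDB transmits numbers as strings
    if isinstance(value, dict):
        return {"M": {k: to_dynamo(v) for k, v in value.items()}}
    raise TypeError(f"Unsupported type: {type(value).__name__}")

item = {"learner_id": 2, "profile": {"name": "Amitabh"}}
print({k: to_dynamo(v) for k, v in item.items()})
# {'learner_id': {'N': '2'}, 'profile': {'M': {'name': {'S': 'Amitabh'}}}}
</code></pre>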
<hr />
<h2 id="heading-aws-lambda">AWS Lambda:</h2>
<p>AWS Lambda is <strong><mark>a service that lets you run code without managing servers</mark></strong>. It's an example of serverless computing, also known as Function as a Service (FaaS).</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td></td><td>AWS Lambda</td></tr>
</thead>
<tbody>
<tr>
<td>How it works</td><td>Runs code in response to events, such as user actions or changes to data</td></tr>
<tr>
<td>What you need to do</td><td>Upload your code, and Lambda handles the rest</td></tr>
<tr>
<td>What you pay for</td><td>Only the compute time you use</td></tr>
<tr>
<td>What it does</td><td>Scales automatically, monitors, and logs your code</td></tr>
<tr>
<td>What you can use it for</td><td>Backend services, web apps, mobile apps, and more</td></tr>
</tbody>
</table>
</div><p><strong>You can use Lambda to:</strong></p>
<ul>
<li><p>Extend other AWS services</p>
</li>
<li><p>Create your own backend services</p>
</li>
<li><p>Process streams of data</p>
</li>
<li><p>Call APIs</p>
</li>
<li><p>Integrate with other AWS services</p>
</li>
<li><p>Run code for applications that need to scale up and down</p>
</li>
</ul>
<p>You can write Lambda functions in languages like Node.js, Python, Go, and Java. You can use tools like AWS SAM or Docker CLI to build, test, and deploy your functions.</p>
<p>AWS Lambda is part of Amazon Web Services (AWS).</p>
<hr />
<p>Put another way: Lambda functions are pieces of code that perform specific tasks, such as processing data streams or responding to HTTP requests. You can trigger these functions with various events, like changes in data, user actions, or scheduled tasks. Lambda manages the underlying infrastructure, including scaling and maintenance, allowing you to focus on your code.</p>
<p><strong>Here's a more detailed explanation:</strong></p>
<ul>
<li><p><strong>Serverless Computing:</strong></p>
<p>  Lambda eliminates the need to provision and manage servers yourself. You only pay for the compute time your code uses, and there are no charges when your code is not running.</p>
</li>
<li><p><strong>Event-Driven:</strong></p>
<p>  Lambda functions are triggered by events, such as a file being uploaded to an S3 bucket, a message arriving in an SQS queue, or a request being sent to an API Gateway endpoint.</p>
</li>
<li><p><strong>Scalability:</strong></p>
<p>  Lambda automatically scales your code up or down based on demand, ensuring that it can handle a wide range of workloads without manual intervention.</p>
</li>
<li><p><strong>Code Execution:</strong></p>
<p>  You upload your code (in a variety of supported languages like Node.js, Python, Java, and Go) to Lambda, and it's executed in response to the triggered events.</p>
</li>
<li><p><strong>Infrastructure Management:</strong></p>
<p>  Lambda handles all the underlying infrastructure management, including server maintenance, capacity provisioning, and automatic scaling.</p>
</li>
<li><p><strong>Versatility:</strong></p>
<p>  You can use Lambda for a wide range of applications, from simple backend logic to complex data processing pipelines.</p>
</li>
</ul>
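<p>Concretely, a Lambda function is just a function with a fixed signature: it receives the triggering event and a context object. The smallest possible handler looks like this, and it also runs locally since it is plain Python (the event shape here is made up for illustration):</p>
<pre><code class="lang-python">def lambda_handler(event, context):
    """Entry point Lambda invokes; `event` carries the trigger's payload."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation -- in AWS, Lambda calls this for you on each event.
print(lambda_handler({"name": "Amitabh"}, None))
# {'statusCode': 200, 'body': 'Hello, Amitabh!'}
</code></pre>
<p>Saved as <code>lambda_function.py</code>, the handler name in the console would be <code>lambda_function.lambda_handler</code>.</p>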
]]></content:encoded></item><item><title><![CDATA[Amazon S3: Secure Cloud Storage]]></title><description><![CDATA[Amazon Simple Storage Service (S3) is an object storage service provided by Amazon Web Services (AWS) offering high scalability, data availability, security, and performance. It allows users to store and retrieve any amount of data from anywhere. S3 ...]]></description><link>https://blog.amitabh.cloud/amazon-s3-secure-cloud-storage</link><guid isPermaLink="true">https://blog.amitabh.cloud/amazon-s3-secure-cloud-storage</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS s3]]></category><dc:creator><![CDATA[Amitabh soni]]></dc:creator><pubDate>Sun, 20 Apr 2025 02:48:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745117230082/e9b88458-445d-4470-bada-8d6eaa878a29.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon Simple Storage Service (S3) is <strong><mark>an object storage service provided by Amazon Web Services (AWS) offering high scalability, data availability, security, and performance</mark></strong>. It allows users to store and retrieve any amount of data from anywhere. S3 stores data as objects within buckets. </p>
<p><strong>Key Features and Concepts:</strong></p>
<ul>
<li><p><strong>Object Storage:</strong> S3 stores data as objects, which are essentially files along with optional metadata. </p>
</li>
<li><p><strong>Buckets:</strong> S3 uses buckets as containers for storing objects. </p>
</li>
<li><p><strong>Scalability:</strong> S3 is designed to handle massive amounts of data and traffic. </p>
</li>
<li><p><strong>Data Durability:</strong> S3 offers high durability, with a stated goal of 11 nines (99.999999999%) data durability. </p>
</li>
<li><p><strong>Data Availability:</strong> S3 is designed for high availability, with a stated goal of 99.99%. </p>
</li>
<li><p><strong>Security:</strong> S3 provides various security features, including encryption, access control lists (ACLs), and IAM policies. </p>
</li>
<li><p><strong>Storage Classes:</strong> S3 offers different storage classes to optimize costs and performance based on access frequency and data lifecycle. </p>
</li>
<li><p><strong>Access Points:</strong> S3 Access Points allow you to create virtualized endpoints for accessing S3 data, which can improve performance and simplify application architecture. </p>
</li>
<li><p><strong>Lifecycle Management:</strong> S3 provides lifecycle management features to automate object transitions between storage classes and deletion. </p>
</li>
<li><p><strong>Replication:</strong> S3 supports cross-region replication, allowing you to replicate data across different AWS regions for redundancy and disaster recovery. </p>
</li>
<li><p><strong>Data Residency:</strong> S3 offers features for data residency, allowing you to store data in specific data perimeters. </p>
</li>
</ul>
<p><strong>Use Cases:</strong></p>
<ul>
<li><p><strong>Storing and retrieving data for various applications:</strong></p>
<p>  S3 is used for storing and retrieving data for a wide range of applications, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. </p>
</li>
<li><p><strong>Building data lakes:</strong></p>
<p>  S3 can be used to build data lakes, which are centralized repositories for storing structured and unstructured data at any scale. </p>
</li>
<li><p><strong>Storing static website content:</strong></p>
<p>  S3 can be used to store static website content, which can then be served to users. </p>
</li>
<li><p><strong>Storing backups and archives:</strong></p>
<p>  S3 can be used as a cost-effective storage solution for backups and archives. </p>
</li>
</ul>
<p><strong>Getting Started with S3:</strong></p>
<ol>
<li><p><strong>Create an AWS Account:</strong> You'll need an AWS account to use S3. </p>
</li>
<li><p><strong>Create an S3 Bucket:</strong> Create a bucket to store your objects. </p>
</li>
<li><p><strong>Upload Objects:</strong> Upload your files or data to the bucket. </p>
</li>
<li><p><strong>Configure Access:</strong> Configure access control to manage who can access your objects. </p>
</li>
<li><p><strong>Choose a Storage Class:</strong> Select the appropriate storage class for your data. </p>
</li>
<li><p><strong>Explore S3 Features:</strong> Learn about and utilize other S3 features like lifecycle management, replication, and access points.</p>
</li>
</ol>
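<p>As a taste of the lifecycle management feature mentioned above, the configuration below transitions objects under a prefix to cheaper storage classes over time and finally expires them. The prefix and day counts are illustrative; you would apply it with <code>put_bucket_lifecycle_configuration</code>:</p>
<pre><code class="lang-python">import json

# Illustrative lifecycle rule: logs/ objects move to Standard-IA at 30 days,
# Glacier at 90 days, and are deleted after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
print(json.dumps(lifecycle, indent=2))
</code></pre>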
]]></content:encoded></item></channel></rss>