From Zero to Kubernetes: Automating a Minimal Cluster on AWS EC2 (My DevOps Journey)

The Unofficial Challenge: Why Automate Kubernetes on AWS?

Ever wondered if you could spin up a fully functional Kubernetes cluster on AWS EC2 with just a few commands? Four years ago, during my DevOps Masters Program, I decided to make that a reality. While the core assignment was to learn Kubernetes (which can be done in many ways), I set myself an ambitious personal challenge: to fully automate the deployment of a minimal Kubernetes cluster on AWS EC2, from instance provisioning to node joining.

Manual Kubernetes setups can be incredibly time-consuming, prone to errors, and difficult to reproduce consistently. I wanted to leverage the power of Infrastructure as Code (IaC) to create a repeatable, disposable, and efficient way to deploy a minimal K8s environment for learning and experimentation. My goal wasn’t just to understand Kubernetes, but to master its deployment pipeline, integrate AWS services seamlessly, and truly push the boundaries of what I could automate within a cloud environment.

The full GitHub repository: https://github.com/jthoma/code-collection/tree/master/aws/aws-cf-kubecluster

The Architecture: A Glimpse Behind the Curtain

At its core, my setup involved an AWS CloudFormation template (managed by AWS SAM CLI) to provision EC2 instances, and a pair of shell scripts to initialize the Kubernetes control plane and join worker nodes.

Here’s a breakdown of the key components and their roles in bringing this automated cluster to life:

AWS EC2: These are the workhorses – the virtual machines that would host our Kubernetes control plane and worker nodes.
AWS CloudFormation (via AWS SAM CLI): This is the heart of our Infrastructure as Code. CloudFormation allows us to define our entire AWS infrastructure (EC2 instances, security groups, IAM roles, etc.) in a declarative template. The AWS Serverless Application Model (SAM) CLI acts as a powerful wrapper, simplifying the deployment of CloudFormation stacks and providing a streamlined developer experience.
Shell Scripts: These were the crucial “orchestrators” running within the EC2 instances. They handled the actual installation of Kubernetes components (kubeadm, kubelet, kubectl, Docker) and the intricate steps required to initialize the cluster and join nodes.

When I say “minimal” cluster, I’m referring to a setup with just enough components to be functional – typically one control plane node and one worker node, allowing for basic Kubernetes operations and application deployments.

The Automation Blueprint: Diving into the Files

The entire orchestration was handled by three crucial files, working in concert to bring the Kubernetes cluster to life:

template.yaml (The AWS CloudFormation Backbone): This YAML file is where the magic of Infrastructure as Code happens. It outlines our EC2 instances, their network configurations, and the necessary security groups and IAM roles. Critically, it uses the UserData property within the EC2 instance definition. This powerful property allows you to pass shell commands or scripts that the instance executes upon launch. This was our initial entry point for automation.

   You can view the `template.yaml` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/template.yaml).

kube-bootstrap.sh (The Instance Preparation Script): This script is the first to run on our EC2 instances. It handles all the prerequisites for Kubernetes: installing Docker, the kubeadm, kubectl, and kubelet binaries, disabling swap, and setting up the necessary kernel modules and sysctl parameters that Kubernetes requires. Essentially, it prepares the raw EC2 instance to become a Kubernetes node.

   You can view the `kube-bootstrap.sh` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-bootstrap.sh).
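
   The actual script is in the repo linked above; as a rough sketch of what such a bootstrap step typically involves (assuming Ubuntu instances, the docker.io runtime package, and the pkgs.k8s.io apt repository, which may not match the real file exactly), it boils down to something like this:

#!/bin/bash
# Node-preparation sketch; the real kube-bootstrap.sh may differ in details.

# Kubernetes requires swap to be disabled.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel modules and sysctl settings needed for pod networking.
modprobe overlay
modprobe br_netfilter
cat <<'EOF' > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# Container runtime and helpers (Ubuntu package names assumed).
apt-get update
apt-get install -y docker.io apt-transport-https ca-certificates curl gpg

# Kubernetes tooling; the repository URL and version pin are assumptions.
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key \
  | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl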

kube-init-cluster.sh (The Kubernetes Orchestrator): Once kube-bootstrap.sh has laid the groundwork, kube-init-cluster.sh takes over. This script is responsible for initializing the Kubernetes control plane on the designated master node. It then generates the crucial join token that worker nodes need to connect to the cluster. Finally, it uses that token to bring the worker node(s) into the cluster, completing the Kubernetes setup.

   You can view the `kube-init-cluster.sh` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-init-cluster.sh).
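
   Conceptually, the control-plane side of that orchestration reduces to a few commands; here is a minimal sketch (the pod CIDR, the Flannel CNI, and the token hand-off are illustrative assumptions, not necessarily what the repo script does):

#!/bin/bash
# Control-plane initialization sketch; illustrative only.

# Initialize the control plane; this CIDR matches Flannel's default.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for the default user on the master node.
mkdir -p /home/ubuntu/.kube
cp /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
chown ubuntu:ubuntu /home/ubuntu/.kube/config

# Install a CNI plugin so the nodes become Ready (Flannel as an example).
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f \
  https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Produce the join command that worker nodes must run to enter the cluster.
kubeadm token create --print-join-command > /tmp/join-command.sh
# Delivering and executing /tmp/join-command.sh on the worker(s) completes the setup.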

The Deployment Process: sam deploy -g in Action

The entire deployment process, from provisioning AWS resources to the final Kubernetes cluster coming online, is kicked off with a single, elegant command from the project’s root directory:

sam deploy -g

The -g flag initiates a guided deployment. AWS SAM CLI interactively prompts for key parameters like instance types, your AWS EC2 key pair (for SSH access), and details about your desired VPC. This interactive approach makes the deployment customizable yet incredibly streamlined, abstracting away the complexities of direct CloudFormation stack creation. Under the hood, SAM CLI translates your template.yaml into a full CloudFormation stack and handles its deployment and updates.
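
The guided run also offers to save the chosen values to a samconfig.toml, so repeat deployments and tear-downs stay short; a typical session looks like this:

# First run: answer the guided prompts (stack name, region, instance type,
# key pair, VPC details) and let SAM save them to samconfig.toml.
sam deploy -g

# Subsequent runs reuse the saved configuration non-interactively.
sam deploy

# Tear the whole stack down once the experiment is over.
sam delete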

The “Aha!” Moment: Solving the Script Delivery Challenge

One of the most persistent roadblocks I encountered during this project was a seemingly simple problem: how to reliably get kube-bootstrap.sh and kube-init-cluster.sh onto the newly launched EC2 instances? My initial attempts, involving embedding the scripts directly into the UserData property, quickly became unwieldy due to size limits and readability issues. Other complex methods also proved less than ideal.

After several attempts and a bit of head-scratching, the elegant solution emerged: I hosted both shell scripts in a public-facing downloads folder on my personal blog. Then, within the EC2 UserData property in template.yaml, I simply used wget to download these files to the /tmp directory on the instance, followed by making them executable and running them.
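
In practice, the UserData shell snippet (Base64-encoded by CloudFormation) amounts to only a few lines; the download URL below is a placeholder rather than the actual location on my blog:

#!/bin/bash
# Runs once at first boot via EC2 user data.
cd /tmp
wget https://example.com/downloads/kube-bootstrap.sh
wget https://example.com/downloads/kube-init-cluster.sh
chmod +x kube-bootstrap.sh kube-init-cluster.sh
./kube-bootstrap.sh
./kube-init-cluster.sh   # run on the control-plane instance only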

This approach proved incredibly robust and streamlined. It kept the CloudFormation template clean and manageable, while ensuring the scripts were always accessible at launch time without needing complex provisioning tools or manual intervention. It was a classic example of finding a simple, effective solution to a tricky problem.

Lessons Learned and Key Takeaways

This project, born out of an academic requirement, transformed into a personal quest to master automated Kubernetes deployments on AWS. It was a journey filled with challenges, but the lessons learned were invaluable:

Problem-Solving is Key: Technical roadblocks are inevitable. The ability to iterate, experiment, and find creative solutions is paramount in DevOps.
The Power of Infrastructure as Code (IaC): Defining your infrastructure programmatically is not just a best practice; it’s a game-changer for reproducibility, scalability, and disaster recovery.
Automation Principles: Breaking down complex tasks into manageable, automated steps significantly reduces manual effort and error.
AWS CloudFormation and UserData Versatility: Understanding how to leverage properties like UserData can unlock powerful initial setup capabilities for your cloud instances.
Persistence Pays Off: Sticking with a challenging project until it works, even when faced with frustrating issues, leads to deep learning and a huge sense of accomplishment.

While this was a fantastic learning experience, if I were to revisit this project today, I might explore using a dedicated configuration management tool like Ansible for the in-instance setup, or perhaps migrating to a managed Kubernetes service like EKS for production readiness. However, for a hands-on, foundational understanding of automated cluster deployment, this self-imposed challenge was truly enlightening.


Conclusion

This project underscored that with a bit of ingenuity and the right tools, even complex setups like a Kubernetes cluster can be fully orchestrated and deployed with minimal human intervention. It’s a testament to the power of automation in the cloud and the satisfaction of bringing a challenging vision to life.

I hope this deep dive into my automated Kubernetes cluster journey has been insightful. Have you embarked on similar automation challenges? What unique problems did you solve? Share your experiences in the comments!

Optimizing WordPress Performance with AWS, Docker and Jenkins

At Jijutm.com, I wanted to deliver a fast and reliable experience for our readers. To achieve this, I have implemented a containerized approach using Docker and Jenkins for managing this WordPress site. This article delves into the details of our setup and how it contributes to exceptional website performance.

Why Containers?

Traditional server management often involves installing software directly on the operating system. This can lead to dependency conflicts, versioning issues, and a complex environment. Docker containers provide a solution by encapsulating applications with all their dependencies into isolated units. This offers several advantages:

Consistency: Docker ensures a consistent environment regardless of the underlying operating system. This simplifies development, testing, and deployment.
Isolation: Applications running in containers are isolated from each other, preventing conflicts and improving security.
Portability: Docker containers are portable across different environments, making it easy to migrate your application between development, staging, and production.

The Containerized Architecture

This WordPress site leverages three Docker containers:

  1. Nginx: A high-performance web server that serves the content of this website efficiently.
  2. PHP-FPM: A FastCGI process manager that executes PHP code for dynamic content generation in WordPress.
  3. MariaDB: A robust and popular open-source relational database management system that stores the WordPress data and is fully compatible with MySQL.

These containers work together seamlessly to deliver a smooth user experience. Nginx acts as the front door, handling user requests and routing them to the PHP-FPM container for processing. PHP-FPM interacts with the MariaDB container to retrieve and update website data.
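
To make the wiring concrete, here is a rough sketch of the three containers on a shared Docker network (image tags, volume paths, and the network name are assumptions; in practice Docker Compose expresses the same thing more neatly):

#!/bin/bash
# Shared network so the containers can reach each other by name.
docker network create wpnet

# MariaDB stores the WordPress data.
docker run -d --name db --network wpnet \
  -e MYSQL_ROOT_PASSWORD=change-me -e MYSQL_DATABASE=wordpress \
  -v wp_db:/var/lib/mysql mariadb:10.11

# PHP-FPM executes the WordPress code.
docker run -d --name php --network wpnet \
  -v /srv/wordpress:/var/www/html php:8.2-fpm

# Nginx terminates HTTP and forwards .php requests to the php container.
docker run -d --name web --network wpnet -p 8080:80 \
  -v /srv/wordpress:/var/www/html \
  -v /srv/nginx/wp.conf:/etc/nginx/conf.d/default.conf:ro \
  nginx:stable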

Leveraging Jenkins for Automation

While Docker simplifies application management, automating deployments is crucial for efficient workflow. This is where Jenkins comes in. Jenkins is an open-source automation server that we use to manage the build and deployment process for our WordPress site.

Here’s how Jenkins integrates into this workflow:

  1. Code Changes: Whenever we make changes to the WordPress codebase, we push them to a version control system like Git.
  2. Jenkins Trigger: The push to the Git repository triggers a job in Jenkins.
  3. Build Stage: Jenkins pulls the latest code, builds a new Docker image containing the updated WordPress application, and pushes it to a Docker registry.
  4. Deployment Stage: The new Docker image is deployed to our hosting environment, updating the running containers with the latest code.

This automation ensures that our website stays up-to-date with the latest changes without any manual intervention.
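
Under the hood, the Jenkins job is little more than a handful of shell steps; a simplified sketch of the build and deployment stages might look like this (the registry address, image name, and container names are placeholders):

#!/bin/bash
set -e

GIT_SHA=$(git rev-parse --short HEAD)
IMAGE="registry.example.com/jijutm/wordpress:${GIT_SHA}"

# Build stage: bake the updated WordPress code into an image and publish it.
docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# Deployment stage: replace the running container with the new image.
docker pull "${IMAGE}"
docker stop wp-php || true
docker rm wp-php   || true
docker run -d --name wp-php --network wpnet "${IMAGE}"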

Hooked into WordPress Post and Page Publishing

Over and above maintaining the code through Jenkins, each content publish action triggers another Jenkins project that runs a sequence of commands, as sketched below: wget in mirror mode converts the whole site to static HTML files; sed rewrites the URLs from the local host to the real external domain; gzip creates a .html.gz for each HTML file; and the AWS CLI syncs the static mirror folder to AWS S3, applying metadata headers to the files to specify the content type and content encoding. When all the files are synced, the AWS CLI issues an invalidation request against the CloudFront distribution.
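
A condensed sketch of that publish job is below; the bucket name, distribution ID, port, and domain are placeholders, and the real job carries more error handling:

#!/bin/bash
# Static-mirror publish: WordPress -> static HTML -> S3 -> CloudFront.
set -e

# 1. Mirror the locally hosted site into static files.
wget --mirror --page-requisites --no-parent -P mirror http://localhost:8080/

# 2. Rewrite local URLs to the public domain.
grep -rl 'localhost:8080' mirror | xargs sed -i 's#http://localhost:8080#https://jijutm.com#g'

# 3. Pre-compress every HTML file as .html.gz.
find mirror -name '*.html' -exec gzip -k9 {} \;

# 4. Sync to S3, setting Content-Type and Content-Encoding for the gzipped pages.
aws s3 sync mirror/ s3://example-bucket/ --exclude '*.html.gz'
aws s3 sync mirror/ s3://example-bucket/ --exclude '*' --include '*.html.gz' \
  --content-type 'text/html' --content-encoding 'gzip'

# 5. Invalidate CloudFront so visitors get the fresh content immediately.
aws cloudfront create-invalidation --distribution-id EXAMPLEID --paths '/*'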

Benefits of this Approach

Improved Performance: Docker containers provide a lightweight and efficient environment, leading to faster loading times for this website.
Enhanced Scalability: I don’t need to worry about scaling this application by adding more containers to handle increased traffic, as the published site is served by AWS S3 and CloudFront.
Simplified Management: Docker and Jenkins automate a significant portion of the infrastructure management, freeing up time for development and content creation. With Docker and all components running on my Asus TUF A17 laptop powered by Xubuntu, the hosting charges are limited to AWS Route 53, AWS S3 and AWS CloudFront only.
Reliable Deployments: Jenkins ensures consistent and reliable deployments, minimizing the risk of errors or downtime.
For minimal dynamic content like the download counters, AWS Lambda functions are written and deployed to record download requests in a DynamoDB table and to display the count near any downloadable content with proper markup. Along with this, the comments have been moved to Disqus, a hosted commenting system that works with WordPress sites and replaces the native WordPress comments.

Conclusion

By leveraging Docker containers and Jenkins, I have established a robust and performant foundation for this site. This approach allows me to focus on delivering high-quality content to the readers while ensuring a smooth and fast user experience.

Additional Considerations

Security: While Docker containers enhance security, it’s essential to maintain secure practices like keeping Docker containers updated and following security best practices for each service.
Monitoring: Monitoring the health and performance of your containers is crucial. Tools like Docker Stats and Prometheus can provide valuable insights.
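
As a quick example of the lightweight end of that spectrum, Docker's built-in stats command gives an immediate per-container snapshot:

# One-shot snapshot of CPU, memory and network usage for all running containers.
docker stats --no-stream

# Or follow a single container continuously, e.g. the web front end.
docker stats web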

Hope this article provides a valuable perspective on how Docker and Jenkins can be used to optimize a WordPress website. If you have any questions, feel free to leave a comment below!

Creating a Time-lapse effect Video from a Single Photo Using Command Line Tools on Ubuntu

In this tutorial, I’ll walk you through creating a time-lapse effect video that transitions from dark to bright, all from a single high-resolution photo. Using a Samsung Galaxy M14 5G, I captured the original image, then manipulated it using Linux command-line tools like ImageMagick, PHP, and ffmpeg. This approach is perfect for academic purposes or for anyone interested in experimenting with video creation from static images. Here’s how you can achieve the effect. Note that this is just an academic exploration; to use it as a professional tool, the values and frames should be defined with utmost care.

The basics were to find the perfect image and crop it to 9:16, since I was targeting Facebook Reels. The 50 MP images taken on the Samsung Galaxy M14 5G are 4:3 at 8160×6120, whereas Facebook Reels and YouTube Shorts follow a 9:16 format at 1080×1920 or proportionate dimensions. My final source image was 1700×3022, added here for reference; I had to scale it down to keep within the blog aesthetics.
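
For reference, the crop itself can be done with ImageMagick from the command line; this is only a sketch assuming a centred 9:16 window (the exact geometry I used to arrive at 1700×3022 may have differed):

# Centre-crop the 8160x6120 original to a 9:16 window, then scale down.
convert original.jpg -gravity center -crop 3442x6120+0+0 +repage \
        -resize 1700x3022 src.jpg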

Step 1: Preparing the Frame Rate and Length
To begin, I decided on a 20-second video with a frame rate of 25 frames per second, resulting in a total of 500 frames. Manually creating 500 frames would be tedious, and any professional would use some kind of automation. Being a DevOps enthusiast and a Linux fanatic since 1998, my natural choice was shell scripting, but an addiction to PHP, an aftermath of using it since 2002, kicked in, and the following code snippet was the outcome.

Step 2: Generating Brightness and Contrast Values Using PHP
The next step was to create an array of brightness and contrast values to give the impression of a gradually brightening scene. Using PHP, I mapped each frame to an optimal brightness-contrast value. Here’s the PHP snippet I used:

<?php

$dur = 20;                        // video duration in seconds
$fps = 25;                        // frames per second
$frames = $dur * $fps;            // total number of frames (500)
$plen = strlen('' . $frames) + 1; // zero-padding width for the frame prefix
$val = -50;                       // starting brightness-contrast value
$incr = (60 / $frames);           // increment per frame

for ($i = 0; $i < $frames; $i++) {
    // zero-padded frame number followed by its brightness-contrast value
    $pfx = str_pad($i, $plen, '0', STR_PAD_LEFT);
    echo $pfx, " ", round($val, 2), "\n";
    $val += $incr;
}

?>

Being on Ubuntu, the above code was saved as gen.php, and after updating the values for duration and frame rate it was executed from the CLI, with the output redirected to a text file, values.txt, using the following command.

php -q gen.php > values.txt 

Now, to make things easy, the source file was copied as src.jpg into a temporary folder and a sub-folder ‘anim’ was created to hold the frames. I already had a script that resumes from where it left off, depending on the situation. The script is as follows.

#!/bin/bash

# Frames already generated in ./anim/ versus the total listed in values.txt.
gdone=$(find ./anim/ -type f | grep -c '.jpg')
tcount=$(grep -c "^0" values.txt)
todo=$((tcount - gdone))

echo "done $gdone of ${tcount}, to do $todo more"

# Process only the remaining lines so an interrupted run resumes where it left off.
tail -n "$todo" values.txt | while read -r fnp val
do
    echo "$fnp"
    convert src.jpg -brightness-contrast "${val}" anim/img_${fnp}.jpg
done

The process is quite simple. The first line defines the variable gdone by counting the ‘.jpg’ files in the ‘anim’ sub-directory; the total count is taken from values.txt, and the difference is what remains to be done. The status is echoed to the output, and then a loop reads the last todo lines from values.txt, running the conversion with the convert utility from ImageMagick. If the run needs to be interrupted, I just close the terminal window, as a subsequent execution will continue from where it left off. Once this is completed, the frames are stitched together using ffmpeg with the following command.

ffmpeg -i anim/img_%04d.jpg -an -y ../output.mp4

The filename pattern %04d matches the zero-padding width of the frame numbers: in the PHP code, the variable $plen is the number of digits in the frame count plus one, and that same value is passed to str_pad as the pad length.

The properties of the final output generated by ffmpeg confirmed that the dimensions, duration, and frame rate complied with what was decided at the outset.

Exploring Animation Creation with GIMP, Bash, and FFmpeg: A Journey into Headphone Speaker Testing

For a long time, I had a desire to create a video that helps people confirm that their headphones are worn correctly, especially when there are no left or right indicators. While similar solutions exist out there, I decided to take this on as an exploration of my own, using tools I’m already familiar with: GIMP, bash, and FFmpeg.

This project resulted in a short animation that visually shows which speaker—left, right, or both—is active, syncing perfectly with the narration.

Project Overview:
The goal of the video was simple: create an easy way for users to verify if their headphones are worn correctly. The animation features:

  • “Hear on both speakers”: Animation shows pulsations on both sides.
  • “Hear on left speaker only”: Pulsations only on the left.
  • “Hear on right speaker only”: Pulsations only on the right.
  • Silence: No pulsations at all.

Tools Used:
  • Amazon Polly for generating text-to-speech narration.
  • Audacity for audio channel switching.
  • GIMP for creating visual frames of the animation.
  • Bash scripting to automate the creation of animation sequences.
  • FFmpeg to compile the frames into the final video.
  • LibreOffice Calc to calculate frame sequences for precise animation timing.

Step-by-Step Workflow:
  1. Creating the Audio Narration:
    Using Amazon Polly, I generated a text-to-speech audio file with the necessary instructions. Polly’s lifelike voice makes it easy to understand. I then used Audacity to modify the audio channels, ensuring that the left, right, and both channels played at the appropriate times.
  2. Synchronizing Audio and Visuals:
    I needed the animation to sync perfectly with the audio. To achieve this, I first identified the start and end of each segment in the audio file and created a spreadsheet in LibreOffice Calc. This helped me calculate the number of frames per segment, ensuring precise timing for the animation.
  3. Creating Animation Frames in GIMP:
    The visual animation was created using a simple diaphragm depression effect. I made three frames in GIMP:
  • One for both speakers pulsating,
  • One for the left speaker only,
  • One for the right speaker only.
  4. Automation with Bash:
    Once the frames were ready, I created a guideline text file in Gedit that outlined the sequence. I then used a bash while-read loop combined with a seq loop to generate 185 image files; a sketch of this loop follows the video link below. These files followed a naming convention of anim_%03d.png, ensuring they were easy to compile later.
  5. Compiling with FFmpeg:
    After all frames were created, I used FFmpeg to compile the images into the final video. The result was a fluid, synchronized animation that matched the audio perfectly.

The Finished Product:
    Here’s the final video that demonstrates the headphone speaker test:

https://youtu.be/_fskGicSSUQ
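
For the curious, the frame-expansion loop was roughly along the following lines; the guideline file format, frame image names, narration filename, and output name are illustrative assumptions rather than the exact files I used:

#!/bin/bash
# guideline.txt is assumed to hold lines of "<frame-image> <frame-count>",
# e.g. "both.png 50", "left.png 40", "silence.png 25" ...
n=0
while read -r frame count
do
    for i in $(seq 1 "$count")
    do
        n=$((n + 1))
        # Copy the static GIMP frame into the numbered animation sequence.
        cp "$frame" "$(printf 'anim_%03d.png' "$n")"
    done
done < guideline.txt

# Compile the 185 frames with the narration into the final video.
ffmpeg -framerate 25 -i anim_%03d.png -i narration.mp3 -shortest -y output.mp4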

Why I Chose These Tools:
Being familiar with xUbuntu, I naturally gravitated toward tools that work seamlessly in this environment. Amazon Polly provided high-quality text-to-speech output, while Audacity handled the channel switching with ease. GIMP was my go-to for frame creation, and the combination of bash and FFmpeg made the entire animation process efficient and automated.

This project not only satisfied a long-held desire but also served as an exciting challenge to combine these powerful tools into one cohesive workflow. It was a satisfying dive into animation and audio synchronization, and I hope it can help others as well!

Conclusion:
If you’re into creating animated videos or simply exploring new ways to automate your creative projects, I highly recommend diving into tools like GIMP, bash, and FFmpeg. Whether you’re on xUbuntu like me or another system, the potential for customization is vast. Let me know if you found this helpful or if you have any questions!