Why I Built an AWS Node.js Lambda Framework (And What It Solves)

Over the past 8–10 years, I’ve run into the same set of issues again and again when deploying Node.js applications to AWS Lambda—especially those built with the popular Express.js framework.

While Express is just 219 KB, the dependency bloat is massive—often exceeding 4.3 MB. In the world of serverless, that’s a serious red flag. Every time I had to make things work, it involved wrappers, hacks, or half-hearted workarounds that made deployments messy and cold starts worse.

Serverless and Express Don’t Mix Well

In many teams I’ve worked with, the standard approach was a big, monolithic Express app. And every time developers tried to work in parallel, we hit code conflicts. This often slowed development and created complex merge scenarios.

When considering serverless, we often used the “one Lambda per activity” pattern—cleaner, simpler, more manageable. But without structure or scaffolding, building and scaling APIs this way felt like reinventing the wheel.

A Lean Framework Born From Frustration

During a professional break recently, I decided to do something about it. I built a lightweight Node.js Lambda framework designed specifically for AWS:

🔗 Try it here – http://bz2.in/njsfra

Base size: ~110 KB
After build optimization: Can be trimmed below 60 KB
Philosophy: Lazy loading and per-endpoint modularity

This framework is not just small—it’s structured. It’s optimized for real-world development where multiple developers work across multiple endpoints with minimal overlap.

Introducing cw.js: Scaffolding From OpenAPI

To speed up development, the framework includes a tool called cw.js—a code writer utility that reads a simplified OpenAPI v1.0 JSON definition (like api.json) and creates:

Routing logic
A clean project structure
Separate JS files for each endpoint

Each function is generated as an empty handler—ready for you to add business logic and database interactions. Think of it as automatic boilerplate—fast, reliable, and consistent.

You can generate the OpenAPI definition using an LLM like ChatGPT or Gemini. For example:

Prompt:
Assume the role of an expert JSON developer.
Create the following API in OpenAPI 1.0 format:
[Insert plain-language API description]

Why This Architecture Works for Teams

No more code conflicts: Each route is its own file
Truly parallel development: Multiple devs can work without stepping on each other
Works on low-resource devices: Even a smartphone with Termux/Tmux can run this (see: tmux video)

The Magic of Lazy Loading

Lazy loading means the code for a specific API route only loads into memory when it’s needed. For AWS Lambda, this leads to:

✅ Reduced cold start time
✅ Lower memory usage
✅ Faster deployments
✅ Smaller, scalable codebase

Instead of loading the entire API, the Lambda runtime only parses the function being called—boosting efficiency.

Bonus: PHP Version Also Available

If PHP is your stack, I’ve built something similar:

PHP Micro Framework: https://github.com/jthoma/phpmf
Stub Generator Tool: https://github.com/jthoma/phpmf-api-stub-generator

The PHP version (cw.php) accepts OpenAPI 3.0 and works on similar principles.

Final Thoughts

I built this framework to solve my own problems—but I’m sharing it in case it helps you too. It’s small, fast, modular, and team-friendly—ideal for serverless development on AWS.

If you find it useful, consider sharing it with your network.

👉 Framework GitHub
👉 Watch the dev setup on mobile

From Zero to Kubernetes: Automating a Minimal Cluster on AWS EC2 (My DevOps Journey)

The Unofficial Challenge: Why Automate Kubernetes on AWS?

Ever wondered if you could spin up a fully functional Kubernetes cluster on AWS EC2 with just a few commands? Four years ago, during my DevOps Masters Program, I decided to make that a reality. While the core assignment was to learn Kubernetes (which can be done in many ways), I set myself an ambitious personal challenge: to fully automate the deployment of a minimal Kubernetes cluster on AWS EC2, from instance provisioning to node joining.

Manual Kubernetes setups can be incredibly time-consuming, prone to errors, and difficult to reproduce consistently. I wanted to leverage the power of Infrastructure as Code (IaC) to create a repeatable, disposable, and efficient way to deploy a minimal K8s environment for learning and experimentation. My goal wasn’t just to understand Kubernetes, but to master its deployment pipeline, integrate AWS services seamlessly, and truly push the boundaries of what I could automate within a cloud environment.

The full github link: https://github.com/jthoma/code-collection/tree/master/aws/aws-cf-kubecluster

The Architecture: A Glimpse Behind the Curtain

At its core, my setup involved an AWS CloudFormation template (managed by AWS SAM CLI) to provision EC2 instances, and a pair of shell scripts to initialize the Kubernetes control plane and join worker nodes.

Here’s a breakdown of the key components and their roles in bringing this automated cluster to life:

AWS EC2: These are the workhorses – the virtual machines that would host our Kubernetes control plane and worker nodes.
AWS CloudFormation (via AWS SAM CLI): This is the heart of our Infrastructure as Code. CloudFormation allows us to define our entire AWS infrastructure (EC2 instances, security groups, IAM roles, etc.) in a declarative template. The AWS Serverless Application Model (SAM) CLI acts as a powerful wrapper, simplifying the deployment of CloudFormation stacks and providing a streamlined developer experience.
Shell Scripts: These were the crucial “orchestrators” running within the EC2 instances. They handled the actual installation of Kubernetes components (kubeadm, kubelet, kubectl, Docker) and the intricate steps required to initialize the cluster and join nodes.

When I say “minimal” cluster, I’m referring to a setup with just enough components to be functional – typically one control plane node and one worker node, allowing for basic Kubernetes operations and application deployments.

The Automation Blueprint: Diving into the Files

The entire orchestration was handled by three crucial files, working in concert to bring the Kubernetes cluster to life:

template.yaml (The AWS CloudFormation Backbone): This YAML file is where the magic of Infrastructure as Code happens. It outlines our EC2 instances, their network configurations, and the necessary security groups and IAM roles. Critically, it uses the UserData property within the EC2 instance definition. This powerful property allows you to pass shell commands or scripts that the instance executes upon launch. This was our initial entry point for automation.

   You can view the `template.yaml` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/template.yaml).

kube-bootstrap.sh (The Instance Preparation Script): This script is the first to run on our EC2 instances. It handles all the prerequisites for Kubernetes: installing Docker, the kubeadm, kubectl, and kubelet binaries, disabling swap, and setting up the necessary kernel modules and sysctl parameters that Kubernetes requires. Essentially, it prepares the raw EC2 instance to become a Kubernetes node.

   You can view the `kube-bootstrap.sh` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-bootstrap.sh).

kube-init-cluster.sh (The Kubernetes Orchestrator): Once kube-bootstrap.sh has laid the groundwork, kube-init-cluster.sh takes over. This script is responsible for initializing the Kubernetes control plane on the designated master node. It then generates the crucial join token that worker nodes need to connect to the cluster. Finally, it uses that token to bring the worker node(s) into the cluster, completing the Kubernetes setup.

   You can view the `kube-init-cluster.sh` file on GitHub in the same repository.

The Deployment Process: sam deploy -g in Action

The entire deployment process, from provisioning AWS resources to the final Kubernetes cluster coming online, is kicked off with a single, elegant command from the project’s root directory:

sam deploy -g

The -g flag initiates a guided deployment. AWS SAM CLI interactively prompts for key parameters like instance types, your AWS EC2 key pair (for SSH access), and details about your desired VPC. This interactive approach makes the deployment customizable yet incredibly streamlined, abstracting away the complexities of direct CloudFormation stack creation. Under the hood, SAM CLI translates your template.yaml into a full CloudFormation stack and handles its deployment and updates.
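In practice, the guided run also offers to save your answers to a samconfig.toml file, so later deployments can skip the prompts. A minimal sketch of that workflow:

sam deploy -g    # first run: guided prompts (stack name, region, instance type, key pair), answers saved to samconfig.toml
sam deploy       # subsequent runs: reuse the saved configuration non-interactively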

The “Aha!” Moment: Solving the Script Delivery Challenge

One of the most persistent roadblocks I encountered during this project was a seemingly simple problem: how to reliably get kube-bootstrap.sh and kube-init-cluster.sh onto the newly launched EC2 instances? My initial attempts, involving embedding the scripts directly into the UserData property, quickly became unwieldy due to size limits and readability issues. Other complex methods also proved less than ideal.

After several attempts and a bit of head-scratching, the elegant solution emerged: I hosted both shell scripts in a public-facing downloads folder on my personal blog. Then, within the EC2 UserData property in template.yaml, I simply used wget to download these files to the /tmp directory on the instance, followed by making them executable and running them.
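A minimal sketch of what that UserData block boils down to (the download URL below is a placeholder, not the actual blog path):

#!/bin/bash
# UserData sketch: fetch the two scripts at boot, make them executable, then run them
wget -q -O /tmp/kube-bootstrap.sh https://example.com/downloads/kube-bootstrap.sh
wget -q -O /tmp/kube-init-cluster.sh https://example.com/downloads/kube-init-cluster.sh
chmod +x /tmp/kube-bootstrap.sh /tmp/kube-init-cluster.sh
/tmp/kube-bootstrap.sh && /tmp/kube-init-cluster.sh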

This approach proved incredibly robust and streamlined. It kept the CloudFormation template clean and manageable, while ensuring the scripts were always accessible at launch time without needing complex provisioning tools or manual intervention. It was a classic example of finding a simple, effective solution to a tricky problem.

Lessons Learned and Key Takeaways

This project, born out of an academic requirement, transformed into a personal quest to master automated Kubernetes deployments on AWS. It was a journey filled with challenges, but the lessons learned were invaluable:

Problem-Solving is Key: Technical roadblocks are inevitable. The ability to iterate, experiment, and find creative solutions is paramount in DevOps.
The Power of Infrastructure as Code (IaC): Defining your infrastructure programmatically is not just a best practice; it’s a game-changer for reproducibility, scalability, and disaster recovery.
Automation Principles: Breaking down complex tasks into manageable, automated steps significantly reduces manual effort and error.
AWS CloudFormation and UserData Versatility: Understanding how to leverage properties like UserData can unlock powerful initial setup capabilities for your cloud instances.
Persistence Pays Off: Sticking with a challenging project until it works, even when faced with frustrating issues, leads to deep learning and a huge sense of accomplishment.

While this was a fantastic learning experience, if I were to revisit this project today, I might explore using a dedicated configuration management tool like Ansible for the in-instance setup, or perhaps migrating to a managed Kubernetes service like EKS for production readiness. However, for a hands-on, foundational understanding of automated cluster deployment, this self-imposed challenge was truly enlightening.

The last time I ran it, the console output was as follows:

Conclusion

This project underscored that with a bit of ingenuity and the right tools, even complex setups like a Kubernetes cluster can be fully orchestrated and deployed with minimal human intervention. It’s a testament to the power of automation in the cloud and the satisfaction of bringing a challenging vision to life.

I hope this deep dive into my automated Kubernetes cluster journey has been insightful. Have you embarked on similar automation challenges? What unique problems did you solve? Share your experiences in the comments!

Get My IP and patch AWS Security Group

My particular use case: in my own AWS account, where I do most of my R&D, I had one security group reserved solely for my SSH access into EC2 instances. Back in 2020, during the pandemic, I went freelance for a while, serving a notice period with one company while negotiating with another. During that time I was mostly connected through mobile hotspots, switching between JIO on a Galaxy M14, Airtel on a Galaxy A54, and BSNL on the M14's second SIM, and every switch meant updating that security group by hand, which was a real pain.

Being lazy, and having lived with DevOps and automation for a long time, I started working on an idea. The outcome was an AWS serverless clone of the classic "what is my IP" service, named echo my ip. Check it out on GitHub; the Node.js code and the AWS SAM template to deploy it are available there.

Next, using the standard Ubuntu terminal text editor, I added the following function to my .bash_aliases file.

sgupdate()
{
  # fetch my current public IP from the echo-my-ip endpoint
  currentip=$(curl --silent https://{api gateway url}/Prod/ip/)

  # dump the current rules of the security group
  /usr/local/bin/aws ec2 describe-security-groups --group-ids $AWS_SECURITY_GROUP > /dev/shm/permissions.json

  # revoke every existing host CIDR (skipping the open /0 ranges)
  grep CidrIp /dev/shm/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read cidr;
   do
     /usr/local/bin/aws ec2 revoke-security-group-ingress --group-id $AWS_SECURITY_GROUP --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done

  # allow all traffic from the current IP only
  /usr/local/bin/aws ec2 authorize-security-group-ingress --group-id $AWS_SECURITY_GROUP --protocol "-1" --cidr "$currentip/32"
}

alias aws-permit-me='sgupdate'

I already keep a .env file for every project I handle, and a wrapped cd command checks for the existence of .env and sources it if present.

cwd(){
  cd $1
  if [ -f .env ] ; then
    . .env
  fi
}

alias cd='cwd'

The .env file has the following structure, with the corresponding values after the '=' of course.

export AWS_DEFAULT_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SECURITY_GROUP=
export AWS_SSH_ID=
export AWS_ACCOUNT=

Managing firewall rules is a common problem for people working from home with dynamic IPs. Automating the process with a serverless function and a shell alias is a great way to simplify things, and sharing it on GitHub is my way of giving back to the community.

This method provides several advantages:

  • Automation: Eliminates the tedious manual process of updating security group rules.
  • Serverless: Cost-effective, as you only pay for the compute time used.
  • Shell Alias: Provides a convenient and easy-to-remember way to trigger the update.
  • GitHub Sharing: Makes the solution accessible to others.
  • Secure: security group changes are made through the AWS CLI, using the credentials already configured in the terminal environment

AWS DynamoDB bulk migration between regions was a real pain.

Try searching for "migrate 20 DynamoDB tables from Singapore to Mumbai" on Google and you will mostly find articles about migrating between accounts. The real pain is that, even though the documentation says a full backup and restore is possible, every table still has to be created in the target region with all of its inherent configuration, and when the number of tables grows from 10 to 50 this becomes a real headache. I am attempting to automate this to the maximum extent possible using a couple of shell scripts and a small JavaScript program that rewrites the exported JSON structure into the structure accepted by the create-table option of AWS CLI v2.
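A minimal sketch of the general idea (table names and the rewrite script name are illustrative; the repository holds the real scripts):

SRC_REGION=ap-southeast-1   # Singapore
DST_REGION=ap-south-1       # Mumbai

for TABLE in table_one table_two table_three; do
  # dump the source table definition
  aws dynamodb describe-table --region "$SRC_REGION" --table-name "$TABLE" > "/tmp/${TABLE}-desc.json"

  # describe-table output is not accepted by create-table as-is, so a small
  # rewriter (the repository uses Node.js for this) strips read-only fields
  # such as TableStatus, CreationDateTime and the throughput timestamps
  node rewrite.js "/tmp/${TABLE}-desc.json" > "/tmp/${TABLE}-create.json"

  # recreate the table in the destination region
  aws dynamodb create-table --region "$DST_REGION" --cli-input-json "file:///tmp/${TABLE}-create.json"
done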

See the full implementation in the GitHub repository.

This post is kept short and simple so that all the attention goes to the GitHub code release.

Automating Church Membership Directory Creation: A Case Study in Workflow Efficiency

Maintaining and publishing a church membership directory is a meticulous process that often requires managing sensitive data and adhering to strict timelines. Traditionally, this would involve significant manual effort, often taking days to complete. In this blog post, I will share how I streamlined the process by automating the workflow with open-source tools. This approach not only reduced the time from several hours to under 13 minutes but also ensured accuracy and repeatability, setting a benchmark for efficiency in similar projects. The complicated sorting required for the final output deserves special mention: a last-minute change, such as adding or removing a member, would otherwise force much of the manual work to be redone. If the head of a family passes away and the record must be updated before the final output, the entire prayer-group sorting can be affected. Imagine the head of the family had a name starting with Z; on removal, the next member is automatically promoted to head of family, and if that name starts with A, the whole prayer-group layout can change drastically. Doing that layout manually would be herculean. With this automation in place, such a change costs at most another 15 minutes: just flip a flag in the XLS, and the command "make directory" runs through the full process.

Workflow Overview

The project involves converting an XLS file containing membership data into a print-ready PDF. The data and member photographs are maintained by a volunteer team on Google Sheets and Google Drive and shared through Drive. Each family has a unique register number, and members are assigned serial numbers for photo organization. The workflow is orchestrated using GNU Make, with specific tasks divided into stages for better manageability.

Stage 1: Photo Processing

Tools Used:

  • Bash Shell Scripts for automation
  • ImageMagick for photo dimension checking and resizing

The photo directory is processed using identify (ImageMagick) to determine the dimensions of each image. This ensures that all photos meet the required quality (300 DPI for print). Images that are too large or too small are adjusted using convert, ensuring consistency across all member profiles.
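A minimal sketch of this stage (directory names and the target print size are illustrative):

mkdir -p resized
for img in photos/*.jpg; do
  # report the current dimensions of each photo
  echo "$img -> $(identify -format '%wx%h' "$img")"
  # normalise every photo to the same print size at 300 DPI
  convert "$img" -resize 600x600\! -density 300 -units PixelsPerInch "resized/$(basename "$img")"
done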

Stage 2: Importing Data into MySQL

Tools Used:

  • MySQL for data management
  • LibreOffice Calc to export XLS to CSV
  • Bash and PHP Scripts for CSV import

The exported CSV data is imported into a MySQL database. This allows for sorting, filtering, and advanced layout calculations, providing a structured approach to organizing the data.
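A minimal sketch of such an import (database, table and column names are illustrative):

mysql --local-infile=1 -u directory_user -p directory_db <<'SQL'
LOAD DATA LOCAL INFILE 'members.csv'
INTO TABLE members
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(register_no, serial_no, member_name, prayer_group, location);
SQL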

Stage 3: Data Sorting and Layout Preparation

Tools Used:

  • MySQL Queries for layout calculations

The data is grouped and sorted based on location and family register numbers. For each member, a layout height and page number are calculated and updated in the database. This ensures a consistent and visually appealing directory design.

Stage 4: PDF Generation

Tools Used:

  • PHP and FPDF Library

Using PHP and FPDF, the data is read from MySQL, and PDFs are generated for each of the 12 location-based groups. During this stage, indexes are also created to list register numbers and member names alongside their corresponding page numbers.

Stage 5: Final Assembly and Indexing

Tools Used:

  • GNU Make for orchestration
  • PDF Merge Tools

The 12 individual PDFs generated in the previous stage are stitched together into a single document. The two indexes (by register number and by member name) are combined and appended to the final PDF. This single document is then ready for print.
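One way to do this stitching from the command line is pdfunite from poppler-utils (filenames here are illustrative):

pdfunite group_*.pdf index_by_register.pdf index_by_name.pdf directory_final.pdf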

Efficiency Achieved

Running the entire workflow on an ASUS A17 with XUbuntu, the process completes in less than 13 minutes. By comparison, a traditional approach using desktop publishing (DTP) software could take 20–30 hours, even with a skilled team working in parallel. The automated workflow eliminates manual errors, ensures uniformity, and significantly improves productivity.

Key Advantages of the Automated Workflow

  1. Time Efficiency: From 20–30 hours to 13 minutes.
  2. Accuracy: Eliminates manual errors through automation.
  3. Scalability: Easily accommodates future data updates or layout changes.
  4. Cost-Effective: Utilizes free and open-source tools.
  5. Repeatability: The process can be executed multiple times with minimal adjustments.

Tools and Technology Stack

  • Operating System: XUbuntu on ASUS A17
  • Photo Processing: ImageMagick (identify and convert)
  • Database Management: MySQL
  • Scripting and Automation: Bash Shell, GNU Make
  • PDF Generation: PHP, FPDF Library
  • File Management: Google Drive for data sharing

Conclusion

This project highlights the power of automation in handling repetitive and labor-intensive tasks. By leveraging open-source tools and orchestrating the workflow with GNU Make, the entire process became not only faster but also more reliable. This method can serve as a template for similar projects, inspiring others to embrace automation for efficiency gains.

Feel free to share your thoughts or ask questions in the comments below. If you’d like to adopt a similar workflow for your organization, I’d be happy to provide guidance!

Creating a Dynamic Image Animation with PHP, GIMP, and FFmpeg: A Step-by-Step Guide

Introduction

In this blog post, I’ll walk you through a personal project that combines creative image editing with scripting to produce an animated video. The goal was to take one image from each year of my life, crop and resize them, then animate them in a 3×3 grid. The result is a visually engaging reel targeted for Facebook, where the images gradually transition and resize into place, accompanied by a custom audio track.

This project uses a variety of tools, including GIMP, PHP, LibreOffice Calc, ImageMagick, Hydrogen Drum Machine, and FFmpeg. Let’s dive into the steps and see how all these tools come together.

Preparing the Images with GIMP

The first step was to select one image from each year that clearly showed my face. Using GIMP, I cropped each image to focus solely on the face and resized them all to a uniform size of 1126×1126 pixels.

I also added the year in the bottom-left corner and the Google Plus Code (location identifier) in the bottom-right corner of each image. To give the images a scrapbook-like feel, I applied a torn paper effect around the edges, which was generated with Google Gemini using the prompt “create an image of 3 irregular vertical white thin strips on a light blue background to be used as torn paper edges in colash”. #promptengineering

Key actions in GIMP:

  • Crop and resize each image to the same dimensions.
  • Add text for the year and location.
  • Apply a torn paper frame effect for a creative touch.

Organizing the Data in LibreOffice Calc

Before proceeding with the animation, I needed to plan out the timing and positioning of each image. I used LibreOffice Calc to calculate:

  • Frame duration for each image (in relation to the total video duration).
  • The positions of each image in the final 3×3 grid.
  • Resizing and movement details for each image to transition smoothly from the bottom to its final position.

Once the calculations were done, I exported the data as a JSON file, which included:

  • The image filename.
  • Start and end positions.
  • Resizing parameters for each frame.

Automating the Frame Creation with PHP

Now came the fun part: using PHP to automate the image manipulation and generate the necessary shell commands for ImageMagick. The idea was to create each frame of the animation programmatically.

I wrote a PHP script that:

  1. Takes the positioning and resizing data from the JSON file (in practice converted to PHP arrays and hard-coded into the generator script).
  2. Generates ImageMagick shell commands to:
  • Place each image on a 1080×1920 blank canvas.
  • Resize each image gradually from 1126×1126 to 359×375 over several frames.
  • Move each image from the bottom of the canvas to its final position in the 3×3 grid.

The PHP code that generates the shell commands for each frame is listed in full at the end of this post.

This script dynamically generates ImageMagick commands for each image in each frame. The resizing and movement of each image happens frame-by-frame, giving the animation its smooth, fluid transitions.
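For a sense of what the generated command stream looks like, here are roughly the first few lines the script emits for the 2016 image: stage a blank canvas in ramdisk, place the image at its starting position, then resize and nudge it a little for every subsequent frame.

cp frame.png /dev/shm/mystage.png
composite -geometry +0+744  2016.png /dev/shm/mystage.png  frames/img_0000.png
convert 2016.png  -resize 1117x1167\!  /dev/shm/resized.png
composite -geometry +0+735  /dev/shm/resized.png /dev/shm/mystage.png  frames/img_0001.png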


Step 4: Creating the Final Video with FFmpeg

Once the frames were ready, I used FFmpeg to compile them into a video. Here is the command I referred to; for the actual project the filenames and paths were different.

ffmpeg -framerate 30 -i frames/img_%04d.png -i audio.mp3 -c:v libx264 -pix_fmt yuv420p -c:a aac final_video.mp4

This command:

  • Takes the image sequence (frames/img_0001.png, frames/img_0002.png, etc.) and combines them into a video.
  • Syncs the video with a custom audio track created in Hydrogen Drum Machine.
  • Exports the final result as final_video.mp4, ready for Facebook or any other platform.

Step 5: The Final Touch — The 3×3 Matrix Layout

The final frame of the video is particularly special. All nine images are arranged into a 3×3 grid, where each image gradually transitions from the bottom of the screen to its position in the matrix. Over the course of a few seconds, each image is resized from its initial large size to 359×375 pixels and placed in its final position in the grid.

This final effect gives the video a sense of closure and unity, pulling all the images together in one cohesive shot.

Conclusion

This project was a fun and fulfilling exercise in blending creative design with technical scripting. Using PHP, GIMP, ImageMagick, and FFmpeg, I was able to automate the creation of an animated video that showcases a timeline of my life through images. The transition from individual pictures to a 3×3 grid adds a dynamic visual effect, and the custom audio track gives the video a personalized touch.

If you’re looking to create something similar, or just want to learn how to automate image processing and video creation, this project is a great starting point. I hope this blog post inspires you to explore the creative possibilities of PHP and multimedia tools!

The PHP Script for Image Creation

Here’s the PHP script I used to automate the creation of the frames for the animation. Feel free to adapt and use it for your own projects:

<?php

// list of image files one for each year
$lst = ['2016.png','2017.png','2018.png','2019.png','2020.png','2021.png','2022.png','2023.png','2024.png'];

$wx = 1126; //initial width
$hx = 1176; //initial height

$wf = 359;  // final width
$hf = 375;  // final height

// final position for each year image
// mapped with the array index
$posx = [0,360,720,0,360,720,0,360,720];
$posy = [0,0,0,376,376,376,752,752,752];

// initial implant location x and y
$putx = 0;
$puty = 744;

// smooth transition frames for each file
// mapped with array index
$fc = [90,90,90,86,86,86,40,40,40];

// x and y movement for each image per frame
// mapped with array index
$fxm = [0,4,8,0,5,9,0,9,18];
$fym = [9,9,9,9,9,9,19,19,19];

// x and y scaling step per frame 
// for each image mapped with index
$fxsc = [9,9,9,9,9,9,20,20,20];
$fysc = [9,9,9,10,10,10,21,21,21];

// initialize the file naming with a sequential numbering

$serial = 0;

// start by copying the original blank frame to ramdisk
echo "cp frame.png /dev/shm/mystage.png","\n";

// loop through the year image list

foreach($lst as $i => $fn){
    // to echo the filename such that we know the progress
    echo "echo '$fn':\n"; 

    // filename padded with 0 to fixed width
    $newfile = 'frames/img_' . str_pad($serial, 4,'0',STR_PAD_LEFT) . '.png';

    // create the first frame for each year image
    echo "composite -geometry +".$putx."+".$puty."  $fn /dev/shm/mystage.png  $newfile", "\n";

    $tmx = $posx[$i] - $putx;

    $tmy = $puty - $posy[$i];

    // frame animation
    $maxframe = ($fc[$i] + 1);
    for($z = 1; $z < $maxframe ; $z++){

        // estimate new size 
        $nw = $wx - ($fxsc[$i] * $z );
        $nh = $hx - ($fysc[$i] * $z );

        $nw = ($wf > $nw) ? $wf : $nw;
        $nh = ($hf > $nh) ? $hf : $nh;

        $tmpfile = '/dev/shm/resized.png';
        echo "convert $fn  -resize ".$nw.'x'.$nh.'\!  ' . $tmpfile . "\n";

        $nx = $putx + ( $fxm[$i] * $z );
        $nx = ($nx > $posx[$i]) ? $posx[$i] : $nx; 

        if($posy[$i] > $puty){
            $ny = $puty + ($fym[$i] * $z) ;
            $ny = ($ny > $posy[$i]) ? $posy[$i] : $ny ;
        }else{
            $ny = $puty - ($fym[$i] * $z);
            $ny = ($posy[$i] > $ny) ? $posy[$i] : $ny ;
        }

        $serial += 1;
        $newfile = 'frames/img_' . str_pad($serial, 4,'0',STR_PAD_LEFT) . '.png';
        echo 'composite -geometry +'.$nx.'+'.$ny."  $tmpfile /dev/shm/mystage.png  $newfile", "\n";
    }

    // use the last generated frame as the base for the next year,
    // thus building up the final 3 x 3 matrix
    echo "cp $newfile /dev/shm/mystage.png", "\n";
}

Creating a Time-lapse effect Video from a Single Photo Using Command Line Tools on Ubuntu

In this tutorial, I’ll walk you through creating a timelapse-effect video that transitions from dark to bright, all from a single high-resolution photo. Using a Samsung Galaxy M14 5G, I captured the original image, then manipulated it using Linux command-line tools like ImageMagick, PHP, and ffmpeg. This approach is perfect for academic purposes or for anyone interested in experimenting with video creation from static images. Here’s how you can achieve this effect. Note that this is just an academic exploration; to use it as a professional tool, the values and frames should be defined with utmost care.

The basics were to find the perfect image and crop it to 9:16, since I was targeting Facebook reels: the 50 MP images taken on the Samsung Galaxy M14 5G are 4:3 at 8160×6120, while Facebook reels and YouTube shorts follow a 9:16 format at 1080×1920 or proportionate dimensions. My final source image was 1700×3022, added here for reference (scaled down to fit the blog's aesthetics).
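The crop itself can be done in any image editor; a command-line equivalent with ImageMagick might look like this (the geometry values are illustrative for an 8160x6120 source):

convert original.jpg -gravity center -crop 3442x6120+0+0 +repage -resize 1080x1920\! src.jpg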

Step 1: Preparing the Frame Rate and Length
To begin, I decided on a 20-second video at 25 frames per second, resulting in a total of 500 frames. Manually creating 500 frames would be tedious, and any professional would use some kind of automation. Being a DevOps enthusiast and a Linux fanatic since 1998, my first choice was shell scripting, but my long-standing PHP habit (in use since 2002) kicked in, and the following code snippet was the outcome.

Step 2: Generating Brightness and Contrast Values Using PHP
The next step was to create an array of brightness and contrast values to give the impression of a gradually brightening scene. Using PHP, I mapped each frame to an optimal brightness-contrast value. Here’s the PHP snippet I used:

<?php


$dur = 20;
$fps = 25;
$frames = $dur * $fps;
$plen = strlen(''.$frames) + 1;
$val = -50;
$incr = (60 / $frames);

for($i = 0; $i < $frames; $i++){
   $pfx =  str_pad($i, $plen, '0', STR_PAD_LEFT);

    echo $pfx, " ",round($val,2),"\n";

    $val += $incr;
}

?>

On Ubuntu, the above code was saved as gen.php and, after updating the values for duration and frame rate, executed from the CLI with the output redirected to a text file values.txt using the following command.

php -q gen.php > values.txt 

To make things easy, the source file was copied as src.jpg into a temporary folder, and a sub-folder 'anim' was created to hold the frames. I already had a script that resumes from wherever it left off, depending on the situation. The script is as follows.

#!/bin/bash


gdone=$(find ./anim/ -type f | grep -c '.jpg')
tcount=$(grep -c "^0" values.txt)
todo=$(( $tcount - $gdone))

echo "done $gdone of ${tcount}, to do $todo more "

tail -$todo values.txt | while read fnp val 
do 
    echo $fnp
    convert src.jpg -brightness-contrast ${val} anim/img_${fnp}.jpg
done

The process is quite simple. The first line defines a variable gdone by counting the '.jpg' files in the 'anim' sub-directory; the total count is then taken from values.txt, and the difference is what remains to be done. The status is echoed to the output, and a loop reads the last todo lines from values.txt, executing the conversion with ImageMagick's convert utility. If I need to interrupt the run, I just close the terminal window, since a subsequent execution will continue from where it left off. Once completed, the frames are stitched together with ffmpeg using the following command.

ffmpeg -i anim/img_%04d.jpg -an -y ../output.mp4

The filename pattern %04d is derived from the width of the frame count plus 1: in the PHP code, the variable $plen (the fourth code line) is computed that way and passed to str_pad as the pad length.

The properties of the final output generated by ffmpeg are as follows. Note that the dimensions, duration, and frame rate match what was decided at the start.

Leveraging WordPress and AWS S3 for a Robust and Scalable Website

Introduction

In today’s digital age, having a strong online presence is crucial for businesses of all sizes. WordPress, a versatile content management system (CMS), and Amazon S3, a scalable object storage service, offer a powerful combination for building and hosting dynamic websites.

Understanding the Setup

To effectively utilize WordPress and S3, here’s a breakdown of the key components and their roles:

  1. WordPress:
  • Content Management: WordPress provides an intuitive interface for creating and managing website content.
  • Plugin Ecosystem: A vast array of plugins extends WordPress’s functionality, allowing you to add features like SEO, e-commerce, and security.
  • Theme Customization: You can customize the appearance of your website using themes, either by choosing from a wide range of pre-built themes or creating your own. Get it directly from the maintainers for free: https://wordpress.org/download/
  2. AWS S3:
  • Scalable Storage: S3 offers virtually unlimited storage capacity to accommodate your website’s growing content.
  • High Availability: S3 ensures your website is always accessible by distributing data across multiple servers.
  • Fast Content Delivery: Leveraging AWS CloudFront, a content delivery network (CDN), can significantly improve website performance by caching static assets closer to your users.

The Deployment Process

Here’s a simplified overview of the deployment process:

  1. Local Development:
  • Set up a local WordPress development environment using tools like XAMPP, MAMP, or Docker.
  • Create and test your website locally.
  2. Static Site Generation:
  • Use a tool like WP-CLI or a plugin to generate static HTML files from your WordPress site.
  • This process converts dynamic content into static files, which can be optimized for faster loading times.
  3. S3 Deployment:
  • Upload the generated static files to an S3 bucket (see the command sketch after this list).
  • Configure S3 to serve the files directly or through a CloudFront distribution.
  4. CloudFront Distribution:
  • Set up a CloudFront distribution to cache your static assets and deliver them to users from edge locations.
  • Configure custom domain names and SSL certificates for your website.
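A minimal sketch of steps 3 and 4 with the AWS CLI (the bucket name and distribution ID are placeholders):

aws s3 sync ./static-site/ s3://my-wordpress-static-bucket/ --delete
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"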

Benefits of Using WordPress and S3

  • Scalability: Easily handle increased traffic and content without compromising performance.
  • Cost-Effective: S3 offers affordable storage and bandwidth options.
  • High Availability: Ensure your website is always accessible to users.
  • Security: Benefit from AWS’s robust security measures.
  • Flexibility: Customize your website to meet your specific needs.
  • Performance: Optimize your website’s performance with caching and CDN.

Conclusion

By combining the power of WordPress and AWS S3, you can create a robust, scalable, and high-performance website. This setup offers a solid foundation for your online presence, whether you are a small business owner or a large enterprise.

Start your cloud journey for free today with AWS! Sign up now: https://aws.amazon.com/free/

Automating Laptop Charging with AWS: A Smart Solution to Prevent Overheating

In today’s fast-paced digital world, laptops have become indispensable tools. However, excessive charging can lead to overheating, which can significantly impact performance and battery life. In this blog post, we’ll explore a smart solution that leverages AWS services to automate laptop charging, prevent overheating, and optimize battery health. I do agree that Asus provides premium support for a subscription, but this research exercise was a way to brush up my skills and build something useful on AWS. The solution is still at the concept stage; once I start using it in production to the full extent, the shell scripts and CloudFormation template will be pushed to my GitHub handle jthoma, repository code-collection/aws.

Understanding the Problem:

Overcharging can cause the battery to degrade faster and generate excessive heat. Traditional manual charging methods often lead to inconsistent charging patterns, potentially harming the battery’s lifespan.

The Solution: Automating Laptop Charging with AWS

To address this issue, we’ll utilize a combination of AWS services to create a robust and efficient automated charging system:

  1. AWS IoT Core: Purpose: This service enables secure and reliable bi-directional communication between devices and the cloud.
    How it’s used: We’ll connect a smart power outlet to AWS IoT Core, allowing it to send real-time battery level data to the cloud.
    Link: https://aws.amazon.com/iot-core/
    Getting Started: Sign up for an AWS account and create an IoT Core project.
  2. AWS Lambda: Purpose: This serverless computing service allows you to run code without provisioning or managing servers.
    How it’s used: We’ll create a Lambda function triggered by IoT Core messages. This function will analyze the battery level and determine whether to charge or disconnect the power supply.
    Link: https://aws.amazon.com/lambda/
    Getting Started: Create a Lambda function and write the necessary code in your preferred language (e.g., Python, Node.js, Java).
  3. Amazon DynamoDB: Purpose: This fully managed NoSQL database service offers fast and predictable performance with seamless scalability.
    Link: https://aws.amazon.com/dynamodb/
  4. Amazon CloudWatch: Purpose: This monitoring and logging service helps you collect and analyze system and application performance metrics.
    How it’s used: We’ll use CloudWatch to log system health and generate alarms based on battery level or temperature threshold. Also it helps to monitor the performance of our Lambda functions and IoT Core devices, ensuring optimal system health.
    Link: https://aws.amazon.com/cloudwatch/
    Getting Started: Configure CloudWatch to monitor your AWS resources and set up alarms for critical events.

How it Works:

  1. Data Collection: A shell script on my Ubuntu system uses the AWS CLI to send real-time battery level readings to CloudWatch Logs (a minimal sketch follows this list).
  2. Data Processing: CloudWatch metric filter alarms trigger a Lambda function configured for the appropriate action.
  3. Action Execution: The Lambda function sends commands to the smart power outlet to control the charging process.
  4. Data Storage: Historical battery level data is stored in CloudWatch Logs for analysis with Athena and further optimization.
  5. Monitoring and Alerting: CloudWatch monitors the system’s health and sends alerts if any issues arise.
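A minimal sketch of the data-collection step (log group and stream names are illustrative, and both need to be created once with aws logs create-log-group / create-log-stream):

BATTERY=$(cat /sys/class/power_supply/BAT0/capacity)
TS=$(($(date +%s) * 1000))   # CloudWatch Logs expects a millisecond timestamp

aws logs put-log-events \
  --log-group-name "laptop-battery" \
  --log-stream-name "$(hostname)" \
  --log-events timestamp=$TS,message="battery=${BATTERY}"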

Benefits of Automated Charging:

Optimized Battery Health: Prevents overcharging and undercharging, extending battery life.
Reduced Heat Generation: Minimizes thermal stress on the laptop.
Improved Performance: Ensures optimal battery performance, leading to better system responsiveness.
Energy Efficiency: Reduces energy consumption by avoiding unnecessary charging.

Conclusion

By leveraging AWS services, we arrive at a sophisticated automated charging system that safeguards the laptop’s battery health and enhances its overall performance. This solution empowers you to take control of your device’s charging habits and enjoy a longer-lasting, cooler, and more efficient laptop.

Start Your AWS Journey Today, Sign Up for Free!

Ready to embark on your cloud journey? Sign up for an AWS account and explore the vast possibilities of cloud computing. With AWS, you can build innovative solutions and transform your business.

Exploring Animation Creation with GIMP, Bash, and FFmpeg: A Journey into Headphone Speaker Testing

For a long time, I had a desire to create a video that helps people confirm that their headphones are worn correctly, especially when there are no left or right indicators. While similar solutions exist out there, I decided to take this on as an exploration of my own, using tools I’m already familiar with: GIMP, bash, and FFmpeg.

This project resulted in a short animation that visually shows which speaker—left, right, or both—is active, syncing perfectly with the narration.

Project Overview:
The goal of the video was simple: create an easy way for users to verify if their headphones are worn correctly. The animation features:

  • “Hear on both speakers”: Animation shows pulsations on both sides.
  • “Hear on left speaker only”: Pulsations only on the left.
  • “Hear on right speaker only”: Pulsations only on the right.
  • Silence: No pulsations at all.

Tools Used:
  • Amazon Polly for generating text-to-speech narration.
  • Audacity for audio channel switching.
  • GIMP for creating visual frames of the animation.
  • Bash scripting to automate the creation of animation sequences.
  • FFmpeg to compile the frames into the final video.
  • LibreOffice Calc to calculate frame sequences for precise animation timing.

Step-by-Step Workflow:
  1. Creating the Audio Narration:
    Using Amazon Polly, I generated a text-to-speech audio file with the necessary instructions. Polly’s lifelike voice makes it easy to understand. I then used Audacity to modify the audio channels, ensuring that the left, right, and both channels played at the appropriate times.
  2. Synchronizing Audio and Visuals:
    I needed the animation to sync perfectly with the audio. To achieve this, I first identified the start and end of each segment in the audio file and created a spreadsheet in LibreOffice Calc. This helped me calculate the number of frames per segment, ensuring precise timing for the animation.
  3. Creating Animation Frames in GIMP:
    The visual animation was created using a simple diaphragm depression effect. I made three frames in GIMP:
  • One for both speakers pulsating,
  • One for the left speaker only,
  • One for the right speaker only.
  4. Automation with Bash:
    Once the frames were ready, I created a guideline text using Gedit that outlined the sequence. I then used a bash while-read loop combined with a seq loop to generate 185 image files. These files followed a naming convention of anim_%03d.png, ensuring they were easy to compile later (a sketch of this loop appears after the video link below).
  5. Compiling with FFmpeg:
    After all frames were created, I used FFmpeg to compile the images into the final video. The result was a fluid, synchronized animation that matched the audio perfectly.

The Finished Product:
    Here’s the final video that demonstrates the headphone speaker test:

https://youtu.be/_fskGicSSUQ
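As mentioned in step 4, the frame sequence was produced with a simple while-read/seq loop. A minimal sketch of that idea (the guideline file format is assumed here: one image name and a frame count per line):

n=0
while read src count; do
  for i in $(seq 1 "$count"); do
    n=$((n + 1))
    # repeat the current frame image enough times to cover its audio segment
    cp "$src" "$(printf 'anim_%03d.png' "$n")"
  done
done < guideline.txt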

Why I Chose These Tools:
Being familiar with xUbuntu, I naturally gravitated toward tools that work seamlessly in this environment. Amazon Polly provided high-quality text-to-speech output, while Audacity handled the channel switching with ease. GIMP was my go-to for frame creation, and the combination of bash and FFmpeg made the entire animation process efficient and automated.

This project not only satisfied a long-held desire but also served as an exciting challenge to combine these powerful tools into one cohesive workflow. It was a satisfying dive into animation and audio synchronization, and I hope it can help others as well!

Conclusion:
If you’re into creating animated videos or simply exploring new ways to automate your creative projects, I highly recommend diving into tools like GIMP, bash, and FFmpeg. Whether you’re on xUbuntu like me or another system, the potential for customization is vast. Let me know if you found this helpful or if you have any questions!