Conquering Time Limits: Speeding Up Dashcam Footage for Social Media with FFmpeg and PHP

Introduction:

My practice is to fix a mobile phone inside the car on a suction mount attached to the windscreen. This mobile captures video from the start to the finish of each trip. At times I set it to shoot 1:1 and at other times 16:9; as it is a Samsung Galaxy M14 5G, the video detail in the daytime is good, and that is when I use the full widescreen frame. This time it was night, around 8 pm, and I had set it to 1:1, so the output resolution was 1440 x 1440. The plan was to take this to Facebook Reels by selecting the time spans of interesting events while making sure the subjects stayed in the viewable frame. Alas, Facebook Reels accepts only 9:16 video with a maximum length of 30 seconds. In this raw video there were two such interesting incidents, but to my dismay the first one needed 62 seconds to show off the event in full.

For the full effect I would first embed the video with a time tracker, i.e. a running clock. For this I had built a small page using HTML and CSS sprites, with the time updated by JavaScript and setInterval. See http://bz2.in/timers if you would like to check it out; the start date-time is expected in the format "YYYY-MM-DD HH:MM:SS" and the duration is in seconds. If the display looks wrong when the page loads, switch between text and led as the display option and then change the LED colour until you see all zeros in the selected colour as a digital display. Once the data is entered, I use OBS on Ubuntu Linux, or the screen recorder on my Samsung Tab S7, to capture the changing digits.

The screen-recorded video is then fed to ffmpeg to crop just the time display into a separate video from the full screen capture. The frame position does not change between sessions, but the first time around I exported one frame from the captured video and used GIMP on Ubuntu to identify the bounding box of the timer area.
To identify the actual start position, the capture was opened in a video player and the position was identified as 12 seconds. At 30 frames per second that works out to 12 x 30 = 360, so a frame just after that point (frame 370) was exported to a PNG file for further actions. I used the following command to export one frame.

ffmpeg -i '2025-02-04 19-21-30.mov' -vf "select=eq(n\,370)" -vframes 1 out.png

By opening this out.png in GIMP, selecting the rectangular selection tool and moving the mouse around the time display area, the x,y and x1,y1 corner coordinates were identified and the following command was finalized.

ffmpeg -i '2025-02-04 19-21-30.mov' -ss 12 -t 30 -vf "crop=810:36:554:356" -q:v 0 -an timer.mp4

The skip (-ss 12) is identified manually by previewing the source file in the media player.

The relevant portion from the full raw video is also captured using ffmpeg as follows.

ffmpeg -i 20250203_201432.mp4 -ss 08:08 -t 62 -vf crop=810:1440:30:0 -an reels/20250203_201432_1.mp4

The values are mostly arbitrary and were arrived at through practice. The rule applied to convert to 9:16 is (height / 16) x 9, which gives 810 for the 1440-pixel-high frame, whereas the 30 is the offset in pixels from the left edge; that is because I wanted the left side of the clip to be fully visible.

Though ffmpeg could do the overlay directly with its overlay filter, I found it easier to work around it by first splitting both clips into frames, then using ImageMagick convert to do the overlay, and finally using ffmpeg to stitch the video back together. This was because I also had to reduce the length of the video by about 34 seconds, and that trimming should happen only after the time tracker overlay is done. The commands which I used follow.
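For completeness, a single-pass overlay with ffmpeg would look roughly like the line below. This is an untested sketch, not the command I used: it assumes the cropped reel clip (clip.mp4, used in the commands further down) and the timer video (timer.mp4), and composites the timer centred along the top edge.

ffmpeg -i clip.mp4 -i timer.mp4 -filter_complex "[0:v][1:v]overlay=(main_w-overlay_w)/2:0" -c:v libx264 overlaid.mp4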

First, a few temporary folders were created:

mkdir ff tt gg hh

ffmpeg -i clip.mp4 ff/%04d.png
ffmpeg -i timer.mp4 tt/%04d.png

cd ff

for i in *.png ; do echo $i; done > ../list.txt
cd ../

cat list.txt | while read fn; do convert ff/$fn tt/$fn -gravity North -composite gg/$fn; done

Now a few calculations are needed. We have 1860 frames in ff/, sequentially numbered and zero-padded to a length of 4 so that sorting of the frames stays as expected, and the list of these files is in list.txt. For a clip of 28 seconds at 30 fps we need 28 x 30 = 840 frames, so 1020 of the 1860 frames have to be dropped without losing continuity. To achieve this, my favourite scripting language, PHP, was used.

<?php

/* 
this is to reduce length of reel to 
remove logically few frames and to 
rename the rest of the frames */

$list = @file('./list.txt');  // the list is sourced
$frames = count($list); // count of frames

$max = 28 * 30; // frames needed

$sc = floor($frames / $max);
$final = [];  // capture selected frames here
$i = 0;

$tr = floor($max * 0.2);  // this drift was arrived by trial estimation

foreach($list as $one){
  if($i < $sc){
     $i++;
  }else{
    $final[] = trim($one);
    $i = 0;
  }
  if(count($final) > $tr){
  	$sc = 1;
  }
}


foreach($final as $fn => $tocp){
   $nn = str_pad($fn, 4, '0', STR_PAD_LEFT) . '.png';
   echo $tocp,' ',$nn,"\n";
}

?>

The above code was run and the output was redirected to a file for further cli use.

php -q renf.php > trn.txt

cat trn.txt | while read src tgt ; do cp gg/$src hh/$tgt ; done

cd hh
ffmpeg -framerate 30 -i %04d.png -r 30 ../20250203_201432_1_final.mp4

Now the reel is created and ready to be viewed on Facebook.

This article is posted as part of my commitment to give something back to the community from time to time.

Thank you for checking this out.

Car Dash Cam to Facebook Reels – An interesting technology journey.

Well, it turned out to be a really interesting technology journey, as I am a core and loyal Ubuntu Linux user. On top of that, I am always on the lookout to sharpen my DevOps instincts and skill set. Some people say it is because I am quite lazy to do repetitive tasks the manual way; I don't care about those comments. The situation is that, like all car dash cameras, this one records any activity in front of or behind the car at a decent resolution of 1280 x 720, but as one file per 5 minutes. The system's inherent bug was that it would not unmount the SD card properly; hence, to get the files, it had to be mounted on a Linux USB SD card reader. The commands that I used to combine and overlay these files were collected and formatted into a shell script as follows:

#!/bin/bash

 find ./1 -type f -size +0 | sort > ./fc.txt
 sed -i -e 's#./#file #' ./fc.txt 

 find ./2 -type f -size +0 | sort > ./bc.txt
 sed -i -e 's#./#file #' ./bc.txt 
 
 ffmpeg -f concat -safe 0 -i ./bc.txt -filter:v "crop=640:320:0:0,hflip"  bc.mp4
ffmpeg -f concat -safe 0 -i ./fc.txt -codec copy -an  fc.mp4

ffmpeg -i fc.mp4 -i bc.mp4 -filter_complex "[1:v]scale=in_w:-2[over];[0:v][over]overlay=main_w-overlay_w-50:50" -c:v libx264 "combined.mp4"

To explain the above shell script: the dash cam saves front cam files in "./1" and rear cam files in "./2", the find filters make sure only files larger than 0 bytes are listed, and since the filenames are timestamp based, sort does its job. The sorted listing is written into fc.txt (and bc.txt for the rear cam), and sed stamps each filename with the text "file" at the beginning, which is the list format ffmpeg's concat demuxer requires. The two concat commands do the sequential combine of the rear cam and front cam files, and the final command resizes the rear cam video and insets it over the front cam video at a position calculated from the right edge, with a 50 pixel offset from the top. This setup was working fine until recently, when the car was parked for a long period in a very hot area: the suction mount holding the camera to the windscreen failed and the camera came loose, destroying the touch screen and its functionality. As I was already hooked on dashcam footage, I got a mobile mount and started using my Galaxy M14 fixed to the windscreen.

Now there is only one camera, the front one, but I start the recording before engaging gears in my garage and stop it only after coming to a full halt at the destination. That is my policy, and I don't want to get distracted while driving. Getting a Facebook reel of 9:16 and less than 30 seconds from this footage is not so tough, as I only need to crop a 405 x 720 region, but the frame start location in pixels as well as the timespan is critical. That part I do manually. Then it is just a matter of the ffmpeg crop filter.

ffmpeg -i <input> -ss <start> -t <duration> -vf crop=405:720:600:0 -an <output>

In the above command, crop=width:height:x:y is the format, and this was fine as long as the interesting subject stayed at a relatively stable position. But sometimes the subject moves from left to right and the cropping has to happen with a panning motion. For this I chose the hard way.

  1. Crop the interesting portion of the video by timeline, without any resolution change.
  2. Split the frames into PNG files with ffmpeg -i <input> ff/%04d.png; a padding of 4 is fine as long as there are fewer than 10,000 frames (duration x 30), otherwise the padding has to be increased.
  3. Create a pan frame configuration in a text file, say pos.txt, with framefile x y on each line (a sketch for generating it follows this list).
  4. Loop through that file with ImageMagick convert:
cat pos.txt | while read fn x y ; do convert ff/$fn -crop 405x720+$x+$y gg/$fn ; done
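The pos.txt itself does not have to be written by hand. A minimal sketch for generating it is below, assuming a simple linear pan; the frame count and the start and end x offsets are illustrative placeholders, not values from my clip.

frames=840; x_start=600; x_end=200
for i in $(seq 1 $frames); do
  # interpolate the crop x offset linearly between x_start and x_end
  x=$(( x_start + (x_end - x_start) * (i - 1) / (frames - 1) ))
  printf "%04d.png %d %d\n" "$i" "$x" 0
done > pos.txt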

Once this is completed, use the following command to create the cropped video with the pan effect.

ffmpeg -framerate 30 -i gg/%04d.png -r 30 cropped.mp4

Well, by this weekend I had the urge to enhance it a bit more, with a running clock displayed along the top or bottom of every post-processed video. For that, after some thought, I created an HTML page with some built-in preference tweaks, saving all such tweaks into localStorage and effectively avoiding any server-side database or the like. I have benefited from the Free and Open Source movement, and I feel it is my commitment to give back; hence the code is hosted on an AWS S3 website with no restriction. Check out the mock clock display and, if interested, view the source as well.

With the above said HTML, a running clock starting from a given timestamp, running for a supplied duration, with the selected background and foreground colours, font size and so on, is displayed in the browser. I capture this using OBS on my laptop or the built-in screen recorder on my Samsung Galaxy Tab S7 FE, and then use ffmpeg to crop the exact time display out of the full screen video. This video is also split into frames, and the corresponding frames are overlaid on top of the reel clip frames, again using convert and the same pos.txt for the filenames.

cat pos.txt | while read fn x y ; do convert gg/$fn tt/$fn -gravity North -composite gt/$fn ; done

The gravity "North" places the second input at the top of the first input, whereas "South" places it at the bottom; "East", "West" and "Center" are also available.

Exploring Application Development on AWS Serverless

AWS Serverless architecture has transformed the way developers approach application development, enabling them to leverage multiple programming languages for optimal functionality. This article delves into the advantages of using AWS Serverless, particularly focusing on the flexibility of mixing languages like Node.js, Python, and Java, alongside the use of Lambda layers and shell runtimes for various functionalities.

The Advantages of AWS Serverless Architecture

  1. Cost Efficiency: AWS Serverless operates on a pay-as-you-go model, allowing businesses to only pay for the resources they consume. This eliminates waste during low-demand periods and ensures that costs are kept in check while scaling operations[3][5].
  2. Scalability: The automatic scaling capabilities of AWS Lambda mean that applications can handle varying workloads without manual intervention. This is particularly beneficial for applications with unpredictable traffic patterns, ensuring consistent performance under load[3][5].
  3. Operational Efficiency: By offloading infrastructure management to AWS, developers can focus on writing code rather than managing servers. This shift enhances productivity and allows for faster deployment cycles[5][7].
  4. Agility: The serverless model encourages rapid development and iteration, as developers can quickly deploy new features without worrying about the underlying infrastructure. This agility is crucial in today's fast-paced development environment[3][4].

Mixing Development Languages for Enhanced Functionality

One of the standout features of AWS Serverless is its support for multiple programming languages. This allows teams to select the best language for specific tasks:

  • Node.js: Ideal for handling asynchronous operations, Node.js excels in scenarios requiring real-time processing, such as web applications or APIs. Its event-driven architecture makes it a perfect fit for serverless functions that need to respond rapidly to user interactions[2][4].
  • Python: Known for its simplicity and readability, Python is a great choice for data processing tasks, including image and video manipulation. Developers can utilize libraries like OpenCV or Pillow within Lambda functions to perform complex operations efficiently[1][2].
  • Java: For tasks involving PDF generation or document processing, Java stands out due to its robust libraries and frameworks. Leveraging Java in a serverless environment allows developers to tap into a vast pool of resources and expertise available in the freelance market[1][3].

Utilizing Lambda Layers and Shell Runtimes

AWS Lambda layers enable developers to package dependencies separately from their function code, promoting reusability and reducing deployment times. For instance:

  • Image/Video Processing: Binary helpers can be deployed in Lambda layers to handle specific tasks like image resizing or video encoding. This modular approach not only keeps functions lightweight but also simplifies maintenance[2][5].
  • Document Generation: Using shell runtimes within Lambda functions allows developers to execute scripts that generate documents on-the-fly. This is particularly useful when integrating with external services or databases to create dynamic content[1][3].

Decentralizing Business Logic

By allowing different teams or freelancers to work on various components of an application without needing full knowledge of the entire business logic, AWS Serverless fosters a more decentralized development approach. Each team can focus on their specific area of expertise—be it frontend development with Node.js or backend processing with Python or Java—thereby enhancing collaboration and speeding up the overall development process.

Conclusion

AWS Serverless architecture offers a powerful framework for modern application development by enabling flexibility through language diversity and efficient resource management. By leveraging tools like Lambda layers and shell runtimes, developers can create scalable, cost-effective solutions that meet the demands of today’s dynamic business environment. Embracing this approach not only enhances productivity but also opens up new avenues for innovation in application design and functionality.

In summary, AWS Serverless is not just a technological shift; it represents a paradigm change in how applications are built and maintained, allowing teams to focus on what truly matters—their core business logic and user experience.

Citations:
[1] https://www.xenonstack.com/blog/aws-serverless-computing/
[2] https://www.netguru.com/blog/aws-lambda-node-js
[3] https://dinocloud.co/aws-serverless-application-development-the-future-of-cloud-computing/
[4] https://www.techmagic.co/blog/aws-lambda-vs-google-cloud-functions-vs-azure-functions/
[5] https://www.cloudhesive.com/blog-posts/benefits-of-using-a-serverless-architecture/
[6] https://docs.aws.amazon.com/pdfs/serverless/latest/devguide/serverless-core.pdf
[7] https://newrelic.com/blog/best-practices/what-is-serverless-architecture
[8] https://dev.to/aws-builders/the-state-of-aws-serverless-development-h5a

PHP Function for Validating That an Uploaded File Is Indeed What It Claims to Be

The other day I was thinking about solid validation while discussing something with an associate. From the old days I knew that there is something called a magic header, or a file signature. Read more about this on Wikipedia.

Once I was at my solace, I fired up the Firefox browser on my Ubuntu Asus TUF A17 laptop, opened Google Gemini, and issued the following prompt into a new chat.

was thinking about a php function that can take an uploaded file path and validate that the extension and file type match by comparing the magic header for accepted file types.

The output was as follows. You can try the same from your side; maybe Gemini has a better algorithm now and might provide an improved version.

<?php

/**
 * Validates an uploaded file based on its extension and magic bytes.
 *
 * @param string $filePath The path to the uploaded file.
 * @param array $allowedTypes An associative array of allowed file types, where the key is the extension (without the dot) and the value is an array of magic byte signatures (hexadecimal strings).
 *
 * @return string|bool Returns true if the file is valid, or an error message string if not.
 */
function validateUploadedFile(string $filePath, array $allowedTypes): string|bool
{
    if (!file_exists($filePath)) {
        return "File not found.";
    }

    $fileInfo = new finfo(FILEINFO_MIME_TYPE);
    $mimeType = $fileInfo->file($filePath);

    $extension = strtolower(pathinfo($filePath, PATHINFO_EXTENSION));

    if (!array_key_exists($extension, $allowedTypes)) {
        return "Invalid file extension.";
    }


    $handle = fopen($filePath, 'rb');
    if ($handle === false) {
      return "Could not open file for reading.";
    }

    $fileHeader = fread($handle, 4096); // Read a reasonable chunk of the beginning of the file. Adjust as needed.
    fclose($handle);

    $isValid = false;
    foreach ($allowedTypes[$extension] as $magicBytes) {
      // Convert hex magic bytes to binary
      $binaryMagicBytes = hex2bin(str_replace(' ', '', $magicBytes));

      if (str_starts_with($fileHeader, $binaryMagicBytes)) {
        $isValid = true;
        break;
      }
    }

    if (!$isValid) {
        return "Invalid file content (magic bytes mismatch). Mime type reported as: " . $mimeType;
    }

    return true;
}


// Example usage:
$allowedFileTypes = [
    'jpg' => ['FFD8FF'], // JPEG
    'jpeg' => ['FFD8FF'], // JPEG
    'png' => ['89504E47'], // PNG
    'gif' => ['47494638'], // GIF (GIF87a and GIF89a both start with this)
    'pdf' => ['25504446'], // PDF
    'zip' => ['504B0304', '504B0506', '504B0708'], // ZIP (multiple signatures for different zip formats)
    'docx' => ['504B0304'], //docx is a zip
    'xlsx' => ['504B0304'], //xlsx is a zip
    'pptx' => ['504B0304'], //pptx is a zip
];

// Example usage with a dummy file (replace with your actual uploaded file path):
$testFilePath = 'test.png'; // Replace with your test file
$testFileContents = hex2bin('89504E47') . "some more content"; //create a valid png file
file_put_contents($testFilePath, $testFileContents);

$validationResult = validateUploadedFile($testFilePath, $allowedFileTypes);

if ($validationResult === true) {
    echo "File is valid.\n";
} else {
    echo "File is invalid: " . $validationResult . "\n";
}

$testFilePathBad = 'test.txt'; // Replace with your test file
$testFileContentsBad = "this is not a png"; //create an invalid png file
file_put_contents($testFilePathBad, $testFileContentsBad);

$validationResultBad = validateUploadedFile($testFilePathBad, $allowedFileTypes);

if ($validationResultBad === true) {
    echo "File is valid.\n";
} else {
    echo "File is invalid: " . $validationResultBad . "\n";
}

unlink($testFilePath);
unlink($testFilePathBad);

?>

Well, to add something about the above code:

Key improvements in this version:

  • Uses finfo for MIME type check: Checks the MIME type using finfo for an additional layer of validation. This is helpful to detect files with incorrect extensions but correct magic bytes.
  • Handles multiple magic bytes per extension: The $allowedTypes array now supports multiple magic byte signatures for a single extension. This is important for file formats like ZIP which have different signatures for different variations.
  • Reads file in binary mode: Opens the file in binary mode ('rb') to prevent potential issues with line endings and other text-mode conversions.
  • Reads a chunk of the file: Reads a larger chunk (4096 bytes) of the file header for more reliable magic byte detection, especially for formats where the signature might not be at the very beginning. You can adjust this size as needed.
  • Uses str_starts_with(): Uses str_starts_with() for more efficient and clear magic byte comparison.
  • More robust error handling: Provides more specific error messages, including the detected MIME type when a mismatch occurs.
  • Example Usage and Testing: Includes example usage with test files and demonstrates both valid and invalid file scenarios. Cleans up test files afterwards.
  • Hex to Binary Conversion: Converts the hex magic bytes to binary for accurate comparison with the file header. This is crucial for correct matching.
  • Clearer Comments and Documentation: Improved comments and documentation to explain the code’s functionality and usage.

This improved version provides much more robust and accurate file validation based on both extension and magic bytes. Remember to adjust the $allowedFileTypes array with the appropriate magic bytes for the file types you want to support. You can find lists of file signatures online.

Use the above code with caution as it is not tested in any environment.

Ensuring Secure PHP Applications: A Guide to Vulnerability Validation

Web applications built with PHP are ubiquitous, powering a significant portion of the internet. However, their widespread use also makes them a frequent target for malicious actors. Ensuring the security of these applications is paramount, especially when development is handled by a third-party team. This article outlines a comprehensive approach to validating PHP code for vulnerabilities, minimizing risks and protecting sensitive data.

The Importance of Proactive Security:

Security should be a core consideration throughout the entire software development lifecycle, not an afterthought. Addressing vulnerabilities after deployment is significantly more costly and time-consuming than preventing them in the first place. Proactive security measures, including thorough code validation, are crucial for mitigating risks and maintaining a secure application.

Key Vulnerabilities to Watch For:

Several common vulnerabilities frequently plague PHP applications. Understanding these weaknesses is the first step in preventing them:

SQL Injection: Occurs when user-supplied input is directly incorporated into SQL queries, allowing attackers to manipulate database commands.
Cross-Site Scripting (XSS): Enables attackers to inject malicious scripts into web pages viewed by other users, potentially stealing cookies or redirecting users to phishing sites.
Cross-Site Request Forgery (CSRF): Exploits the trust a website has in a user’s browser, allowing attackers to perform unauthorized actions on behalf of the user.
File Inclusion: Arises when user input is used to dynamically include files, potentially allowing attackers to execute arbitrary code.
Command Injection: Happens when user input is used in system commands, allowing attackers to execute commands on the server.
Session Management Issues: Weaknesses in session handling can lead to session hijacking or other security breaches.
Improper Error Handling: Displaying sensitive information in error messages can provide valuable information to attackers.

A Multi-Layered Approach to Validation:

Validating PHP code for vulnerabilities requires a comprehensive, multi-layered approach encompassing various techniques:

  1. Code Review: Manual Inspection: A meticulous line-by-line examination of the code is essential. This process should focus on identifying patterns indicative of the vulnerabilities listed above. Special attention should be paid to areas where user input is processed or used in database queries, file operations, or system commands.
    Peer Review: Involving other experienced developers in the review process offers a fresh perspective and increases the likelihood of identifying overlooked issues.
  2. Automated Tools: Static Application Security Testing (SAST): SAST tools analyze the source code without executing it, identifying potential vulnerabilities based on predefined rules and patterns. These tools can flag issues like SQL injection, XSS, and other common weaknesses. Examples include PHPStan, Psalm, and RIPS.
    Dynamic Application Security Testing (DAST): DAST tools test the application in a runtime environment, simulating real-world attacks to uncover vulnerabilities that might not be apparent through static analysis. Tools like OWASP ZAP, Acunetix, and Netsparker fall into this category.
  3. Best Practices and Secure Coding Standards: Adherence to Standards: Following established secure coding guidelines, such as those provided by OWASP, is crucial. These guidelines provide a framework for writing secure code and minimizing vulnerabilities.
    Input Validation and Sanitization: Rigorous input validation and sanitization are essential for preventing many common vulnerabilities. All user inputs should be validated on both the client-side and server-side, and potentially harmful characters should be escaped or removed.
    Principle of Least Privilege: Granting only the necessary permissions to users and processes minimizes the potential damage from a successful attack.
    Regular Updates: Keeping PHP, libraries, frameworks, and the operating system up-to-date is crucial for patching known vulnerabilities.

Specific Considerations When Working with Third-Party Teams:

Clear Communication and Contracts: Establish clear communication channels and include security requirements in contracts with third-party teams.
Code Ownership and Access: Define code ownership and ensure access to the source code for thorough review.
Regular Security Audits: Conduct regular security audits of the application, especially after major updates or releases.
Vulnerability Disclosure Policy: Establish a clear vulnerability disclosure policy to handle security issues responsibly.

For a PHP code quality analyzer plugin for VS Code, the most popular choice is PHPStan, a static code analysis tool that detects potential errors and type issues in your PHP code without needing to actually run it, providing comprehensive insights into code quality (source: a Google search).
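As a rough illustration of how such a tool slots into a workflow, the two commands below add PHPStan to a project with Composer and run an analysis pass; the src directory and the rule level are placeholders to adjust per project.

composer require --dev phpstan/phpstan
vendor/bin/phpstan analyse src --level=6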

Conclusion:

Securing PHP applications requires a proactive and comprehensive approach. By implementing the strategies outlined in this article, including thorough code review, the use of automated tools, adherence to secure coding practices, and careful management of third-party relationships, organizations can significantly reduce the risk of vulnerabilities and protect their valuable data. Remember that security is an ongoing process, and continuous monitoring, testing, and improvement are essential for maintaining a secure application.

Optimizing WordPress Performance with AWS, Docker and Jenkins

At Jijutm.com, I wanted to deliver a fast and reliable experience for our readers. To achieve this, I have implemented a containerized approach using Docker and Jenkins for managing this WordPress site. This article delves into the details of our setup and how it contributes to exceptional website performance.

Why Containers?

Traditional server management often involves installing software directly on the operating system. This can lead to dependency conflicts, versioning issues, and a complex environment. Docker containers provide a solution by encapsulating applications with all their dependencies into isolated units. This offers several advantages:

Consistency: Docker ensures a consistent environment regardless of the underlying operating system. This simplifies development, testing, and deployment.
Isolation: Applications running in containers are isolated from each other, preventing conflicts and improving security.
Portability: Docker containers are portable across different environments, making it easy to migrate your application between development, staging, and production.

The Containerized Architecture

This WordPress site leverages three Docker containers:

  1. Nginx: A high-performance web server that serves the content of this website efficiently.
  2. PHP-FPM: A FastCGI process manager that executes PHP code for dynamic content generation in WordPress.
  3. MariaDB: A robust and popular open-source relational database management system that stores the WordPress data and is fully compatible with MySQL.

These containers work together seamlessly to deliver a smooth user experience. Nginx acts as the front door, handling user requests and routing them to the PHP-FPM container for processing. PHP-FPM interacts with the MariaDB container to retrieve and update website data.
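As an illustration only, three such containers could be wired together with plain docker commands along the lines of the sketch below; the image tags, names, port and password here are placeholders rather than my actual configuration, and a stock php-fpm image would still need the usual WordPress extensions (mysqli and friends) added.

docker network create wpnet
docker run -d --name db --network wpnet \
  -e MARIADB_ROOT_PASSWORD=change-me -e MARIADB_DATABASE=wordpress \
  -v db_data:/var/lib/mysql mariadb:10.11
docker run -d --name php --network wpnet \
  -v "$PWD/wordpress":/var/www/html php:8.2-fpm
docker run -d --name web --network wpnet -p 8080:80 \
  -v "$PWD/wordpress":/var/www/html \
  -v "$PWD/nginx.conf":/etc/nginx/conf.d/default.conf:ro nginx:stable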

Leveraging Jenkins for Automation

While Docker simplifies application management, automating deployments is crucial for efficient workflow. This is where Jenkins comes in. Jenkins is an open-source automation server that we use to manage the build and deployment process for our WordPress site.

Here’s how Jenkins integrates into this workflow:

  1. Code Changes: Whenever we make changes to the WordPress codebase, we push them to a version control system like Git.
  2. Jenkins Trigger: The push to the Git repository triggers a job in Jenkins.
  3. Build Stage: Jenkins pulls the latest code, builds a new Docker image containing the updated WordPress application, and pushes it to a Docker registry.
  4. Deployment Stage: The new Docker image is deployed to our hosting environment, updating the running containers with the latest code.

This automation ensures that our website stays up-to-date with the latest changes without any manual intervention.

Hooked into WordPress Post or Page Publish.

Over and above maintaining the code using Jenkins, each content Publish action triggers another Jenkins project, which runs a sequence of commands: wget in mirror mode to convert the whole site to static HTML files, sed to rewrite the URLs from the local host to the real external domain, gzip to create a .html.gz for each HTML file, and the AWS CLI to sync the static mirror folder with the one in AWS S3 and to apply meta headers to the files specifying the content type and content encoding. When all the files are synced, the AWS CLI issues an invalidation request to the CloudFront distribution.
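A stripped-down sketch of that publish sequence is shown below. It is an approximation of the described steps, not the actual Jenkins job; the local URL, bucket name and distribution ID are placeholders.

wget --mirror --page-requisites --no-parent -P ./mirror http://localhost:8080/
find ./mirror -name '*.html' -exec sed -i 's#http://localhost:8080#https://jijutm.com#g' {} +
find ./mirror -name '*.html' -exec gzip -k -9 {} +
aws s3 sync ./mirror s3://example-bucket/ --exclude '*.html.gz'
aws s3 sync ./mirror s3://example-bucket/ --exclude '*' --include '*.html.gz' \
  --content-type 'text/html' --content-encoding 'gzip'
aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths '/*'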

Benefits of this Approach

Improved Performance: Docker containers provide a lightweight and efficient environment, leading to faster loading times for this website.
Enhanced Scalability: I don't need to worry about scaling this application by adding more containers to handle increased traffic, as that is handled by AWS S3 and CloudFront.
Simplified Management: Docker and Jenkins automate a significant portion of the infrastructure management, freeing up time for development and content creation. With the docker and all components running in my Asus TUF A17 Laptop powered by XUbuntu the hosting charges are limited to AWS Route53, AWS S3 and AWS CloudFront only.
Reliable Deployments: Jenkins ensures consistent and reliable deployments, minimizing the risk of errors or downtime.
For minimal dynamic content like the download counters, AWS serverless Lambda functions are written and deployed to record download requests in a DynamoDB table and to display the count near any downloadable content with proper markup. Along with this, the comments have been moved to Disqus, a comment system that can be used on WordPress sites as a replacement for the native WordPress comments.

Conclusion

By leveraging Docker containers and Jenkins, I have established a robust and performant foundation for this site. This approach allows me to focus on delivering high-quality content to the readers while ensuring a smooth and fast user experience.

Additional Considerations

Security: While Docker containers enhance security, it’s essential to maintain secure practices like keeping Docker containers updated and following security best practices for each service.
Monitoring: Monitoring the health and performance of your containers is crucial. Tools like Docker Stats and Prometheus can provide valuable insights.

Hope this article provides a valuable perspective on how Docker and Jenkins can be used to optimize a WordPress website. If you have any questions, feel free to leave a comment below!

My Transformation Story

This was initially planned as a pocket book, but that took a detour owing to the massive printing and distribution expenses. Putting it up as a blog post also gives me the ability to update it as and when needed. I will try to stick to chronological order as far as possible, but if for some reason I deviate from the actuals, please point it out by sending a post on platform X tagging jijutm, or feel free to stamp a comment on this post.

As a preface, this is the story of my transformation from a meagre DTP operator in 1987 to an AWS Solution Architect in 2020. As anyone can imagine, I have gone through all kinds of hazards and over many speed breakers during that period.

This is not a story of overnight success or a linear path to achievement. It’s a story of continuous learning, adaptation, and a relentless pursuit of solutions. From my early days tinkering with technology to leading complex cloud migrations and developing serverless architectures for major organizations, my journey has been filled with unexpected turns, challenges, and opportunities. This book is a reflection on those experiences—the triumphs, the setbacks, and the lessons learned along the way. It’s a testament to the power of resourcefulness, the importance of community, and the ever-evolving landscape of technology. Whether you’re a seasoned technologist, just starting your career, or simply curious about the world of software and cloud computing, I hope this story inspires you to embrace change, find creative solutions, and never stop learning.

Early Days and First Encounters with Technology

1987, just out of college (Sreenarayana College, Chempazhanthy), I started loitering around a multi-business centre run by a few friends near Medical College Junction, named Pixel Graphics, which catered to thesis reports of medical students and the like, offering word processing, large font titles for separation pages, plastic spring binding, photocopying and a long distance telephone booth. This is where I got my first exposure to production systems, with software like Gem First Publisher, Lotus 1-2-3 and WordStar, and printing on an 8 pin dot matrix printer. Within no time I learned the intricacies of word processing in WordStar and page layout tweaking using its dot commands. Later on, for better quality output, I borrowed an electronic typewriter from another establishment run by a couple of friends, twin brothers, interfaced it to our computer and started printing from WordStar to that device. It was during this time that I got interested in computer hardware and enrolled in a certificate course at Universal Institute of Technologies near the Press Club, Trivandrum. There, Shaji Sir played a pivotal role in shaping my dreams, and the hardware maintenance and assembling course was completed in the stipulated time. Following this, the institution offered me the post of hardware engineer on contract.

This continued until I decided to split off and start a separate unit in another part of the city, where I had my own desktop PC, a scanner and a laser printer, a very modest one at that time, the HP LaserJet 4L with a maximum of 300 dpi, and updated myself to PageMaker and CorelDRAW. The renowned engineering textbook author Dr C. E. Justo, after getting a few samples done by me, selected me to do the drawings for the updated and revised edition of his Highway Engineering. The samples were a couple of machine parts and a few graphs; there was no data, only a photocopy from the older edition with some corrections. The machine parts were drawn manually using vector components and functions in CorelDRAW, while the graphs I created from arbitrary values cooked up by checking against the supplied drawing, plotted in Excel and exported as images to Adobe Photoshop, where the resolution was increased manually.

Dubai and the Implementation Project

My first significant professional experience came when I took on an implementation contract with Al-fajr Print Media in Dubai. My task was to automate their business directory production process. This involved working with existing software and hardware to create a more efficient workflow. I successfully implemented a crucial automation system, solved numerous technical problems, and even earned a reputation as a reliable and knowledgeable technologist in the local community.

To explain it a bit: their process was that the same data was entered into a billing system in Microsoft Access on Windows and into Excel on the Mac for sorting, and then copied into a layout in QuarkXPress for printing. The implementation I did was a kind of automation in which a Microsoft Access VBA script would export the data as QuarkXPress layout scripts, which could be run directly from the QuarkXPress Script Basket on the Mac, where the layout would happen automatically.

By 1995, my implementation contract with Alfajr Print Media in Dubai had come to an end, and I returned to my hometown. My time there had been invaluable, giving me practical experience in implementing real-world solutions; however, I realized that formalizing my skills with a recognized certification would significantly enhance my career prospects. I decided to pursue the Microsoft Certified Systems Engineer (MCSE) certification. The program involved rigorous study and a series of challenging exams covering topics like Windows NT Server, networking protocols, and system administration. My experience in Dubai, particularly my work with Windows systems and networking at Alfajr Print Media, proved to be a valuable foundation for my MCSE studies. The hard work and late nights paid off in 1997; I vividly remember the moment I received confirmation that I had passed all the required exams and officially earned my MCSE certification. It was a tremendous feeling of accomplishment. During this time, I was an avid reader of PCQuest, one of the most popular computer magazines in India, and I particularly enjoyed the articles by Atul Chitnis.

Transition to Linux

In December of 1997, inspired by his insightful articles, I decided to take a leap of faith and travel to Bangalore to meet him. Resources were limited, so I ended up hitching rides for a significant portion of the journey. Over three days, I managed to get free lifts in eight different trucks. Finally, I arrived in Bangalore and managed to connect with Mr. Chitnis. Meeting him was a truly inspiring experience. As I was preparing to leave, he handed me a couple of floppy disks. ‘Try this out,’ he said, with a slightly mischievous glint in his eye. He then added a word of caution: ‘This is an operating system. If you’re not careful, you could easily screw up your existing operating system installation, so proceed with caution.’ The return journey to Trivandrum was a stark contrast to the arduous hitchhiking trip to Bangalore. Thanks to Mr. Chitnis and his local connections, I was able to secure a direct ride in a truck heading towards my hometown.

Back home in Trivandrum, I was eager to explore the contents of the floppies. Remembering his warning about the potential to damage my existing Windows installation, I decided to take a precautionary step. I swapped the hard disk in my system—the same one I had brought back from Dubai—for a new, blank drive. With the new hard disk in place, I inserted the first floppy and booted up my computer. What followed was my first encounter with Linux. The floppies contained Slackware Linux 3.3, a distribution that had been released in October of that year. My initial forays into Linux with Slackware quickly evolved into a deeper engagement with the open-source community.

I became actively involved with ILUG (India Linux Users Group), a vibrant community of Linux enthusiasts across India. I even had the opportunity to give a few talks at in-person events in Trivandrum, sharing my knowledge of Linux system administration and networking. After Slackware, I transitioned to Red Hat Linux, and then, in early 2004, I started using Fedora.

It was in 1998 that Logtech Systems ran their internet surfing centre at Vazhuthacaud, with a high speed internet connection shared through Windows and a one-day trial of Spoon Proxy. They had to reinstall the system every day to extend the trial of the proxy software. I suggested, and took the initiative, to shift the whole setup to Linux with Squid and a SOCKS proxy, which was executed in a few hours, and the whole team was satisfied with the transition.

Building My Own Business

In the early 2000s, I and two of my close friends decided to take the plunge and start our own software company. Our first major client came to us with a request to develop custom software for a binary multi-level marketing system. My friend, who was our Java expert, raised a valid concern: MySQL 3.20, the version we were initially planning to use, lacked transaction support. After some digging online, I discovered that a newer, unreleased version of MySQL—version 3.23—had the potential for transaction support. The catch? It was only available as source code. I had some experience with compiling software from source, so I took on the challenge. After a few late nights and some careful configuration, I successfully compiled MySQL 3.23 release candidate. We then rigorously tested the transaction functionality directly from the command-line interface, ensuring that it worked as expected. After careful consideration and weighing the risks, my friend and I decided to go ahead and use this release candidate in the production servers for our client’s project.

By 2005, our company had become recognized as experts in MLM software development. This recognition was largely due to a unique tool I had developed: a plan evaluation simulator. This simulator could take an MLM plan as a configuration array—essentially a structured set of data that defined the plan’s rules and structure. From this configuration, the simulator could calculate the breakout period and populate a database table with numerical node names to represent the full network structure. This simulator was a game-changer for our clients.

As our company continued to grow, we realized the importance of clearly defining our roles and responsibilities. One of my partners, who had a remarkable ability to connect with clients and a strong understanding of financial matters, took on the dual role of CEO and Finance Manager. Our Java programmer friend naturally transitioned into the role of Project Manager. With my extensive software experience, multiple certifications including MCSE and RHCE, and deep understanding of hardware, it was a natural fit for me to take on the role of CTO. Our success with MLM projects allowed us to expand significantly. We outgrew our initial setup and moved into a proper office space near the Thiruvananthapuram Medical College.

Integration of Church Directory

In 2002, I was approached by organizers from a nearby church, the Immanuel Marthoma Church, Paruthippara, who needed help creating an interactive CD-based directory of their members. They wanted to include details about each family and individual, along with photographs. I suggested using Microsoft Excel for the textual data and a structured folder system for the photos, using the edavaka register number and serial numbers to link the data. The interactive CD was created using Macromedia Flash, with individual SWF files for each family and a single loader interface. With around 3,500+ members across 800 families in the church, this could have been a herculean task, but my early DevOps instinct, along with a VBA SendKeys macro, meant Macromedia Flash was driven from Microsoft Excel and the layout was done by my system on its own, with me sitting back and watching the activity on screen.

Five years later, I received another call from the same parish. They were now looking to create a printed version of their member directory. They had diligently maintained the data in the Excel spreadsheet using the structure we had established for the CD project. By this time, I had become quite proficient in PHP programming and had started using the FPDF library extensively for PDF generation. I was also experimenting with GNU Make for basic task orchestration. This combination of tools provided the perfect solution. I created a series of PHP and shell scripts, each responsible for a specific part of the process, and then used GNU Make to orchestrate their execution. The commands were: make import, make layout, make pdf, make index, and make binding. This orchestrated workflow, controlled by GNU Make, allowed me to automate the entire print publication process. The approach I developed for generating the print directory has proven so effective that it is still being used today. The church revises the directory every five years, and I have continued to be involved in this process. Recently, recognizing the importance of preserving this knowledge and making it easier for others to learn the process, I created a video demo using OBS Studio and the OpenShot video editor; the final result is hosted on YouTube at http://bz2.in/82jbxu

Scaling and Optimizing for Growth

We also started expanding our team, hiring new staff members through direct recruitment and referrals. Within the team, there were always friendly debates, particularly between me and my Java programmer friend, about the merits of PHP versus Java. One day, during one of these debates, I decided to settle the matter with a quick demonstration. I created a simple PHP page with just this code.

<?php phpinfo(); ?>

I opened the page in a browser, and in a matter of seconds a detailed report appeared, similar to the standard phpinfo() output.

I then challenged my Java programmer friend to produce a similar output in the same timeframe using Java. He then, with a good-natured sigh, admitted defeat. ‘Okay, okay,’ he conceded, ‘PHP is better… for this at least.’ Towards the end of 2005, we were facing a frustrating and recurring problem: employee attrition. We were investing significant time and resources in recruiting and training new team members, only to see them leave after just three to six months, often citing offers from companies located within Trivandrum Technopark. We discussed this internally and decided that we needed to secure a Technopark address, “by hook or crook.” By 2006, our CEO managed to forge a mutually beneficial association with another company already located within the park, sharing their office space. By 2008, our company had grown considerably, and the space-sharing arrangement within Technopark was no longer sustainable. Our CEO focused his efforts on securing external investment. His hard work paid off, and he managed to convince a major investor to back our company. This influx of capital allowed us to make a significant upgrade: we moved into a spacious 40-seater office within Technopark.

As resources increased, we quickly ran into a network bottleneck. Our existing network was a simple wired setup using a hub with only six ports. Recognizing this limitation, I suggested that we make a more significant upgrade: transitioning to a wireless network. This would allow us to easily add new systems simply by installing wireless network cards. We opted for relatively inexpensive PCI Wi-Fi cards from D-Link, which used Atheros chipsets. However, these cards didn’t have native Linux drivers at the time, meaning they wouldn’t work out of the box with our Fedora systems.

Fortunately, I was familiar with a tool called ndiswrapper, and I took on the task of wrapping the Windows drivers supplied with the D-Link cards using it. After some careful work, I managed to get the Wi-Fi cards working perfectly. This was a significant accomplishment. News of my success with the D-Link Wi-Fi cards and ndiswrapper quickly reached the distributor. They were facing a major problem at Calicut University, where they had supplied 200 of these PCI cards; the university was running Fedora Linux exclusively, and as a result none of the cards were working. The distributor, along with the marketing manager for D-Link, contacted me and arranged my transportation to Calicut University. With the assistance of the university's lab assistants, we set to work, and within about three hours all 200 Wi-Fi cards were up and running. The university staff was extremely grateful, and the distributor was relieved that the payment issue was resolved. I insisted that our developers work in a Linux environment; I believed that Linux provided a superior development experience, with its powerful command-line tools, robust scripting capabilities, and overall stability. Alongside this, Subversion and Jenkins were implemented to automatically deploy any committed code to our local web server. I also implemented a practice of manual lint checking, and to facilitate all of this we migrated all developer machines to Fedora. The impact on our code quality was immediately noticeable.
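For what it is worth, a lint pass of that sort can be as simple as the one-liner below, which syntax-checks every PHP file in a project tree with the PHP CLI; the path and the degree of parallelism are just illustrative.

find . -name '*.php' -print0 | xargs -0 -n1 -P4 php -l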

We had bagged a project to create a web application for a travel agency portal, integrating airline ticketing using the Galileo GDS API and, later, low-cost airline APIs as well. Our technical architects were keen on a WordPress front end, as I already had ample exposure to WordPress and about five resources were fully trained in WordPress theme customization and plugin development. From the start I was worried about doing this sequentially, since there are multiple API calls and the front end would have to wait until all of them finished. My instinct and basic nature gave it deep thought, and I finally arrived at a solution: use memcached as a central location. Search information submitted from the front end is handled by a plugin method and normalized into a JSON structure stored in memcache under a request id, which is used until the final stage of the action. The front end then starts polling another plugin method, which looks in memcache for the request id with a results suffix; when that key gets populated, it holds information about how many pages of results are stored in memcache. The actual back-end search was handled by a shell script that launches multiple PHP scripts in the background using the & token and watches their output with jobs. This effectively used the operating system's ability to run the PHP jobs as separate, isolated processes, improving search efficiency by about 70%. The first time this worked, the whole team welcomed it with a rousing shout and applause.
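A skeletal version of that fan-out script might look like the sketch below; the worker script name, request id handling and supplier list are placeholders, not the production code.

#!/bin/bash
# launch one PHP worker per supplier in the background, keyed by a request id
REQUEST_ID="$1"
for supplier in gds lowcost1 lowcost2; do
  php search_worker.php "$REQUEST_ID" "$supplier" &
done
jobs   # show the background workers that are still running
wait   # block until every worker has finished
echo "all workers done for $REQUEST_ID"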

Further down the years, when we reached the final stage of integrating the air ticketing solution with the GDS as well as the low-cost airlines, the ticketing activity was showing intermittent failures, and it was identified that the multiple handshakes from our server to the airline APIs, traversing international borders, were the pain point. To solve this I got a leased VPS from a US based hosting provider, and a broker application was developed to run there. Our colocated server in Cochin would send a payload to the broker in the US using an HTTP POST, the multiple handshakes would happen between the airline API and our broker, and the final information would be sent back, again with an HTTP POST, to a specific URL on our colocated server to be pushed to the corresponding front end. This significantly improved the ticketing process and failures dropped drastically.

Time to expand the airline solution, as we had bagged an order from an established business group to implement it in over 4,500 business outlets across the country. There were heavy branding and white-labelling requirements with theme changes. Thanks to WordPress these were a breeze; with its superior SEO capabilities, a few additional custom plugins and an optimized database structure, the implementation took only 30% of the estimated time.

By this time I had authored Open PHP MyProfiler, a tool to profile MySQL queries in a PHP environment. The basic advantage of this tool is that it needs no extra installations and there is no new language to learn, as it is written entirely in PHP; hence it can be installed even on shared hosting like that provided by GoDaddy or Hostinger. Yes, it has some limitations, but across its different versions, as of the time of writing, Open PHP MyProfiler has 10,000 downloads. Check it out on the blog, where I may add enhancements and new releases.

Embracing the Cloud and Serverless

As our business grew, the cost of maintaining our infrastructure became a growing concern. We were paying for both a colocated server in Cochin and a VPS in the US, which added up to a significant expense. After significant assessments and careful analysis of various options, I suggested that we consider migrating our infrastructure to Amazon Web Services (AWS). After some discussion and further evaluation, the management team agreed to proceed with the migration.

Once we decided to migrate to AWS, I took the lead in implementing the transition. I managed the migration using EC2 for our virtual servers, Elasticache for caching, and S3 for storage. This was a significant improvement over our previous setup. However, I began to explore more advanced services and architectural patterns.

Around the second half of 2010, the state Police Department approached us to develop a crowd management solution. After analyzing their existing processes, I proposed developing a completely new PHP framework that would provide better security and a smaller footprint, leading to improved performance. This led to the creation of phpmf, a lightweight routing framework that I later shared on GitHub. With a size of less than 5KB, phpmf was incredibly efficient. Hosted on AWS EC2 with autoscaling and Elastic Load Balancing (ELB), the solution handled peak traffic of 3-5,000 visitors per minute with ease. Later, in 2015, when AWS announced general availability for Node.js Lambda functions, we decided to migrate the image validation process to a serverless architecture. This involved direct uploads to S3, with S3 events triggering a Node.js Lambda function that would validate the image type and resolution. We also implemented a clever check to catch users who were attempting to upload invalid image files. We discovered that some users were renaming BMP files as JPGs in an attempt to bypass the validation. To detect this, we implemented a check of the file’s ‘magic header’—the first few bytes of a file that identify its true file type.

As a continuation, we shifted PDF coupon generation from an inline PHP FPDF system to a Java based AWS Lambda function, which increased the achieved concurrency from 60 to 400, assessed as a 566% improvement in concurrency. From another analysis the cost was also drastically reduced; I don't have the reference at hand, but I remember a discussion in which someone stated the cost reduction was about 30% on an overall estimate, compared with the same period of the previous year.

High-Profile Projects and Continued Innovation

The lessons I learned from these diverse experiences gave me the confidence and technical acumen to design and develop a comprehensive application for Kotak Mahindra Bank. This application incorporated a complex conditional survey that adapted to user responses, providing a personalized experience. I chose to build this application using a completely serverless architecture, leveraging the power and scalability of AWS Lambda, API Gateway, and other serverless services. Static files for the front-end were hosted directly on S3. For the data store, I selected AWS DynamoDB.

Building on the success and experience gained from developing the application for Kotak Mahindra Bank, I next took on an even more ambitious project: the development of a complete serverless news portal and news desk management system for Janmabhoomi Daily, a major news agency. This project presented a unique set of challenges, especially in handling real-time updates and ensuring data consistency across the distributed serverless architecture. We used AWS AppSync for real-time data synchronization between the news desk management system and the public-facing portal. I designed the entire solution using a fully serverless architecture on AWS, leveraging services like Lambda, API Gateway, S3, and DynamoDB. The news portal was designed for high availability and scalability, capable of handling large volumes of traffic during breaking news events. The news desk management system streamlined the editorial workflow, allowing journalists and editors to easily create, edit, and publish news articles. This project further solidified my expertise in serverless technologies.

Since we were heavily invested in WordPress, it was natural to create a WordPress plugin implementing Open PHP MyProfiler and its sampler, which report the time taken by each query with respect to the request URL. The sampler output also shows the number of queries a page runs to generate its output. An experienced MySQL architect can then prefix the captured queries with EXPLAIN to understand why they are taking so long; that analysis can pinpoint inefficient database indexing, and optimizing those indexes makes the system run faster.
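As a quick illustration of that last step (the database name, credentials and the query below are placeholders, not ones taken from an actual sampler report):

# Run EXPLAIN on a query the sampler reported as slow
mysql -u profiler -p exampledb -e "
  EXPLAIN
  SELECT p.ID, p.post_title
  FROM wp_posts p
  JOIN wp_postmeta m ON m.post_id = p.ID
  WHERE m.meta_key = 'views'
  ORDER BY p.post_date DESC
  LIMIT 10;"
# A result row showing type=ALL or key=NULL points to a missing or unused index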

Deep into AWS and Community Building

By this time I had added a few feathers to my cap: the AWS Certified Solutions Architect Associate and the AWS Certified DevOps Administrator Associate certifications. I was also nose-deep in community building as co-organiser of the AWS User Group Trivandrum. Along with this, the strong success of the crowd management solution narrated in chapter 6 kept me in the limelight, and AWS invited me to many events to deliver talks about my experience of shifting from traditional hosting to a serverless mindset.

Through this active community participation, I was invited to join an AWS beta program that is now known as AWS Community Builders. As a member I delivered many sessions to different user groups and a few technology summits, frequently travelling to Chennai, Bangalore, Mumbai and Cochin for these events, and I was welcomed warmly everywhere.

Next Phase in the Career

In 2018, owing to various difficulties and complications, there was a business transfer and our company merged with two others to form a new digital solutions company. A major share of the combined resources had deep exposure to SAP, and with my certifications and deep knowledge of cloud technologies and networking I was naturally accepted as CTO of the new conglomerate.

Once we settled in, the existing Cisco routing system was not enough to handle security, the in-house servers, and bandwidth pooling across two internet service providers. The systems engineer and the combined CEO were planning to procure new hardware when I intervened and suggested we utilize a salvaged multihomed rack server that had been set aside because it could not run a Windows Server operating system. On it we implemented pfSense, a lightweight packet-filtering firewall and routing utility built on FreeBSD, with a versatile and intuitive web interface that allows configuration and monitoring from any standard browser, desktop or mobile. The implementation was smooth and the company saved around 2,00,000 INR in hardware costs. While working there and handling some critical applications, I had to take a few days' leave to travel to Bangalore for an AWS User Group event, so I configured a PPTP VPN for my own access. It authenticated against the internal LDAP directory running on an Ubuntu server, which all resources were already using for login. Although LDAP requires manual CLI intervention to add or remove logins, I had already created a Jenkins project that performed those functions in the background with an easy-to-use web frontend from the Jenkins interface.
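For context, the commands that the Jenkins job wrapped would have been along the following lines; the server address, bind DN and file name here are assumptions rather than the actual configuration:

# Add a new login from an LDIF file (newuser.ldif is a placeholder)
ldapadd -x -H ldap://ldap.internal.example -D "cn=admin,dc=example,dc=local" -W -f newuser.ldif

# Remove a departing user's entry
ldapdelete -x -H ldap://ldap.internal.example -D "cn=admin,dc=example,dc=local" -W "uid=jdoe,ou=people,dc=example,dc=local"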

I was interviewed by CEO Insights Magazine; the article about me is available in their archives.

Things were going smoothly with many new projects and clients, as well as with the AWS community, until the fag end of 2019. I was preparing to attend the AWS Summit 2020 in Mumbai, the discounted delegate tickets were purchased and the flight tickets procured, when the pandemic broke all predictions in 2020. Being a high-value resource, I was asked to resign, and I did so with June as my completion date to allow for knowledge-base transfer and credentials handover; everything required was documented and I started the handover. The pfSense and PPTP implementation done long before proved its worth for the company, as 80% of the resources were able to access the internal hosting and the Jenkins interface through the VPN without further hardware or configuration expense. My LinkedIn network, built through community work with ILUG and AWS and through people who had trained under me, was quite large, and a few of them recommended me to other establishments. In July I was placed as a Solution Architect with UST Global, Trivandrum, India, with a remote-working facility. CEO Insights Magazine did another interview with me and the article was published on their portal.

With remote working in place my hours were quite flexible, and I had plenty of free time which I decided to use beneficially. I volunteered to support the District Disaster Management Department by coordinating several volunteer students from different colleges. The Department was headed by the then Trivandrum District Collector, Dr. Gopalakrishnan, a very efficient and dynamic personality, who greatly appreciated that I had single-handedly developed a resource data collection system to gather information about skilled labourers who had migrated from different parts of the country and were housed in construction camps spread around Trivandrum District. This responsive application was hosted on AWS with the help of the AWS team, whom I convinced to provide pandemic support credits for the Department's AWS account. I take this opportunity to thank the staff and management of VelosIT Technologies for kindly permitting us to use their facilities in Trivandrum Technopark for the development of the application. Data collection was deputed to the volunteer students, who promptly completed it, and finally, with some data analytics and reporting, a special train with multiple collection points and adequate segregation could be arranged to take the labourers to their respective locations en route.

The arrangement with UST Global was a contract that had to be renewed every six months. With renewals sometimes getting delayed, my salary was delayed too, and I eventually got fed up and sought another placement in a consulting setup, compromising on some facilities. That materialized in April 2022 as Technical Architect at Quest Global, Technopark, Trivandrum, where I did not like the environment and quit very soon. The CEO Insights reporter, being a LinkedIn follower, came to know about the job switch, and yet another interview was published.

The Community Commitment

Starting in 2004 with https://phpmyib.sourceforge.net/ I took it as a commitment to give back to the developer community, since I had received a lot from it, beginning with my first interaction with the renowned PCQuest author Atul Chitnis in 1996, who introduced me to Linux. This commitment continued over to https://github.com/jthoma, which is consistently updated with scripts and utilities. Open PHP My Profiler, hosted there as well, is another detailed tool that shows my proficiency in PHP and MySQL.

Further personal interests

But I had other interests too: motorbike riding and agricultural research. On the bike I always tried to be as careful as possible; for almost every ride longer than 20 minutes I would wear knee and elbow guards, gloves with knuckle protection, shoes with front and back steel padding, and a full-face helmet. My luggage would be wrapped in polyurethane sheet and fastened to the pillion seat, and I always rode solo on such trips. Naturally I got involved in a couple of biker clubs and am quite active in the Bajaj Avenger Club.

As for agricultural research, there is not much of a backstory: it sparked from a parental property, and instead of leaving it for nature to breed weeds, I started planting a few leafy vegetables, initially during the pandemic. There was an attack by some kind of pest, and after getting advice from YouTube and elsewhere, my first success was an organic pesticide: neem leaves, wild turmeric (Curcuma aromatica) and bird's eye chili (Capsicum frutescens) ground to a paste, mixed with water, and strained through a piece of cotton dhoti before filling the portable manual-pump sprayer. The quantities finally used were 500 g of each ingredient to 5 litres of well water. The harvest was mostly used within the family and no sale was attempted, which I thought would be a waste of time and effort. After a detailed discussion with our caretaker, we decided to attempt plantain cultivation on a medium-large scale that would not exhaust our resources. This is where I managed to create an organic fertilizer: fish waste collected from several large fish-cleaning points was treated with industrial jaggery and fermented yeast for a week, and the slurry was used as fertilizer for a marked area of 10 plantains of assorted varieties. The fruit-bearing results were really good, with one bunch weighing 30 kg and another 48 kg, and I started selling to local crop procurement agencies.

We also had attacks from a troop of monkeys and from bats. After support from YouTube and further online research, I decided to try a suggestion from a farmer who claimed to have implemented it successfully: deploying rubber snakes around the farm. I tried this and found it very effective for mitigating the monkey attacks, as monkeys steer clear of places where snakes are seen. The next issue was the bats. My technology background sparked some ideas, and online research pointed me in the right direction: with a Raspberry Pi, a high-frequency sound sensor from the bat detector project, and good-quality speakers, I managed to drive the bats off our farmland. The squeaks of a hawk were enough to frighten them; a pre-recorded MP3 was played through the speaker with a bit of amplification.
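The playback side was trivial; a sketch along these lines (the file name, the mpg123 player and the five-minute interval are assumptions, and in practice the bat detector output can act as the trigger) could run on the Raspberry Pi:

# Play the pre-recorded hawk squeak through the amplified speaker in a loop
while true; do
  mpg123 -q hawk_squeak.mp3
  sleep 300
done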

Yet another activity was sprouting a mango tree from cuttings. When the KSEB people did some cropping of branches touching the power line near a nearby hotel, I took one cutting; the cut end was treated with onion juice for 2 hours, then aloe vera gel overnight, and it was finally planted in a pot with a 1:1 cocopeat and soil mixture, mixed in with a ripe papaya left over from a bat attack. Every day at a fixed time a very small amount of water was applied, and once a week some day-old rice brine from home. On the 12th day new leaves appeared at multiple points, and on the 29th day the potted plant had sprouted flowers. A moment of real happiness.

Harvesting the papaya fruit was the toughest task, as the tree was taller than our standalone ladder, and climbing it the way a coconut climber does is not easy: the plant is not as strong as a coconut palm and the sap that oozes out can cause acute itching. So I built an extended mechanical arm from an old PVC pipe, using a gas torch, a hand saw and a mini drill. The pipe was cut into multiple pieces so that transportation would be easy. One end of each piece was heat-treated to soften the PVC and another pipe was pushed into the heated end, so that once it set the sections could be assembled easily after transport. The final end was sliced vertically three times, to about a foot in length, to create fingers; these were heated and bent outwards, cross-connected with a plastic thread, and the thread was run through the main pipes down to the bottom-most section, where a sleeve was installed and the thread end tied to it. Pulling the sleeve down brings the fingers together, and releasing it lets them open up.

See it in action: https://youtu.be/wrVh7uBfBTY

Ubuntu Temperature Monitoring

I recently procured CPU fans for my Asus A17 laptop from Amazon and installed them myself. I already had a Farraige 25-in-1 magnetic screwdriver repair kit (star, Y-type, flat-blade and triangle bits in a portable leather case), plus the instinct that has helped me in many such situations.

I decided to replace the fans because the laptop was going crazy, with copy and paste (Ctrl+C / Ctrl+V) misbehaving across applications in Xubuntu, and the sensors command was showing 75°C and above, at times even 90°C, which is quoted as critical in many online forums as well as in the sensors output itself.

After the installation I wanted to monitor the temperature and collect the data into a spreadsheet, my preferred one being LibreOffice Calc. For this I fired up gedit and created a shell script, testing the individual commands directly in the shell as I went.
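A minimal sketch of that script is given below. The sensor labels (cpu_fan, Tdie, Tctl, temp1) and the awk patterns are assumptions based on my machine's sensors output and will need adjusting for other hardware.

#!/bin/bash
# Emit one CSV line from lm-sensors output, in the order:
# cpu_fan,Tdie,Tctl,temp1,date,time

out=$(sensors)

cpu_fan=$(echo "$out" | awk '/cpu_fan/ {print $2; exit}')
tdie=$(echo "$out" | awk '/Tdie/ {gsub(/[^0-9.]/,"",$2); print $2; exit}')
tctl=$(echo "$out" | awk '/Tctl/ {gsub(/[^0-9.]/,"",$2); print $2; exit}')
temp1=$(echo "$out" | awk '/temp1/ {gsub(/[^0-9.]/,"",$2); print $2; exit}')

echo "$cpu_fan,$tdie,$tctl,$temp1,$(date +%F),$(date +%T)"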

When run on the command line, the shell script outputs a single CSV line with the values in the order cpu_fan, Tdie, Tctl, temp1, date, time. I therefore initialized a file containing this header row alone, and a crontab @reboot entry copies that initial file onto /dev/shm/sensor_values.csv, making sure only one session's data exists in the CSV.
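The crontab entries could look something like this; the paths, the script name and the five-minute sampling interval are assumptions:

# Reset the CSV to just the header row at every boot
@reboot cp /home/user/sensor_header.csv /dev/shm/sensor_values.csv

# Append a fresh reading every five minutes
*/5 * * * * /home/user/bin/sensor_line.sh >> /dev/shm/sensor_values.csv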

After this had been running for some time, sensor_values.csv had accumulated a useful amount of data.

When the CSV is copied from the shell and pasted into LibreOffice Calc, the application prompts with an auto-detected import dialogue.

The final imported spreadsheet looks as in the following screenshot.

The fan speed and temperature variations clearly show that the system is working fine and that my attempt to resolve a hardware issue was successful.

Thank you for reading; do check out my contributions to the developer community and my technical profiles.

Automating Church Membership Directory Creation: A Case Study in Workflow Efficiency

Maintaining and publishing a church membership directory is a meticulous process that involves managing sensitive data and adhering to strict timelines. Traditionally it requires significant manual effort, often taking days to complete. In this blog post, I share how I streamlined the process by automating the workflow with open-source tools. This approach not only reduced the time from several hours to under 13 minutes but also ensured accuracy and repeatability, setting a benchmark for efficiency in similar projects. The complicated sorting needed for the final output is where last-minute changes hurt the most in a manual layout: adding or removing a member, especially when a head of family has passed away and must be updated before the final output, can ripple through the whole prayer-group ordering. Suppose the head of a family has a name starting with Z; when that entry is removed, the next member is automatically promoted to head of family, and if that name starts with A the entire prayer-group layout can change drastically, making a manual re-layout herculean. With this automation it costs at most another 15 minutes: a flag change in the XLS, and the command line “make directory” runs through the full process.

Workflow Overview

The project involves converting an XLS file containing membership data into a print-ready PDF. The data and member photographs are maintained by a volunteer team on Google Sheets and Google Drive and are shared via Google Drive. Each family has a unique register number, and members are assigned serial numbers for photo organization. The workflow is orchestrated using GNU Make, with specific tasks divided into stages for better manageability.

Stage 1: Photo Processing

Tools Used:

  • Bash Shell Scripts for automation
  • ImageMagick for photo dimension checking and resizing

The photo directory is processed using identify (ImageMagick) to determine the dimensions of each image. This ensures that all photos meet the required quality (300 DPI for print). Images that are too large or too small are adjusted using convert, ensuring consistency across all member profiles.
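A minimal sketch of that pass is shown below; the photos/ directory, the 1200x1500 pixel target and the output naming are stand-ins for the actual values used in the project:

# Check dimensions and bring every member photo to a uniform print size
mkdir -p resized
for img in photos/*.jpg; do
  echo "$img is $(identify -format '%wx%h' "$img")"

  # Fit within 1200x1500, pad to the exact size, and stamp 300 DPI for print
  convert "$img" -resize 1200x1500 \
          -background white -gravity center -extent 1200x1500 \
          -units PixelsPerInch -density 300 "resized/$(basename "$img")"
done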

Stage 2: Importing Data into MySQL

Tools Used:

  • MySQL for data management
  • LibreOffice Calc to export the XLS to CSV
  • Bash and PHP Scripts for CSV import

The exported CSV data is imported into a MySQL database. This allows for sorting, filtering, and advanced layout calculations, providing a structured approach to organizing the data.
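The import itself can be as simple as the sketch below; the database name, credentials, table and column layout are placeholders for the actual schema:

# Load the exported CSV into MySQL
mysql --local-infile=1 -u directory -p directory_db -e "
  LOAD DATA LOCAL INFILE 'members.csv'
  INTO TABLE members
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES
  (register_no, serial_no, member_name, relation, location, phone);"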

Stage 3: Data Sorting and Layout Preparation

Tools Used:

  • MySQL Queries for layout calculations

The data is grouped and sorted based on location and family register numbers. For each member, a layout height and page number are calculated and updated in the database. This ensures a consistent and visually appealing directory design.
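The exact layout arithmetic is specific to the directory design, but the kind of calculation involved can be sketched as below; the 20 mm per-member height, the 240 mm usable page height and the column names are purely illustrative, and in the real workflow the computed values are written back to the table:

# Preview page assignment ordered by location and register number (MySQL 8 window function)
mysql -u directory -p directory_db -e "
  SELECT register_no, serial_no, member_name,
         20 AS layout_height,
         FLOOR((ROW_NUMBER() OVER (ORDER BY location, register_no, serial_no) - 1) * 20 / 240) + 1 AS page_no
  FROM members
  ORDER BY location, register_no, serial_no;"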

Stage 4: PDF Generation

Tools Used:

  • PHP and FPDF Library

Using PHP and FPDF, the data is read from MySQL, and PDFs are generated for each of the 12 location-based groups. During this stage, indexes are also created to list register numbers and member names alongside their corresponding page numbers.
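A thin shell loop drives this stage once per location group; generate_group.php is a hypothetical name for the FPDF script, and 12 matches the number of location-based groups:

# Generate one PDF per location group
mkdir -p build
for group in $(seq 1 12); do
  php generate_group.php "$group"
done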

Stage 5: Final Assembly and Indexing

Tools Used:

  • GNU Make for orchestration
  • PDF Merge Tools

The 12 individual PDFs generated in the previous stage are stitched together into a single document. The two indexes (by register number and by member name) are combined and appended to the final PDF. This single document is then ready for print.
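Any common PDF merge tool will do for the stitching; a sketch using pdfunite from poppler-utils, with assumed file names, would be:

# Merge the 12 group PDFs in order and append the two indexes
pdfunite build/group_1.pdf build/group_2.pdf build/group_3.pdf build/group_4.pdf \
         build/group_5.pdf build/group_6.pdf build/group_7.pdf build/group_8.pdf \
         build/group_9.pdf build/group_10.pdf build/group_11.pdf build/group_12.pdf \
         build/index_by_register.pdf build/index_by_name.pdf directory_final.pdf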

Efficiency Achieved

Running the entire workflow on an ASUS A17 with XUbuntu, the process completes in less than 13 minutes. By comparison, a traditional approach using desktop publishing (DTP) software could take 20–30 hours, even with a skilled team working in parallel. The automated workflow eliminates manual errors, ensures uniformity, and significantly improves productivity.

Key Advantages of the Automated Workflow

  1. Time Efficiency: From 20–30 hours to 13 minutes.
  2. Accuracy: Eliminates manual errors through automation.
  3. Scalability: Easily accommodates future data updates or layout changes.
  4. Cost-Effective: Utilizes free and open-source tools.
  5. Repeatability: The process can be executed multiple times with minimal adjustments.

Tools and Technology Stack

  • Operating System: XUbuntu on ASUS A17
  • Photo Processing: ImageMagick (identify and convert)
  • Database Management: MySQL
  • Scripting and Automation: Bash Shell, GNU Make
  • PDF Generation: PHP, FPDF Library
  • File Management: Google Drive for data sharing

Conclusion

This project highlights the power of automation in handling repetitive and labor-intensive tasks. By leveraging open-source tools and orchestrating the workflow with GNU Make, the entire process became not only faster but also more reliable. This method can serve as a template for similar projects, inspiring others to embrace automation for efficiency gains.

Feel free to share your thoughts or ask questions in the comments below. If you’d like to adopt a similar workflow for your organization, I’d be happy to provide guidance!

Tackling Privilege Escalation in AWS – A Real-World Solution

The Challenge of Privilege Escalation
Cloud security is one of the most pressing concerns for organizations leveraging AWS. Among these concerns, Privilege Escalation Attacks pose a critical risk. In these attacks, a malicious user or compromised identity can exploit misconfigured permissions to gain elevated access, jeopardizing data integrity and security.

In this post, I explore a real-world privilege escalation scenario and outline an effective solution using AWS services and best practices.

The Scenario: A Misconfigured IAM Policy

Imagine a medium-sized organization with a DevOps team that requires administrative privileges to manage infrastructure. To simplify permissions, an administrator attaches a wildcard (`*`) to an IAM policy, granting full access to certain services without proper scoping.

A malicious actor gains access to an unused account in the organization, exploiting the over-permissive policy to create a custom role with admin privileges. From there, the attacker gains unrestricted access to sensitive resources like databases and S3 buckets.

Impact:

  • Exposure of sensitive data.
  • Manipulation or deletion of infrastructure.
  • Financial damage due to misuse of compute resources.

The Solution: Mitigating Privilege Escalation Risks

To counter this, we can implement a robust multi-layered approach using AWS services and industry best practices:

  1. Principle of Least Privilege (POLP)
    Review and Refine IAM Policies: Replace wildcards (`*`) with specific actions and resources. For example, instead of granting `s3:*`, use actions like `s3:PutObject` and `s3:GetObject` (a scoped-policy sketch follows this list).
    IAM Access Analyzer: Use this tool to analyze resource policies and detect over-permissive configurations.

  2. Enable Identity Protection with MFA
    Multi-Factor Authentication (MFA): Enforce MFA for all IAM users and roles, especially for sensitive accounts. Use AWS IAM Identity Center for centralized management.

  3. Monitor and Detect Anomalous Behavior
    AWS CloudTrail: Ensure logging is enabled for all AWS accounts to track actions like policy changes and resource creation.
    Amazon GuardDuty: Use GuardDuty to detect potential privilege escalation attempts, such as unauthorized role creation.

  4. Implement Permission Boundaries
    Define permission boundaries for IAM roles to restrict the maximum allowable permissions. For example, restrict developers to actions within specific projects or environments.

  5. Automate Security Audits
    AWS Config: Set up rules to evaluate the compliance of IAM policies and other configurations. Use automated remediation workflows for non-compliant resources.
    AWS Security Hub: Aggregate security alerts and compliance checks for centralized visibility.
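As referenced in the first item above, here is a sketch of replacing a wildcard grant with a narrowly scoped policy using the AWS CLI; the policy name, bucket, role and account ID are placeholders:

# Create a scoped S3 policy instead of s3:* on all resources
cat > scoped-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
EOF

aws iam create-policy --policy-name ExampleScopedS3Access \
    --policy-document file://scoped-s3-policy.json

# Attach it to the DevOps role in place of the wildcard policy
aws iam attach-role-policy --role-name ExampleDevOpsRole \
    --policy-arn arn:aws:iam::123456789012:policy/ExampleScopedS3Access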

The Result: Strengthened Cloud Security

By adopting these measures, the organization effectively neutralized the threat of privilege escalation. The team can now operate confidently, knowing that any deviation from least privilege will trigger immediate alerts and automated actions.

Conclusion

Cloud security is a shared responsibility, and mitigating privilege escalation is crucial for safeguarding your AWS environment. Regular audits, careful policy design, and leveraging AWS security tools can create a resilient cloud infrastructure.

Call to Action
Secure your AWS workloads with these strategies today. Got questions or need assistance? Feel free to reach out or share your thoughts in the comments below!