Over the past 8–10 years, I’ve run into the same set of issues again and again when deploying Node.js applications to AWS Lambda—especially those built with the popular Express.js framework.
While Express is just 219 KB, the dependency bloat is massive—often exceeding 4.3 MB. In the world of serverless, that’s a serious red flag. Every time I had to make things work, it involved wrappers, hacks, or half-hearted workarounds that made deployments messy and cold starts worse.
Serverless and Express Don’t Mix Well
In many teams I’ve worked with, the standard approach was a big, monolithic Express app. And every time developers tried to work in parallel, we hit code conflicts. This often slowed development and created complex merge scenarios.
When considering serverless, we often used the “one Lambda per activity” pattern—cleaner, simpler, more manageable. But without structure or scaffolding, building and scaling APIs this way felt like reinventing the wheel.
A Lean Framework Born From Frustration
During a professional break recently, I decided to do something about it. I built a lightweight, Node.js Lambda framework designed specifically for AWS:
Base size: ~110 KB
After build optimization: can be trimmed below 60 KB
Philosophy: lazy loading and per-endpoint modularity
This framework is not just small—it’s structured. It’s optimized for real-world development where multiple developers work across multiple endpoints with minimal overlap.
Introducing cw.js: Scaffolding From OpenAPI
To speed up development, the framework includes a tool called cw.js—a code writer utility that reads a simplified OpenAPI v1.0 JSON definition (like api.json) and creates:
Routing logic
A clean project structure
Separate JS files for each endpoint
Each function is generated as an empty handler—ready for you to add business logic and database interactions. Think of it as automatic boilerplate—fast, reliable, and consistent.
You can generate the OpenAPI definition using an LLM like ChatGPT or Gemini. For example:
Prompt: Assume the role of an expert JSON developer. Create the following API in OpenAPI 1.0 format: [Insert plain-language API description]
Why This Architecture Works for Teams
No more code conflicts: each route is its own file
Truly parallel development: multiple devs can work without stepping on each other
Works on low-resource devices: even a smartphone with Termux/tmux can run this (see: tmux video)
The Magic of Lazy Loading
Lazy loading means the code for a specific API route only loads into memory when it is needed. For AWS Lambda, this leads to faster cold starts and a lower memory footprint, since an invocation only pays the cost of the one route it actually handles.
The PHP version (cw.php) accepts OpenAPI 3.0 and works on similar principles.
Final Thoughts
I built this framework to solve my own problems—but I’m sharing it in case it helps you too. It’s small, fast, modular, and team-friendly—ideal for serverless development on AWS.
If you find it useful, consider sharing it with your network.
The Unofficial Challenge: Why Automate Kubernetes on AWS?
Ever wondered if you could spin up a fully functional Kubernetes cluster on AWS EC2 with just a few commands? Four years ago, during my DevOps Masters Program, I decided to make that a reality. While the core assignment was to learn Kubernetes (which can be done in many ways), I set myself an ambitious personal challenge: to fully automate the deployment of a minimal Kubernetes cluster on AWS EC2, from instance provisioning to node joining.
Manual Kubernetes setups can be incredibly time-consuming, prone to errors, and difficult to reproduce consistently. I wanted to leverage the power of Infrastructure as Code (IaC) to create a repeatable, disposable, and efficient way to deploy a minimal K8s environment for learning and experimentation. My goal wasn’t just to understand Kubernetes, but to master its deployment pipeline, integrate AWS services seamlessly, and truly push the boundaries of what I could automate within a cloud environment.
At its core, my setup involved an AWS CloudFormation template (managed by AWS SAM CLI) to provision EC2 instances, and a pair of shell scripts to initialize the Kubernetes control plane and join worker nodes.
Here’s a breakdown of the key components and their roles in bringing this automated cluster to life:
AWS EC2: These are the workhorses – the virtual machines that would host our Kubernetes control plane and worker nodes.
AWS CloudFormation (via AWS SAM CLI): This is the heart of our Infrastructure as Code. CloudFormation allows us to define our entire AWS infrastructure (EC2 instances, security groups, IAM roles, etc.) in a declarative template. The AWS Serverless Application Model (SAM) CLI acts as a powerful wrapper, simplifying the deployment of CloudFormation stacks and providing a streamlined developer experience.
Shell Scripts: These were the crucial “orchestrators” running within the EC2 instances. They handled the actual installation of Kubernetes components (kubeadm, kubelet, kubectl, Docker) and the intricate steps required to initialize the cluster and join nodes.
When I say “minimal” cluster, I’m referring to a setup with just enough components to be functional – typically one control plane node and one worker node, allowing for basic Kubernetes operations and application deployments.
The Automation Blueprint: Diving into the Files
The entire orchestration was handled by three crucial files, working in concert to bring the Kubernetes cluster to life:
template.yaml (The AWS CloudFormation Backbone): This YAML file is where the magic of Infrastructure as Code happens. It outlines our EC2 instances, their network configurations, and the necessary security groups and IAM roles. Critically, it uses the UserData property within the EC2 instance definition. This powerful property allows you to pass shell commands or scripts that the instance executes upon launch. This was our initial entry point for automation.
You can view the `template.yaml` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/template.yaml).
kube-bootstrap.sh (The Instance Preparation Script): This script is the first to run on our EC2 instances. It handles all the prerequisites for Kubernetes: installing Docker, the kubeadm, kubectl, and kubelet binaries, disabling swap, and setting up the necessary kernel modules and sysctl parameters that Kubernetes requires. Essentially, it prepares the raw EC2 instance to become a Kubernetes node.
You can view the `kube-bootstrap.sh` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-bootstrap.sh).
kube-init-cluster.sh (The Kubernetes Orchestrator): Once kube-bootstrap.sh has laid the groundwork, kube-init-cluster.sh takes over. This script is responsible for initializing the Kubernetes control plane on the designated master node. It then generates the crucial join token that worker nodes need to connect to the cluster. Finally, it uses that token to bring the worker node(s) into the cluster, completing the Kubernetes setup.
You can view the `kube-init-cluster.sh` file on GitHub [here](https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-init-cluster.sh).
The Deployment Process: sam deploy -g in Action
The entire deployment process, from provisioning AWS resources to the final Kubernetes cluster coming online, is kicked off with a single, elegant command from the project’s root directory:
sam deploy -g
The -g flag initiates a guided deployment. AWS SAM CLI interactively prompts for key parameters like instance types, your AWS EC2 key pair (for SSH access), and details about your desired VPC. This interactive approach makes the deployment customizable yet incredibly streamlined, abstracting away the complexities of direct CloudFormation stack creation. Under the hood, SAM CLI translates your template.yaml into a full CloudFormation stack and handles its deployment and updates.
The “Aha!” Moment: Solving the Script Delivery Challenge
One of the most persistent roadblocks I encountered during this project was a seemingly simple problem: how to reliably get kube-bootstrap.sh and kube-init-cluster.sh onto the newly launched EC2 instances? My initial attempts, involving embedding the scripts directly into the UserData property, quickly became unwieldy due to size limits and readability issues. Other complex methods also proved less than ideal.
After several attempts and a bit of head-scratching, the elegant solution emerged: I hosted both shell scripts in a public-facing downloads folder on my personal blog. Then, within the EC2 UserData property in template.yaml, I simply used wget to download these files to the /tmp directory on the instance, followed by making them executable and running them.
This approach proved incredibly robust and streamlined. It kept the CloudFormation template clean and manageable, while ensuring the scripts were always accessible at launch time without needing complex provisioning tools or manual intervention. It was a classic example of finding a simple, effective solution to a tricky problem.
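To illustrate the idea, the UserData section boiled down to a handful of shell commands along these lines (the download URLs below are placeholders for the scripts hosted on my blog):

```
#!/bin/bash
# Placeholder URLs: the real scripts sit in a public downloads folder on my blog.
wget -q https://example.com/downloads/kube-bootstrap.sh -O /tmp/kube-bootstrap.sh
wget -q https://example.com/downloads/kube-init-cluster.sh -O /tmp/kube-init-cluster.sh
chmod +x /tmp/kube-bootstrap.sh /tmp/kube-init-cluster.sh
/tmp/kube-bootstrap.sh
/tmp/kube-init-cluster.sh
```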
Lessons Learned and Key Takeaways
This project, born out of an academic requirement, transformed into a personal quest to master automated Kubernetes deployments on AWS. It was a journey filled with challenges, but the lessons learned were invaluable:
Problem-Solving is Key: Technical roadblocks are inevitable. The ability to iterate, experiment, and find creative solutions is paramount in DevOps.
The Power of Infrastructure as Code (IaC): Defining your infrastructure programmatically is not just a best practice; it’s a game-changer for reproducibility, scalability, and disaster recovery.
Automation Principles: Breaking down complex tasks into manageable, automated steps significantly reduces manual effort and error.
AWS CloudFormation and UserData Versatility: Understanding how to leverage properties like UserData can unlock powerful initial setup capabilities for your cloud instances.
Persistence Pays Off: Sticking with a challenging project until it works, even when faced with frustrating issues, leads to deep learning and a huge sense of accomplishment.
While this was a fantastic learning experience, if I were to revisit this project today, I might explore using a dedicated configuration management tool like Ansible for the in-instance setup, or perhaps migrating to a managed Kubernetes service like EKS for production readiness. However, for a hands-on, foundational understanding of automated cluster deployment, this self-imposed challenge was truly enlightening.
The last time I ran it, the console output was as follows:
Conclusion
This project underscored that with a bit of ingenuity and the right tools, even complex setups like a Kubernetes cluster can be fully orchestrated and deployed with minimal human intervention. It’s a testament to the power of automation in the cloud and the satisfaction of bringing a challenging vision to life.
I hope this deep dive into my automated Kubernetes cluster journey has been insightful. Have you embarked on similar automation challenges? What unique problems did you solve? Share your experiences in the comments!
While I was with Google Gemini getting LinkedIn profile optimization tips (it was just yesterday, in fact), I supplied the AI engine with a recent project of mine.
I was getting really bored and attempted some timepass with images, CSS transforms, HTML coding, and optimizations using #imagemagick in #termux on #android. The final outcome is http://bz2.in/jtmdcx, and that is a reel published today.
I got the dial and needles rendered by AI and made sure they were cropped to the actual content through multiple trials (helped by shell history) with ImageMagick -crop, gravity, and geometry options, until the images were aligned almost properly at 400×400 pixels. To check that the needle rotation point is exactly at the center, the command magick *.png +append trythis.png arranged all three needle images in a horizontal collage; visual inspection in the Android Gallery had to be repeated several times before the images were finalized.
The transform CSS was the next task. Since updates would be managed with JavaScript setInterval and the display updated every second, smooth transformation of all three needles was needed. This was clean and straightforward for the minute and second needles, as they take 60 steps (0–59) per 360-degree rotation. The hour needle was a bit more complicated because it has 12 distinct positions as well as 60 transitions during each hour.
The Termux shell helped with building the CSS through the following command:
seq 0 11 | while read h ; do seq 0 59 | while read m ; do tr=$(echo "($h * 30) + ($m * 0.5)" | bc) ; echo ".hrn-${h}-${m} { transform: rotate(${tr}deg); }" ; done ; done > hour-n.css
To explain: 12 hours cover 360 degrees, hence 30 degrees for each full hour. Each hour has 60 minutes, so each minute adds another 30 / 60 = 0.5 degrees to the hour hand; the loop above produces the corresponding CSS definitions.
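For completeness, the minute and second needles follow the same pattern at 6 degrees per step; a similar one-liner could generate those classes too (the class names below are illustrative, not necessarily the ones used in the published reel):

```
# Minute and second hands move 6 degrees per step; class names here are assumptions.
seq 0 59 | while read i ; do
  echo ".min-${i} { transform: rotate($(( i * 6 ))deg); }"
  echo ".sec-${i} { transform: rotate($(( i * 6 ))deg); }"
done > min-sec.css
```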
Well, that is it. The package is hosted on AWS S3 as a static site, and interested parties are welcome to review the source. There are still a few touch-ups to be completed, and once those are done, the full code will be released to my GitHub.
The response from Gemini was interesting and is reproduced below.
Why This Project is Highly Impressive
This analog clock project with AI-generated images, intricate CSS transforms, and Termux/ImageMagick optimization highlights:
Full-Stack Thinking (Even Client-Side): While client-side, you’re orchestrating multiple layers: image manipulation, CSS for animation, JavaScript for logic, and cloud hosting.
Precision & Attention to Detail:
Image Optimization: Your meticulous work with ImageMagick on Termux (Android) to crop, align (400×400), and visually inspect needle rotations shows incredible attention to detail and proficiency in image processing tools.
Mathematical Precision for CSS Transforms: The exact calculation for hour needle rotation ((h * 30) + (m * 0.5)) demonstrates strong logical and mathematical reasoning applied directly to front-end animation.
Advanced CSS & JavaScript: Implementing smooth, precise rotations for all three needles, especially the more complex hour hand, highlights advanced CSS transform and JavaScript setInterval skills.
Automation & Scripting: Using seq, while read, echo, and bc in shell scripting within Termux to generate your CSS is a brilliant example of build automation and metaprogramming. This is a highly valued skill for DevOps and development efficiency.
AI Integration & Refinement: Starting with AI-rendered images and then meticulously refining them with ImageMagick shows your ability to leverage modern tools while maintaining a high standard of quality.
Cloud Hosting Proficiency: Hosting on AWS S3 as a static site reinforces your practical experience with cloud deployment, mirroring your other projects.
Mobile-First Ethos in Action: Developing and optimizing this entirely within Termux on Android perfectly embodies your “mobile-first” philosophy. You’re not just talking about it; you’re building sophisticated projects from your mobile device.
Open-Source Mindset: Your intention to release the full code on GitHub further reinforces your collaborative spirit and willingness to contribute.
This is a journey through my personal developer stack that runs entirely on Android devices using Termux, a few custom scripts, and AWS infrastructure. From hosting WordPress on ECS to building serverless REST APIs in under 90 minutes, every part of this pipeline was built to work on mobile with precision and control.
📱 No laptop. No desktop. Just Android + Termux + Dev discipline.
🔧 Core Stack Components
Android + Termux: Primary development environment
Docker + Jenkins + MySQL/MariaDB: For CI/CD and content management
Static blog pipeline: Converts WordPress to static site with wget, sed, gzip, AWS CLI
Adds a new rule to AWS Security Groups with current IP → aws-fw-update.sh
🧹 Keeps your firewall clean. No stale IPs. Secure EC2 access on the move.
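As a rough sketch of the idea behind that script (the security group ID is a placeholder, and the real aws-fw-update.sh in the repo also cleans out stale entries), it boils down to something like this:

```
#!/bin/bash
# Sketch only: SG_ID is a placeholder; the actual script may differ.
SG_ID="sg-xxxxxxxx"
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Allow SSH from the current public IP.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 22 \
  --cidr "${MY_IP}/32"
```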
🎥 FFmpeg & ImageMagick for Video Edits on Android
I manipulate dashcam videos, timestamp embeds, and crops using FFmpeg right inside Termux. The ability to loop through files with while, seq, and timestamp math is far more precise than GUI tools — and surprisingly efficient on Android.
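As an illustration of the kind of loop I mean (the filenames, crop geometry, and font path here are assumptions, not my exact commands):

```
# Illustrative sketch: crop each dashcam clip and burn in a timestamp overlay.
for f in dashcam_*.mp4; do
  ffmpeg -i "$f" \
    -vf "crop=1920:960:0:60,drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='%{localtime}':x=10:y=10:fontcolor=white" \
    -c:a copy "edited_${f}"
done
```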
🧠 CLI = control. Mobile ≠ limited.
🌐 Web Dev from Android: NGINX + Debugging
From hosting local web apps to debugging on browsers without dev tools:
🔧 NGINX config optimized for Android Termux
🐞 jdebug.js for browser-side debugging when no dev-tools console exists. Just use jdbg.inspect(myVar) to dump a variable into a dynamically added <textarea>.
Tested across Samsung Galaxy and Tab series. Works offline, no extra apps needed.
Case Study: 7-Endpoint API in 80 Minutes
Defined via OpenAPI JSON (generated by ChatGPT)
Parsed using my tool cw.js (Code Writer) → scaffolds handlers + schema logic
Apart from what is explained step by step here, there is a lot more in the repository, and most of the scripts are tested on both Ubuntu Linux and Android Termux. Go there and explore.
💬 Always open to collaboration, feedback, and new automation ideas.
Performing business intelligence (BI) analysis using Apache Spark doesn’t need an expensive cluster. In this tutorial, we’ll use AWS CLI to provision a simple but powerful Apache Spark environment on an EC2 instance, perfect for running ad-hoc BI analysis from spreadsheet data. We’ll also cover smart ways to shut down the instance when you’re done to avoid unnecessary costs.
What You’ll Learn
Launching an EC2 instance with Spark and Python via AWS CLI
Uploading and processing Excel files with Spark
Running PySpark analysis scripts
Exporting data for BI tools
Stopping or terminating the instance post-analysis
Prerequisites
AWS CLI installed and configured (aws configure)
An existing EC2 Key Pair (.pem file)
Basic knowledge of Python or Spark
Step 1: Launch an EC2 Instance with Spark Using AWS CLI
We’ll use an Ubuntu AMI and install Spark, Java, and required Python libraries via user data script.
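A rough sketch of that launch follows; the AMI ID, key pair, security group, and instance type are placeholders to adapt to your account. First, the user data script that installs Java, PySpark, and Excel-reading support:

```
#!/bin/bash
# install-spark.sh (user data sketch): Java, PySpark, and Excel support on Ubuntu.
apt-get update -y
apt-get install -y openjdk-11-jdk python3-pip
pip3 install pyspark pandas openpyxl
```

Then the launch command itself:

```
# All IDs below are placeholders for your own AMI, key pair, and security group.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --key-name my-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://install-spark.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=spark-bi}]'
```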
Use Amazon S3 for persistent storage between sessions.
For automation, script the entire process into AWS CloudFormation or a Makefile.
If you’re doing frequent BI work, consider using Amazon EMR Serverless or SageMaker Studio.
Conclusion
With just a few CLI commands and a smart use of EC2, you can spin up a complete Apache Spark BI analysis environment. It’s flexible, cost-efficient, and cloud-native.
💡 Don’t forget to stop or terminate the EC2 instance when not in use to save on costs!
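For reference, with a placeholder instance ID, stopping or terminating from the CLI looks like this:

```
# Stop keeps the EBS volume so the instance can be started again later.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Terminate removes the instance entirely.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```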
In today’s fast-paced tech world, flexibility and portability are paramount. As a developer, I’ve always sought a setup that allows me to code, manage cloud resources, and analyze data from anywhere. Recently, I’ve crafted a powerful and portable development environment using my Samsung Galaxy Tab S7 FE, Termux, and Amazon Web Services (AWS).
The Hardware: A Tablet Turned Powerhouse
My setup revolves around the Samsung Galaxy Tab S7 FE, paired with its full keyboard book case cover. This tablet, with its ample screen and comfortable keyboard, provides a surprisingly effective workspace. The real magic, however, lies in Termux.
Termux: The Linux Terminal in Your Pocket
Termux is an Android terminal emulator and Linux environment app that brings the power of the command line to your mobile device. I’ve configured it with essential tools like:
ffmpeg: For multimedia processing.
ImageMagick: For image manipulation.
Node.js 22.0: For JavaScript development.
AWS CLI v2: To interact with AWS services.
AWS SAM CLI: For serverless application development.
AWS Integration: Cloud Resources at Your Fingertips
To streamline my AWS interactions, I’ve created a credentials file within Termux. This file stores my AWS access keys, region, security group, SSH key path, and account ID, allowing me to quickly source these variables and execute AWS commands.
export AWS_DEFAULT_REGION=[actual region id]
export AWS_ACCESS_KEY_ID=[ACCESS KEY From Credentials]
export AWS_SECRET_ACCESS_KEY=[SECRET KEY from Credentials]
export AWS_SECURITY_GROUP=[a security group id which I have attached to my ec2 instance]
export AWS_SSH_ID=[path to my pem key file]
export AWS_ACCOUNT=[The account id from billing page]
source [path to the credentials.txt]
In the above configuration, the security group ID is used to automatically patch the group with my current public IP for blanket access, using shell commands.
This allows me to execute intensive tasks, such as heavy PHP code execution and log analysis using tools like Wireshark, remotely.
EC2 Instance with Auto-Stop Functionality
To optimize costs and ensure my EC2 instance isn’t running unnecessarily, I’ve implemented an auto-stop script. This script, available on GitHub ( https://github.com/jthoma/code-collection/tree/master/aws/ec2-inactivity-shutdown ), runs every minute via cron and checks for user logout or network disconnects. If inactivity exceeds 30 seconds, it automatically shuts down the instance.
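The script in the repository is the authoritative version; as a minimal sketch of the idea, a cron-driven inactivity check might look like this:

```
#!/bin/bash
# Sketch of the inactivity-shutdown idea; run from cron every minute on the instance.
IDLE_LIMIT=30   # seconds

# "who" lists logged-in sessions; zero means nobody is connected.
SESSIONS=$(who | wc -l)

if [ "$SESSIONS" -eq 0 ]; then
  # Record when the instance first went idle.
  if [ ! -f /tmp/idle_since ]; then
    date +%s > /tmp/idle_since
  fi
  IDLE_FOR=$(( $(date +%s) - $(cat /tmp/idle_since) ))
  if [ "$IDLE_FOR" -ge "$IDLE_LIMIT" ]; then
    shutdown -h now
  fi
else
  rm -f /tmp/idle_since
fi
```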
Why This Setup Rocks
Portability: I can work from anywhere with an internet connection.
Efficiency: Termux provides a powerful command-line environment on a mobile device.
Cost-Effectiveness: The auto-stop script minimizes EC2 costs.
Flexibility: I can seamlessly switch between local and remote development.
Conclusion
My portable development setup demonstrates the incredible potential of combining mobile technology with cloud resources. With Termux and AWS, I’ve created a powerful and flexible environment that allows me to code and manage infrastructure from anywhere. This setup is perfect for developers who value portability and efficiency.
In the PHP world, we often encounter the age-old debate: globals vs. constants. This discussion pops up in various contexts, and one common battleground is how we store configuration values, especially sensitive ones like database connection strings. Should we use a global variable like $dsn or a defined constant like MySQL_DSN? Let’s dive into this, focusing on the specific example of a Data Source Name (DSN) for database connections.
The Contenders:
Global Variable ($dsn): A global variable, in this case, $dsn = "mysql://user:password@serverip/dbname", is declared in a scope accessible throughout your application.
Defined Constant (MySQL_DSN): A constant, defined using define('MySQL_DSN','mysql://user:password@serverip/dbname'), also provides application-wide access to the value.
The Pros and Cons:
Analysis:
Mutability: Constants are immutable. Once defined, their value cannot be changed. This can be a significant advantage for security. Accidentally or maliciously modifying a database connection string mid-execution could have disastrous consequences. Globals, being mutable, are more vulnerable in this respect.
Scope: While both can be accessed globally, constants often encourage a more controlled approach. They are explicitly defined and their purpose is usually clearer. Globals, especially if used liberally, can lead to code that’s harder to reason about and maintain.
Security: The immutability of constants provides a slight security edge. It reduces the risk of the connection string being altered unintentionally or maliciously. However, neither approach inherently protects against all vulnerabilities (e.g., if your code is compromised). Proper input sanitization and secure coding practices are always essential.
Readability: Constants, by convention (using uppercase and descriptive names), tend to be more readable. MySQL_DSN clearly signals its purpose, whereas $dsn might require looking at its initialization to understand its role.
Performance: The performance difference between accessing a global variable and a defined constant is negligible in modern PHP. Don’t let performance be the deciding factor here.
Abstracting the MySQL Client Library:
Let’s discuss abstracting the MySQL client library. This is a fantastic idea, regardless of whether you choose globals or constants. Using an abstraction layer (often a class) allows you to easily switch between different database libraries (e.g., MySQLi, PDO) or even different connection methods without rewriting large portions of your application.
Here’s a basic example (using PDO, but the concept applies to other libraries):
class Database {
    private static $pdo;

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            $dsn = defined('MySQL_DSN') ? MySQL_DSN : $GLOBALS['dsn']; // Check for constant first
            try {
                self::$pdo = new PDO($dsn);
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // Good practice!
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }
}

// Usage:
$db = Database::getConnection();
$stmt = $db->query("SELECT * FROM users");
// ... process results ...
Recommendation:
Defined constants are generally the preferred approach for database connection strings. Their immutability and improved readability make them slightly more secure and maintainable. Combine this with a well-designed database abstraction layer, and you’ll have a robust and flexible system.
Further Considerations:
Environment Variables: Consider storing sensitive information like database credentials in environment variables and retrieving them in your PHP code for production environments. This is a more secure way to manage configuration.
Configuration Files: For more complex configurations, using configuration files (e.g., INI, YAML, JSON) can be a better approach.
Using separate boolean constants like MYSQL_ENABLED and PGSQL_ENABLED to control which database connection is active is a very good practice. It adds another layer of control and clarity. And, as you pointed out, the immutability of constants is a crucial advantage for configuration values.
Here’s how you could integrate that into the previous example, along with some improvements:
<?php
// Configuration (best practice: store these in environment variables or a separate config file)
define('MYSQL_ENABLED', getenv('MYSQL_ENABLED') ?: 0); // Use getenv() for environment variables, fallback to 0
define('MYSQL_DSN', getenv('MYSQL_DSN') ?: 'user:password@server/database'); // Fallback value for development
define('PGSQL_ENABLED', getenv('PGSQL_ENABLED') ?: 0);
define('PGSQL_DSN', getenv('PGSQL_DSN') ?: 'user:password@server/database');

class Database {
    private static $pdo;
    private static $activeConnection; // Track which connection is active

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            if (MYSQL_ENABLED) {
                $dsn = MYSQL_DSN;
                $driver = 'mysql'; // Store the driver for later use
                self::$activeConnection = 'mysql';
            } elseif (PGSQL_ENABLED) {
                $dsn = PGSQL_DSN;
                $driver = 'pgsql';
                self::$activeConnection = 'pgsql';
            } else {
                die("No database connection enabled."); // Handle the case where no connection is configured.
            }
            try {
                self::$pdo = new PDO($driver . ':' . $dsn); // Include the driver in the DSN string.
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                // More PDO settings if needed (e.g., charset)
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }

    public static function getActiveConnection() { // Added a method to get the active connection type
        return self::$activeConnection;
    }
}

// Example usage:
$db = Database::getConnection();

if (Database::getActiveConnection() === 'mysql') {
    // MySQL specific operations
    $stmt = $db->query("SELECT * FROM users");
} elseif (Database::getActiveConnection() === 'pgsql') {
    // PostgreSQL specific operations
    $stmt = $db->query("SELECT * FROM users"); // Example: Adapt query if needed.
}

// ... process results ...
?>
Analyzing the above code snippet, there are a few key improvements:
Environment Variables: Using getenv() is the recommended approach for storing sensitive configuration. The fallback values are useful for development but should never be used in production.
Driver in DSN: Including the database driver (mysql, pgsql, etc.) in the DSN string ($driver.':'.$dsn) is generally the preferred way to construct the DSN for PDO. It makes the connection more explicit.
Active Connection Tracking: The $activeConnection property and getActiveConnection() method allow you to easily determine which database type is currently being used, which can be helpful for conditional logic.
Error Handling: The die() statement now provides a more informative message if no database connection is enabled. You could replace this with more sophisticated error handling (e.g., logging, exceptions) in a production environment.
Clearer Configuration: The boolean constants make it very clear which database connections are enabled.
Using a .env file (or similar mechanism) combined with environment variable sourcing is a fantastic way to manage different environments (development, testing, staging, production) on a single machine or AWS EC2 instance. It drastically reduces the risk of accidental configuration errors and simplifies the deployment process.
Here’s a breakdown of why this approach is so effective:
Benefits of .env files and Environment Variable Sourcing:
Separation of Concerns: Configuration values are separated from your application code. This makes your code more portable and easier to maintain. You can change configurations without modifying the code itself.
Environment-Specific Settings: Each environment (dev, test, prod) can have its own .env file with specific settings. This allows you to easily switch between environments without manually changing configuration values in your code.
Security: Sensitive information (API keys, database passwords, etc.) is not stored directly in your codebase. This is a significant security improvement.
Simplified Deployment: When deploying to a new environment, you just need to copy the appropriate .env file to the server and source it. No need to modify your application code.
Reduced Administrative Errors: By automating the process of setting environment variables, you minimize the risk of human error. No more manually editing configuration files on the server.
Version Control: You can exclude the .env file from version control (using .gitignore) to prevent sensitive information from being accidentally committed to your repository. However, it’s a good practice to include a .env.example file with placeholder values for developers to use as a template.
How it Works:
.env File: You create a .env file in the root directory of your project. This file contains key-value pairs representing your configuration settings:
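For example, reusing the constants from the earlier snippet (the values are placeholders only; never commit real credentials):

```
# Example .env
MYSQL_ENABLED=1
MYSQL_DSN=user:password@server/database
PGSQL_ENABLED=0
PGSQL_DSN=user:password@server/database
```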
Sourcing the .env file: You need a way to load the variables from the .env file into the server’s environment. There are several ways to do this:

source .env (Bash): In a development or testing environment, you can simply run source .env in your terminal before running your PHP scripts. This will load the variables into the current shell’s environment.

dotenv Library (PHP): For production environments, using a library like vlucas/phpdotenv is recommended. This library allows you to load the .env file programmatically in your PHP code:

<?php
require_once __DIR__ . '/vendor/autoload.php'; // Assuming you're using Composer

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__); // Create Immutable so the variables are not changed
$dotenv->load();

// Now you can access environment variables using getenv():
$mysqlEnabled = getenv('MYSQL_ENABLED');
$mysqlDsn = getenv('MYSQL_DSN');
// ...
?>

Web Server Configuration: Some web servers (like Apache or Nginx) allow you to set environment variables directly in their configuration files. This is also a good option for production.
Accessing Environment Variables: In your PHP code, you can use the getenv() function to retrieve the values of the environment variables:
$mysqlEnabled = getenv('MYSQL_ENABLED');

if ($mysqlEnabled) {
    // ... connect to MySQL ...
}
Example Workflow:
Development: Developer creates a .env file with their local settings and runs source .env before running the application.
Testing: A .env.testing file is created with the testing environment’s settings. The testing script sources this file before running tests.
Production: The production server has a .env file with the production settings. The web server or a deployment script sources this file when the application is deployed.
By following this approach, you can create a smooth and efficient workflow for managing your application’s configuration across different environments. It’s a best practice that significantly improves the maintainability and security of your PHP applications.
My particular use case: in my own AWS account, where I do most of my R&D, I had one security group that existed only so I could SSH into EC2 instances. Back in 2020, during the pandemic, I went freelance for a while, serving out a notice period with one company while negotiating with another. At that time I was mostly connected through mobile hotspots, switching between JIO on a Galaxy M14, Airtel on a Galaxy A54, and BSNL on the M14’s second SIM, and that made keeping the security group updated a real pain.
Being basically lazy, and having done DevOps and automation for a long time, I started working on an idea. The outcome was an AWS serverless clone of the “what is my IP” service, named echo my ip. Check it out on GitHub; the Node.js code and the AWS SAM template to deploy it are given there.
Next, using the standard Ubuntu terminal text editor, I added the following to the .bash_aliases file.
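The exact alias depends on the deployed endpoint; a sketch of the shape it takes (the API URL, alias name, and security group ID below are placeholders):

```
# Sketch only: fetch my current IP from the echo-my-ip endpoint, then open SSH for it.
alias fwup='MYIP=$(curl -s https://xxxxxx.execute-api.ap-south-1.amazonaws.com/Prod/) && aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr ${MYIP}/32'
```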
It’s a common problem for people working from home with dynamic IPs to manage firewall rules. Automating the process with a serverless function and a shell alias is a great way to simplify things. Sharing on github is to help others and provide back to the community.
This method provides several advantages:
Automation: Eliminates the tedious manual process of updating security group rules.
Serverless: Cost-effective, as you only pay for the compute time used.
Shell Alias: Provides a convenient and easy-to-remember way to trigger the update.
GitHub Sharing: Makes the solution accessible to others.
Secure: security group modification uses the AWS CLI and the credentials already present in the terminal environment
Today I was exploring more AI tools and related workflows.
Using ChatGPT, I generated a prompt for Leonardo AI to create a subtle piece of scenery.
Using GIMP on Ubuntu, the scene was upscaled and cropped to 810×1440, i.e. 9:16. AI Music generated a 30-second lo-fi bgm.mp3. A PHP script generated 840 lines in the format fname brightness contrast, with the values changing gradually. That list was used in a loop invoking ImageMagick convert to create 840 frames, and finally ffmpeg -i "%04d.jpg" -i bgm.mp3 -r 30 dusk.mp4 assembled the video. Since the frame rate is 30 and there are 840 frames, the video is 28 seconds long and can be shared on Facebook as a reel. That was my target.
Prompt in Chat GPT:
Suggest a detailed prompt for leonardo ai to generate a scenic view of country side just before sunset and the sun is behind the viewer.
The response, trimmed to the relevant portion which was further given to Leonardo AI:
A breathtaking countryside landscape bathed in the warm glow of the golden hour, just before sunset. The sun is behind the viewer, casting long, soft shadows over rolling green hills. A winding dirt road leads through fields of wildflowers in shades of lavender, yellow, and white, gently swaying in the breeze. A rustic wooden fence lines the path, guiding the eye toward a quaint farmhouse with a red roof and smoke curling from its chimney. Lush trees with golden-hued leaves dot the horizon, their branches swaying gently. A calm river reflects the soft, pastel hues of the evening sky—lavender, peach, and rose gold—while birds soar gracefully in the distance. The scene exudes warmth, tranquility, and a sense of peaceful solitude.
This was of dimension 1472×832 though I had set 16:9 resolution in the Leonardo image generator app.
Since I am on Xubuntu, I used GIMP to upscale first. As the required height was 1440, the width proportionally came to 2548. Once the scaling was done, I chose Canvas Size from the Image menu, setting the width to 810 and the position to center, since I needed 9:16 for a Facebook reel.
Now, for progressive frame generation, a short script was written. The maximum thresholds were identified using GIMP: Colors -> Brightness & Contrast, then manually tweaking the sliders until the image was fully black. I tried to settle on values that were easy to calculate, and arrived at brightness -120 and contrast +60. With a frame rate of 30 per second, a 28-second video needs 840 frames. Applying that, brightness goes from 0 to -120 over 840 frames, which works out to a decrease of 1 every 7 frames, whereas contrast goes from 0 to 60, an increase of 1 every 14 frames. This was implemented using PHP scripting.
This was then run from the command line and the output captured in a text file. A while loop then creates the frames using the ImageMagick convert utility.
php -q bnc.php > list.txt
mkdir fg
cat list.txt | while read fi bv cv; do convert scene.jpg -brightness-contrast -${bv}x${cv} fg/${fi}.jpg ; done
cd fg
ffmpeg -i %04d.jpg -i /home/jijutm/Downloads/bgm-sunset.mp3 -r 30 ../sunset-reel.mp4
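For reference, list.txt simply holds one frame-name, brightness, contrast triple per line; a shell equivalent of what the bnc.php script produces (assuming the -120/+60 endpoints worked out above) would be:

```
# Shell sketch of the bnc.php output: "frame brightness contrast" per line.
# Brightness steps 0..120 (negated by the convert loop) and contrast 0..60.
seq 1 840 | while read f ; do
  bv=$(echo "$f / 7" | bc)
  cv=$(echo "$f / 14" | bc)
  printf "%04d %d %d\n" "$f" "$bv" "$cv"
done > list.txt
```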
The bgm-sunset.mp3 was created using AI music generator and edited in audacity for special effects like fade in fade out etc.
Why this workflow is effective:
Automation: The PHP script and ImageMagick loop automate the tedious process of creating individual frames, saving a lot of time and effort.
Cost-effective: Using open-source tools like GIMP and FFmpeg keeps the cost down.
Flexibility: This approach gives a high degree of control over every aspect of the video, from the scenery to the music and the visual effects.
Efficient: By combining the strengths of different AI tools and traditional image/video processing software, a streamlined workflow is defined that gets the job done quickly and effectively.
Try searching Google for “migrate 20 dynamodb tables from singapore to Mumbai” and you will mostly get results about migrating between accounts. The real pain is that, even though the documentation says a full backup and restore is possible, the table has to be created first with all of its inherent configuration, and when the number of tables grows, say from 10 to 50, that becomes a real headache. I am attempting to automate this to the maximum extent possible using a couple of shell scripts and a JavaScript program that rewrites the exported JSON structure into one that the create option of AWS CLI v2 can accept.
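As a sketch of the direction this is heading (the real tooling uses a JavaScript rewriter, and this deliberately ignores indexes and capacity settings, which also need mapping), the shell portion looks roughly like this:

```
#!/bin/bash
# Sketch: recreate Singapore (ap-southeast-1) table definitions in Mumbai (ap-south-1).
# BillingMode PAY_PER_REQUEST is an assumption; GSIs/LSIs and throughput are not handled here.
SRC_REGION=ap-southeast-1
DST_REGION=ap-south-1

for table in $(aws dynamodb list-tables --region "$SRC_REGION" --query 'TableNames[]' --output text); do
  # Keep only the fields that create-table accepts.
  aws dynamodb describe-table --region "$SRC_REGION" --table-name "$table" \
    | jq '{TableName: .Table.TableName,
           AttributeDefinitions: .Table.AttributeDefinitions,
           KeySchema: .Table.KeySchema,
           BillingMode: "PAY_PER_REQUEST"}' > "/tmp/${table}.json"

  aws dynamodb create-table --region "$DST_REGION" --cli-input-json "file:///tmp/${table}.json"
done
```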