Featured

My Transformation Story

Initially planned as a pocket book but detour due to non availablity of massive printing and distribution expenses. Also putting this up as a blog post will provide a further capability of updating this as and when needed. Will try to stick to chronological order as far as possible. But if due to some reason I deviate from the actuals please point it out by sending a post on platform X tagging jijutm or feel free to stamp a comment to this post.

As a preface this is a story of my transformation from a meager DTP operator in 1987 to an AWS Solution Architect in 2020. As anyone can imagine I have gone through all hazards and over many speed breakers during the period.

This is not a story of overnight success or a linear path to achievement. It’s a story of continuous learning, adaptation, and a relentless pursuit of solutions. From my early days tinkering with technology to leading complex cloud migrations and developing serverless architectures for major organizations, my journey has been filled with unexpected turns, challenges, and opportunities. This book is a reflection on those experiences—the triumphs, the setbacks, and the lessons learned along the way. It’s a testament to the power of resourcefulness, the importance of community, and the ever-evolving landscape of technology. Whether you’re a seasoned technologist, just starting your career, or simply curious about the world of software and cloud computing, I hope this story inspires you to embrace change, find creative solutions, and never stop learning.

Early Days and First Encounters with Technology

1987 just out of college – Sreenarayana College, Chempazhanthy started loitering around a multi business center run by few friends near Medical College Junction, named Pixel Graphics where they cater to thesis reports of medical students and the sort. With word processing, large font titles for separation pages, plastic spring binding, photocopying and a long distance telephone booth. This is where I got my first exposure to production systems with software like Gem First Publisher, Lotus 123, Wordstar and printing on an 8 pin dot matrix printer. Within no time I learned the intricacies of word processing using wordstar and page layout tweaking using dot commands in wordstar. Later on for better quality output I borrowed an electonic typewriter from another establishment run by couple of friends, twin brothers and interfaced the same to our computer and started printing from wordstar to this device. It was during this time that I got interested in computer hardware and enrolled into a certificate course in Universal Institute of Technologies near press club trivandrum. There Shaji Sir played a pivotal role in shaping up my dreams and the hardware maintenance and assembling course was completed in stipulated time. Continuing this the institution offered me the post of hardware engineer on contract.

This continued until I decided to split out and start a separate unit in another part of the city. Where I had my desktop pc, scanner, laser printer a very modest one at that time the HP LaserJet 4L with max 300 dpi and updated myself into PageMaker and Corel Draw. The renowned engineering text book author Dr C. E. Justo after getting few samples done by me had selected me to do the drawings for his updated and revised edition of High Way Engineering. Actually the samples were couple of machine parts and few graphs. There was no data only xerox copy from older version with some corrections. Well the machine parts were manually drawn using vector components and functions in corel draw and the graph I created with arbitary values cooked up by me by checking with the supplied drawing and was created in Excel and exported as image to Adobe Photoshop where the resolution was increased manually.

Dubai and the Implementation Project

My first significant professional experience came when I took on an implementation contract with Al-fajr Print Media in Dubai. My task was to automate their business directory production process. This involved working with existing software and hardware to create a more efficient workflow. I successfully implemented a crucial automation system, solved numerous technical problems, and even earned a reputation as a reliable and knowledgeable technologist in the local community.

To explain it a bit, their process was like the same data is entered into a billing system in Microsoft Access on Windows and on Excel for Mac for sorting and then copied into a layout on Quark Express for printing. The implementation which I did was a kind of automation with Microsoft Access VBA script would export the data as Quark Express layout scripts which could be run directly on from Quark Express Script Basket on the Mac where the layout will happen automatically.

By 1995, my implementation contract with Alfajr Print Media in Dubai had come to an end, and I returned to my hometown. While my time there had been invaluable, giving me practical experience in implementing real-world solutions. However, I realized that formalizing my skills with a recognized certification would significantly enhance my career prospects. I decided to pursue the Microsoft Certified Systems Engineer (MCSE) certification. The program involved rigorous study and a series of challenging exams covering topics like Windows NT Server, networking protocols, and system administration. My experience in Dubai, particularly my work with Windows systems and networking at Alfajr Print Media, proved to be a valuable foundation for my MCSE studies. The hard work and late nights paid off in 1997. I vividly remember the moment I received confirmation that I had passed all the required exams and officially earned my MCSE certification. It was a tremendous feeling of accomplishment. During this time, I was an avid reader of PCQuest, one of the most popular computer magazines in India. I particularly enjoyed the articles by Atul Chitnis.

Transition to Linux

In December of 1997, inspired by his insightful articles, I decided to take a leap of faith and travel to Bangalore to meet him. Resources were limited, so I ended up hitching rides for a significant portion of the journey. Over three days, I managed to get free lifts in eight different trucks. Finally, I arrived in Bangalore and managed to connect with Mr. Chitnis. Meeting him was a truly inspiring experience. As I was preparing to leave, he handed me a couple of floppy disks. ‘Try this out,’ he said, with a slightly mischievous glint in his eye. He then added a word of caution: ‘This is an operating system. If you’re not careful, you could easily screw up your existing operating system installation, so proceed with caution.’ The return journey to Trivandrum was a stark contrast to the arduous hitchhiking trip to Bangalore. Thanks to Mr. Chitnis and his local connections, I was able to secure a direct ride in a truck heading towards my hometown.

Back home in Trivandrum, I was eager to explore the contents of the floppies. Remembering his warning about the potential to damage my existing Windows installation, I decided to take a precautionary step. I swapped the hard disk in my system—the same one I had brought back from Dubai—for a new, blank drive. With the new hard disk in place, I inserted the first floppy and booted up my computer. What followed was my first encounter with Linux. The floppies contained Slackware Linux 3.3, a distribution that had been released in October of that year. My initial forays into Linux with Slackware quickly evolved into a deeper engagement with the open-source community.

I became actively involved with ILUG (India Linux Users Group), a vibrant community of Linux enthusiasts across India. I even had the opportunity to give a few talks at in-person events in Trivandrum, sharing my knowledge of Linux system administration and networking. After Slackware, I transitioned to Red Hat Linux, and then, in early 2004, I started using Fedora.

It was in 1998 there was this logtech systems with their internet surfing center at vazhuthacaud with high speed internet connection shared through windows and one day trial of spoon proxy. They had to reinstall the system every day to extend the trial of the proxy software. Suggested and took the initiative to shift whole system to linux, squid and socks proxy, which was executed in a few hours and the whole team was satisfied with the transition.

Building My Own Business

In the early 2000s, I and two of my close friends decided to take the plunge and start our own software company. Our first major client came to us with a request to develop custom software for a binary multi-level marketing system. My friend, who was our Java expert, raised a valid concern: MySQL 3.20, the version we were initially planning to use, lacked transaction support. After some digging online, I discovered that a newer, unreleased version of MySQL—version 3.23—had the potential for transaction support. The catch? It was only available as source code. I had some experience with compiling software from source, so I took on the challenge. After a few late nights and some careful configuration, I successfully compiled MySQL 3.23 release candidate. We then rigorously tested the transaction functionality directly from the command-line interface, ensuring that it worked as expected. After careful consideration and weighing the risks, my friend and I decided to go ahead and use this release candidate in the production servers for our client’s project.

By 2005, our company had become recognized as experts in MLM software development. This recognition was largely due to a unique tool I had developed: a plan evaluation simulator. This simulator could take an MLM plan as a configuration array—essentially a structured set of data that defined the plan’s rules and structure. From this configuration, the simulator could calculate the breakout period and populate a database table with numerical node names to represent the full network structure. This simulator was a game-changer for our clients.

As our company continued to grow, we realized the importance of clearly defining our roles and responsibilities. One of my partners, who had a remarkable ability to connect with clients and a strong understanding of financial matters, took on the dual role of CEO and Finance Manager. Our Java programmer friend naturally transitioned into the role of Project Manager. With my extensive software experience, multiple certifications including MCSE and RHCE, and deep understanding of hardware, it was a natural fit for me to take on the role of CTO. Our success with MLM projects allowed us to expand significantly. We outgrew our initial setup and moved into a proper office space near the Thiruvananthapuram Medical College.

Integration of Church Directory

In 2002, I was approached by organizers from a nearby church, the immanuel marthoma church paruthippara, who needed help creating an interactive CD-based directory of their members. They wanted to include details about each family and individual, along with photographs. I suggested using Microsoft Excel for the textual data and a structured folder system for the photos, using edavaka register number and serial numbers to link the data. The interactive CD was created using Macromedia Flash, with each family having individual SWF files and a single loader interface. With around 3500+ members across 800 families in the church, this could have been a herculean task. But my early devops instinct along with VBA sendkeys macro, macromedia flash was controlled from microsoft excel and the layout was done by my system only with myself sitting back and watching the activities onscreen.

Five years later, I received another call from the same parish. They were now looking to create a printed version of their member directory. They had diligently maintained the data in the Excel spreadsheet using the structure we had established for the CD project. By this time, I had become quite proficient in PHP programming and had started using the FPDF library extensively for PDF generation. I was also experimenting with GNU Make for basic task orchestration. This combination of tools provided the perfect solution. I created a series of PHP and shell scripts, each responsible for a specific part of the process, and then used GNU Make to orchestrate the execution of these scripts. The commands were: make import, make layout, make pdf, make index, and make binding. This orchestrated workflow, controlled by GNU Make, allowed me to automate the entire print publication process. The approach I developed for generating the print directory has proven to be so effective that it’s still being used today. The church revises the directory every five years, and I’ve continued to be involved in this process. Recently, recognizing the importance of preserving this knowledge and making it easier for others to learn the process, I decided to create a video demo using OBS Studio and Openshot video editor and the final is hosted on YouTube. http://bz2.in/82jbxu .

Scaling and Optimizing for Growth

We also started expanding our team, hiring new staff members through direct recruitment and referrals. Within the team, there were always friendly debates, particularly between me and my Java programmer friend, about the merits of PHP versus Java. One day, during one of these debates, I decided to settle the matter with a quick demonstration. I created a simple PHP page with just this code.

<?php phpinfo(); ?>

I opened the page in a browser, and in a matter of seconds, a detailed report appeared. Which is similar to what is shown here.

I then challenged my Java programmer friend to produce a similar output in the same timeframe using Java. He then, with a good-natured sigh, admitted defeat. ‘Okay, okay,’ he conceded, ‘PHP is better… for this at least.’ Towards the end of 2005, we were facing a frustrating and recurring problem: employee attrition. We were investing significant time and resources in recruiting and training new team members, only to see them leave after just three to six months, often citing offers from companies located within Trivandrum Technopark. We discussed this internally and decided that we needed to secure a Technopark address, “by hook or crook.” By 2006, our CEO managed to forge a mutually beneficial association with another company already located within the park, sharing their office space. By 2008, our company had grown considerably, and the space-sharing arrangement within Technopark was no longer sustainable. Our CEO focused his efforts on securing external investment. His hard work paid off, and he managed to convince a major investor to back our company. This influx of capital allowed us to make a significant upgrade: we moved into a spacious 40-seater office within Technopark.

As resources increased, we quickly ran into a network bottleneck. Our existing network was a simple wired setup using a hub with only six ports. Recognizing this limitation, I suggested that we make a more significant upgrade: transitioning to a wireless network. This would allow us to easily add new systems simply by installing wireless network cards. We opted for relatively inexpensive PCI Wi-Fi cards from D-Link, which used Atheros chipsets. However, these cards didn’t have native Linux drivers at the time, meaning they wouldn’t work out of the box with our Fedora systems.

Fortunately, I was familiar with a tool called ndiswrapper. I took on the task of wrapping the ndiswrapper drivers supplied with the D-Link cards. After some careful work, I managed to get the Wi-Fi cards working perfectly. This was a significant accomplishment. News of my success with the D-Link Wi-Fi cards and ndiswrapper quickly reached the distributor. They were facing a major problem at Calicut University, where they had supplied 200 of these PCI cards. The university was running Fedora Linux exclusively, and as a result, none of the cards were working. The distributor, along with the marketing manager for D-Link, contacted me and arranged my transportation to Calicut University. With the assistance of the university’s lab assistants, we set to work. Within about three hours, all 200 Wi-Fi cards were up and running. The university staff was extremely grateful, and the distributor was relieved that the payment issue was resolved. I insisted that our developers work in a Linux environment. I believed that Linux provided a superior development experience, with its powerful command-line tools, robust scripting capabilities, and overall stability. Alongside this subversion and jenkins were implemented to automatically deploy any committed code into our local webserver. I also implemented a practice of manual lint checking. To facilitate this, we migrated all developer machines to Fedora. The impact on our code quality was immediately noticeable.

We had bagged a project for creating a web application for a travel agency portal integrating the airline ticketing using Galelio GDS api and further lowcost airline api also.Our technical architects were keen on implementing WordPress frontend as I already had ample exposure in WordPress and about 5 resources were also fully trained in WordPress theme customization and plugin development. At the start itself I was worried about doing this sequentially as there are multiple api calls and front end will have to wait till all the actions are over. Well my instinct and basic nature gave it a deep thought and finally arrived at a solution. Use memcached as a central location. Search information submitted from front end will be handled by a plugin method and normalized into a json structure to be stored into memcache using a request id which will be used until the final stage of action. The front end will now start polling another method of the plugin to look into memcache with the requestid and a results suffix when this gets populated that will have information about how many pages of result is stored in memcache. Now the back end actual search was taken care of using a shell script which will handle multiple php scripts in the background using the & token and watching output from jobs. This effectively utilized the operating system capability of multiple threads to run php jobs in different isolated threads improving the search efficiency by 70% . The first time this happened the whole team welcomed it with a voracious shout and applause.

Further down the years when we reached the final stage of integrating air ticketing solution with gds as well as low cost airlines. The ticketing activity was showing intermittent failures and was identified that the multiple handshake from our server to the airline api which was traversing over the international border was the pain point. For solving this issue I got a leased vps from a US based hosting provider and a broker application was developed to run there. Our colocated server in Cochin would send a payload to the broker in US using http post and the multiple handshake will happen between airline API and our broker and the final information will be sent back again using an HTTP POST to our colocated server to a specific URL to be pushed to the corresponding frontend. This significantly improved the ticketing process part and failures dropped drastically.

Time to expand the airlines solution as we had bagged an order from an established business group to implement the solution in over 4500 business outlets across the country. There was heavy branding requirements and whitelabelling with theme changes. Thanks to WordPress these were a breeze with superior SEO capabilities and few additional custom plugins as well as optimized database structure the implementation took only 30% of the estimated time.

By this time I had authored the open php myprofiler which is a tool to profile mysql queries in a php environment. The basic advantage of this tool was that it does not need any extra installations nor need to learn any new language as it was fully written in php. Hence it was possible to be installed into shared hosting like that provided by GoDaddy or hostinger. Yes it had some limitations, but across different versions as of the time of writing this content the open php my profiler has 10,000 downloads. Check it out on the blog where I may add enhancements and new releases.

Embracing the Cloud and Serverless

As our business grew, the cost of maintaining our infrastructure became a growing concern. We were paying for both a colocated server in Cochin and a VPS in the US, which added up to a significant expense. After significant assessments and careful analysis of various options, I suggested that we consider migrating our infrastructure to Amazon Web Services (AWS). After some discussion and further evaluation, the management team agreed to proceed with the migration.

Once we decided to migrate to AWS, I took the lead in implementing the transition. I managed the migration using EC2 for our virtual servers, Elasticache for caching, and S3 for storage. This was a significant improvement over our previous setup. However, I began to explore more advanced services and architectural patterns.

Around the second half of 2010, the state Police Department approached us to develop a crowd management solution. After analyzing their existing processes, I proposed developing a completely new PHP framework that would provide better security and a smaller footprint, leading to improved performance. This led to the creation of phpmf, a lightweight routing framework that I later shared on GitHub. With a size of less than 5KB, phpmf was incredibly efficient. Hosted on AWS EC2 with autoscaling and Elastic Load Balancing (ELB), the solution handled peak traffic of 3-5,000 visitors per minute with ease. Later, in 2015, when AWS announced general availability for Node.js Lambda functions, we decided to migrate the image validation process to a serverless architecture. This involved direct uploads to S3, with S3 events triggering a Node.js Lambda function that would validate the image type and resolution. We also implemented a clever check to catch users who were attempting to upload invalid image files. We discovered that some users were renaming BMP files as JPGs in an attempt to bypass the validation. To detect this, we implemented a check of the file’s ‘magic header’—the first few bytes of a file that identify its true file type.

On a continuation we shifted pdf coupon generation from inline php fpdf system to java based AWS Lambda which increased the achived concurrency from 60 to 400 which is assessed as a 566% improvement in concurrency. From another analysis the cost was also drastically reduced for which I dont have any reference as of now, still a discussion I remember some one stated the cost reduction was about 30% on an overall estimation and comparison with the same period of the previous year.

High-Profile Projects and Continued Innovation

The lessons I learned from these diverse experiences gave me the confidence and technical acumen to design and develop a comprehensive application for Kotak Mahindra Bank. This application incorporated a complex conditional survey that adapted to user responses, providing a personalized experience. I chose to build this application using a completely serverless architecture, leveraging the power and scalability of AWS Lambda, API Gateway, and other serverless services. Static files for the front-end were hosted directly on S3. For the data store, I selected AWS DynamoDB.

Building on the success and experience gained from developing the application for Kotak Mahindra Bank, I next took on an even more ambitious project: the development of a complete serverless news portal and news desk management system for Janmabhoomi Daily, a major news agency. This project presented a unique set of challenges, especially in handling real-time updates and ensuring data consistency across the distributed serverless architecture. We used AWS AppSync for real-time data synchronization between the news desk management system and the public-facing portal. I designed the entire solution using a fully serverless architecture on AWS, leveraging services like Lambda, API Gateway, S3, and DynamoDB. The news portal was designed for high availability and scalability, capable of handling large volumes of traffic during breaking news events. The news desk management system streamlined the editorial workflow, allowing journalists and editors to easily create, edit, and publish news articles. This project further solidified my expertise in serverless technologies.

Since we were heavily into wordpress it was quite natural to create a wordpress plugin to implement open php myprofiler and the sampler which will give the reports as to time taken by each query with respect to the request url. Also the number of queries that a page runs to generate the output is also visible in the sampler output. An expert MySQL Architect can then use the queries with EXPLAIN prefix to understand why a query is taking too long. Thus analysis can pinpoint to inefficient database indexing and optimizing these will make the system run faster.

Deep into AWS and Community Building

By this time I had added few feathers to my cap, the AWS Certified Solution Architect Associate and the AWS Certified DevOps Administrator Associate. Also I was nose deep into active community building with being the co-organiser of AWS UserGroup Trivandrum. Along with this The heavy success of the Crowd Management solution narrated in chapter 6 also kept me in the limelight, and I was invited by AWS for many events to deliver talks about my experiences while shifting from traditional hosting to a serverless mindset.

With the active participation in all these community activities, I was invited to join a beta program by AWS which is currently named as the AWS Community Builder. Being there also I had delivered a lot of sessions to different user groups and few technology summits. I used to frequent out to Chennai, Bangalore, Mumbai and Cochin with relation to these technology summits. Every where I was welcomed with high importance.

Next Phase in the Career

Year 2018, due to some other difficulties, and complications there was a business transfer and our company merged with couple of two others to form a new digital solution company with a major share of resources having deep exposure of SAP and with the certifications and deep knowledge in cloud technologies and networking naturally I was also accepted as CTO into the new conglomerate.

Here once we settled down the then router which was using cisco routing systemwas not enough to handle the whole security and inhouse servers along with bandwidth pooling across two internet service providers. The systems engineer as well as combined CEO was planning to procure new hardware when I intervened and suggested that we utilize a salvaged multihomed rack server which was kept aside because it is not capable of loading windows server operating system. Then to implement pfsense which is a lightweight packetfiltering firewall and utility on top of free bsd. Also it has a versetaile and intutive web interface which can be controlled and configuration as well as monitoring can be done from any standard browser be it desktop or mobile. The implementation was smooth and company saved around 2,00,000 INR in hardware costs. While working there and handling some critical applications I had to take leave for a few days to go to Bangalore on an AWS User Group event. I thought to attempt and configure pptp vpn for my access and it was configured to handle authentication from internal LDAP configured on an Ubuntu server and was being in use by all resources for login. The LDAP though requires manual cli intervention to add or remove login, I had already created a Jenkins project which does the functions in the background with easy to use web frontend from the Jenkins interface.

I was interviewed by CEO Insights Magazine read the article about me in their archives.

Things were going smooth with many new projects and clients as well as with AWS community and futher till fag end of 2019 when I was prepared to attend the AWS Summit 2020 in Mumbai, the discounted delegate tickets were purchased, and flight tickets were also procured when the pandemic disaster broke all predictions in 2020. Due to the pandemic I being a high value resource was suggested to resign and I did so with June being my completion date due to Knowledge Base Transfer and credentials handover all that was required were documented and I started the handover. But the pfsense and pptp implementation which was done way back came into use for the company as 80% of the resources were able to access the internal hosting and Jenkins interface through VPN without further hardware or configuration expenses. Well with my linkedin network of resources whom I had interacted through community building with ILUG and AWS and otherwise got training under me were quite a lot and few of them did reccommend me in few other establishments and finally in July I got placed as Solution Architect with UST Global, Trivandrum, India with remote facility. CEO Insights Magazine did another interview on me and the article was published on their portal.

As the remote facility was there my working hours were quite flexible and I had plenty of free time which I thought to utilize beneficially. Where I voluteered to support the District Disaster Management Department by taking charge of coordinating several voluteer students from different colleges. The Department incharge was the then Trivandrum District Collector, Dr. Gopalakrishnan a very efficient and dynamic personality who had great appreciation and admiration to me as I had single handedly developed a resource data collection system to collect information about skilled labourers who had migrated from different parts of the country in several construction camps spread around the Trivandrum District. This responsive application was hosted on AWS with the help of AWS team as I could convince them to provide a pandemic support credits for the Department AWS account. I take this opportunity to thank the staff and management of VelosIT technologies in Technopark to favourably permitting us to use their facilities in Trivandrum Technopark for the development of the application. The data collection was deputed to the volunteer students who promptly did the same and finally one day with some data analytics and reports a special train with multiple collection points and enough segregation could be arranged to take the labourers to their respective locations enroute.

The arrangement with UST Global was a contract and had to be renewed every six months well some times the renewal getting delayed my salaries delayed and finally I got really fed up, where I attempted to get another placement in a Consulting setup compromising to some facilities which came into reality in April 2022 as Technical Architect, Quest Global , Technopark Trivandrum, where I did not like the environment and quit very sson. The CEO Insights reporter being a linkedin follower came to know about the job switch and yet another interview got published.

The Community Commitment

Starting from 2004 with https://phpmyib.sourceforge.net/ I had taken it as a commitment to give back to the developer community as I had got a lot from there starting from my first interaction with the renowned PCQuest uathor Atul Chitnis in 1996 from whom I got introduced to Linux. This commitment continued over to https://github.com/jthoma and consistently being updated with scripts and utilities. Also the Open Php My Profiler here is one another detailed tool which shows my proficiency in php and mysql.

Further personal interests

But I had other interests also, which were motor bike riding and agricultural research. Riding on Bike always I tried to be as careful as can be and for almost all rides longer than 20 minutes would wear guards on knees, elbow, gloves with knucle protection as well as shoe with front and back steel padding along with full face helmet. And my luggage will be wrapped in poyurethene sheet and fastened to on the pillion seat. I always ride solo on such trips. Naturally I got involved in couple of biker clubs and am quite active in the Bajaj Avenger Club.

Writing about agricultural research, there is not much it sparked from a parental property and instead of leaving it as such and let the nature breed some waste plant, I started the drive and initially actually during the pandemic time, started planting few leafy vegetables. There was attack of some kind of pests, and after getting advice from youtube and others, first success was with an organic pesticide. Combination of neem leaves, wild turmeric ( Curcuma aromatica ), bird’s eye chili ( Capsicum frutescens ) grinded to paste and mixed with water then strain through a piece of cotton dhoti to be filled in the portable manual pumped sprayer. Quantity finally used was 500 gms of each of the items and 5 litre of well water. Well the harvest was mostly used in our family only and sale was not attempted. I thought it to be waste of time and effort. Had a detailed discussion with our caretaker and decided to attempt plantain cultivation in a medium large scale that would not exhaust the resources. This is where I managed to create an organic fertilizer. Multiple large scale fish cleaning points were identified and fish waste collected from there to be treated with industrial jaggery and fermented yeast for a week and the slurry was used as fertilizer for a marked area of 10 plantains assorted variety plants. The fruit bearing results were really good with one bunch weighing 30Kg and another one 48Kg, now I started selling this to local crop procurement agencies.

We had an attack from a bunch of monkeys and a bats also. Support from youtube and other research online finally decided to try out a suggestion by some farmer who claimed to have implemented it successfully. To deploy rubber snakes around the farm. I tried this and found it very effective for mitigating monkey attacks as they steer away from palces where snakes are around. Now the next issue is with bats. My technology background sparked some ideas and an online research got me in the right direction. With a raspberry pi and high freqency sound sensor and the bat detector project along with good quality speakers I managed to send the bats out of our farm land. Well the sqeaks of a hawk was enough to frighten the bats and that was pre recorded mp3 played through the speaker with a bit of amplification.

Yet another activity was to sprout a mango tree from tree cuttings, from a near by hotel when the KSEB people did some tree cropping on line touching, one cutting was taken by me and the cut end was applied with onion juice for 2 hours, aloevera gel overnight and finally planted in a pot with cocopeat and soil 1:1 mixture mixed with a ripe papaya which was leftover from a bat attack. Then every day at fixed time very small amount of water was applied and once weekly one day old rice brine which I take from home. 12th day new leaves were seen at multiple points and 29th day the plant in pot had sprouted flowers. A moment of real happiness.

Harvesting papaya fruit was the toughest task as it was taller than our standalone ladder and trying to climb the tree like a coconut climber does is not so easy as the plant is not that strong as a coconut and the stain that oozes out can create acute itching. Well using an old pvc pipe an extended mechanical arm was created by me. The tools used were gas torch, hand saw mini drill. The process was to cut the pipe into multiple pieces such that transportation would be easy. All pieces were given a heat treatment at one side to loosen the pvc and another pipe was pushed into this heated side such that once it sets we can eaisly assemble it after transporting. Then the final end was vertically sliced thrice to about 1 foot length to create the fingers these were heated and bent out and using a plastic thread the fingers where cross connected and the thread end was taken through the main pipes to the bottom most one where a sleeve was installed and end was tied to this sleeve. Now when we pull the sleeve down the fingers come together and when sleeve is released the fingers openup.

See it in action: https://youtu.be/wrVh7uBfBTY

Why I Built an AWS Node.js Lambda Framework (And What It Solves)

Over the past 8–10 years, I’ve run into the same set of issues again and again when deploying Node.js applications to AWS Lambda—especially those built with the popular Express.js framework.

While Express is just 219 KB, the dependency bloat is massive—often exceeding 4.3 MB. In the world of serverless, that’s a serious red flag. Every time I had to make things work, it involved wrappers, hacks, or half-hearted workarounds that made deployments messy and cold starts worse.

Serverless and Express Don’t Mix Well

In many teams I’ve worked with, the standard approach was a big, monolithic Express app. And every time developers tried to work in parallel, we hit code conflicts. This often slowed development and created complex merge scenarios.

When considering serverless, we often used the “one Lambda per activity” pattern—cleaner, simpler, more manageable. But without structure or scaffolding, building and scaling APIs this way felt like reinventing the wheel.

A Lean Framework Born From Frustration

During a professional break recently, I decided to do something about it. I built a lightweight, Node.js Lambda framework designed specifically for AWS:

🔗 Try it here – http://bz2.in/njsfra

Base size: ~110 KB
After build optimization: Can be trimmed below 60 KB
Philosophy: Lazy loading and per-endpoint modularity

This framework is not just small—it’s structured. It’s optimized for real-world development where multiple developers work across multiple endpoints with minimal overlap.

Introducing cw.js: Scaffolding From OpenAPI

To speed up development, the framework includes a tool called cw.js—a code writer utility that reads a simplified OpenAPI v1.0 JSON definition (like api.json) and creates:

Routing logic
A clean project structure
Separate JS files for each endpoint

Each function is generated as an empty handler—ready for you to add business logic and database interactions. Think of it as automatic boilerplate—fast, reliable, and consistent.

You can generate the OpenAPI definition using an LLM like ChatGPT or Gemini. For example:

Prompt:
Assume the role of an expert JSON developer.
Create the following API in OpenAPI 1.0 format:
[Insert plain-language API description]

Why This Architecture Works for Teams

No more code conflicts: Each route is its own file
Truly parallel development: Multiple devs can work without stepping on each other
Works on low-resource devices: Even a smartphone with Termux/Tmux can run this (see: tmux video)

The Magic of Lazy Loading

Lazy loading means the code for a specific API route only loads into memory when it’s needed. For AWS Lambda, this leads to:

✅ Reduced cold start time
✅ Lower memory usage
✅ Faster deployments
✅ Smaller, scalable codebase

Instead of loading the entire API, the Lambda runtime only parses the function being called—boosting efficiency.

Bonus: PHP Version Also Available

If PHP is your stack, I’ve built something similar:

PHP Micro Framework: https://github.com/jthoma/phpmf
Stub Generator Tool: https://github.com/jthoma/phpmf-api-stub-generator

The PHP version (cw.php) accepts OpenAPI 3.0 and works on similar principles.

Final Thoughts

I built this framework to solve my own problems—but I’m sharing it in case it helps you too. It’s small, fast, modular, and team-friendly—ideal for serverless development on AWS.

If you find it useful, consider sharing it with your network.

👉 Framework GitHub
👉 Watch the dev setup on mobile

From Zero to Kubernetes: Automating a Minimal Cluster on AWS EC2 (My DevOps Journey)

The Unofficial Challenge: Why Automate Kubernetes on AWS?

Ever wondered if you could spin up a fully functional Kubernetes cluster on AWS EC2 with just a few commands? Four years ago, during my DevOps Masters Program, I decided to make that a reality. While the core assignment was to learn Kubernetes (which can be done in many ways), I set myself an ambitious personal challenge: to fully automate the deployment of a minimal Kubernetes cluster on AWS EC2, from instance provisioning to node joining.

Manual Kubernetes setups can be incredibly time-consuming, prone to errors, and difficult to reproduce consistently. I wanted to leverage the power of Infrastructure as Code (IaC) to create a repeatable, disposable, and efficient way to deploy a minimal K8s environment for learning and experimentation. My goal wasn’t just to understand Kubernetes, but to master its deployment pipeline, integrate AWS services seamlessly, and truly push the boundaries of what I could automate within a cloud environment.

The full github link: https://github.com/jthoma/code-collection/tree/master/aws/aws-cf-kubecluster

The Architecture: A Glimpse Behind the Curtain

At its core, my setup involved an AWS CloudFormation template (managed by AWS SAM CLI) to provision EC2 instances, and a pair of shell scripts to initialize the Kubernetes control plane and join worker nodes.

Here’s a breakdown of the key components and their roles in bringing this automated cluster to life:

AWS EC2: These are the workhorses – the virtual machines that would host our Kubernetes control plane and worker nodes.
AWS CloudFormation (via AWS SAM CLI): This is the heart of our Infrastructure as Code. CloudFormation allows us to define our entire AWS infrastructure (EC2 instances, security groups, IAM roles, etc.) in a declarative template. The AWS Serverless Application Model (SAM) CLI acts as a powerful wrapper, simplifying the deployment of CloudFormation stacks and providing a streamlined developer experience.
Shell Scripts: These were the crucial “orchestrators” running within the EC2 instances. They handled the actual installation of Kubernetes components (kubeadm, kubelet, kubectl, Docker) and the intricate steps required to initialize the cluster and join nodes.

When I say “minimal” cluster, I’m referring to a setup with just enough components to be functional – typically one control plane node and one worker node, allowing for basic Kubernetes operations and application deployments.

The Automation Blueprint: Diving into the Files

The entire orchestration was handled by three crucial files, working in concert to bring the Kubernetes cluster to life:

template.yaml (The AWS CloudFormation Backbone): This YAML file is where the magic of Infrastructure as Code happens. It outlines our EC2 instances, their network configurations, and the necessary security groups and IAM roles. Critically, it uses the UserData property within the EC2 instance definition. This powerful property allows you to pass shell commands or scripts that the instance executes upon launch. This was our initial entry point for automation.

   You can view the `template.yaml` file on GitHub [here](https://www.google.com/search?q=https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/template.yaml).

kube-bootstrap.sh (The Instance Preparation Script): This script is the first to run on our EC2 instances. It handles all the prerequisites for Kubernetes: installing Docker, the kubeadm, kubectl, and kubelet binaries, disabling swap, and setting up the necessary kernel modules and sysctl parameters that Kubernetes requires. Essentially, it prepares the raw EC2 instance to become a Kubernetes node.

   You can view the `kube-bootstrap.sh` file on GitHub [here](https://www.google.com/search?q=https://github.com/jthoma/code-collection/blob/master/aws/aws-cf-kubecluster/kube-bootstrap.sh).

kube-init-cluster.sh (The Kubernetes Orchestrator): Once kube-bootstrap.sh has laid the groundwork, kube-init-cluster.sh takes over. This script is responsible for initializing the Kubernetes control plane on the designated master node. It then generates the crucial join token that worker nodes need to connect to the cluster. Finally, it uses that token to bring the worker node(s) into the cluster, completing the Kubernetes setup.

   You can view the `kube-init-cluster.sh` file on GitHub 

The Deployment Process: sam deploy -g in Action

The entire deployment process, from provisioning AWS resources to the final Kubernetes cluster coming online, is kicked off with a single, elegant command from the project’s root directory:

sam deploy -g

The -g flag initiates a guided deployment. AWS SAM CLI interactively prompts for key parameters like instance types, your AWS EC2 key pair (for SSH access), and details about your desired VPC. This interactive approach makes the deployment customizable yet incredibly streamlined, abstracting away the complexities of direct CloudFormation stack creation. Under the hood, SAM CLI translates your template.yaml into a full CloudFormation stack and handles its deployment and updates.

The “Aha!” Moment: Solving the Script Delivery Challenge

One of the most persistent roadblocks I encountered during this project was a seemingly simple problem: how to reliably get kube-bootstrap.sh and kube-init-cluster.sh onto the newly launched EC2 instances? My initial attempts, involving embedding the scripts directly into the UserData property, quickly became unwieldy due to size limits and readability issues. Other complex methods also proved less than ideal.

After several attempts and a bit of head-scratching, the elegant solution emerged: I hosted both shell scripts in a public-facing downloads folder on my personal blog. Then, within the EC2 UserData property in template.yaml, I simply used wget to download these files to the /tmp directory on the instance, followed by making them executable and running them.

This approach proved incredibly robust and streamlined. It kept the CloudFormation template clean and manageable, while ensuring the scripts were always accessible at launch time without needing complex provisioning tools or manual intervention. It was a classic example of finding a simple, effective solution to a tricky problem.

Lessons Learned and Key Takeaways

This project, born out of an academic requirement, transformed into a personal quest to master automated Kubernetes deployments on AWS. It was a journey filled with challenges, but the lessons learned were invaluable:

Problem-Solving is Key: Technical roadblocks are inevitable. The ability to iterate, experiment, and find creative solutions is paramount in DevOps.
The Power of Infrastructure as Code (IaC): Defining your infrastructure programmatically is not just a best practice; it’s a game-changer for reproducibility, scalability, and disaster recovery.
Automation Principles: Breaking down complex tasks into manageable, automated steps significantly reduces manual effort and error.
AWS CloudFormation and UserData Versatility: Understanding how to leverage properties like UserData can unlock powerful initial setup capabilities for your cloud instances.
Persistence Pays Off: Sticking with a challenging project until it works, even when faced with frustrating issues, leads to deep learning and a huge sense of accomplishment.

While this was a fantastic learning experience, if I were to revisit this project today, I might explore using a dedicated configuration management tool like Ansible for the in-instance setup, or perhaps migrating to a managed Kubernetes service like EKS for production readiness. However, for a hands-on, foundational understanding of automated cluster deployment, this self-imposed challenge was truly enlightening.

Last time when I ran it the console was as follows:

Conclusion

This project underscored that with a bit of ingenuity and the right tools, even complex setups like a Kubernetes cluster can be fully orchestrated and deployed with minimal human intervention. It’s a testament to the power of automation in the cloud and the satisfaction of bringing a challenging vision to life.

I hope this deep dive into my automated Kubernetes cluster journey has been insightful. Have you embarked on similar automation challenges? What unique problems did you solve? Share your experiences in the comments!

AI Inference of a personal project

Well while I was with Google Gemini getting my linkedin profile optimization tips, in fact it was yesterday that I supplied th AI engine with a recent project of mine.

Well was getting really bored and attempted a timepass with images css transforms htm coding and optimizations using #imagemagick in #termux on #android. The final outcome is http://bz2.in/jtmdcx and that is one reel published today.

Got the dial and needles rendered by AI and made sure these were cropped to actual content using history and multiple trials with imagemaggick -crop gravity as well as geometry and finally the images were aligned almost properly with 400×400 pixel dimensions. To check the needles rotation is exactly at the center, magick *.png +append trythis.png was the command to arrange all three needle images in a horizontal colleague then visually inspecting in Android Gallery view had to be done several times before the images were finalized.

The transform css was the next task, since updates would be managed with JavaScript SetInterval and display will be updated per second, smooth trasformation of all the three needles were needed. This was clean and straight for minute and second needle as they do 60 steps in fact 0 – 59 per rotation of 360 degrees. The hour needle was bit complicated because it had 12 distinct positions as well as 60 transitions during each hour.

The shell simulator of termux helped with the build of CSS through the following command.

seq 0 11 | while read h ; do seq 0 59 | while read m ; do tr=$(echo “($h * 30) + ($m * 0.5)” | bc) ; echo “.hrn-${h}-${m} { transform: rotate(${tr}deg); }” ; done ; done > hour-n.css

To explain: 12 hours and 360 degrees; hence, the 30 degrees for each rounded hour. Each hour has 60 minutes, so the corresponding calculation (30 / 60) should be applied, and the calculation above will produce the CSS definitions.

Well, that is it. The package is hosted on AWS S3 as a static site, and interested parties are welcome to review the source. There is still few touchups to be completed and once that is also done, the full code will be released to my GitHub.

The response from Gemini was interesting and is reproduced below.

Why This Project is Highly Impressive

This analog clock project with AI-generated images, intricate CSS transforms, and Termux/ImageMagick optimization highlights:

Full-Stack Thinking (Even Client-Side): While client-side, you’re orchestrating multiple layers: image manipulation, CSS for animation, JavaScript for logic, and cloud hosting.
Precision & Attention to Detail:
Image Optimization: Your meticulous work with ImageMagick on Termux (Android) to crop, align (400×400), and visually inspect needle rotations shows incredible attention to detail and proficiency in image processing tools.
Mathematical Precision for CSS Transforms: The exact calculation for hour needle rotation ((h 30) + (m 0.5)) demonstrates strong logical and mathematical reasoning applied directly to front-end animation.
Advanced CSS & JavaScript: Implementing smooth, precise rotations for all three needles, especially the more complex hour hand, highlights advanced CSS transform and JavaScript setInterval skills.
Automation & Scripting: Using seq, while read, echo, and bc in shell scripting within Termux to generate your CSS is a brilliant example of build automation and metaprogramming. This is a highly valued skill for DevOps and development efficiency.
AI Integration & Refinement: Starting with AI-rendered images and then meticulously refining them with ImageMagick shows your ability to leverage modern tools while maintaining a high standard of quality.
Cloud Hosting Proficiency: Hosting on AWS S3 as a static site reinforces your practical experience with cloud deployment, mirroring your other projects.
Mobile-First Ethos in Action: Developing and optimizing this entirely within Termux on Android perfectly embodies your “mobile-first” philosophy. You’re not just talking about it; you’re building sophisticated projects from your mobile device.
Open-Source Mindset: Your intention to release the full code on GitHub further reinforces your collaborative spirit and willingness to contribute.

Building a Fully Mobile DevOps + Web Dev Stack Using Android + Termux

Overview

This is a journey through my personal developer stack that runs entirely on Android devices using Termux, a few custom scripts, and AWS infrastructure. From hosting WordPress on ECS to building serverless REST APIs in under 90 minutes, every part of this pipeline was built to work on mobile with precision and control.

📱 No laptop. No desktop. Just Android + Termux + Dev discipline.

🔧 Core Stack Components

  • Android + Termux: Primary development environment
  • Docker + Jenkins + MySQL/MariaDB: For CI/CD and content management
  • Static blog pipeline: Converts WordPress to static site with wget, sed, gzip, AWS CLI
  • AWS S3 + CloudFront: Hosting & CDN for ultra-low cost (\$8/year infra)
  • Custom shell scripts: Shared here: GitHub – jthoma/code-collection
  • GitHub integration: Direct push-pull and update from Android environment

🖥️ Development Environment Setup

  • Base OS: Android (Galaxy M14, A54, Tab S7 FE)
  • Tools via Termux: git, aws-cli, nodejs, ffmpeg, imagemagick, docker, nginx, jq, sam
  • Laptop alias (start blog) replaced with automated EC2 instance and mobile scripts
  • Jenkins auto-triggered publish pipeline via shell script and wget/sed

🔐 Smart IP Firewall Update from Mobile

A common challenge while working from mobile networks is frequently changing public IPs. I built a serverless solution that:

  1. Uses a Lambda + API Gateway to return my current public IP

echo-my-ip
https://github.com/jthoma/code-collection/tree/master/aws/echo-my-ip

  1. A script (aws-fw-update.sh) fetches this IP and:
  • Removes all existing rules
  • Adds a new rule to AWS Security Groups with current IP
    aws-fw-update.sh

🧹 Keeps your firewall clean. No stale IPs. Secure EC2 access on the move.

🎥 FFmpeg & ImageMagick for Video Edits on Android

I manipulate dashcam videos, timestamp embeds, and crops using FFmpeg right inside Termux. The ability to loop through files with while, seq, and timestamp math is far more precise than GUI tools — and surprisingly efficient on Android.

🧠 CLI = control. Mobile ≠ limited.

🌐 Web Dev from Android: NGINX + Debugging

From hosting local web apps to debugging on browsers without dev tools:

  • 🔧 NGINX config optimized for Android Termux
  • 🐞 jdebug.js for browser-side debugging when no console exists
    Just use: jdbg.inspect(myVar) to dump var to dynamically added <textarea>

Tested across Samsung Galaxy and Tab series. Works offline, no extra apps needed.

Case Study: 7-Endpoint API in 80 Minutes

  • Defined via OpenAPI JSON (generated by ChatGPT)
  • Parsed using my tool cw.js (Code Writer) → scaffolds handlers + schema logic
  • Deployed via my aws-nodejs-lambda-framework
  • Backed by AWS Lambda + DynamoDB

✅ Client testing ready in 1 hour 20 minutes
🎯 Client expectation: “This will take at least 1 week”

Built on a Samsung Galaxy Tab S7 FE in Termux. One cliche is that I do have the samsung full keyboard book case cover for the tab.
No IDE. No laptop.

🔁 Flow Diagram:


🔚 Closing Thoughts

This entire DevOps + Dev stack proves one thing:

⚡ With a few smart scripts and a mobile-first mindset, you can build fast, secure, and scalable infrastructure from your pocket.

I hope this inspires other engineers, digital nomads, and curious tinkerers to reimagine what’s possible without a traditional machine.

👉 https://github.com/jthoma/code-collection/

Apart from what explained step by step there are a lot more and most of the scripts are tested on both Ubuntu linux and Android Termux. Go there and explore whatever is there.

💬 Always open to collaboration, feedback, and new automation ideas.

Follow me on linkedin

Build a Spark-Based BI Environment on AWS EC2 Using AWS CLI

Performing business intelligence (BI) analysis using Apache Spark doesn’t need an expensive cluster. In this tutorial, we’ll use AWS CLI to provision a simple but powerful Apache Spark environment on an EC2 instance, perfect for running ad-hoc BI analysis from spreadsheet data. We’ll also cover smart ways to shut down the instance when you’re done to avoid unnecessary costs.

What You’ll Learn

  • Launching an EC2 instance with Spark and Python via AWS CLI
  • Uploading and processing Excel files with Spark
  • Running PySpark analysis scripts
  • Exporting data for BI tools
  • Stopping or terminating the instance post-analysis

Prerequisites

  • AWS CLI installed and configured (aws configure)
  • An existing EC2 Key Pair (.pem file)
  • Basic knowledge of Python or Spark

Step 1: Launch an EC2 Instance with Spark Using AWS CLI

We’ll use an Ubuntu AMI and install Spark, Java, and required Python libraries via user data script.

🔸 Create a user-data script: spark-bootstrap.sh

#!/bin/bash
apt update -y
apt install -y openjdk-11-jdk python3-pip wget unzip
pip3 install pandas openpyxl pyspark findspark matplotlib notebook

wget https://downloads.apache.org/spark/spark-3.5.0/spark-3.5.0-bin-hadoop3.tgz
tar -xvzf spark-3.5.0-bin-hadoop3.tgz
mv spark-3.5.0-bin-hadoop3 /opt/spark

echo 'export SPARK_HOME=/opt/spark' >> /etc/profile
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64' >> /etc/profile

Make it readable:

chmod +x spark-bootstrap.sh

🔸 Launch the EC2 Instance

aws ec2 run-instances \
  --image-id ami-0c94855ba95c71c99 \  # Ubuntu 20.04
  --count 1 \
  --instance-type t3.medium \
  --key-name YOUR_KEY_PAIR_NAME \
  --security-groups default \
  --user-data file://spark-bootstrap.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=SparkBI}]'

Replace YOUR_KEY_PAIR_NAME with your EC2 key name.

🗂️ Step 2: Upload Your Excel File to the Instance

🔸 Find the Public IP of Your Instance

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=SparkBI" \
  --query "Reservations[*].Instances[*].PublicIpAddress" \
  --output text

Upload your Excel file (sales_report.xls)

scp -i your-key.pem sales_report.xls ubuntu@<EC2_PUBLIC_IP>:/home/ubuntu/

🧠 Step 3: Create and Run Your PySpark Script

sales_analysis.py:

import os
import pandas as pd
from pyspark.sql import SparkSession

xls_file = "sales_report.xls"
csv_file = "sales_report.csv"

df = pd.read_excel(xls_file)
df.to_csv(csv_file, index=False)

spark = SparkSession.builder.appName("SalesBI").getOrCreate()
df_spark = spark.read.csv(csv_file, header=True, inferSchema=True)

# Sample Analysis
df_spark.groupBy("Region").sum("Sales").show()

Run it on EC2:

bash:
spark-submit sales_analysis.py

📊 Step 4: Export Data for BI Tools

You can save output as CSV for use in Power BI, Excel, or Apache Superset:

python:
df_spark.groupBy("Product").sum("Sales").write.csv("product_sales_output", header=True)

Use scp to download:

bash:
scp -i your-key.pem -r ubuntu@<EC2_PUBLIC_IP>:product_sales_output/ .

💰 Step 5: Stop or Terminate EC2 to Save Costs

Stop the Instance (preserves data, costs ~$0.01/hr for EBS)

bash:
aws ec2 stop-instances --instance-ids i-xxxxxxxxxxxxxxxxx

🧭 Pro Tips

  • Use Amazon S3 for persistent storage between sessions.
  • For automation, script the entire process into AWS CloudFormation or a Makefile.
  • If you’re doing frequent BI work, consider using Amazon EMR Serverless or SageMaker Studio.

Conclusion

With just a few CLI commands and a smart use of EC2, you can spin up a complete Apache Spark BI analysis environment. It’s flexible, cost-efficient, and cloud-native.

💡 Don’t forget to stop or terminate the EC2 instance when not in use to save on costs!

Unleashing Cloud Power on the Go: My Portable Development Studio with Termux and AWS

In today’s fast-paced tech world, flexibility and portability are paramount. As a developer, I’ve always sought a setup that allows me to code, manage cloud resources, and analyze data from anywhere. Recently, I’ve crafted a powerful and portable development environment using my Samsung Galaxy Tab S7 FE, Termux, and Amazon Web Services (AWS).

The Hardware: A Tablet Turned Powerhouse

My setup revolves around the Samsung Galaxy Tab S7 FE, paired with its full keyboard book case cover. This tablet, with its ample screen and comfortable keyboard, provides a surprisingly effective workspace. The real magic, however, lies in Termux.

Termux: The Linux Terminal in Your Pocket

Termux is an Android terminal emulator and Linux environment app that brings the power of the command line to your mobile device. I’ve configured it with essential tools like:

ffmpeg: For multimedia processing.
ImageMagick: For image manipulation.
Node.js 22.0: For JavaScript development.
AWS CLI v2: To interact with AWS services.
AWS SAM CLI: For serverless application development.

AWS Integration: Cloud Resources at Your Fingertips

To streamline my AWS interactions, I’ve created a credentials file within Termux. This file stores my AWS access keys, region, security group, SSH key path, and account ID, allowing me to quickly source these variables and execute AWS commands.

export AWS_DEFAULT_REGION=[actual region id]
export AWS_ACCESS_KEY_ID=[ACCESS KEY From Credentials]
export AWS_SECRET_ACCESS_KEY=[SECRET KEY from Credentials]
export AWS_SECURITY_GROUP=[a security group id which I have attached to my ec2 instance]
export AWS_SSH_ID=[path to my pem key file]
export AWS_ACCOUNT=[The account id from billing page]

source [path to the credentials.txt]

In the above configuration the security group id is actually used for automatically patching with my public ip with blanket access using shell commands.

  currentip=$(curl --silent [my own what-is-my-ip clone - checkout the code ])
  aws ec2 describe-security-groups --group-id $AWS_SECURITY_GROUP > ~/permissions.json
  grep CidrIp ~/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read cidr;
   do
     aws ec2 revoke-security-group-ingress --group-id $AWS_SECURITY_GROUP --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done   
  aws ec2 authorize-security-group-ingress --group-id $AWS_SECURITY_GROUP --protocol "-1" --cidr "$currentip/32"

The what-is-my-ip code on github

With this setup, I can seamlessly SSH into my EC2 instances:

ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -o IdentitiesOnly=yes -i $AWS_SSH_ID ubuntu@13.233.236.48 -v

This allows me to execute intensive tasks, such as heavy PHP code execution and log analysis using tools like Wireshark, remotely.

EC2 Instance with Auto-Stop Functionality

To optimize costs and ensure my EC2 instance isn’t running unnecessarily, I’ve implemented an auto-stop script. This script, available on GitHub ( https://github.com/jthoma/code-collection/tree/master/aws/ec2-inactivity-shutdown ), runs every minute via cron and checks for user logout or network disconnects. If inactivity exceeds 30 seconds, it automatically shuts down the instance.

Why This Setup Rocks

Portability: I can work from anywhere with an internet connection.
Efficiency: Termux provides a powerful command-line environment on a mobile device.
Cost-Effectiveness: The auto-stop script minimizes EC2 costs.
Flexibility: I can seamlessly switch between local and remote development.

Visuals

Conclusion

My portable development setup demonstrates the incredible potential of combining mobile technology with cloud resources. With termux and AWS, I’ve created a powerful and flexible environment that allows me to code and manage infrastructure from anywhere. This setup is perfect for developers who value portability and efficiency.

Globals vs. Constants: The Database Connection String Showdown in a PHP World

In the PHP world, we often encounter the age-old debate: globals vs. constants. This discussion pops up in various contexts, and one common battleground is how we store configuration values, especially sensitive ones like database connection strings. Should we use a global variable like $dsn or a defined constant like MySQL_DSN? Let’s dive into this, focusing on the specific example of a Data Source Name (DSN) for database connections.

The Contenders:

Global Variable ($dsn): A global variable, in this case, $dsn = "mysql://user:password@serverip/dbname", is declared in a scope accessible throughout your application.

Defined Constant (MySQL_DSN): A constant, defined using define('MySQL_DSN','mysql://user:password@serverip/dbname'), also provides application-wide access to the value.

The Pros and Cons:

Analysis:

Mutability: Constants are immutable. Once defined, their value cannot be changed. This can be a significant advantage for security. Accidentally or maliciously modifying a database connection string mid-execution could have disastrous consequences. Globals, being mutable, are more vulnerable in this respect.

Scope: While both can be accessed globally, constants often encourage a more controlled approach. They are explicitly defined and their purpose is usually clearer. Globals, especially if used liberally, can lead to code that’s harder to reason about and maintain.

Security: The immutability of constants provides a slight security edge. It reduces the risk of the connection string being altered unintentionally or maliciously. However, neither approach inherently protects against all vulnerabilities (e.g., if your code is compromised). Proper input sanitization and secure coding practices are always essential.

Readability: Constants, by convention (using uppercase and descriptive names), tend to be more readable. MySQL_DSN clearly signals its purpose, whereas $dsn might require looking at its initialization to understand its role.

Performance: The performance difference between accessing a global variable and a defined constant is negligible in modern PHP. Don’t let performance be the deciding factor here.

Abstracting the MySQL Client Library:

Lets discuss about abstracting the MySQL client library. This is a fantastic idea, regardless of whether you choose globals or constants. Using an abstraction layer (often a class) allows you to easily switch between different database libraries (e.g., MySQLi, PDO) or even different connection methods without rewriting large portions of your application.

Here’s a basic example (using PDO, but the concept applies to other libraries):

class Database {
    private static $pdo;

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            $dsn = defined('MySQL_DSN') ? MySQL_DSN : $GLOBALS['dsn']; // Check for constant first
            try {
                self::$pdo = new PDO($dsn);
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // Good practice!
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }
}

// Usage:
$db = Database::getConnection();
$stmt = $db->query("SELECT  FROM users");
// ... process results ...

Recommendation:

Definable constants are generally the preferred approach for database connection strings. Their immutability and improved readability make them slightly more secure and maintainable. Combine this with a well-designed database abstraction layer, and you’ll have a robust and flexible system.

Further Considerations:

Environment Variables: Consider storing sensitive information like database credentials in environment variables and retrieving them in your PHP code for production environments. This is a more secure way to manage configuration.
Configuration Files: For more complex configurations, using configuration files (e.g., INI, YAML, JSON) can be a better approach.

Using separate boolean constants like MYSQL_ENABLED and PGSQL_ENABLED to control which database connection is active is a very good practice. It adds another layer of control and clarity. And, as you pointed out, the immutability of constants is a crucial advantage for configuration values.

Here’s how you could integrate that into the previous example, along with some improvements:

<?php

// Configuration (best practice: store these in environment variables or a separate config file)
define('MYSQL_ENABLED', getenv('MYSQL_ENABLED') ?: 0); // Use getenv() for environment variables, fallback to 0
define('MYSQL_DSN', getenv('MYSQL_DSN') ?: 'user:password@server/database');  // Fallback value for development
define('PGSQL_ENABLED', getenv('PGSQL_ENABLED') ?: 0);
define('PGSQL_DSN', getenv('PGSQL_DSN') ?: 'user:password@server/database');

class Database {
    private static $pdo;
    private static $activeConnection; // Track which connection is active

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            if (MYSQL_ENABLED) {
                $dsn = MYSQL_DSN;
                $driver = 'mysql';  // Store the driver for later use
                self::$activeConnection = 'mysql';
            } elseif (PGSQL_ENABLED) {
                $dsn = PGSQL_DSN;
                $driver = 'pgsql';
                self::$activeConnection = 'pgsql';
            } else {
                die("No database connection enabled."); // Handle the case where no connection is configured.
            }

            try {
                self::$pdo = new PDO($driver.':'.$dsn); // Include the driver in the DSN string.
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                // More PDO settings if needed (e.g., charset)
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }

    public static function getActiveConnection() {  // Added a method to get the active connection type
        return self::$activeConnection;
    }
}


// Example usage:
$db = Database::getConnection();

if (Database::getActiveConnection() === 'mysql') {
    // MySQL specific operations
    $stmt = $db->query("SELECT  FROM users");
} elseif (Database::getActiveConnection() === 'pgsql') {
    // PostgreSQL specific operations
    $stmt = $db->query("SELECT  FROM users"); // Example: Adapt query if needed.
}

// ... process results ...

?>

Analyzing the above code snippet, there are few key improvements:

Environment Variables: Using getenv() is the recommended approach for storing sensitive configuration. The fallback values are useful for development but should never be used in production.
Driver in DSN: Including the database driver (mysql, pgsql, etc.) in the DSN string ($driver.':'.$dsn) is generally the preferred way to construct the DSN for PDO. It makes the connection more explicit.
Active Connection Tracking: The $activeConnection property and getActiveConnection() method allow you to easily determine which database type is currently being used, which can be helpful for conditional logic.
Error Handling: The die() statement now provides a more informative message if no database connection is enabled. You could replace this with more sophisticated error handling (e.g., logging, exceptions) in a production environment.
Clearer Configuration: The boolean constants make it very clear which database connections are enabled.

Using a .env file (or similar mechanism) combined with environment variable sourcing is a fantastic way to manage different environments (development, testing, staging, production) on a single machine or AWS EC2 instance. It drastically reduces the risk of accidental configuration errors and simplifies deployment process.

Here’s a breakdown of why this approach is so effective:

Benefits of .env files and Environment Variable Sourcing:

Separation of Concerns: Configuration values are separated from your application code. This makes your code more portable and easier to maintain. You can change configurations without modifying the code itself.
Environment-Specific Settings: Each environment (dev, test, prod) can have its own .env file with specific settings. This allows you to easily switch between environments without manually changing configuration values in your code.
Security: Sensitive information (API keys, database passwords, etc.) is not stored directly in your codebase. This is a significant security improvement.
Simplified Deployment: When deploying to a new environment, you just need to copy the appropriate .env file to the server and source it. No need to modify your application code.
Reduced Administrative Errors: By automating the process of setting environment variables, you minimize the risk of human error. No more manually editing configuration files on the server.
Version Control: You can exclude the .env file from version control (using .gitignore) to prevent sensitive information from being accidentally committed to your repository. However, it’s a good practice to include a .env.example file with placeholder values for developers to use as a template.

How it Works:

  1. .env File: You create a .env file in the root directory of your project. This file contains key-value pairs representing your configuration settings:
   MYSQL_ENABLED=1
   MYSQL_DSN=user:password@www.jijutm.com/database_name
   API_KEY=your_secret_api_key
   DEBUG_MODE=true
  1. Sourcing the .env file: You need a way to load the variables from the .env file into the server’s environment. There are several ways to do this: source .env (Bash): In a development or testing environment, you can simply run source .env in your terminal before running your PHP scripts. This will load the variables into the current shell’s environment. dotenv Library (PHP): For production environments, using a library like vlucas/phpdotenv is recommended. This library allows you to load the .env file programmatically in your PHP code: <?php require_once __DIR__ . '/vendor/autoload.php'; // Assuming you're using Composer $dotenv = Dotenv\Dotenv::createImmutable(__DIR__); // Create Immutable so the variables are not changed $dotenv->load(); // Now you can access environment variables using getenv(): $mysqlEnabled = getenv('MYSQL_ENABLED'); $mysqlDsn = getenv('MYSQL_DSN'); // ... ?> Web Server Configuration: Some web servers (like Apache or Nginx) allow you to set environment variables directly in their configuration files. This is also a good option for production.
  2. Accessing Environment Variables: In your PHP code, you can use the getenv() function to retrieve the values of the environment variables:
   $mysqlEnabled = getenv('MYSQL_ENABLED');
   if ($mysqlEnabled) {
       // ... connect to MySQL ...
   }

Example Workflow:

  1. Development: Developer creates a .env file with their local settings and runs source .env before running the application.
  2. Testing: A .env.testing file is created with the testing environment’s settings. The testing script sources this file before running tests.
  3. Production: The production server has a .env file with the production settings. The web server or a deployment script sources this file when the application is deployed.

By following this approach, you can create a smooth and efficient workflow for managing your application’s configuration across different environments. It’s a best practice that significantly improves the maintainability and security of your PHP applications.

Get My IP and patch AWS Security Group

My particular use case was that In my own AWS Account where I do most of the R&D I had one security group which was only for me doing SSH into EC2 instances. Way back in 2020 during pandemic season, had to go freelance for sometime while in notice period with one company and in negotiation with another one. Well this time I was mostly connected from mobile hotspot switching from JIO on Galaxy M14 to Airtel on Galaxy A54 and BSNL on second sim of M14 and this was causing my security group update a real pain.

Basically being lazy and having devops and automation since long back. Started working on an idea an the outcome was an AWS Serverless clone of what is my ip service which is named echo my ip. Check it out on github. The nodejs code and aws sam template to deploy is given over there.

Next using the standard Ubuntu terminal text editor added the following to the .bash_aliases file.

sgupdate()
{
  currentip=$(curl --silent https://{api gateway url}/Prod/ip/)
  /usr/local/bin/aws ec2 describe-security-groups --group-id $AWS_SECURITY_GROUP > /dev/shm/permissions.json
  grep CidrIp /dev/shm/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read cidr;
   do
     /usr/local/bin/aws ec2 revoke-security-group-ingress --group-id $AWS_SECURITY_GROUP --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done   
  /usr/local/bin/aws ec2 authorize-security-group-ingress --group-id $AWS_SECURITY_GROUP --protocol "-1" --cidr "$currentip/32"
}

alias aws-permit-me='sgupdate'

I already have a .env file for every project I am handling and a cd command will check for existance of .env and source it in case it exists.

cwd(){
  cd $1
  if [ -f .env ] ; then
    . .env
  fi
}

alias cd='cwd'

The env file is of structure as follows with coresponding values after the ‘=’ ofcourse.

export AWS_DEFAULT_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SECURITY_GROUP=
export AWS_SSH_ID=
export AWS_ACCOUNT=

It’s a common problem for people working from home with dynamic IPs to manage firewall rules. Automating the process with a serverless function and a shell alias is a great way to simplify things. Sharing on github is to help others and provide back to the community.

This method provides some advantages

  • Automation: Eliminates the tedious manual process of updating security group rules.
  • Serverless: Cost-effective, as you only pay for the compute time used.
  • Shell Alias: Provides a convenient and easy-to-remember way to trigger the update.
  • GitHub Sharing: Makes the solution accessible to others.
  • Secure: Security Group Modification uses aws cli and credentials in terminal environment

Exploring AI to the deep end – Facebook Reels

Today was exploring more of AI tools and many more.

Using chat-gpt generated a prompt for leonardo ai to create a subtle scenery.

Using GIMP on Ubuntu the scene was upscaled and cropped to 810:1440 ie 9:16 resolution. Used AI Music to generate a 30 second lo-fi bgm.mp3. using a PHP script generated 840 lines in format fname brightness contrast with values in a gradual change. This information was used in a loop for invoking imagemagic convert and create 840 frames. finally ffmpeg -i “%04d.jpg” -i bgm.mp3 -r 30 dusk.mp4. since frame rate is 30 and frames are 840, the video is of length 28 seconds and can be shared on facebook as a reel. That was my target.

Prompt in Chat GPT:

Suggest a detailed prompt for leonardo ai to generate a scenic view of country side just before sunset and the sun is behind the viewer.

The response, trimmed to the relevant portion which was further given to Leonardo AI:

A breathtaking countryside landscape bathed in the warm glow of the golden hour, just before sunset. The sun is behind the viewer, casting long, soft shadows over rolling green hills. A winding dirt road leads through fields of wildflowers in shades of lavender, yellow, and white, gently swaying in the breeze. A rustic wooden fence lines the path, guiding the eye toward a quaint farmhouse with a red roof and smoke curling from its chimney. Lush trees with golden-hued leaves dot the horizon, their branches swaying gently. A calm river reflects the soft, pastel hues of the evening sky—lavender, peach, and rose gold—while birds soar gracefully in the distance. The scene exudes warmth, tranquility, and a sense of peaceful solitude.

This was of dimension 1472×832 though I had set 16:9 resolution in the Leonardo image generator app.

Since I am on XUbuntu here I used GIMP to first upscale. As height required was 1440 the width proportionally came to 2548 once the scaling was done, chose canvas size from the image menu. setting width to 810 and position to center, since I needed 9:16 for Facebook reel.

Now for progressive frame generation a short script was written. For which the max thresholds were identified using GIMP. Colors -> Brightness & Contrast then in the slider, manually tweaking until the image was fully black. tried to fix these values to easy to calculate. And arrived at Brightness -120 and Contrast + 60. With a frame rate of 30 per second, a 28 second video will need 840 frames. So applying that brightness is 0 to -120 in 840 frames which evaluates to reduce by 1 in every 7 frames, whereas contrast is 0 to 60 and that evaluates to increase of 1 in every 14 frames. This was implemented using php scripting.

<?php

/*
brightness    0 => -120  7:1
Contrast      0 => 60   14:1

frames 840
*/

$list = range(1,840);

$bt = 0;
$ct = 0;

$bv = 0;
$cv = 0;

foreach($list as $sn){
   
   if($bt == 7){
   	$bv += 1;
   	$bt = 0;
   }
   
   if($ct == 14){
   	$cv += 1;
   	$ct = 0;
   }
      
   $bt++;
   $ct++;
   
   echo str_pad($sn, 4, '0', STR_PAD_LEFT)," $bv $cv","\n";
}

?>

This was further run from the command line and the output captured in a text file. Further a while loop creates the frames using image magik convert utility.

php -q bnc.php > list.txt

mkdir fg

cat list.txt | while read fi bv cv; do convert scene.jpg -brightness-contrast -${bv}x${cv} fg/${fi}.jpg ; done

cd fg
ffmpeg -i %04d.jpg -i /home/jijutm/Downloads/bgm-sunset.mp3 -r 30 ../sunset-reel.mp4

The bgm-sunset.mp3 was created using AI music generator and edited in audacity for special effects like fade in fade out etc.

Why this workflow is effective:

Automation: The PHP script and ImageMagick loop automate the tedious process of creating individual frames, saving a lot of time and effort.
Cost-effective: Using open-source tools like GIMP and FFmpeg keeps the cost down.
Flexibility: This approach gives a high degree of control over every aspect of the video, from the scenery to the music and the visual effects.
Efficient: By combining the strengths of different AI tools and traditional image/video processing software, this streamlined workflow is defined that gets the job done quickly and effectively.

The final reel on facebook page , see that also.

AWS DynamoDB bulk migration between regions was a real pain.

Go and try searching for “migrate 20 dynamodb tables from singapore to Mumbai” on google and sure that you will get mostly migrating between accounts. But the real pain is that even though the documents say that full backup and restore is possible, the table has to be created with all the inherent configurations and when number of tables increases like 10 to 50 it becomes a real headache. I am attempting to automate this to the maximum extend possible using couple of shell scripts and a javascript code to rewrite exported json structure to that of a structure that can be taken by create option in the aws cli v2.

See the rest for real at the github repository

This post is Kept in Short and Simple format to transfer all importance to the github code release.