Cloud Migration – A Thought Process

Everybody is chasing cloud migration, and most get stuck at one stage or another, unless their product or application still exists only in black and white in some notebooks, or just in wireframes, and has to be built from the ground up. A new application can be designed to take full advantage of the cloud by combining multiple microservices, leaving more time and resources to polish the application into a perfectly usable solution. Here, however, we are considering the migration of existing applications into the cloud. The development language, the database, and the design approach of the whole application should all be weighed when thinking about migration. In short, migration to the cloud has to be considered on a case-by-case basis; there is no storyboard that fits all use cases.

Continue reading “Cloud Migration – A Thought Process”

Architecture in a Serverless Mindset

Consider designing a simple serverless system to process orders within an e-commerce workflow. The following is an architecture for a REST microservice that is simple to implement.

Simple REST Architecture

An order is processed by this e-commerce workflow as follows (a minimal handler sketch follows the list).

  1. Amazon API Gateway handles the incoming API requests and returns the responses.
  2. Lambda contains the business logic to process the calls.
  3. Amazon DynamoDB provides persistent JSON document storage.
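
To make the flow concrete, here is a minimal sketch of what the Lambda handler could look like, in Python with boto3. The `orders` table name and the API Gateway proxy event shape are assumptions for illustration, not details from the diagram.

```python
import json
import uuid

import boto3

# "orders" is an assumed table name for this sketch.
table = boto3.resource("dynamodb").Table("orders")


def handler(event, context):
    """Order handler invoked synchronously by Amazon API Gateway (proxy integration)."""
    order = json.loads(event["body"])     # JSON document sent by the client
    order["orderId"] = str(uuid.uuid4())  # partition key for DynamoDB

    # Persist the document; note boto3 needs numeric attributes as Decimal.
    table.put_item(Item=order)

    # API Gateway waits for this response, up to its 30-second integration timeout.
    return {
        "statusCode": 201,
        "body": json.dumps({"orderId": order["orderId"]}),
    }
```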

Though this is simple to implement, it can cause bottlenecks and failures, resulting in frustrated clients at the web front end. Analyze the flow and see the possible failure points. Amazon API Gateway integrates with AWS Lambda through synchronous invocation and expects AWS Lambda to respond within 30 seconds. As long as that happens, all is well and good. But what if a promo gets shared over social media and a very large number of users pile in with orders? Scaling is built into the AWS services, but the defaults can still reach the throttling limits.
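
Throttling, at least, can be shaped per function. A minimal sketch with boto3, assuming a hypothetical function name, of reserving concurrency so one busy function cannot exhaust the account pool:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap this function's share of the account-wide concurrency pool so one busy
# endpoint cannot starve the rest; "process-orders" is a hypothetical name.
lambda_client.put_function_concurrency(
    FunctionName="process-orders",
    ReservedConcurrentExecutions=100,
)
```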

The configuration of Amazon DynamoDB matters too: capacity settings play a large part. AWS Lambda throttling and concurrency limits can also create failures. Linking large dynamic libraries that need initialization time inflates the cold start, and eventually the latency, of AWS Lambda, which could overrun the HTTP request timeout of Amazon API Gateway. Getting deeper into the system, the business logic could have complications of its own: if one request cannot be processed, the custom code written in AWS Lambda could fail without any trace of the request saved to persistent storage. Considering all these factors, as well as suggestions from veterans in this walk of life, the architecture could be further expanded to something like the below.

Revised Order Processing Architecture

What was revised, and what advantages do the additional components provide? Let’s discuss that now.

  • Order information comes in through an API call over HTTP into Amazon API Gateway
  • AWS Lambda validates the request and pushes it into Amazon Simple Queue Service (SQS)
  • SQS integrates with AWS Lambda asynchronously; automatic retries for failed requests, along with Dead Letter Queues (left out of the illustration), help absorb failures
  • Requests processed by the business logic are stored in DynamoDB (see the sketch after this list)
  • DynamoDB Streams can trigger another AWS Lambda that intimates Customer Support about the order through SNS
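
A rough sketch of the two asynchronous stages, assuming hypothetical resource names supplied through environment variables (the queue and stream wiring are event source mappings, configured outside the code):

```python
import json
import os

import boto3

# Hypothetical resource names, injected via environment variables.
ORDERS_TABLE = os.environ["ORDERS_TABLE"]
SUPPORT_TOPIC_ARN = os.environ["SUPPORT_TOPIC_ARN"]

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")


def process_queue(event, context):
    """Consume validated orders from SQS and persist them to DynamoDB.

    Lambda polls the queue in batches; failed batches are retried
    automatically and can be routed to a Dead Letter Queue.
    """
    table = dynamodb.Table(ORDERS_TABLE)
    for record in event["Records"]:
        order = json.loads(record["body"])
        table.put_item(Item=order)


def notify_support(event, context):
    """Triggered by DynamoDB Streams; intimate Customer Support through SNS."""
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            sns.publish(
                TopicArn=SUPPORT_TOPIC_ARN,
                Subject="New order received",
                Message=json.dumps(record["dynamodb"]["NewImage"]),
            )
```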

Digging more into the illustrations and explanations, there is more to be done to make this a full production-ready blueprint; let’s leave those thoughts to upcoming serverless enthusiasts.

Conclusion

I strongly believe that I have been loyal to the core thoughts of being in a Serverless Mindset. Further cost optimization and scaling can be considered with Savings Plans, AWS Lambda provisioned concurrency, Amazon DynamoDB on-demand capacity, and by making sure to optimize the business logic and reduce latency.
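
As a sketch of two of those knobs through boto3 (function, alias, and table names are hypothetical):

```python
import boto3

# Pre-initialize execution environments to soften cold starts; provisioned
# concurrency applies to a published version or alias ("live" here).
boto3.client("lambda").put_provisioned_concurrency_config(
    FunctionName="process-orders",
    Qualifier="live",
    ProvisionedConcurrentExecutions=10,
)

# Let the table scale with traffic spikes instead of fixed provisioned capacity.
boto3.client("dynamodb").update_table(
    TableName="orders",
    BillingMode="PAY_PER_REQUEST",
)
```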

Rearchitecting an Old Solution

It was in 2016 that a friend approached me with a requirement for a solution. They were receiving high-resolution video files on an FTP server maintained by the media supplier. They had created an in-house, locally hosted solution that let the news operators preview the video files and decide where to attach them. They were starting to spread their news operations desk across multiple cities and wanted to migrate the solution to the cloud, which they did promptly by lift and shift: the whole solution was reconfigured on a large EC2 instance with custom scripts that automatically checked the FTP location and copied any new media to local folders. When I was approached, they were experiencing sluggish streaming from the hosted EC2 instance, as the media files were being accessed from different cities at the same time. Also, the full high-definition videos had to be downloaded for a preview. They wanted to optimize bandwidth utilization and improve the operators’ response times.

Continue reading “Rearchitecting an Old Solution”

AWS for Software Testing Professionals

Software testing professionals should know about the services and facilities that AWS provides for automating testing and integrating quality control into continuous integration pipelines. This is where QA/QC has to work hand in hand with DevOps. Though it sounds complicated and scary, knowledge of a few key items makes it wonderful and easy. Let us dig into those facilities and suggested practices.

  • Amazon EC2
  • Amazon CloudWatch
  • Amazon SNS
  • Amazon Inspector
  • AWS Device Farm
  • AWS Cloud9
  • Script Suites by Third-party Vendors

Continue reading “AWS for Software Testing Professionals”

Complete Managed Development Environment on AWS

Amazon CodeCatalyst is a unified software development service. It was only a few days back that I suggested you Run your Development Environment on Cloud, and, as though our dear fellows at AWS had heard my thoughts, the preview of Amazon CodeCatalyst was announced two days back as of this post.

Going through the explanation and blog post, the features are really intriguing and exciting. Well, I did give the preview a run-through, and I found that this could change the way we work. At least it did change the way I work, though not for the full-time job, as that would violate compliance requirements. Mostly I would use it in my leisure time for my commitments to FOSS and my GitHub presence.

Project templates, or blueprints as they define the term, do help in fast-tracking the initial development phase and creating a boilerplate to start working from. An on-demand development environment hosted on the AWS cloud; automated CI/CD pipelines with a multitude of options and drag-and-drop building; the browser-based IDE Cloud9 with terminal access to a development instance running Amazon Linux 2 (which is based on CentOS); and inviting collaborators across the globe to inspect your code with just a few clicks: these are just a few of the facilities of this unified development environment as a service.

I am still very much excited to dig into this service, and I will go further into it, maybe coming out with something more like a session with the awsugtvm very soon, as time and health permit. (Last month I was bedridden after a bike accident involving a stray dog.)

Run your Development Environment on Cloud

Changing scenarios and demanding environments, along with the rising CAPEX of hardware and constant upgrade requests, demand a much more robust and simpler environment that follows the OPEX model and even facilitates the more productive possibility of WFX. What if we had a browser-based IDE with collaboration and team chat, support for almost all leading languages, managed revision control, practically unlimited storage, and a build, test automation, and deploy pipeline, all on a pay-as-you-go model?

The combination that I wanted to suggest is a set of services from AWS.

  • Cloud9 IDE
  • AWS CodeCommit (Managed Git)
  • AWS CodePipeline
  • AWS CodeBuild
  • Amazon Elastic File System
  • AWS Device Farm (for hybrid application testing)

The above list is a minimal environment without any bloat, and it will work efficiently from any entry-level or decent smartphone, even one five years old. With the assistance of an AWS DevOps professional, you could integrate Active Directory authentication and group permissions, along with more cost-effective storage, and further infrastructure-as-code snippets could be used with the pipeline, not to forget manual and conditional stages. Additional features such as pre-commit validations for lint checking, or software composition analysis for tracking and documenting licensing and legal standards, could be added to the pipeline or the version control. We will see each of the services in detail, extracted from their respective pages.
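
As a minimal sketch of bootstrapping the first two pieces with boto3 (repository and environment names are hypothetical; the Cloud9 image alias is one of the documented values):

```python
import boto3

codecommit = boto3.client("codecommit")
cloud9 = boto3.client("cloud9")

# Managed Git repository for the project.
repo = codecommit.create_repository(
    repositoryName="my-project",
    repositoryDescription="Project developed entirely on AWS",
)

# Browser-based IDE on a small instance that stops itself when idle,
# keeping the pay-as-you-go bill minimal.
env = cloud9.create_environment_ec2(
    name="my-project-ide",
    instanceType="t3.small",
    imageId="amazonlinux-2-x86_64",
    automaticStopTimeMinutes=30,
)

print("Clone URL:", repo["repositoryMetadata"]["cloneUrlHttp"])
print("Cloud9 environment:", env["environmentId"])
```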

Continue reading “Run your Development Environment on Cloud”

CloudFront with multiple origins

A while ago I had bragged about how this site is published: heavily customized WordPress deployed to S3. I had, though, left out some of the ingenious parts, which I would like to explain in this article. The main aspects explained are the convergence of multiple origins behind CloudFront and the configuration of behaviours for different cache settings.
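
A rough sketch of such a distribution through boto3, assuming a static S3 origin and a dynamic custom origin (the domain names are hypothetical; the two cache policy IDs are the AWS-managed CachingOptimized and CachingDisabled policies):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# AWS-managed cache policies (public, account-independent IDs).
CACHING_OPTIMIZED = "658327ea-f89d-4fab-be63-d61aebde12e2"
CACHING_DISABLED = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Static WordPress export plus a dynamic origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 2,
            "Items": [
                {   # Static site exported from WordPress (hypothetical bucket).
                    "Id": "s3-static",
                    "DomainName": "example-site.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                },
                {   # Dynamic origin behind its own domain (hypothetical).
                    "Id": "api-dynamic",
                    "DomainName": "api.example.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                },
            ],
        },
        # Anything not matched below falls through to the static S3 origin,
        # cached aggressively.
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-static",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_OPTIMIZED,
        },
        # Requests under /api/* go to the dynamic origin with caching disabled.
        "CacheBehaviors": {
            "Quantity": 1,
            "Items": [
                {
                    "PathPattern": "/api/*",
                    "TargetOriginId": "api-dynamic",
                    "ViewerProtocolPolicy": "redirect-to-https",
                    "CachePolicyId": CACHING_DISABLED,
                }
            ],
        },
    }
)
```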

Continue reading “CloudFront with multiple origins”

php-mf in AWS Lambda running serverless

A bit outdated, though: this sample implementation was committed to my GitHub repository along with examples.

Nothing big to explain: php-mf is a routing framework where the routes are defined in index.php or any included files. The normal PHP “include” statement can be used, or the MF directive MF::addon can be used to pull in further routing. All of this could be packaged into a Lambda. It uses a couple of layers: one is the public PHP 7.3 layer, and the other is the AWS PHP SDK, which was built and published by me. These are available only in the ap-south-1 region, as far as I know. So if you need them in a different region, please make sure you deploy the layers correctly before attempting to deploy this module.

Publish Lambda Layers across Regions

This could even be classified as re:Inventing the wheel, but it was a quick hack workaround that I found in the early stages, when we required a set of Node.js libraries and custom modules across multiple regions; in fact, I needed this in only 5 regions. Since these libraries and modules could have frequent updates, I wanted the latest package to be updated and published as a new layer version in each region, and the concerned developers notified about the new layer version as well as the changelog. To make a long story short, I was familiar with Subversion at the time, and the project code was committed to an SVN repository. That may have biased me towards the following solution in that time period; now my preference would be either “Multi-Region Deployment” using CloudFormation or “CodeCommit and CodePipeline with SNS” instead of S3 triggers.
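
The core loop is small. A minimal sketch with boto3, assuming the packaged zip sits locally and that the layer name, topic name, and region list are hypothetical stand-ins:

```python
import boto3

REGIONS = ["ap-south-1", "us-east-1", "eu-west-1", "ap-southeast-1", "us-west-2"]
# Hypothetical names for the layer, the notification topic, and the changelog.
LAYER_NAME = "shared-node-libs"
TOPIC_NAME = "layer-updates"
CHANGELOG = "Bumped custom modules; see repository log for details."

with open("layer.zip", "rb") as fh:
    package = fh.read()

for region in REGIONS:
    lambda_client = boto3.client("lambda", region_name=region)
    sns = boto3.client("sns", region_name=region)

    # Publish the new layer version in this region.
    version = lambda_client.publish_layer_version(
        LayerName=LAYER_NAME,
        Content={"ZipFile": package},
        CompatibleRuntimes=["nodejs18.x"],
        Description=CHANGELOG,
    )

    # Notify the developers subscribed in this region; create_topic is
    # idempotent and returns the existing topic's ARN if it already exists.
    topic_arn = sns.create_topic(Name=TOPIC_NAME)["TopicArn"]
    sns.publish(
        TopicArn=topic_arn,
        Subject=f"New layer version in {region}",
        Message=f"{LAYER_NAME} version {version['Version']}\n\n{CHANGELOG}",
    )
```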

The architecture is simple and was deployed manually at that time. Sorry to say, I no longer have access to that AWS account: it was a discarded community project, and the account is closed now. I thought about posting the idea here since there was a discussion, or rather a query, about this in a community forum.

Continue reading “Publish Lambda Layers across Regions”