Cloud Migration – A Thought Process

Everybody is chasing cloud migration, and everybody gets stuck at one stage or another, unless their product or application is still in black and white on some notebook, or just in wireframes, and has to be built from the ground up. If you are building a new application, it can be designed to take full advantage of the cloud by combining multiple microservices, leaving more time and resources to polish the application into a perfectly usable solution. Here, however, we are considering the migration of existing applications into the cloud. The development language, the database, and the design approach of the whole application should all be weighed when thinking about migration. In other words, migration to the cloud has to be considered on a case-by-case basis; there is no storyboard that fits all use cases.

Continue reading “Cloud Migration – A Thought Process”

Architecture in a Serverless Mindset

Consider designing a simple serverless system to process orders within an e-commerce workflow. This is an architecture for a REST microservice that is simple to implement.

Simple Rest Architecture

An order is processed by this e-commerce workflow as follows (a minimal handler sketch follows the list).

  1. Amazon API Gateway handles requests and responses to those API calls.
  2. Lambda contains the business logic to process the calls.
  3. Amazon DynamoDB provides persistent JSON document storage.
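
The original post does not include code, but a minimal sketch of the Lambda handler in this simple flow might look like the following, assuming a Python runtime and a hypothetical `orders` table:

```python
import json
import uuid

import boto3

# "orders" is a hypothetical table name; the post does not name its resources.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")


def handler(event, context):
    """Synchronous order handler invoked by Amazon API Gateway."""
    order = json.loads(event["body"])
    order["orderId"] = str(uuid.uuid4())

    # Persist the order document; DynamoDB stores the JSON as a native item.
    table.put_item(Item=order)

    return {
        "statusCode": 201,
        "body": json.dumps({"orderId": order["orderId"]}),
    }
```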

Though this is simple to implement, it can cause bottlenecks and failures, resulting in frustrated clients at the web front end. Analyze the flow and look for the possible failure points. Amazon API Gateway integrates with AWS Lambda through a synchronous invocation and expects AWS Lambda to respond within its integration timeout of 29 seconds. As long as that happens, all is well and good. But what if a promo gets shared over social media and a very large number of users pile in with orders? Scaling is built into the AWS services, but the flow can still hit throttling limits.

The configuration of Amazon DynamoDB matters too: capacity settings play a big role. AWS Lambda throttling and concurrency limits can also create failures. Linking large dynamic libraries that need initialization increases cold-start time and, in turn, the latency of AWS Lambda, and a slow response can be lost to the HTTP request timeout of Amazon API Gateway. Going deeper into the system, the business logic could have its own complications: if one request cannot be processed, the custom code written in AWS Lambda could fail without any trace of the request saved to persistent storage. Considering all these factors, as well as suggestions from veterans in this walk of life, the architecture can be expanded into something like the below.

Revised Order Processing Architecture

What is the revision, and what advantage do the additional components provide? Let’s discuss.

  • Order information comes in through an API call over HTTP into Amazon API Gateway
  • AWS Lambda validates and populates the request into Amazon Simple Queue Service (SQS)
  • SQS integrates with AWS Lambda asynchronously; automatic retries for failed requests, along with a Dead Letter Queue (left out of the illustration), can help absorb failures
  • Requests processed by the business logic are stored in DynamoDB
  • DynamoDB Streams can trigger another AWS Lambda that notifies Customer Support about the order through SNS (see the sketch after this list)
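
To make the bullets concrete, here is a minimal sketch of the first two Lambda functions in the revised flow; the queue URL, table name, and validation rule are my assumptions for illustration, not the exact code behind the diagram:

```python
import json
import os
import uuid

import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = os.environ["ORDER_QUEUE_URL"]             # assumed environment variable
TABLE_NAME = os.environ.get("ORDER_TABLE", "orders")  # assumed table name


def ingest_handler(event, context):
    """Validates the API Gateway request and queues it into SQS."""
    order = json.loads(event["body"])
    if "items" not in order:  # trivial stand-in for real validation
        return {"statusCode": 400, "body": json.dumps({"error": "no items"})}

    order["orderId"] = str(uuid.uuid4())
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

    # Respond immediately; processing now happens asynchronously.
    return {"statusCode": 202, "body": json.dumps({"orderId": order["orderId"]})}


def process_handler(event, context):
    """Triggered by SQS; stores processed orders in DynamoDB.

    Raising on failure lets SQS retry the message and, after the configured
    maxReceiveCount, push it to the Dead Letter Queue.
    """
    table = dynamodb.Table(TABLE_NAME)
    for record in event["Records"]:
        order = json.loads(record["body"])
        table.put_item(Item=order)
```

A DynamoDB Stream on the table can then trigger a third function that calls sns.publish to notify Customer Support, closing the loop described in the last bullet.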

Digging more into the illustrations and explanations, there is more to be done to make this a full production-ready blueprint; let’s leave those thoughts to upcoming serverless enthusiasts.

Conclusion

I strongly believe that I have stayed loyal to the core thoughts of being in a serverless mindset. Further cost optimization and scaling can be pursued with savings plans, AWS Lambda provisioned concurrency, Amazon DynamoDB on-demand capacity mode, and, of course, optimizing the business logic to reduce latency.
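
As one concrete knob among those mentioned, provisioned concurrency is set on a published version or alias; a minimal sketch with boto3, where the function name and alias are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# "process-orders" and "live" are hypothetical names for illustration.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="process-orders",
    Qualifier="live",  # must be a version number or alias, not $LATEST
    ProvisionedConcurrentExecutions=10,
)
```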

Rearchitecting an Old Solution

It was in 2016 that a friend approached me with a requirement for a solution. They were receiving high-resolution video files on an FTP server maintained by the media supplier, and they had created an in-house, locally hosted solution that let news operators preview the video files and decide where to attach them. As they were starting to spread their news operations desk across multiple cities, they wanted to migrate the solution to the cloud, which they did promptly by lift and shift: the whole solution was reconfigured on a large EC2 instance with custom scripts that automatically checked the FTP location and copied any new media to local folders. When I was approached, they were experiencing sluggish streaming from the hosted EC2 instance, as the media files were being accessed from different cities at the same time. Also, the full high-definition videos had to be downloaded just for a preview. They wanted to optimize bandwidth utilization and improve the operators’ response times.

Continue reading “Rearchitecting an Old Solution”

php-mf in AWS Lambda running serverless

A bit outdated now, but this sample implementation was committed to my GitHub repository along with examples.

Nothing big to explain: php-mf is a routing framework where the routes are defined in index.php or in any included files. The normal PHP “include” statement can be used, or the MF directive MF::addon can be used to pull in further routing. All of this can be packaged into a Lambda. It uses a couple of layers: the public PHP 7.3 layer, and the AWS PHP SDK layer that I built and published. As far as I know, these are available only in the ap-south-1 region, so if you need a different region, please make sure you deploy the layers there before attempting to deploy this module.

Publish Lambda Layers across Regions

This could even be classified as re:Inventing the wheel, but it was a quick hack workaround that I found in the early stages, when we required a set of node.js libraries and custom modules across multiple regions; in fact, I needed this in only 5 regions. Since these libraries and modules could have frequent updates, I wanted the latest package to be published as a new layer version in each region, with the developers concerned notified about the new layer version as well as the changelog. To make a long story short, I was at that time familiar with Subversion, and the project code was committed to an SVN repository. That may have biased me toward the following solution at the time; today my preference would be either “Multi-Region Deployment” using CloudFormation or “CodeCommit and CodePipeline with SNS” instead of S3 triggers.

The architecture is simple and was deployed manually at the time. Sorry to say, I no longer have access to the AWS account, as it belonged to a discarded community project and has since been closed. I thought about posting the idea here because there was a discussion, or rather a query, about this in a community forum.
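
For the record, the core of the idea can be sketched in a few lines of Python, assuming an S3-triggered Lambda; the region list, layer name, and topic ARN are placeholders (the original project used node.js and was wired up manually):

```python
import boto3

# Placeholder regions and ARN; the original project listed its own five.
REGIONS = ["ap-south-1", "us-east-1", "eu-west-1", "ap-southeast-1", "us-west-2"]
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:layer-updates"  # hypothetical

s3 = boto3.client("s3")
sns = boto3.client("sns")


def handler(event, context):
    """Triggered when a new layer zip lands in the source bucket."""
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    # publish_layer_version only accepts an S3 location in the caller's own
    # region, so download the zip once and pass the bytes to each region.
    zip_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    versions = {}
    for region in REGIONS:
        client = boto3.client("lambda", region_name=region)
        resp = client.publish_layer_version(
            LayerName="common-libs",  # hypothetical layer name
            Content={"ZipFile": zip_bytes},
            CompatibleRuntimes=["nodejs12.x"],
        )
        versions[region] = resp["LayerVersionArn"]

    # Notify the developers concerned about the new layer versions.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="New Lambda layer versions published",
        Message=str(versions),
    )
```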

Continue reading “Publish Lambda Layers across Regions”

Refactored a Complicated Lambda to use Layers and split it up

Until recently, in fact until last week, I was not too worried about putting all the code into a single folder and mapping multiple AWS::Serverless::Function resources to individually named handlers. Then I stumbled on this article and started wondering how my folder structure and SAM templates were actually going into the stack. A detailed inspection was not required, though this was the occasion when I used the GUI (after a long time). The outcome showed how pathetic the condition was.

The Lambda console with the filter “aws:cloudformation:stack-name: <stack>”

Well, it is clear that the whole mess is being uploaded as the code of every function. What does this mean? Any one small change here or there updates all the functions (the last-modified timestamps are identical), and every function carries the node_modules folder and other artifacts like templates and custom modules.

Continue reading “Refactored a Complicated Lambda to use Layers and split it up”

Case Study – WIP Reporting and Timeline video on Completion

Requirements (My hallucinations):

Design and architect a highly available, large-user-base system to be used by the National Highways: regular employees upload photos of work in progress (WIP) at different stages, and when the work is completed, all images are archived after a timeline video is created. The WIP sequence should keep the latest photo thumbnail linked to a project blog page, with a gallery linking to the last photo of each day. Post-processing of completed work can take up to a week, giving more importance to the lowest possible cost. The system should be capable of handling hundreds of thousands of high-quality mobile photographs per day. Runtime costs should be as low as possible. For each WIP, a minimum of one photograph every six hours is desired.

Solution on AWS (My views):

The application is to be developed as some kind of single-page app with progressive-web-app support, using JavaScript and CSS libraries. It can be hosted in an AWS S3 bucket, with the CloudFront default origin pointing there. The standard secure approach is recommended: HTTPS (redirect HTTP to HTTPS), an Origin Access Identity, and a custom domain with a certificate from ACM. The dynamic part uses a Cognito User Pool, Amazon API Gateway (regional), Lambda, STS, etc. The API Gateway stage should be the behaviour point for the route.
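
For the photo-upload path, one low-cost pattern (my own assumption, not a spelled-out requirement) is a Lambda behind the API that issues S3 presigned upload URLs, so the high-quality photos go straight to S3 instead of through API Gateway; a minimal sketch, with the bucket name and key scheme hypothetical:

```python
import json
import os

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("PHOTO_BUCKET", "wip-photos")  # hypothetical bucket


def handler(event, context):
    """Returns a presigned PUT URL so the client uploads directly to S3."""
    params = json.loads(event["body"])

    # Key objects by project and timestamp so the gallery and the timeline
    # video job can list photos per WIP per day (assumed key scheme).
    key = f"{params['projectId']}/{params['takenAt']}.jpg"

    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key, "ContentType": "image/jpeg"},
        ExpiresIn=300,  # five minutes is plenty for a single upload
    )
    return {"statusCode": 200, "body": json.dumps({"uploadUrl": url, "key": key})}
```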

Continue reading “Case Study – WIP Reporting and Timeline video on Completion”

Export Cloudwatch Logs to AWS S3 – Deploy using SAM

Due credit to the blog that pointed me in the right direction, the Tensult blogs article Exporting of AWS CloudWatch logs to S3 using Automation, though at some points I have deviated from the original author’s suggestions.

Some points are purely my preference, and others follow suggested best practices. I do agree that starters would be better off setting IAM policies with ‘*’ in the resource field, but when you move things into production it is recommended to grant only the least required permissions. Also, some critical policies were missing from the assume-role policy. Another unnecessary activity was checking for the existence of the S3 bucket, and attempting to create it if missing, on every single execution; for that purpose alone, the Lambda role needed create-bucket permission. All of this nagged at me, and the outcome is this article.

Well, if you need CloudWatch logs exported to S3 for whatever reason, this could save you a lot of time, though the stack needs to be deployed separately in every region where you need it. Please excuse me, as the whole article expects aws-cli and sam-cli to be pre-installed.
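
The heart of the automation is a single CloudWatch Logs API call. A minimal sketch, assuming a hypothetical log group and destination bucket (the actual stack wraps this in a scheduled Lambda, and the bucket needs a policy allowing log delivery):

```python
import time

import boto3

logs = boto3.client("logs")

# Hypothetical names; the SAM template parameterizes these.
LOG_GROUP = "/aws/lambda/my-function"
BUCKET = "my-log-archive"

now_ms = int(time.time() * 1000)
day_ms = 24 * 60 * 60 * 1000

# Export the previous 24 hours of the log group to S3. Only one export task
# can be active at a time per account, hence the scheduled automation.
response = logs.create_export_task(
    taskName=f"export-{now_ms}",
    logGroupName=LOG_GROUP,
    fromTime=now_ms - day_ms,
    to=now_ms,
    destination=BUCKET,
    destinationPrefix="cloudwatch-exports",
)
print(response["taskId"])
```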

Continue reading “Export Cloudwatch Logs to AWS S3 – Deploy using SAM”