Reference Architecture

This is a reference architecture for a generic interface to Amazon CloudSearch on AWS, with a broker that can be written in any Lambda-supported runtime; for this particular implementation I chose Node.js. Each client request is authorized with an API key and hits Amazon API Gateway, which in turn invokes the Lambda function. Internally, the function normalizes the request, passes it on to CloudSearch, and reformats any response to fit the API Gateway response shape. Alongside this, the Lambda broker writes a human-readable version of each request into AWS CloudWatch using simple console.log calls: the request method as a verb keyword, the sort direction, the JSON property names involved, and so on. I have tried to make it as generic as possible; a minimal sketch of the broker follows.
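To make the flow concrete, here is what such a handler could look like in Node.js. This is only a sketch under my own assumptions: the search endpoint, the query parameters, and the exact log-line format (VERB=, SORT=, PROPS=) are illustrative, not the actual implementation.

```javascript
// Hypothetical Lambda broker: API Gateway (proxy) -> normalize -> CloudSearch -> response.
// Uses the AWS SDK for JavaScript v2, bundled in the older Lambda Node.js runtimes.
const AWS = require('aws-sdk');

// Assumed search endpoint of the CloudSearch domain; yours will differ.
const csd = new AWS.CloudSearchDomain({
  endpoint: 'search-example-domain-xxxx.us-east-1.cloudsearch.amazonaws.com'
});

exports.handler = async (event) => {
  const qs = event.queryStringParameters || {};

  // Normalize the client request into CloudSearch search parameters.
  const params = {
    query: qs.q || 'matchall',
    queryParser: qs.q ? 'simple' : 'structured',
    size: Number(qs.limit) || 10,
    start: Number(qs.offset) || 0
  };
  if (qs.sort) params.sort = qs.sort; // e.g. "name asc"

  // Human-readable trail for the self-learning analyzer:
  // the HTTP verb, the sort direction, and the property names seen in the request.
  console.log(`VERB=${event.httpMethod} SORT=${qs.sort || 'none'} PROPS=${Object.keys(qs).join(',')}`);

  try {
    const result = await csd.search(params).promise();
    // Re-shape the CloudSearch response as an API Gateway response.
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(result.hits)
    };
  } catch (err) {
    console.log(`ERROR=${err.code} MESSAGE=${err.message}`);
    return { statusCode: 502, body: JSON.stringify({ error: err.message }) };
  }
};
```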

An EventBridge scheduler triggers another Lambda, which analyzes these human-readable messages and tries to detect missing indexes; any it finds are auto-created in CloudSearch and recorded in a config file on AWS S3 (a rough sketch of that analyzer follows). A lot of production testing and fine-tuning is still pending, along with the necessary documentation and an AWS SAM template to deploy the whole thing. As of now this is just a blueprint: the components are lying in different locations and need orchestration, and there are no plans to publish this in any public repository. But anyone who wants to adopt the design is free to pick it up and build it on their own, without any commitment to me. With its self-learning capability, this system could be used by literally many applications, even those that already depend on some kind of clumsy custom backend.
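The sketch below shows one way the analyzer could work, reading the broker's log lines back and defining any unseen fields as CloudSearch indexes. The log group, domain, bucket, and config key names are all placeholders of my own choosing.

```javascript
// Hypothetical scheduled analyzer: scan the broker's log lines, auto-create missing indexes.
const AWS = require('aws-sdk');
const logs = new AWS.CloudWatchLogs();
const cloudsearch = new AWS.CloudSearch();
const s3 = new AWS.S3();

const LOG_GROUP = '/aws/lambda/search-broker';                           // assumed
const DOMAIN = 'example-domain';                                         // assumed
const CONFIG = { Bucket: 'example-config-bucket', Key: 'indexes.json' }; // assumed

exports.handler = async () => {
  // Load the list of index fields we already know about from S3.
  const config = JSON.parse((await s3.getObject(CONFIG).promise()).Body.toString());
  const known = new Set(config.indexes);

  // Read the human-readable trail left by the broker over the last hour.
  const { events } = await logs.filterLogEvents({
    logGroupName: LOG_GROUP,
    startTime: Date.now() - 3600 * 1000,
    filterPattern: 'PROPS'
  }).promise();

  // Collect property names that have no index yet.
  const missing = new Set();
  for (const e of events) {
    const match = e.message.match(/PROPS=([\w,]+)/);
    if (match) match[1].split(',').forEach((p) => { if (!known.has(p)) missing.add(p); });
  }

  // Auto-create each missing index and remember it in the config file.
  for (const field of missing) {
    await cloudsearch.defineIndexField({
      DomainName: DOMAIN,
      IndexField: { IndexFieldName: field, IndexFieldType: 'text' }
    }).promise();
    config.indexes.push(field);
  }

  if (missing.size) {
    // New fields only become searchable after re-indexing the domain.
    await cloudsearch.indexDocuments({ DomainName: DOMAIN }).promise();
    await s3.putObject({ ...CONFIG, Body: JSON.stringify(config) }).promise();
  }
};
```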

A few real-world use cases could be community member databases, hospital patient records, pet shops, and many more. Generally, the request methods should work like this: POST creates a new record, PUT updates a record, DELETE deletes (or trashes) a referenced record, and GET fetches records. With proper documentation, the exact behaviour can be defined as the client software is designed and developed; the verb mapping could translate into CloudSearch document batches along the lines of the sketch below.
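Again a hedged sketch, assuming the document endpoint, id scheme, and field names shown here, which are illustrative only; GET would fall through to the search path shown earlier.

```javascript
// Hypothetical verb-to-CloudSearch mapping inside the broker.
// POST/PUT become 'add' operations and DELETE becomes 'delete';
// in CloudSearch, an 'add' with an existing id overwrites that document.
const AWS = require('aws-sdk');

const docs = new AWS.CloudSearchDomain({
  endpoint: 'doc-example-domain-xxxx.us-east-1.cloudsearch.amazonaws.com' // assumed doc endpoint
});

async function applyWrite(method, id, fields) {
  const batch =
    method === 'DELETE'
      ? [{ type: 'delete', id }]       // trash the referenced record
      : [{ type: 'add', id, fields }]; // POST creates, PUT overwrites

  return docs.uploadDocuments({
    contentType: 'application/json',
    documents: Buffer.from(JSON.stringify(batch))
  }).promise();
}

// e.g. applyWrite('POST', 'patient-42', { name: 'Jane', ward: 'B2' });
```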

The reference architecture drawing is attached here; it reflects just my own thoughts. Please share your feedback if you think this is good enough.

Complete Managed Development Environment on AWS

Amazon CodeCatalyst, a unified software development service: it was only a few days back that I suggested that you Run your Development Environment on Cloud, and, as though our dear fellows at AWS had heard my thoughts, the preview of Amazon CodeCatalyst was announced two days back as of this post.

Going through the announcement and blog post, the features are really intriguing and exciting. I did give the preview a run-through, and I found that this could change the way we work. At least it changed the way I work, though not for my full-time job, as that would run into compliance complications. Mostly I will use it in my leisure time and for my commitments to FOSS and my GitHub presence.

Project templates, or blueprints as they call them, do help in fast-tracking the initial development phase by creating a boilerplate to start working from. On-demand development environments hosted on the AWS cloud, automated CI/CD pipelines with a multitude of options and drag-and-drop building, the browser-based Cloud9 IDE with terminal access to the development instance running Amazon Linux 2 (which is based on CentOS), and the ability to invite collaborators across the globe to inspect your code with just a few clicks: these are just a few of the facilities of this unified development environment as a service.

I am still very much excited to dig into this service, and I will go further with it, maybe coming out with something more, like a session with the awsugtvm, very soon, as time and health permit. (Last month I was bedridden after a bike accident involving a stray dog.)

Export CloudWatch Logs to AWS S3 – Deploy using SAM

Due credit goes to the Tensult blog article Exporting of AWS CloudWatch logs to S3 using Automation, which pointed me in the right direction, though at some points I have deviated from the original author's suggestions.

Some of the deviations are purely my preference, and others follow suggested best practices. I do agree that starters would be better off setting IAM policies with ‘*’ in the resource field, but when you move things into production it is recommended to grant only the least required permissions. Also, some critical policies were missing from the assume-role policy. Another unnecessary activity was checking for the existence of the S3 bucket, and attempting to create it if it did not exist, on every single execution; for that purpose alone the Lambda role needed create-bucket permission. All of this weighed on me, and the outcome is this article.
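To illustrate the least-privilege point, a SAM template could scope the export Lambda roughly like this. This is a sketch only: the function name, log group, and bucket reference are placeholders, not the actual template.

```yaml
# Hypothetical excerpt: scope the export Lambda to one log group and one bucket
# instead of using Resource: '*'.
ExportFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs18.x
    Policies:
      - Statement:
          - Effect: Allow
            Action:
              - logs:CreateExportTask
            Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/my-app:*'
          - Effect: Allow
            Action:
              - s3:PutObject
            Resource: !Sub 'arn:aws:s3:::${LogArchiveBucket}/*'
```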

Well, if you need CloudWatch logs exported to S3 for whatever reason, this could save you a lot of time, though the stack needs to be deployed separately in every region where you need it. Please note that the whole article expects aws-cli and sam-cli to be pre-installed; the deployment then comes down to a couple of commands, shown below.
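Assuming aws-cli and sam-cli are set up, the per-region deployment would look something like this (the stack name and regions are examples):

```bash
# Build once, then deploy the stack into each region that needs the export.
sam build
sam deploy --guided --stack-name cw-logs-export --region us-east-1
sam deploy --stack-name cw-logs-export --region eu-west-1
```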
