Low-Code/No-Code path to Business Applications – AWS Scores again

Introducing Honeycode, a new, fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this is, of course, backed by a database in AWS and a web-based, drag-and-drop interface builder.

Developers can build applications for up to 20 users for free. After that, they pay per user and for the storage their applications take up. There is no waiting for applications to be approved on the Play Store / App Store, as the applications are not deployed directly; they run through a pre-deployed player (interpreter).

Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.

Honeycode allows AWS clients to build interactive mobile and web applications with no programming required. Honeycode has a simple visual application builder customers can use to, in Amazon’s words, “create applications that range in complexity from a task-tracking application for a small team to a project management system that manages a complex workflow for multiple teams or departments.”

The company is hoping that Honeycode can eliminate the need to resort to spreadsheets and emails to schedule events, create to-do lists, track personnel progress and track content and inventory, among other business functions. Honeycode apps will make it easier for clients to sort, filter and link data together, and will also give them a way to create data dashboards that are updated in real time. Clients don't even have to worry about managing and maintaining any hardware or software; Amazon will take care of those.

Honeycode has pre-built templates clients can use, but they can also build apps from scratch using the visual, spreadsheet-like interface to manually add elements like lists, buttons and input fields onto app screens.

During a test drive, I felt that this is going to bring a big change.

EBS Provisioning vs Performance – Confusions Cleared

For over a decade now (since 2009), I never worried about EBS performance figures. I used to create a single volume and attach it to an instance as and when required. Today, just for wandering and to entertain myself, I did a couple of tests. Thanks to the aws-cli, without which this would have taken far longer than it did.

Straight into what I found, in a short summary. Note that the values are bytes per second.

        T1     T2     T3     T4     T5     T6     T7
Single  272M   492M   268M   1.3G   393K   272M   8954.02
Raid 0  631M   671M   740M   1.3G   366K   631M   8851.47
Raid 5  336M   250M   332M   1.2G   9.9K   315M   8306.52

Performance across different combinations of EBS volumes. T1-T6 correspond to the six dd tests in the script below; T7 is the hdparm -T cached-read figure (MB/s).

I spun up an EC2 instance and mounted a 200 GB EBS volume to run a series of tests, following the nixCraft article titled “Linux and Unix Test Disk I/O Performance With dd Command”.

#!/bin/bash

# T1: one 1G write, synced to disk on every write
dd if=/dev/zero of=/data/test1.img bs=1G count=1 oflag=dsync
rm -f /data/test1.img

# T2: one 64M write, synced to disk on every write
dd if=/dev/zero of=/data/test2.img bs=64M count=1 oflag=dsync
rm -f /data/test2.img

# T3: 256 x 1M writes, flushed once at the end
dd if=/dev/zero of=/data/test3.img bs=1M count=256 conv=fdatasync
rm -f /data/test3.img

# T4: 10k x 8k writes, no explicit sync (mostly page-cache speed)
dd if=/dev/zero of=/data/test4.img bs=8k count=10k
rm -f /data/test4.img

# T5: 1000 x 512-byte writes, synced per write (latency-bound)
dd if=/dev/zero of=/data/test5.img bs=512 count=1000 oflag=dsync
rm -f /data/test5.img

# T6: one 1G write, flushed once at the end
dd if=/dev/zero of=/data/test6.img bs=1G count=1 conv=fdatasync
rm -f /data/test6.img

# T7: cached read timings
hdparm -T /dev/<device>

With a single disk of 200 GB

After that, I tore down the setup: detached the single EBS volume and deleted it. Then I created 12 x 20 GB EBS volumes. The text-mode listing was dumped into a text file, and against each volume a device id of the pattern xvd[h-s] was added, just to make it easier to loop commands over them.
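A minimal sketch of how such a vols file can be built (my reconstruction, assuming bash and that the device names xvdh through xvds are free; these are not the exact commands from the session):

# dump the ids of all unattached volumes, one per line
aws ec2 describe-volumes --output text \
    --query 'Volumes[?State==`available`].VolumeId' | tr '\t' '\n' > volids

# pair each volume id with a device name xvdh..xvds
paste volids <(printf '%s\n' xvd{h..s}) > vols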

Then 10 of the 20 GB disks were attached to the instance and assembled into /dev/md0 at RAID level 0. The same test was run again, and the output is below.

With 10 x 20 GB in RAID level 0

It looks good, but we are allocating too much; most of our major projects would not take more than 100 GB, and this was way too much. So I thought about playing with it further.

The filesystem was unmounted, the RAID stopped, and the superblocks erased using dd. The next test was conducted with the same configuration; the only change was that I added just 5 disks this time.

Ha! Ha.. not much of a degradation. I am still confused by this one; it might be that we have only 2 vCPUs in the VM. Sometime later I should attempt this on different hardware.

But then I thought about another option: why not try RAID 5? So again I did the cleanup, attached 6 of the volumes back to the same instance, and ran the same test.

Aw! As expected, the performance dropped 🙁, probably due to the parity-writing overhead.

As per the EBS volume types document, gp2 delivers more or less 3 IOPS per GiB of volume size, with a minimum of 100 IOPS per volume. This means the 200 GB volume is allocated about 600 IOPS; RAID 0 with 10 x 20 GB gives 1000 IOPS; RAID 0 with 5 x 20 GB gives 500; and RAID 5 with 6 x 20 GB has 600 IOPS.
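The arithmetic written out, using the baseline formula from that document (the 100 IOPS per-volume floor is what makes the small volumes add up):

# baseline per volume = max(100, 3 * size_in_GiB)
 1 x 200 GiB          -> max(100, 3*200)      =  600 IOPS
10 x 20 GiB (raid 0)  -> 10 * max(100, 3*20)  = 1000 IOPS
 5 x 20 GiB (raid 0)  ->  5 * 100             =  500 IOPS
 6 x 20 GiB (raid 5)  ->  6 * 100             =  600 IOPS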

For reference, the commands I used are illustrated below. The AMI id is from the Ubuntu Amazon EC2 AMI finder, for region ap-south-1, focal, hvm.

# create instance
aws ec2 run-instances --image-id ami-06d66ae4e25be4617 --security-group-ids <sg-id> --instance-type m5.large --count 1 --subnet-id <subnet-id> --key-name <keyname>

# create volume attach and then finally cleanup
aws ec2 create-volume --availability-zone ap-south-1c --size 200
aws ec2 attach-volume --device xvdf --volume-id vol-058b551d8ce21e37d --instance-id i-04373f3985b1a13e6
aws ec2 detach-volume --volume-id vol-058b551d8ce21e37d --instance-id i-04373f3985b1a13e6
aws ec2 delete-volume --volume-id vol-058b551d8ce21e37d

# create 12 identical volumes 
seq 1 12 | while read i; do aws ec2 create-volume --availability-zone ap-south-1c --size 20; done

# find those volumes which are available
aws ec2 describe-volumes --output text | grep available

# vol-071e9b06c24627554 xvdh
# vol-0097f6ee4b1f0f614 xvdi
# vol-05a882cefed9a8c13 xvdj
# vol-0cf57aab66e51c68b xvdk
# vol-075ab760f2df2270c xvdl
# vol-073b272e1f84b4450 xvdm
# vol-0c98e527a16d34764 xvdn
# vol-0603e3976cd6f0c34 xvdo
# vol-0c488b59b353d51bd xvdp
# vol-05a04ea90f18d52ff xvdq
# vol-0a8726c93947641e2 xvdr
# vol-08903b57f0d5518d0 xvds

# take 10 volumes from the list and attach them
head -10 vols | while read vol dev; do aws ec2 attach-volume --device $dev --instance-id i-04373f3985b1a13e6 --volume-id $vol ; done
# create the raid array, then partition, format and mount it
mdadm -C /dev/md0 -l raid0 -n 10 <list of devices>
fdisk /dev/md0    # n, accept the defaults (enter 3 times), then w to write
mkfs.ext4 /dev/md0p1
mount /dev/md0p1 /data
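To avoid the interactive fdisk prompts, a non-interactive equivalent (assuming parted is installed) could be:

parted -s /dev/md0 mklabel gpt mkpart primary ext4 0% 100%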

# run the tests from the script above

# unmount, stop raid, write zeros into the first 12M (clears partition and super-block)
umount /data
mdadm --stop /dev/md0 
seq 1 10 | while read f; do dd if=/dev/zero of=/dev/nvme${f}n1 bs=12M count=1; done
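Alternatively, mdadm can clear its own metadata on each member disk (assuming the same nvme1n1..nvme10n1 device names as above):

seq 1 10 | while read f; do mdadm --zero-superblock /dev/nvme${f}n1; done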

# detach volumes
head -10 vols | while read vol dev; do aws ec2 detach-volume --instance-id i-04373f3985b1a13e6 --volume-id $vol ; done

# final cleanup
cat vols | while read vol dev; do aws ec2 delete-volume --volume-id $vol ; done
aws ec2 terminate-instances --instance-id i-04373f3985b1a13e6

First taste of bbpress – was sweet but getting sour

Hey hey.. when we had to implement a bulletin board for an institution, the first thought was phpBB, but the time limitation, as well as the lack of resources to hack through a completely new codebase, prompted us to take a new course: the bbPress way. At least it was written in the same language we had been handling for more than a year. Well, _ck_ should be thanked for all the goodies. And when we find bugs or errors, we should all help each other to make the system better.

When I was using bbPress Attachments, trying to delete a post, even as admin or moderator, produced an SQL error. The same error was reported by Joseph, to which _ck_ responded by suggesting a fresh download from the trunk. For me, even after the new version came in, I was still getting the error.

Configuring two MySQL servers and connecting from PHP

One might wonder what there is to configuring two MySQL servers, or even what the purpose of doing so would be. Well, there are different requirements, and these different requirements may lead us through various possibilities. For instance, certain projects may need the advanced features of MySQL 5.2, whereas others could even run on MySQL 4.12. My case was peculiar: about half of our projects used transactional tables, the other half could go without them, and we preferred to keep the two groups on different MySQL servers. When the system was explained and the need described to the management, they ruled out the option of a dedicated server for the projects that did not use transactional tables. Thus I thought about configuring multiple MySQL servers on the same hardware and operating system.
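A minimal sketch of the connection side, assuming the second mysqld instance listens on port 3307 (the host, port, credentials and database names are illustrative, and the PHP 4-era mysql_* API matches the codebase of the time):

<?php
// server 1: default port 3306, for the transactional projects
$txn = mysql_connect('127.0.0.1:3306', 'appuser', 'secret');

// server 2: second mysqld on port 3307, non-transactional projects
$plain = mysql_connect('127.0.0.1:3307', 'appuser', 'secret');

// pass the link explicitly so each query hits the right server
mysql_select_db('orders', $txn);
mysql_select_db('reports', $plain);
?>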


12 PHP optimization tips

  1. If a method can be static, declare it static. Speed improvement is by a factor of 4.
  2. Avoid magic like __get, __set, __autoload
  3. require_once() is expensive
  4. Use full paths in includes and requires, less time spent on resolving the OS paths.
  5. If you need to find out the time when the script started executing, $_SERVER['REQUEST_TIME'] is preferred to time()
  6. See if you can use strncasecmp, strpbrk and stripos instead of regex
  7. str_replace is faster than preg_replace, but strtr is faster than str_replace by a factor of 4
  8. If a function, such as a string-replacement function, accepts both arrays and single characters as arguments, and your argument list is not too long, consider writing a few redundant replacement statements, passing one character at a time, instead of one line of code that accepts arrays as the search and replace arguments.
  9. Error suppression with @ is very slow.
  10. $row['id'] is 7 times faster than $row[id]
  11. Error messages are expensive
  12. Do not call functions inside a for loop condition, such as for ($x=0; $x < count($array); $x++), where count() gets called on every iteration (see the sketch after this list).

As blogged by alexmoskalyuk
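A minimal PHP sketch of tips 10 and 12 (the variable names are illustrative):

<?php
$array = range(1, 1000);

// tip 12: hoist count() out of the loop condition so it runs once
$n = count($array);
for ($x = 0; $x < $n; $x++) {
    // ... work with $array[$x]
}

// tip 10: quote string keys; $row[id] makes PHP look for a constant
// named id first and then fall back to the string, which is slow
$row = array('id' => 42);
echo $row['id'];
?>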

Hit the WDDX Bug twice

In Saturn we had a handful of XUL projects where the XUL part served as the backend administration. It was in 2004 that we developed most of the packages, and we were on PHP 4.2. There was heavy use of WDDX serialization, since it was found to be easy; we did not yet know about SOAP and its implementations. By the end of 2005, most of the projects had wound up, and our first-hand XUL developer had also quit. Since then we swayed away from XUL development, and by mid-2006 we had dropped almost all further XUL support and development.

Recently, the management decided to revamp and pull out one of the old projects to be reworked as a new product with a solid backend. That was when I got bitten by the WDDX bug [#38839], which I overcame by applying the latest hourly patch.

Later on, our COO needed the same project deployed on his laptop, for which we downloaded TSW, an easy, modular and flexible WAMP bundle with Apache2/SSL, MySQL4, PHP4, Perl5.8/ASP, Python2.3, Tomcat5, FirebirdDB, FileZilla, a mail/news server, phpMyAdmin, Awstats, WordPress, etc. It also includes a web GUI to control and manipulate all the bundled services. But even with PHP 4.3.4 it seemed to have the WDDX bug, though in a different way: when the XUL application sent an AJAX request whose output was expected as a WDDX-serialized string, the Apache server crashed.

Finally, in TSW too, I downloaded a CVS snapshot and patched it, after which the error went away.
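For context, a minimal sketch of the kind of WDDX round trip those endpoints performed (the payload shape is illustrative, not taken from the original code):

<?php
// serialize a response for the XUL client
$packet = wddx_serialize_value(array('status' => 'ok', 'rows' => 3), 'response');
header('Content-Type: text/xml');
echo $packet;

// the receiving side turns the packet back into a PHP value
$data = wddx_deserialize($packet);
?>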

Rent A Coder – Resume Page Ratings to RSS

I am more or less active at RAC, and wanted to show off my buyers' comments on a page to promote myself. So I wrote this script, rac2rss. Download it from here.

The main script is rac2rss.php. Change the line $coder_id = "1242159"; in rac2rss.php to fetch your own resume. The line $myFileMgr = new fileMgr('./cache', 7 * 24 * 60 * 60); defines the cache. The cache folder should be writable by the webserver, and it can sit outside the web path for security reasons, as long as an absolute or relative path is provided. The lifetime is in seconds; in the script I have set it to a week.

As a last note, thanks to all members and maintainers of RAC, for making it a wonderful place for all of us.