Treebo infrastructure automation on AWS

ABOUT TREEBO
Founded in 2015, Treebo Hotels is India’s third-largest hotel chain and operates in the budget segment of the hospitality industry, which is estimated at around $20 Bn in size. Treebo operates on a franchise model and emphasises tight quality control.


THE CHALLENGE
Treebo was facing scale issues during holidays and long weekends, when demand for budget hotels was always at its peak.

Moreover, production deployment was a manual process, which caused downtime whenever a new release had to be pushed to production. They were keen on having a fully automated process to run the whole cycle of integration, deployment, testing etc. And since every engineering team works on its own branch, they faced the challenge of setting up their own environment, identical to production, to run test cases.

They also wanted the infrastructure secured and protected from distributed denial-of-service (DDoS) attacks, one of which had taken their website down in the past.


INSIGHT TO ACTION
The Techpartner team provided cloud consulting services, did a full-stack assessment of the existing application and deployment strategy, and restructured the application to have a stateless configuration. They also configured auto scaling to scale up and down as per load, with proper load balancing.

Manual deployment was replaced by CI/CD tools in combination with the AWS CLI, making API calls to take systems in and out of the ELB during deployment. We used the AWS-provided Multi-AZ RDS service for the database to reduce the overhead of managing the DB system.
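The in/out-of-ELB rollout described above amounts to simple batching logic. The sketch below is a minimal illustration (instance IDs and batch size are hypothetical), keeping at least one instance in service at all times:

```python
def rolling_batches(instances, batch_size):
    """Split instances into deployment batches, never taking all
    instances out of the load balancer at once."""
    if batch_size >= len(instances):
        # Cap the batch so at least one instance keeps serving traffic.
        batch_size = max(1, len(instances) - 1)
    return [instances[i:i + batch_size]
            for i in range(0, len(instances), batch_size)]

# During deployment, each batch would be deregistered from the ELB
# (e.g. via `aws elb deregister-instances-from-load-balancer`),
# upgraded, health-checked, and registered back.
batches = rolling_batches(["i-a", "i-b", "i-c", "i-d"], 2)
```

In practice the batch size is tuned so the remaining instances can absorb peak traffic during the rollout.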

We designed custom deployment jobs for Treebo’s developers using a CloudFormation template, from which they can set up their own working environment, holding production-masked data for testing, in minutes.
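A per-team environment job of this kind boils down to rendering a CloudFormation template with the team’s parameters. The snippet below is a minimal sketch; the resource names, parameters, and instance type are hypothetical, not Treebo’s actual template:

```python
import json

def dev_env_template(team, instance_type="t2.medium"):
    """Render a minimal CloudFormation-style template for a per-team
    dev environment seeded from a masked production snapshot."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"Dev environment for team {team}",
        "Resources": {
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "Tags": [{"Key": "Team", "Value": team}],
                },
            }
        },
    }

# The rendered JSON would be passed to CloudFormation's create-stack call.
template_json = json.dumps(dev_env_template("payments"), indent=2)
```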


SECURITY MEASURES

To secure the infrastructure, Techpartner took a three-layer security approach.


Network Security

○ The VPC was reconfigured with multiple subnets to support the 3-tier architecture of WEB, APP & DB

○ External load balancers are kept in the WEB subnet, which is public; the application and database tiers are placed in the private APP and DB subnets respectively

○ All outgoing traffic from the private subnets goes via a NAT gateway (Internet access is provided only during patch management)

○ VPC flow logs were enabled

○ Security groups are configured so that there is no direct access from WEB to DB

○ Only explicitly whitelisted application instances are allowed in the DB security group

○ A VPN tied to LDAP is the only way to connect to the AWS infrastructure

○ Only port 443 is open to the world (traffic to port 80 is redirected to 443)

○ NACLs are configured to allow whitelisted IPs/ports only

○ Separate subnets and environments for Dev, QA and UAT
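The tier-isolation invariants in the list above can be captured as a small reachability table and checked mechanically. This is an illustrative model, not the actual security-group configuration; the tier names mirror the ones used above:

```python
# Hypothetical rule model: each tier lists the sources allowed to
# reach it. The invariant from the list above: WEB must never reach
# DB directly; only whitelisted APP instances may.
SG_INGRESS = {
    "WEB": {"internet"},   # via the external load balancer, port 443
    "APP": {"WEB"},
    "DB":  {"APP"},        # explicit app-instance whitelist only
}

def path_allowed(src, dst):
    """Check whether traffic from tier `src` may enter tier `dst`."""
    return src in SG_INGRESS.get(dst, set())
```

A check like this can run in CI to catch accidental rule changes that would open WEB-to-DB access.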


Application Security

○ A customised AMI was created for Treebo following CIS guidelines, and the same AMI is used across all instances

○ Access to any instance in AWS is only via OpenVPN, verified against a user certificate and user credentials. User certificates are valid for one year, and every user must change their credentials every six months.

○ Users inactive for more than 30 days are automatically disabled in LDAP

○ LDAP groups are created as per the different roles in the organisation:

i. Dev Group: access to all Dev environments

ii. QA Group: access to all QA environments

iii. IT Ops: access to all environments for managing infrastructure

iv. IT Audit: read-only access to infra during audits

○ Production access is restricted and all deployments are done via Jenkins and Ansible.

○ ELK is configured for central viewing of all application logs, avoiding Dev access to production systems during troubleshooting.

○ The application undergoes VAPT (vulnerability assessment and penetration testing) regularly, with proper approval from AWS.
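The 30-day inactivity rule above is just a date comparison run on a schedule. A minimal sketch, with hypothetical usernames and dates, of the logic such a job would apply before disabling accounts in LDAP:

```python
from datetime import date, timedelta

def users_to_disable(last_login, today, max_idle_days=30):
    """Return users whose last login is more than `max_idle_days` ago;
    a scheduled job would disable these accounts in LDAP."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(u for u, last in last_login.items() if last < cutoff)

# Hypothetical sample data.
logins = {"alice": date(2017, 1, 2), "bob": date(2017, 2, 25)}
stale = users_to_disable(logins, today=date(2017, 3, 1))
```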


WAF

All Internet-facing traffic must pass through the WAF, which can be configured using the standard Treebo CloudFormation template. The template covers the following areas of security and automatically deploys the respective components in the selected VPC.

○ SQL injection and cross-site scripting protection: The solution automatically configures two native AWS WAF rules that protect against common SQL injection or cross-site scripting (XSS) patterns in the URI, query-string or body of a request

○ IP lists: This component creates two specific AWS WAF rules that allow you to manually insert IP addresses that you want to block (blacklist) or allow (whitelist)

○ HTTP flood protection: This component configures a rate-based rule that automatically blocks web requests from a client once they exceed a configurable threshold.
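The rate-based rule above behaves roughly like a rolling-window counter per client IP (AWS WAF evaluates rate-based rules over a 5-minute window). The sketch below illustrates the mechanism only; the threshold is hypothetical and far lower than a production value:

```python
from collections import defaultdict, deque

class RateBasedRule:
    """Minimal sketch of a rate-based block: a client IP exceeding
    `threshold` requests within `window` seconds gets blocked."""

    def __init__(self, threshold, window=300):
        self.threshold = threshold
        self.window = window
        self.hits = defaultdict(deque)

    def allow(self, ip, now):
        q = self.hits[ip]
        q.append(now)
        while q and q[0] <= now - self.window:
            q.popleft()          # drop hits that have left the window
        return len(q) <= self.threshold

rule = RateBasedRule(threshold=3, window=300)
```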

Beyond this, Techpartner also implemented role-based access to instances wherever AWS services such as S3, API Gateway, Lambda etc. need to be used. The root account is secured, and all other sub-accounts are created with restricted access and MFA enabled. AWS keys are rotated every six months via an automated Ansible script run from the bastion host.
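The six-monthly key rotation can be driven by a simple age check; a sketch with hypothetical key names and dates, of the kind of logic the Ansible job on the bastion host might run before rotating:

```python
from datetime import date

def keys_due_for_rotation(key_created, today, max_age_days=180):
    """Flag access keys older than roughly six months; the rotation
    job would replace these and update the dependent services."""
    return sorted(k for k, created in key_created.items()
                  if (today - created).days > max_age_days)

# Hypothetical key inventory.
created = {"deploy-key": date(2016, 8, 1), "backup-key": date(2017, 1, 15)}
due = keys_due_for_rotation(created, today=date(2017, 3, 1))
```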


THE BENEFITS

● Auto Scaling architecture: with this architecture, Treebo was able to serve clients with improved response times, which in turn helped to acquire more business

● Performance: with the application configured under Auto Scaling, the website’s performance improved, with good response times

● Automation: automation reduced manual deployment time, and as a result there was no more downtime on production

● Innovation: the team was able to concentrate more on development than on infrastructure issues


AWS STACK

For the success of the project, Techpartner used the below AWS services…

● Amazon EC2 was used for compute with a combination of on-demand and reserved instances, configured to spin up automatically during load

● Amazon S3 was used mainly to store images that need to be accessible across instances

● AWS NAT Gateway Service was used to provide the Internet to systems in private subnet during patch management

● AWS CloudWatch was used to monitor the instance performance

● AWS CloudTrail was used to keep track of the activity across the AWS environment

● RDS was used in Multi-AZ mode for the database so that no DB maintenance was needed

● Auto Scaling was used to handle the peaky traffic

● AWS CodeCommit was used to keep track of the code repository

● AWS CodeDeploy was used with a webhook constantly checking CodeCommit for changes, automatically executing test cycles and deploying successful builds to the Dev/staging environment

● AWS CloudFormation was used to templatize the infrastructure footprint

● Amazon Inspector was used to check application and OS security, apply recommended fixes and ensure consistency

● AWS Config was used to track changes for AWS resources and also to alert with resources that are not compliant as per defined rules

● AWS Identity and Access Management (IAM) was used to provide AWS resources access as per company’s policy. Also, wherever possible, IAM roles were used to provide access to AWS resources as per IAM’s best practices


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Chemical giant: Unix to Linux


Executive Summary
A flagship company of Mafatlal Group was in dire need of modernising their IT infrastructure. Their entire software solution was being served from a single server, with many dumb terminals across departments connected to it. These dumb terminals are similar to the ones we see nowadays at railway reservation centres.
Tape backup devices were used for the backup process. They wanted this entire platform to be moved to present-day systems.


The Client’s Challenge
The client’s setup was based on systems from the late-eighties era, with no networking option or any external connectivity. The server was old and dying; with no spare parts, no vendor was ready to support it. The applications were written in COBOL, for which, again, vendors were hard to find. They were thus in a catch-22 situation.


Insight to Action
Techpartner spent time understanding the whole setup. After a complete study, Techpartner proposed using open source software on modern systems. To convince the client, Techpartner took a server with all the necessary components installed and migrated their payroll module to it. A dummy payroll run left the client dumbfounded by the speed and accuracy.

With no connectivity options, it was a real challenge to migrate the data along with the programs. Techpartner made use of a customised serial cable hooked up to the server’s serial port to migrate all the data.

The whole data migration and program customisation was done in a month’s time.

The new setup was then run in parallel for a month, and once its accuracy was established the client happily moved over to it.


Impact
With none of the vendors able to think out of the box and offer a solution, the client had felt helpless. Our innovative solution, backed by excellent technical knowledge, not only migrated them to a newer platform but did so at a cost below their estimated budget.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Regional mobile apps on cloud

(Migration from Singapore to Mumbai)

Problem Statement
At Indus, we recently migrated from the AWS Singapore region to the Mumbai region, as all our customers are in India. The only service left in the Singapore region that still needs to be migrated to Mumbai is AWS Mobile Analytics (AMA). The reason we can’t directly migrate it to Mumbai is a limitation on the AMA side: once you create an AMA endpoint to an S3 bucket in a region, it remains with it and can’t be changed. We have a data pipeline which reads this S3 bucket and puts the data into Redshift, which is also in Singapore. Because of this, we are incurring a huge data transfer cost (approx. $7K).

To avoid this, we worked with an AWS SA and found that since we can’t change the AMA endpoint, our only option is to update the app on all those million devices in the market so that they send data to a different endpoint. But then we would be in the same situation if we ever need to change this endpoint again, for example when our handsets are distributed globally.

Another approach would be to use Kinesis Firehose, but for the time being it is not available in the Mumbai region.


Solution
We are planning to use Kinesis, which will be the new endpoint for all our devices. Consumer apps running on EC2 will then pull data from the Kinesis shards and put it into an S3 bucket. This setup will be configured in the Mumbai region. When it is ready, we will send an update to all our devices. Until all devices are upgraded, this setup will run in parallel with the Singapore-region AMA, which will be shut down once the upgrade is complete.


Migration Plan of Action

Kinesis
In the Mumbai region, we will set up a new Kinesis stream with a few shards (these will be configured to auto scale as we roll the upgrade out to all devices). Each shard will be processed by a consumer configured on an EC2 instance, which uploads the data to the respective S3 bucket. We will use the Kinesis Client Library to make sure that data from each shard is uploaded to the respective S3 bucket.
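The shard-to-S3 routing described above is essentially a key-naming convention applied by each consumer. A minimal sketch; the bucket name, shard ID, and key layout are hypothetical:

```python
def s3_key_for(shard_id, sequence_number, bucket="indus-analytics-mumbai"):
    """Build the destination S3 object path for a record batch pulled
    from one Kinesis shard, keeping per-shard data under its own prefix."""
    return f"s3://{bucket}/{shard_id}/{sequence_number}.json"

key = s3_key_for("shardId-000000000001", "49590338271")
```

Keeping a per-shard prefix makes it straightforward for the downstream data pipeline to load each shard’s output independently.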


S3 Replication
Since data older than two weeks is not needed, we will set up S3 cross-region replication (Singapore to Mumbai) and allow it to run for at least a month. This will populate the needed data in the Mumbai region before we start migrating the data pipeline and Redshift.
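The replication setup can be expressed as a standard S3 replication configuration. A sketch with hypothetical bucket names and IAM role ARN; note that cross-region replication copies only objects written after it is enabled, which is why the rule is left running for a month before cutover:

```python
# Sketch of the S3 cross-region replication configuration
# (Singapore source, Mumbai destination); names are hypothetical.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
    "Rules": [{
        "ID": "sg-to-mumbai",
        "Status": "Enabled",
        "Prefix": "",   # replicate all new objects
        "Destination": {
            "Bucket": "arn:aws:s3:::indus-analytics-mumbai",
            "StorageClass": "STANDARD",
        },
    }],
}
# With boto3 this would be applied roughly as:
# s3.put_bucket_replication(Bucket="indus-analytics-sg",
#                           ReplicationConfiguration=replication_config)
```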


Redshift
A huge amount of data (6 TB) resides in Redshift, which currently runs in Singapore. We will be enabling cross-region snapshot backups. With this, we will have the required data in the Mumbai region for the data pipeline to work.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Automation on AWS

A pioneer and innovator of eLearning in India, this client is today a leader in this realm. They develop specialised solutions for corporate houses, educational institutions and retail students (B2C – Students) in numerous spheres – synchronous and asynchronous learning, high-end strategy consulting, deployed blended learning, customised content development, cutting-edge web collaboration and more.


THE CHALLENGE
The client wanted to migrate to AWS infrastructure as they were not able to scale with physical servers. Sizing the right resources was also a challenge given the growth demands of the e-learning business.

In addition, the absence of a version control system, combined with a growing team, resulted in long release and bug-fixing cycles. Production deployment was a manual process which sometimes caused downtime whenever a new release had to be pushed to production. Due to these challenges, they were keen on having a fully automated process to run the whole cycle of integration, deployment, testing etc.

Since every engineering team worked on their code locally, they faced the challenge of setting up their own environment, identical to production, to run test cases.


DEPLOYMENT LAPSES
Since there was no version control, the code development process was already challenging with multiple people working on the same codebase. This led to increased development time, release cycles and bug fixing cycles.

Also, the deployment was a manual process, with the development team given access to the production servers and database. This led to a high risk of pushing untested code, with no control over ownership.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Fintech infra bottlenecks

EXECUTIVE SUMMARY
A premier Fintech company was facing a crisis wherein the transactions were getting slower by the day.


THE CLIENT’S CHALLENGE
The client had multiple challenges in ensuring uptime of their resources. They were completely dependent on the colocation provider for their infra needs, including security and networking. They also had severe scale problems, due to which they had to size the infra for peaky traffic. On top of this, there were huge performance bottlenecks, as well as issues of SPOF and a lack of automation. There was an intent to solve these problems, but they were unsure how to address them.


INSIGHT TO ACTION

Techpartner spent time understanding the whole setup. After a complete study involving their application, network and database stacks, and time spent with their team understanding the problem statements, Techpartner did a thorough analysis and came up with the following:

Techpartner provided an in-depth analysis of long-running database queries and tuned them to reduce execution time, analysed thread dumps to identify issues within the application, and identified the SPOFs, helping the client create an HA architecture that eliminates SPOF at all levels.

Techpartner also implemented an automated CI/CD (continuous integration and continuous deployment) solution to take care of all application deployments, installed and configured a directory server for managing users and groups across systems, and implemented a centralised logging server for the engineering team so that no direct access to the production system is needed.


IMPACT
Query execution times were reduced. Fine-tuning the application helped it take 10x more load on the same hardware. The CI/CD implementation reduced the turnaround time for deployments. User management was made easy, and component-based access together with centralised logging was appreciated during PCI audits.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Auto portal: open source migration

EXECUTIVE SUMMARY
This top online auto media portal provides an online platform for automobile owners and buyers to research, sell, buy or simply discuss their vehicles.


THE CLIENT’S CHALLENGE
The client had multiple challenges in ensuring uptime of their resources. They were completely dependent on the colocation provider for their infra needs, including build deployment, security and networking. They also had severe scale problems, due to which they had to size the infra for peaky traffic. On top of this, a tech refresh cycle had to be carried out regularly, resulting in the allocation of extensive resources. Last but not least, the TCO was not favourable.


INSIGHT TO ACTION
Techpartner spent time understanding the whole setup. After a complete study involving their application, network and database stacks, the Techpartner team spent time with their team understanding the problem statements. TP did a thorough analysis and came up with a plan to help migrate the setup to AWS.


The TCO analysis clearly weighed in favour of AWS, as the infra management piece was completely taken off their plate.

Techpartner also showcased automation using various AWS and open source tools which improved the productivity including CI/CD.

TP also showcased the advantages of migrating from MSSQL to open source.


IMPACT
With AWS, the client has the flexibility of scaling out or scaling up as and when needed. The elasticity really helped them scale to handle 8x traffic during the Bharat Stage IV standards roll-out and, more recently, GST. With rich IAM features, even the developers are given the freedom to own and use instances programmatically. The client also saved a lot by migrating from MSSQL to an open source database, with no compromise on performance and scale.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Mobile applications on AWS

ABOUT VITRUVIAN
Vitruvian has built platforms that automate the business functions of real estate companies using real estate management software, real estate websites and mobile apps, thereby allowing them to scale their business and manage growth effectively. Its digital platform has attracted more than 2 lakh properties spread across more than 1,000 customers across the country.



THE CHALLENGE

Vitruvian launched its real estate website, real estate software, real estate CRM and real estate ERP to serve customers. With business growth, they needed a speedy, automated deployment cycle that would help them move software releases across the different environments (Dev, Staging, UAT etc.) and, once accepted, push them to production.

Since the market is volatile and new requests come in every now and then, the manual deployment process was becoming a nightmare and delivery timelines were sometimes being missed.

With the increase in load, the existing application design was failing to serve requests at scale. Security policies were not being followed, and Vitruvian wanted Techpartner to fix these loopholes.



SECURITY LAPSES

● The existing infra was running under the default VPC

● All launched instances, including the DB instance, had public IPs attached

● SSH and DB ports were found open to the world (0.0.0.0/0), and all web and app instances talked to the DB over its public IP

● No specific access rules were created for the database

● AWS root access was shared with leads

● The root key was used to access S3 from app instances for backups

● No DB credentials policy was set (it accepted even a single-character password)

● The Tomcat manager on the app server was open to the world

● A single .pem file, created when the instance was launched, was shared across all developers for instance access

● PEM files were not rotated



DEPLOYMENT LAPSES

Deployment was manual. The dev team accessed both the web and app servers for deployment and did git pulls there. In some cases hotfixes were applied directly on production and the related changes were never pushed to the repository, so at some point the change history was lost.

Since deployment was done manually, the dev team was given access to the production servers and databases.



PROPOSED ARCHITECTURE



INSIGHT TO ACTION

The Techpartner team worked with the developers to understand the process. Together we chalked out the plan for automated deployment. Our focus was on using open source tools to achieve the required output. Using Ansible, we automated deployment across multiple servers. With Jenkins and Selenium, we created a job that runs post-deployment tests and certifies the build. On successful completion of the Jenkins jobs, the code can either be pushed to production automatically at a scheduled time or pushed manually whenever a hotfix is needed. With this deployment process, Vitruvian was able to deploy a scalable architecture in minutes.


The Techpartner team also suggested how the infrastructure could be split into microservices so that the increasing volume of requests could be catered for and the load handled easily.

While working on eliminating SPOFs, we not only added additional systems but also made sure there would be no additional billing outflow, which was controlled by using auto scaling to scale in and out.



SECURITY BEST PRACTICES

To overcome the existing lapses in the security we redesigned the AWS architecture and implemented best security practices as per AWS standards.

● New 3-tier VPC (public, private & DB subnets) with separation of the staging and testing environments.

● IAM role attached to instances for S3 access.

● IAM users for AWS Console access with MFA enabled and restricted policy access.

● Password policy across the entire infra (Web, App & DB) as well as in IAM.

● Except for the public-facing load balancer, everything was moved into the private and DB subnets; no instance except the VPN server (jump host) has a public IP.

● The VPN server authenticates against LDAP, and all other instances are configured as clients, so PEM file sharing is no longer needed.

● PEM files are rotated every 3 months and kept only with the ‘admin’ user, not with the ‘group’.

● Root key references were deleted from every environment.

● The application was changed to point to the local DB IP.

● Except for ports 80 and 443 for web and 1194 for VPN, nothing is publicly open.



DEPLOYMENT AUTOMATION

Deployment is automated for the TEST and STAGING environments and kept either scheduled or manual for production. This was achieved by configuring the automation tools Ansible and Jenkins. Selenium test cases are executed on every code change; on success, the build is automatically pushed to the TEST or STAGING environment, and once certified it is ready to push to production.

Jenkins is configured either to run the job manually or to schedule the deployment job at night; the job automatically pushes the build to the production servers and notifies the team over email once it completes successfully.


Every server newly launched by auto scaling is registered with Ansible, and deregistered when scaling down, to maintain consistent deployments across all servers.
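Keeping the Ansible inventory in sync with auto scaling reduces to adding hosts on launch events and removing them on terminate events. A minimal sketch; the group name and host IPs are hypothetical, and in practice this could be driven by lifecycle hooks or a dynamic inventory script:

```python
def apply_scaling_event(inventory, event, host):
    """Register or deregister a host in the 'web' inventory group in
    response to an auto-scaling launch/terminate event."""
    hosts = set(inventory.get("web", []))
    if event == "launch":
        hosts.add(host)
    elif event == "terminate":
        hosts.discard(host)
    inventory["web"] = sorted(hosts)
    return inventory

inv = {"web": ["10.0.1.5"]}
inv = apply_scaling_event(inv, "launch", "10.0.1.9")
inv = apply_scaling_event(inv, "terminate", "10.0.1.5")
```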



THE BENEFITS

● Scalable architecture: with the scalable architecture, Vitruvian was able to serve clients with improved response times, which in turn helped to acquire more clients.

● Performance: as the application became modular, the whole CI/CD process became easy and efficient.

● Automation: automation reduced manual deployment time by 90%, giving developers a free hand to concentrate on innovation.

AWS Stack

For the success of the project, Techpartner used the below AWS Services…

● Amazon EC2 was used for compute with a combination of on-demand and reserved instances. Instances were configured to spin up automatically during load.

● Amazon S3 was used mainly to store images that need to be accessible across instances

● AWS NAT Gateway Service was used to provide the Internet to systems in private subnet during patch management

● AWS CloudWatch was used to monitor the instance performance

● AWS CloudTrail was used to keep track of the activity across the AWS Environment

● AWS CodeCommit was used to keep track of the code repository

● AWS CloudFormation was used to templatize the infrastructure footprint

● Amazon Inspector was used to check application and OS security, apply recommended fixes and ensure consistency

● AWS Config was used to track changes for AWS resources and also to alert with resources that are not compliant as per defined rules

● AWS Identity and Access Management (IAM) was used to provide AWS resources access as per the company’s policy. Also wherever possible IAM roles were used to provide access to AWS resources as per IAM’s best practices



For more, visit www.techpartner.in or contact us at info@techpartner.in.

Gupshup automation on AWS

ABOUT GUPSHUP 
Headquartered in Silicon Valley, Gupshup is the world’s most advanced bot and messaging platform. It enables developers to quickly and easily build, test, deploy and manage chatbots across all messaging channels.

Gupshup has been a pioneer in messaging and bots since long before they became trendy. Gupshup’s messaging platform is used by over 30,000 businesses, including leading companies such as Flipkart, OLA, Facebook, Twitter, ICICI, HDFC and Zee TV.

Gupshup’s platform handles over 4 billion messages per month and over 150 billion cumulatively. Gupshup also developed a smart-messaging app, Teamchat, which introduced patent-pending “smart” messages in 2014, only now being offered by other messaging apps. Gupshup’s bot platform provides tools for the entire bot lifecycle, enabling developers to quickly and easily build, test, deploy, monitor and track bots.



THE CHALLENGE
Gupshup is working on an IDE for users who come to Gupshup’s platform to create bots. The idea behind giving users this IDE is to let them check, on the UI itself, whether the bot is working as expected. With the help of this IDE, users can debug the code and work on the solution to get the bot working.

1. The real challenge while designing this project: how do we secure the user data?
2. How can we separate the IDE for each user?
3. How long will we store the data once a user registers and starts using the IDE?
4. How do we manage high availability of the IDE?
5. How do we manage scale?
6. How can each user identify his/her own IDE?
7. Which platform should we use?
8. How do we authenticate the user?

And the list goes on… This is where Techpartner came into the picture: to figure out the way forward and to design, implement and manage the solution.



INSIGHT TO ACTION
The Techpartner team worked with the tech lead and project management team to understand the project’s needs. Together we chalked out the plan and finalised the architecture. Our focus was on using open source tools to achieve the required output.

Since every user works in their own IDE, Techpartner proposed a microservices approach using one Docker container for every new IDE. For authentication, we didn’t want to maintain yet another tool, so we integrated with Git & Facebook to support SSO. For container orchestration, we use Mesosphere to support the scale and faster rollouts.

For data storage, we use the AWS EFS service so that data on EFS is available across all running nodes, which is a basic need of the application when scaling in/out.

Containers are created at runtime whenever a user opens the IDE, and are kept running while the user session is alive. Data is continuously synced to the local repository whenever there are code changes in the user-defined bot.
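The one-container-per-user scheme above needs a deterministic way to map each user to (and route them back to) their own IDE container. A minimal naming sketch; the scheme and user IDs are hypothetical:

```python
import hashlib

def container_name(user_id):
    """Derive a stable, opaque container name from a user ID, so the
    same user always lands on the same IDE container."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()[:12]
    return f"ide-{digest}"

name = container_name("alice@example.com")
```

Hashing keeps user identifiers out of container names while remaining reproducible across scaling events.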

The Auto Scaling feature is configured for scale as well as high availability: if any EC2 instance holding containers goes down, auto scaling launches a replacement instance and puts it behind the load balancer, and Mesosphere takes over container orchestration from there. This is the fully automated, self-healing architecture designed for the Gupshup bot.



SECURITY BEST PRACTICES
Since this is a flat subnet architecture, each container created and destroyed at runtime needs to be reachable over the Internet so that users can work in their respective IDEs. Keeping the data in each IDE secure from the IDEs of other users was therefore a must. We achieved this by using several AWS services:

● Even though it is a flat subnet, we kept the ALB in the public subnet and all containers and Mesosphere in the private subnet, so that access to the application is only via the load balancer.

● Except for the Docker image, nothing is stored on the EC2 instance.

● Data is constantly pushed to the user’s repository to maintain consistency.

● EFS is configured to store all temporary data, which can be used across all the EC2 instances running containers.

● A CloudFormation template is used to set up the entire infrastructure.

● For all new application deployments, we follow the Blue-Green deployment practice. (This helps us migrate running IDE containers seamlessly.)

● Except for port 443 for web, nothing is publicly open.
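The Blue-Green practice mentioned above amounts to deploying to an idle environment and flipping traffic over once it is healthy. A minimal sketch of the switch mechanics; the environment names and versions are hypothetical:

```python
class BlueGreen:
    """Minimal sketch of a blue-green switch: traffic points at one
    environment while the other is upgraded, then the pointer flips."""

    def __init__(self):
        self.live, self.idle = "blue", "green"

    def deploy(self, version):
        """Deploy to the idle environment, then cut traffic over."""
        deployed_to = self.idle
        self.live, self.idle = self.idle, self.live
        return deployed_to, version

bg = BlueGreen()
target, _ = bg.deploy("v2")
```

In practice the flip would be a load-balancer target change, so running IDE containers migrate without user-visible downtime.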



THE BENEFITS
● Scalable architecture: with the scalable architecture, Gupshup was able to serve users with improved response times, which in turn helped to acquire more users.

● Performance: As the application became modular the whole CI/CD process became easy and efficient.

● Automation: Automation reduced the manual deployment time by 90% giving free hand to the developer to concentrate on innovation.




AWS STACK
For the success of the project, Techpartner used below AWS services.

● Amazon EC2 was used for compute with a combination of on-demand and reserved instances. Instances were configured to spin up automatically during load.

● Amazon S3 was used mainly to store images that need to be accessible across instances.

● AWS NAT Gateway Service was used to provide Internet access to systems in the private subnet during patch management.

● AWS CloudWatch was used to monitor instance performance.

● AWS CloudTrail was used to keep track of the activity across the AWS Environment.

● AWS CloudFormation was used to templatize the infrastructure footprint.

● Amazon Inspector was used to check application and OS security, apply recommended fixes and ensure consistency.

● AWS Config was used to track changes for AWS resources and also to alert with resources that are not compliant as per defined rules.

● AWS Identity and Access Management (IAM) was used to provide AWS resources access as per the company’s policy. Also wherever possible, IAM roles were used to provide access to AWS resources as per IAM’s best practices.



For more, visit www.techpartner.in or contact us at info@techpartner.in.

GOLS eLearning setup on AWS

ABOUT GOLS 
The pioneer and innovator of eLearning in India, GurukulOnline Learning Solutions™ (GOLS) is today a leader in this realm. GOLS develops specialised solutions for corporate houses, educational institutions and retail students (B2C – Students) in numerous spheres: synchronous and asynchronous learning, high-end strategy consulting, deployed blended learning, customised content development, cutting-edge web collaboration and more.



THE CHALLENGE
The client wanted to migrate to AWS infrastructure as they were not able to scale with physical servers. Sizing the right resources was also a challenge given the growth demands of the e-learning business.

In addition, the absence of a version control system, combined with a growing team, resulted in long release and bug-fixing cycles. Production deployment was a manual process which sometimes caused downtime whenever a new release had to be pushed to production. Due to these challenges, they were keen on having a fully automated process to run the whole cycle of integration, deployment, testing etc.

Since every engineering team worked on their code locally, they faced the challenge of setting up their own environment, identical to production, to run test cases.



DEPLOYMENT LAPSES
Since there was no version control, the code development process was already challenging with multiple people working on the same codebase. This led to increased development time, release cycles, and bug fixing cycles.

Also, the deployment was a manual process, with the development team given access to the production servers and database. This led to a high risk of pushing untested code, with no control over ownership.



PROPOSED ARCHITECTURE



INSIGHT ACTION

The Techpartner team provided cloud consulting services for the migration and for building a CI/CD environment.

Manual deployment was replaced with automated CI/CD tooling.

We designed custom deployment jobs for the client's developers, with which they can set up their own working environments in a few minutes, pre-loaded with masked production data for testing.
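One common way to offer such self-service environments is a parameterised template that developers launch on demand. The sketch below assumes a CloudFormation-style template, as used in similar Techpartner engagements; the AMI ID, instance type, and parameter names are illustrative, not the client's actual values:

```yaml
# Illustrative sketch: a self-service developer environment template.
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  DeveloperName:
    Type: String                       # used to tag and name the environment
Resources:
  DevInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder: pre-baked AMI with the app stack and masked production data
      InstanceType: t3.medium
      Tags:
        - Key: Name
          Value: !Sub "dev-env-${DeveloperName}"
```

Each developer launches the stack with their own `DeveloperName`, gets an isolated production-like instance in minutes, and deletes the stack when testing is done.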



SECURITY BEST PRACTICES
We designed the AWS architecture and implemented the best security practices as per AWS standards.

● IAM roles attached to instances for S3 access

● IAM users for AWS Console access with MFA enabled and restricted policies

● Password policy across the entire infrastructure (web & DB) as well as in IAM

● DB access (authentication and authorisation) over VPN only and restricted to web instances

● Only HTTP and HTTPS ports accept public traffic; SSH is limited to the client's office IP, and nothing else is publicly open

● Apart from implementing AWS security best practices, we also implemented security controls in the web server to protect against XSS and iframe embedding
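The web-server hardening mentioned in the last point typically amounts to a handful of response headers. A minimal sketch, assuming an nginx web server (the directives are standard nginx; the policy values are illustrative):

```nginx
# Deny embedding the site in iframes on other origins (clickjacking protection)
add_header X-Frame-Options "SAMEORIGIN" always;
# Ask browsers to block reflected XSS and MIME-type sniffing
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
# A restrictive Content-Security-Policy further limits script and frame sources
add_header Content-Security-Policy "default-src 'self'; frame-ancestors 'self'" always;
```

The same headers can be set in Apache or at the application layer; the effect is identical from the browser's point of view.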



DEPLOYMENT AUTOMATION
We introduced a version control system (Git) for the client's dev team. With version control in place, code development, bug fixes, and release cycles have greatly improved.

Deployment is automated for both the TEST and production environments. This has been achieved by configuring automation tools like Jenkins. Once a deployment succeeds in the TEST environment and is ready for production, Jenkins schedules a deployment job that automatically pushes the build to the production servers.
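A minimal declarative Jenkins pipeline for this TEST-then-production flow could look like the sketch below. The stage names and shell scripts (`build.sh`, `deploy.sh`, `run_tests.sh`) are hypothetical placeholders, not the client's actual jobs:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }            // compile / package the release
        }
        stage('Deploy to TEST') {
            steps { sh './deploy.sh test' }      // automated TEST deployment
        }
        stage('Verify TEST') {
            steps { sh './run_tests.sh test' }   // pipeline stops here on failure
        }
        stage('Deploy to Production') {
            steps { sh './deploy.sh prod' }      // runs only if TEST verification passed
        }
    }
}
```

Because each stage only runs if the previous one succeeded, an untested build can never reach the production stage.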



THE BENEFITS
● Cloud Architecture: Having moved to AWS cloud architecture, the client can now scale and right-size infrastructure easily, which helps them serve their clients with improved response times. This has translated into acquiring more clients.

● Performance: With AWS's scalable and elastic infrastructure, the eLearning platform's performance has improved, with good response times.

● Automation: Automation reduced manual deployment time and eliminated downtime on production.

● Innovation: The team was able to concentrate on development and their core competency rather than on infrastructure issues and the production issues caused by error-prone manual deployments.



AWS STACK
For the success of the project, Techpartner used the AWS services mentioned below.

● Amazon EC2 was used for compute, with a combination of on-demand and reserved instances. Instances were pre-baked and configured to spin up automatically under load.

● Amazon S3 was used mainly to store the content videos and files that need to be accessible across instances.

● AWS CloudTrail was used to keep track of activity across the AWS environment.

● Amazon Inspector was used to check application and OS security and apply fixes as recommended.

● AWS Config was used to track changes to AWS resources and to alert on resources that are not compliant with defined rules.

● AWS Identity and Access Management (IAM) was used to provide access to AWS resources as per the company's policy. Wherever possible, IAM roles were used to grant access to AWS resources, in line with IAM best practices.

● AWS Lambda is used to take automated AMI backups and delete them after a set number of days. It is also used to schedule the start and stop of the development/TEST environment during non-production hours.
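The retention logic behind such an AMI cleanup job can be sketched as follows. This is a minimal sketch: the function name is illustrative, and the boto3 wiring that would call `deregister_image` inside the Lambda handler is deliberately left as a comment so the sketch stays runnable without AWS credentials:

```python
from datetime import datetime, timedelta, timezone

def expired_amis(images, retention_days, now=None):
    """Return IDs of AMIs whose creation date is older than the retention window.

    `images` is a list of dicts shaped like boto3's describe_images() output:
    {"ImageId": "...", "CreationDate": "2024-01-01T00:00:00.000Z"}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    expired = []
    for img in images:
        created = datetime.strptime(
            img["CreationDate"], "%Y-%m-%dT%H:%M:%S.%fZ"
        ).replace(tzinfo=timezone.utc)
        if created < cutoff:
            expired.append(img["ImageId"])
    return expired

# Inside the Lambda handler, the returned IDs would be passed to
# ec2.deregister_image(ImageId=...) via boto3 (omitted here).
```

A CloudWatch Events / EventBridge schedule would trigger the Lambda daily, keeping the AMI inventory bounded without any manual cleanup.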



For more, visit www.techpartner.in or contact us at info@techpartner.in.

Athena Automation on AWS

ABOUT ATHENA
Athena is in the cab aggregation business. It took over a company called Flywheel and was keen on further expansion across the USA.



THE CHALLENGE

Athena LLC is in the cab aggregation business and was facing the challenge of scaling in and out during peak hours and weekends, when demand for cabs is higher than during regular hours.

The whole CI/CD process was manual along with infrastructure provisioning.

They were keen on having a fully automated process to run the whole cycle of integration, deployment, testing, and so on, with scaling handled automatically. But since their dev team was busy with feature requests and bug fixes, the automation and deployment initiatives received no proper attention. They also wanted help in making sure that customer data is protected across the application, with a proper backup and restore policy.



DEPLOYMENT LAPSES

The whole CI/CD process was manual, along with infrastructure provisioning. Provisioning infrastructure took a lot of time due to various dependencies on software versions. Also, owing to the complexity of the business logic, manual deployment consumed a lot of time and carried a high risk of human error.



PROPOSED ARCHITECTURE



INSIGHT ACTION

The Techpartner team provided cloud consulting services and did a full-stack assessment of the existing application and the deployment strategy. We helped them restructure the stack into a secure environment, limiting public access to the load balancers only. An Application Load Balancer was configured to reduce the overhead of having multiple ELBs wherever feasible, which helped them reduce cost.

Techpartner worked with the dev team to make sure that the application is horizontally scalable and can handle load during traffic spikes.

Manual deployment was replaced by writing Jenkins jobs; configuration management and provisioning were taken care of by writing Chef recipes. Custom monitoring was configured with Grafana and Nagios to track the actual progress of rides.

A cache was incorporated between the DB and the application to serve the most recent coordinates while a ride is in progress, which in turn reduced the load on the DB by 60%.
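The caching pattern can be illustrated with a small in-process sketch. In the real deployment the cache would be a shared service such as Redis/ElastiCache; the class, method, and key names below are illustrative, not the client's actual code:

```python
import time

class RideCoordinateCache:
    """Cache for the latest (lat, lng) of each in-progress ride.

    Every GPS ping writes through to the cache, so reads for an active
    ride are served from memory; only a miss (or an expired entry) falls
    through to the database, cutting read load on the DB.
    """
    def __init__(self, db_lookup, ttl_seconds=30):
        self._db_lookup = db_lookup        # fallback, e.g. a SQL query
        self._ttl = ttl_seconds
        self._store = {}                   # ride_id -> (expires_at, coords)

    def update(self, ride_id, coords):
        # Called on every GPS ping; keeps the entry fresh while the ride runs.
        self._store[ride_id] = (time.monotonic() + self._ttl, coords)

    def latest(self, ride_id):
        entry = self._store.get(ride_id)
        if entry and entry[0] > time.monotonic():
            return entry[1]                # cache hit: DB untouched
        coords = self._db_lookup(ride_id)  # miss or expired: fall back to DB
        self.update(ride_id, coords)
        return coords
```

Since coordinates for an in-progress ride are updated every few seconds anyway, a short TTL is enough: stale entries self-heal on the next ping, and the DB only sees reads for rides with no recent updates.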

Data in transit and data at rest are encrypted as per government norms, with encrypted volumes in place.



SECURITY BEST PRACTICES

We designed the AWS architecture and implemented the best security practices as per AWS standards:

● New 3-tier VPC (public, private & DB subnets), with separation of the staging and testing environments

● IAM roles attached to instances for S3 access

● IAM users for AWS Console access with MFA enabled and restricted policies

● Password policy across the entire infrastructure (web, app & DB) as well as in IAM

● Application changes were made to point to the local DB IP

● Only the required ports are open for public access
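The "only required ports" rule can be expressed directly in a security group. An illustrative CloudFormation-style sketch (the resource name and description are placeholders; only ports 80 and 443 are opened to the public):

```yaml
# Illustrative sketch: public traffic on 80/443 only; everything else stays closed.
WebSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Public web tier - only required ports open
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0        # HTTP from anywhere
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0        # HTTPS from anywhere
      # No SSH rule here: administrative access stays on the private network
```

Because security groups deny by default, omitting a rule is all it takes to keep a port closed; there is no need for an explicit deny.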



DEPLOYMENT AUTOMATION
Deployment is automated for the STAGING and PRODUCTION environments. This has been achieved by configuring automation tools like Ansible and Jenkins. The deployment automation removes instances from the load balancer before deploying a release; once the deployment has been tested and verified using automated test cases, the instances are added back to the load balancer to serve traffic.

Automation was also done at the infrastructure level: pre-baked instances are launched automatically, the latest code is deployed onto them, and they are added to the load balancer, enabling automated scaling.
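The drain-deploy-reattach cycle can be sketched as a small rolling-deployment loop. This is a simplified sketch: `deregister`, `deploy`, `verify`, and `register` are hypothetical hooks standing in for the actual AWS CLI / Ansible steps used in the engagement:

```python
def rolling_deploy(instances, deregister, deploy, verify, register):
    """Deploy to one instance at a time, keeping the rest in the load balancer.

    Each instance is removed from the load balancer, updated, verified with
    automated tests, and only then added back to serve traffic, so users
    never hit an instance that is mid-deployment or unverified.
    """
    updated = []
    for instance in instances:
        deregister(instance)           # stop routing traffic to it
        deploy(instance)               # push the new release
        if not verify(instance):       # run the automated test cases
            raise RuntimeError(f"verification failed on {instance}")
        register(instance)             # back into the load balancer
        updated.append(instance)
    return updated
```

Processing one instance at a time trades deployment speed for zero downtime: capacity dips briefly by a single instance, but the site never serves an unverified build.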



THE BENEFITS
● Auto Scaling Architecture: With this architecture, the client is able to serve their clients with improved response times, which in turn helps them acquire more clients.

● Performance: With the application running under Auto Scaling, website performance improved with good response times.

● Automation: Automation reduced manual deployment time, eliminated deployment-related production downtime, and brought human error during the deployment process down to zero.

● Innovation: The team was able to concentrate on development rather than on production issues, thanks to the custom deployment jobs for developers.



AWS STACK
For the success of the project, Techpartner used the AWS services below.

● Amazon EC2 was used for compute, with a combination of on-demand and reserved instances. Instances were pre-baked to spin up when required

● Amazon Route 53 was used for both public and private zones. Records in the private zones were used for internal communication between applications

● Amazon S3 was used to store backups, Kinesis stream data, and data uploaded for regulators

● AWS NAT Gateway was used to provide Internet access to systems in private subnets during patch management

● Amazon CloudWatch was used to monitor instance performance

● AWS CloudTrail was used to keep track of activity across the AWS environment

● AWS Trusted Advisor was used for security group recommendations and other security-related recommendations

● Amazon Inspector was used to check application and OS security and apply fixes as recommended

● AWS Config was used to track changes to AWS resources and to alert on resources that are not compliant with defined rules

● AWS Identity and Access Management (IAM) was used to provide access to AWS resources as per the company's policy. Wherever possible, IAM roles were used to grant access to AWS resources, in line with IAM best practices

● AWS Lambda is used to take automated AMI backups and delete them after a set number of days



For more, visit www.techpartner.in or contact us at info@techpartner.in.