Koo: the microblogging platform

Koo is a microblogging platform built for Indians to share their views in Indian languages. The website and app won the Atma Nirbhar innovation challenge and is a Made in India service.

The platform was co-founded by Aprameya Radhakrishna and Mayank Bidawatka and enables users to share their thoughts in text, audio or video.

Many prominent faces of India use Koo. It lets users follow people they like, see what's on their minds and share their own thoughts. The Koo app has more than 13 million active users (as of October 2021) and is growing at a very high rate, with users across the globe.

THE CHALLENGE

Koo needed a platform that is highly scalable, elastic, performant and secure, all at the same time. Being a social media and microblogging platform, it also needs to be highly available.

The real challenge was to ensure that the platform scales to millions of users and autoscales on demand. Other concerns that had to be addressed included:

1. Scale up fast

2. Ensure data storage backends have enough capacity to store and serve data

3. Maintain high availability of the system

4. Manage the scale in a cost-efficient manner

5. Identify stress points before any system fails

6. Keep the platform secure

7. Keep performance within SLA


TECHPARTNER was engaged to design, implement and manage the solution.

INSIGHT TO ACTION

The TECHPARTNER team worked with the Koo management team and tech leads to understand the project needs. Together we chalked out the plan and finalized the architecture. Our focus was to leverage AWS services and open source tools to achieve the required outcome.

For container orchestration we used Amazon EKS to support the scale and enable faster rollouts. The architecture consisted of auto-scaling nodegroups using a mix of on-demand and spot instances for better performance, scalability and cost optimisation.
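The case study does not publish the exact nodegroup definitions, so the sketch below is only an illustration of the mixed on-demand/spot idea: a baseline on-demand nodegroup plus a spot nodegroup that absorbs bursts cheaply. All names and sizes are hypothetical placeholders, and the actual EKS API call is left commented out so the helper stays runnable offline.

```python
def nodegroup_request(cluster, name, instance_types, capacity_type,
                      min_size, max_size):
    """Parameters for an EKS managed nodegroup; capacity_type is
    'ON_DEMAND' or 'SPOT'."""
    return {
        "clusterName": cluster,
        "nodegroupName": name,
        "instanceTypes": instance_types,
        "capacityType": capacity_type,
        "scalingConfig": {"minSize": min_size,
                          "maxSize": max_size,
                          "desiredSize": min_size},
    }

if __name__ == "__main__":
    # Steady baseline on on-demand capacity (placeholder sizes).
    baseline = nodegroup_request("koo-cluster", "base-ondemand",
                                 ["m5.large"], "ON_DEMAND", 2, 10)
    # Burst capacity on spot; several instance types widen the spot pools.
    burst = nodegroup_request("koo-cluster", "burst-spot",
                              ["m5.large", "m5a.large", "m4.large"],
                              "SPOT", 0, 20)
    # With boto3 installed and credentials configured (plus the required
    # subnets and nodeRole arguments), these could be applied with:
    # import boto3
    # boto3.client("eks").create_nodegroup(**baseline, subnets=[...], nodeRole="...")
    print(baseline["capacityType"], burst["capacityType"])
```

Spreading the spot nodegroup across multiple instance types is what keeps it resilient to any single spot pool being reclaimed.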

SECURITY BEST PRACTICES

  • Active and passive security are implemented using several AWS services. All standard AWS-recommended security best practices have been implemented
  • Application code deployment via Jenkins is done only after proper QA
  • Terraform and Ansible are used to set up the entire infrastructure

THE BENEFITS

  • Scalable Architecture: With the scalable architecture, Koo was able to serve users with improved response times, which in turn helped acquire more users
  • Performance: As the application and deployment are modular, the whole CI/CD process became easy and efficient
  • Automation: Automation reduced manual deployment time by 90%, freeing developers to concentrate on innovation

AWS STACK

For the success of the project, TECHPARTNER used the following AWS services:

  • Amazon EKS, the managed container service, helps reduce costs with efficient compute resource provisioning and automatic Kubernetes application scaling
  • Amazon EC2 was used for compute, with a combination of on-demand and spot instances. Node instances were configured to spin up automatically under load
  • Amazon S3 was used mainly to store images that need to be accessible across instances
  • NAT gateways were used to provide internet access to systems in private subnets during patch management
  • Amazon CloudWatch was used to monitor instance performance
  • AWS CloudTrail was used to keep track of activity across the AWS environment
  • AWS Config was used to track changes to AWS resources and to alert on resources that are not compliant with the defined rules
  • AWS Identity and Access Management (IAM) was used to grant access to AWS resources as per company policy. Wherever possible, IAM roles were used to provide access to AWS resources, as per IAM best practices
  • AWS Trusted Advisor checks and provides recommendations that help us follow AWS best practices
  • AWS Secrets Manager is used to protect the secrets needed to access applications and services
  • Amazon GuardDuty is used to proactively monitor for threats. This helps mitigate threats early by triggering automated responses
  • AWS KMS – keys generated using KMS are used for encryption to secure data
  • Amazon Aurora (PostgreSQL-compatible) is used as the primary database backend
  • Amazon ElastiCache for Redis is used as the in-memory data store
  • Amazon ECR is used to store the application container images
  • A centralised ELK stack is used for logging; all applications log to it
  • Amazon Elasticsearch Service – the search in the application is powered by Elasticsearch

Eclipse Che on AWS with EFS

This blog covers Eclipse Che 7 (a Kubernetes-native in-browser IDE) on AWS, with EFS integration.


Eclipse Che makes Kubernetes development accessible for developer teams, providing one-click developer workspaces and eliminating local environment configuration for your entire team. Che brings your Kubernetes application into your development environment and provides an in-browser IDE, allowing you to code, build, test and run applications exactly as they run in production, from any machine.

How Eclipse Che Works

  • One-click centrally hosted workspaces
  • Kubernetes-native containerised development
  • In-browser extensible IDE

Here we will go through installing Eclipse Che 7 on AWS, which focuses on simplifying writing, building and collaborating on cloud-native applications for teams.

Prerequisites

  • A running instance of Kubernetes, version 1.9 or higher.
  • The kubectl tool installed.
  • The chectl tool installed.

Installing Kubernetes on Amazon EC2

  1. Launch a minimally sized Linux EC2 instance, such as a t3.nano or t3.micro.
  2. Set up the AWS Command Line Interface (AWS CLI). For detailed installation instructions, see Installing the AWS CLI.
  3. Install Kubernetes on EC2. There are several ways to get a running Kubernetes instance on EC2; here, the kops tool is used. For details, see Installing Kubernetes with kops. You will also need kubectl to use kops, which can be found at Installing kubectl.
  4. Create a role with admin privileges and attach it to the EC2 instance where kops is installed. This role will be used to create the Kubernetes cluster (master and nodes with Auto Scaling groups), update Route53 and create the load balancer for ingress. For detailed instructions, see Creating Role for EC2.

To summarise, so far we have installed the AWS CLI, kubectl and kops, and attached an AWS admin role to the EC2 instance.

Next, we need Route53 records that kops can use to point to the Kubernetes API, etcd and so on.

Throughout the document, I will be using eclipse.mydomain.com as my cluster domain.

Now, let's create a public hosted zone for "eclipse.mydomain.com" in Route53. Once done, make a note of the zone ID, which will be used later.


Copy the four DNS nameservers from the eclipse.mydomain.com hosted zone, then create a new NS record for eclipse.mydomain.com in the mydomain.com zone using those copied entries. Note that when using a custom DNS provider, the updated record can take a few hours to propagate.


Next, create the Simple Storage Service (S3) bucket to store the kops configuration.

$ aws s3 mb s3://eclipse.mydomain.com
make_bucket: eclipse.mydomain.com

Inform kops of this new bucket:

$ export KOPS_STATE_STORE=s3://eclipse.mydomain.com

Create the kops cluster by providing the cluster zone. For example, for the Mumbai region, the zone is ap-south-1a.

$ kops create cluster --zones=ap-south-1a ap-south-1a.eclipse.mydomain.com

The above kops command will create a new VPC with CIDR 172.20.0.0/16 and new subnets for the master and nodes of the Kubernetes cluster, and will use a Debian OS image by default. In case you want to use your own existing VPC, subnet and AMI, use the command below:

$ kops create cluster --zones=ap-south-1a ap-south-1a.eclipse.mydomain.com --image=ami-0927ed83617754711 --vpc=vpc-01d8vcs04844dk46e --subnets=subnet-0307754jkjs4563k0

This variant uses an Ubuntu 18.04 AMI for the master and worker nodes. You can use your own AMIs as well.

You can review or update the configuration for the cluster, master and nodes using the commands below.

For the cluster:

$ kops edit cluster --name=ap-south-1a.eclipse.mydomain.com

For the master:

$ kops edit ig master-ap-south-1a --name=ap-south-1a.eclipse.mydomain.com

For the nodes:

$ kops edit ig nodes --name=ap-south-1a.eclipse.mydomain.com

Once the cluster, master and node configs are reviewed and updated, you can create the cluster using the following command:

$ kops update cluster --name ap-south-1a.eclipse.mydomain.com --yes

After the cluster is ready, validate it using:

$ kops validate cluster

Using cluster from kubectl context: ap-south-1a.eclipse.mydomain.com

Validating cluster ap-south-1a.eclipse.mydomain.com
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a   Master  m3.medium    1    1    ap-south-1a
nodes               Node    t2.medium    2    2    ap-south-1a

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-38-26.ap-south-1.compute.internal   node    True
ip-172-20-43-198.ap-south-1.compute.internal  node    True
ip-172-20-60-129.ap-south-1.compute.internal  master  True

Your cluster ap-south-1a.eclipse.mydomain.com is ready

It may take approximately 10-12 minutes for the cluster to come up.

Check the cluster using the kubectl command. The kubectl context is also configured automatically by the kops tool:

$ kubectl config current-context
ap-south-1a.eclipse.mydomain.com
$ kubectl get pods --all-namespaces

All the pods in the running state are displayed.

Installing Ingress-nginx

To install Ingress-nginx:

  1. Install the ingress-nginx configuration from the GitHub location below:
$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/mandatory.yaml

2. Install the configuration for AWS.

$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/service-l4.yaml

$ kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/patch-configmap-l4.yaml

The following output confirms that the Ingress controller is running.

$ kubectl get pods --namespace ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-76c86d76c4-gswmg   1/1     Running   0          9m3s

If the pod is not ready yet, wait a couple of minutes and check again.

3. Find the external IP of ingress-nginx.

$ kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}'
ade9c9f48b2cd11e9a28c0611bc28f24-1591254057.ap-south-1.elb.amazonaws.com

Troubleshooting: If the output is empty, it implies that the cluster has configuration issues. Use the following command to find the cause of the issue:

$ kubectl describe service -n ingress-nginx ingress-nginx

4. Now, in Route53, create a wildcard record in the eclipse.mydomain.com zone, pointing it at the LB URL received from the previous kubectl get services command. You can create a CNAME record or an Alias A record.
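If you prefer to script this step, the change batch for a wildcard CNAME can be sketched in Python as below. The zone ID and load-balancer hostname are placeholders, and the actual Route53 call is left commented out so the helper can be shown standalone. The CNAME variant is shown because an Alias A record would additionally need the ELB's own hosted-zone ID.

```python
def wildcard_change_batch(lb_hostname):
    """Build a Route53 ChangeBatch that points *.eclipse.mydomain.com
    at the ingress load balancer via a CNAME record."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "*.eclipse.mydomain.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": lb_hostname}],
            },
        }]
    }

if __name__ == "__main__":
    batch = wildcard_change_batch(
        "ade9c9f48b2cd11e9a28c0611bc28f24-1591254057.ap-south-1.elb.amazonaws.com")
    # With boto3 installed and credentials configured, this would apply it:
    # import boto3
    # boto3.client("route53").change_resource_record_sets(
    #     HostedZoneId="<INSERT_ZONE_ID>", ChangeBatch=batch)
    print(batch["Changes"][0]["ResourceRecordSet"]["Name"])  # *.eclipse.mydomain.com
```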


It is now possible to install Eclipse Che on this existing Kubernetes cluster.

Enabling the TLS and DNS challenge

To use DNS-based validation and TLS, some permissions must be granted so that cert-manager can manage the DNS challenge for the Let's Encrypt service.

  1. In the EC2 Dashboard, identify the IAM role used by the master node and edit it. Add the inline policy below to the existing IAM role of the master node, naming it appropriately, for example eclipse-che-route53.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetChange",
                "route53:ListHostedZonesByName"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/<INSERT_ZONE_ID>"
            ]
        }
    ]
}

Replace <INSERT_ZONE_ID> with the DNS zone ID you noted earlier while creating the hosted zone.

Installing cert-manager

  1. To install cert-manager, run the following commands:
$ kubectl create namespace cert-manager
namespace/cert-manager created
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
namespace/cert-manager labeled

2. Pass --validate=false. If set to true, it will only work with the latest Kubernetes:

$ kubectl apply \
  -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml \
  --validate=false

3. Create the Che namespace if it does not already exist:

$ kubectl create namespace che
namespace/che created

4. Create an IAM user named cert-manager with programmatic access and the policy below. Copy the access key ID and secret access key generated, for further use. This user is required to manage Route53 records for eclipse.mydomain.com during DNS validation at certificate creation and renewal.

Policy to be used with cert-manager IAM user

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "route53:GetChange",
            "Resource": "arn:aws:route53:::change/*"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ChangeResourceRecordSets",
            "Resource": "arn:aws:route53:::hostedzone/*"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ListHostedZonesByName",
            "Resource": "*"
        }
    ]
}

5. Create a secret from the SecretAccessKey content:

$ kubectl create secret generic aws-cert-manager-access-key \
  --from-literal=CLIENT_SECRET=<REPLACE WITH SecretAccessKey content> -n cert-manager

6. To create the certificate issuer, change the email address and specify the accessKeyID:

$ cat <<EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: che-certificate-issuer
spec:
  acme:
    dns01:
      providers:
      - route53:
          region: eu-west-1
          accessKeyID: <USE ACCESS_KEY_ID_CREATED_BEFORE>
          secretAccessKeySecretRef:
            name: aws-cert-manager-access-key
            key: CLIENT_SECRET
        name: route53
    email: user@mydomain.com
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
EOF

7. Add the certificate by editing the domain name value (eclipse.mydomain.com, in this case) and the dnsName value:

$ cat <<EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
 name: che-tls
 namespace: che
spec:
 secretName: che-tls
 issuerRef:
   name: che-certificate-issuer
   kind: ClusterIssuer
 dnsNames:
   - '*.eclipse.mydomain.com'
 acme:
   config:
     - dns01:
         provider: route53
       domains:
         - '*.eclipse.mydomain.com'
EOF

8. A new DNS challenge is added to the DNS zone for Let's Encrypt. The cert-manager logs contain information about the DNS challenge.

9. Obtain the names of the pods:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6587688cb8-wj68p              1/1     Running   0          6h
cert-manager-cainjector-76d56f7f55-zsqjp   1/1     Running   0          6h
cert-manager-webhook-7485dd47b6-88m6l      1/1     Running   0          6h

10. Ensure that the certificate is ready using the following command. It takes approximately 4-5 minutes for the certificate creation process to complete. Once the certificate is successfully created, you will see the output below.

$ kubectl describe certificate/che-tls -n che

Status:
  Conditions:
    Last Transition Time:  2019-07-30T14:48:07Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2019-10-28T13:48:05Z
Events:
  Type    Reason         Age    From          Message
  ----    ------         ----   ----          -------
  Normal  OrderCreated   5m29s  cert-manager  Created Order resource "che-tls-3365293372"
  Normal  OrderComplete  3m46s  cert-manager  Order "che-tls-3365293372" completed successfully
  Normal  CertIssued     3m45s  cert-manager  Certificate issued successfully

Now that we have the Kubernetes cluster, the Ingress controller (AWS load balancer) and the TLS certificate ready, we can install Eclipse Che.

Installing Che on Kubernetes using the chectl command

chectl is the Eclipse Che command-line management tool. It is used for operations on the Che server (start, stop, update, delete) and on workspaces (list, start, stop, inject) and to generate devfiles.

Install the chectl CLI tool to manage the Eclipse Che cluster. For installation instructions, see Installing chectl.

You will also need Helm and Tiller. To install Helm, follow the instructions at Installing Helm.

Once chectl is installed, you can install and start the cluster using the command below.

chectl server:start --platform=k8s --installer=helm --domain=eclipse.mydomain.com --multiuser --tls

If running without authentication, you can skip --multiuser and start the cluster as below:

chectl server:start --platform=k8s --installer=helm --domain=eclipse.mydomain.com --tls
✔ ✈️  Kubernetes preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify remote kubernetes status...done.
    ✔ Verify domain is set...set to eclipse.mydomain.com.
  ✔ 🏃‍  Running Helm to install Che
    ✔ Verify if helm is installed
    ✔ Check for TLS secret prerequisites...che-tls secret found.
    ✔ Create Tiller Role Binding...it already exist.
    ✔ Create Tiller Service Account...it already exist.
    ✔ Create Tiller RBAC
    ✔ Create Tiller Service...it already exist.
    ✔ Preparing Che Helm Chart...done.
    ✔ Updating Helm Chart dependencies...done.
    ✔ Deploying Che Helm Chart...done.
  ✔ ✅  Post installation checklist
    ✔ PostgreSQL pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Keycloak pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Che pod bootstrap
      ✔ scheduling...done.
      ✔ downloading images...done.
      ✔ starting...done.
    ✔ Retrieving Che Server URL...https://che-che.eclipse.mydomain.com
    ✔ Che status check
Command server:start has completed successfully.

Now you can open the Eclipse Che portal using the URL:

https://che-che.eclipse.mydomain.com/

Eclipse Che has 3 components: Che, plugin-registry and devfile-registry.

Challenges

These components are versioned in lockstep. For the Eclipse Che cluster to function correctly, the images used for Che, the plugin registry and the devfile registry must have the same version. The current latest version is 7.13.1.

However, chectl only has a command-line option to specify the Che image version. If you want to use a higher version of the Che cluster, you will need to upgrade chectl to the matching version. For example, you need chectl version 7.12.1 to install Che, plugin-registry and devfile-registry at version 7.12.1, and so on.

Advanced Eclipse Che Configuration

By default, Eclipse Che uses the "common" PVC strategy, which means all workspaces in the same Kubernetes namespace reuse the same PVC:

CHE_INFRA_KUBERNETES_PVC_STRATEGY: common

The challenge this poses on a multi-worker-node cluster is that when workspace pods are launched on different worker nodes, they fail because they cannot attach the EBS volume that is already mounted on another node.

The other options are "unique" or "per-workspace", which create multiple EBS volumes to manage. The best solution here is a shared file system, so we can keep the "common" PVC strategy and have all workspaces created under the same mount.

We have used EFS as our preferred choice because of its capabilities. More on EFS here

Integrating EFS as shared storage for use as eclipse che workspaces

Make EFS accessible from the node instances. This can be done by adding the node instances' security group (already created by the kops cluster) to the security group of the EFS mount targets.
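Concretely, the required ingress rule is NFS (TCP 2049) from the node security group, since NFS is the protocol EFS mount targets speak. The sketch below builds that rule; both security group IDs are placeholders, and the actual EC2 API call is left commented out so the helper stays runnable offline.

```python
def nfs_ingress_permission(node_sg_id):
    """IpPermissions entry that allows NFS (TCP 2049) from the
    kops node security group, so nodes can mount EFS."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 2049,   # NFS port used by EFS mount targets
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": node_sg_id}],
    }

if __name__ == "__main__":
    perm = nfs_ingress_permission("sg-0123456789abcdef0")  # node SG (placeholder)
    # With boto3 installed and credentials configured:
    # import boto3
    # boto3.client("ec2").authorize_security_group_ingress(
    #     GroupId="sg-0fedcba9876543210",  # SG on the EFS mount targets (placeholder)
    #     IpPermissions=[perm])
    print(perm["FromPort"])  # 2049
```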

Create a configmap for the EFS provisioner, substituting your EFS file system ID and region:

$ kubectl create configmap efs-provisioner --from-literal=file.system.id=fs-abcdefgh --from-literal=aws.region=ap-south-1 --from-literal=provisioner.name=example.com/aws-efs

Download the EFS deployment file from the location below using wget:

$ wget https://raw.githubusercontent.com/binnyoza/eclipse-che/master/efs-master.yaml

Edit efs-master.yaml to use your EFS ID (it appears in 3 places). Also update the storage size for EFS, say to 50Gi, then apply it using the command below:

kubectl create --save-config -f efs-master.yaml

Apply the configs below:

kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/aws-efs-storage.yaml
kubectl apply -f https://raw.githubusercontent.com/binnyoza/eclipse-che/master/efs-pvc.yaml

Verify using:

kubectl get pv
kubectl get pvc -n che

Edit the Che configmap using the command below and add the following line:

$ kubectl edit configmap che -n che
   CHE_INFRA_KUBERNETES_WORKSPACE_PVC_STORAGEClassName: aws-efs

Save and exit, then restart the Che pod using the command below. Whenever any change is made to the Che configmap, the Che pod must be restarted.

$ kubectl patch deployment che -p   "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n che

Check the pod status using:

$ kubectl get pods -n che

Now you can start creating workspaces and your IDE environment from the URL:

https://che-che.eclipse.mydomain.com

Limitation

  • Eclipse Che version 7.7.1 and below cannot support more than 30 workspaces
  • In the case of multiple Che clusters, creating them in the same VPC leads to failures in TLS certificate creation. This seems to be due to rate limits imposed by Let's Encrypt

Yay… Thanks for reading through 🙂

This article has been written by Binny Oza, Principal Devops Engineer at Techpartner. At Techpartner, we excel in providing simple solutions. For more, visit www.techpartner.in or contact us at info@techpartner.in.

Monitoring your AWS Infrastructure changes

Monitoring applications and infrastructure helps maintain acceptable uptime / SLAs for any business. However, with different cloud providers (here, AWS) and several remote teams, it becomes equally important to monitor infrastructure changes as well, to identify out-of-compliance events and security breaches and to accelerate incident investigations in a timely manner.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.

Benefits

  • Simplified compliance
  • Visibility into user and resource activity
  • Security automation
  • Security analysis and troubleshooting

With several remote teams at work and CloudTrail logging all AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools and other AWS services, it becomes even more important to be notified of critical events and changes. This also accelerates incident investigations.

Here, I will discuss security automation: how to get notified of any changes to AWS infrastructure. I will use examples of notifications for changes to security groups and for EC2 instance state changes, such as when anyone starts, stops, terminates or launches an instance.

Note: You must enable AWS CloudTrail for this to work.

Security Group Changes Notification

This CloudWatch rule will notify you on Slack when anyone makes changes to a security group.

  1. Go to CloudWatch → Events → Rules
  2. Create a new rule and choose Event Pattern. Edit the event pattern preview and add the block below:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "AuthorizeSecurityGroupEgress",
      "AuthorizeSecurityGroupIngress",
      "RevokeSecurityGroupEgress",
      "RevokeSecurityGroupIngress"
    ]
  }
}
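To illustrate what this pattern selects, here is a simplified, stdlib-only Python sketch of the matching rule. This is an assumption-laden approximation, not the AWS implementation: it only covers the case used above, where a pattern list means "the event's value must be one of these" and a nested dict is matched recursively.

```python
def matches(pattern, event):
    """Simplified CloudWatch Events matching: every pattern key must
    exist in the event; list values are treated as allowed sets and
    dict values are matched recursively."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupEgress",
                      "AuthorizeSecurityGroupIngress",
                      "RevokeSecurityGroupEgress",
                      "RevokeSecurityGroupIngress"],
    },
}

sg_change = {"source": "aws.ec2",
             "detail-type": "AWS API Call via CloudTrail",
             "detail": {"eventSource": "ec2.amazonaws.com",
                        "eventName": "AuthorizeSecurityGroupIngress"}}
print(matches(pattern, sg_change))   # True: the rule would fire

other_call = dict(sg_change, detail={"eventSource": "ec2.amazonaws.com",
                                     "eventName": "RunInstances"})
print(matches(pattern, other_call))  # False: not a security-group change
```

This is why the rule stays quiet for unrelated EC2 API calls: only the four security-group event names listed in the pattern get through.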

3. In the target, choose an SNS topic of your choice, preferably one wired to Slack or email, and in Configure Input, choose Input Transformer. The Input Transformer filters the entire event based on the defined template.

Enter the text below in the 1st text block. This maps the relevant fields of the CloudTrail event to variables.

{"changetype":"$.detail.eventName","sgid":"$.detail.requestParameters.groupId","region":"$.detail.awsRegion","username":"$.detail.userIdentity.principalId"}

Enter the text below in the 2nd text block. This forms the filtered message, built from those variables, that is sent to the SNS topic.

"The user <username> has initiated <changetype> for Security Group with id <sgid> in <region>."

The rule triggers as soon as CloudTrail detects any change to a security group and sends a notification to the SNS topic.

Sample notification events –

“The user ASFDS6F32B23IU3D32:iam.user has initiated AuthorizeSecurityGroupIngress for Security Group with id sg-0gd73rbjdhbcew in us-east-1.”

“The user ASFDS6F32B23IU3D32:iam.user has initiated RevokeSecurityGroupIngress for Security Group with id sg-0gd73rbjdhbcew in us-east-1.”

EC2 Instance Changes Notification

This CloudWatch rule will notify you on Slack when anyone starts, stops, launches or terminates EC2 instances.

  1. Go to CloudWatch → Events → Rules
  2. Create a new rule and choose Event Pattern. Edit the event pattern preview and add the block below:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "RunInstances",
      "StartInstances",
      "StopInstances",
      "TerminateInstances"
    ]
  }
}

3. In the target, choose an SNS topic of your choice, preferably one wired to Slack or email, and in Configure Input, choose Input Transformer. The Input Transformer filters the entire event based on the defined template.

Enter the text below in the 1st text block. This maps the relevant fields of the CloudTrail event to variables.

{"instanceid":"$.detail.requestParameters.instancesSet","changetype":"$.detail.eventName","region":"$.detail.awsRegion","username":"$.detail.userIdentity.principalId"}

Enter the text below in the 2nd text block. This forms the filtered message, built from those variables, that is sent to the SNS topic.

"The user <username> has initiated <changetype> for instance with instance id <instanceid> in <region>."

The rule triggers as soon as CloudTrail detects any EC2 state-change event (starting, stopping, launching or terminating instances) and sends a notification to the SNS topic.

Sample notification events –

“The user ASFDS6F32B23IU3D32:iam.user has initiated StopInstances for instance with instance id {items:[{instanceId:i-0a6asdf2345ghj}]} in us-east-1.”

“The user ASFDS6F32B23IU3D32:iam.user has initiated StartInstances for instance with instance id {items:[{instanceId:i-0a6asdf2345ghj}]} in us-east-1.”
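The `{items:[...]}` structure in the samples above is simply the serialized value selected by the JSONPath `$.detail.requestParameters.instancesSet`, which points at a nested object rather than a plain string. The Input Transformer's behaviour can be approximated with a stdlib-only Python sketch (an assumption-heavy simplification: it handles only plain `$.a.b.c` paths, and Python's dict rendering differs slightly from AWS's):

```python
def extract(path, event):
    """Resolve a simple '$.a.b.c' path against a nested dict."""
    node = event
    for part in path.lstrip("$.").split("."):
        node = node[part]
    return node

def transform(paths, template, event):
    """Mimic the Input Transformer: fill each <variable> placeholder
    in the template with the value extracted from the event."""
    for name, path in paths.items():
        template = template.replace("<%s>" % name, str(extract(path, event)))
    return template

paths = {"instanceid": "$.detail.requestParameters.instancesSet",
         "changetype": "$.detail.eventName",
         "region": "$.detail.awsRegion",
         "username": "$.detail.userIdentity.principalId"}
template = ("The user <username> has initiated <changetype> for instance "
            "with instance id <instanceid> in <region>.")
event = {"detail": {"eventName": "StopInstances",
                    "awsRegion": "us-east-1",
                    "userIdentity": {"principalId": "ASFDS6F32B23IU3D32:iam.user"},
                    "requestParameters": {
                        "instancesSet": {"items": [{"instanceId": "i-0a6asdf2345ghj"}]}}}}
print(transform(paths, template, event))
```

To get just the instance ID in the message, a deeper path into the items array would be needed rather than the whole instancesSet object.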

Similarly, you can write CloudWatch Events rules to notify on other critical AWS infra changes, such as route table changes, VPC changes, S3 policy changes, IAM changes, etc.

This article has been written by Binny Oza, Principal Devops Engineer at Techpartner. For more, visit www.techpartner.in or contact us at info@techpartner.in.

AWS CloudFront at Techpartner

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. Our team has built a base of applied experience around Amazon CloudFront, and we’re dedicated to helping our customers achieve their business goals by leveraging the agility of the AWS Cloud.


Our CloudFront practice helps our customers accelerate the delivery of websites, APIs, video content, and other web assets. We have experience in building and managing critical systems, and a highly skilled, hands-on team of technical experts. We specialize in seamless execution, crafting best-of-breed solutions that enable our customers to unlock value and capabilities by leveraging the power of AWS.


AWS CloudFront helps distribute content at the edge, speeding up content delivery by serving it to web viewers from edge locations near them. By delivering cached content quickly to users, companies can effectively meet the demand of internal and external customers. AWS CloudFront is part of our best practice for speeding up content delivery and availability.


The Amazon CloudFront content delivery network (CDN) is massively scaled and globally distributed. AWS CloudFront is a highly-secure CDN that provides both network and application level protection. CloudFront features can be customized for the specific application requirements. Amazon CloudFront is integrated with AWS services such as Amazon S3, Amazon EC2, Elastic Load Balancing, Amazon Route 53, and AWS Elemental Media Services.


Case Studies: HousieQuiz

Does technology set you free?

Today, there is no process that is not touched by technology. From managing bank statements to filing tax returns, from printing the inventory list at your local grocery store to tracking online advertisements so they show exactly what could be of interest to you, technology is everywhere.

Technology is a critical enabler of every business function, be it advertising, targeting, logistics or the procurement of stock. As technological assets are not a single block that comes in one pack, people keep stacking up a few relevant blocks every now and then. The problem arises when there are a lot of these jigsaw-puzzle pieces that don't seem to fit together into a seamless unit. Moreover, different technologies sometimes fail to communicate between themselves, throwing up errors as a result. This requires human intervention and supervision.

So is technology giving you the sense of freedom it was supposed to deliver? Perhaps not. One solution comes with another bunch of unannounced issues. If you are an MSME or an SME, you will understand what we are trying to say here.

We at Techpartner, however, see this differently. Smaller companies have the advantage of being able to rearrange their IT blocks, as it's easier to move fast within a less rigid corporate structure. As for the execution, we are the experts. We restructure all of the disconnected "dumb terminals" so they start communicating with one another in a way that makes sense to the business. All the IT systems are brought onto the same platform too, accessible from any location or device thanks to cloud integration.

Perspective is everything. Sometimes, when you're so close to your business, it's hard to see the opportunities lying in your path, waiting to be exploited.

The roadblock of small and medium-sized firms is perspective. Dynamic business leaders can see challenges coming over the horizon and they are prepared when the storm blows in. This pandemic has taught us all why keeping our IT infra accessible via the cloud is absolutely critical.

Maybe the pandemic is a blessing in disguise. William Wrigley Jr. was a soap and baking powder salesman who offered free chewing gum with every purchase, before he realised that his gum was becoming more popular than the cleaning products. Business leaders need to spot the potential of an idea or a situation and have the courage to experiment with changing scenarios.

The customer, the way to manage the business… the core will always remain the same, but the way it is done will always keep changing.

Clear, collective leadership and the willingness to keep embracing change is the key. For the rest of the action, we are just a call away.

For more, visit www.techpartner.in or contact us at info@techpartner.in.

Flocash migration to AWS

Executive Summary
A premier online payment gateway company had its infrastructure co-located in a data centre, with managed services from the colocation provider.


The Client’s Challenge 
The client had multiple challenges in ensuring uptime of their resources. They were completely dependent on the colo provider for their infrastructure needs, including security and networking. They also had severe scaling problems, forcing them to size the infrastructure for peak traffic. To top it all, the technology refresh cycle had to be carried out periodically, resulting in the allocation of extensive resources.


Insight to Action 
Techpartner spent time understanding the whole setup. After a complete study of the application, network and database stacks, the Techpartner team worked with the client's team to understand their problem statements, did a thorough analysis, and came up with a plan to migrate the setup to AWS. The TCO analysis clearly weighed in favour of AWS, as infrastructure management was taken completely off the client's plate.

The Techpartner team also showcased automation using various AWS and open source tools, which improved productivity, including CI/CD.


Impact 
With the whole plan approved and the project co-funded by AWS, the migration was a smooth affair with zero burden on the client. With the whole setup certified against various standards and compliances, the client now had to think only of application-level compliance. With significant savings, improved productivity and greater uptime, it was a win-win solution for all.


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Containerisation and how banking can benefit

THE CHALLENGE
A premier private bank was looking to modernise its applications and automate the whole CI/CD process. They were keen on making the existing architecture more modular and taking advantage of microservices. They wanted to move from the Solaris and WebLogic platform to open source using Linux and Tomcat, which are more flexible.


INSIGHT ACTION 

The Techpartner team suggested a Docker-based approach, which would reduce the cost of VM management. Kubernetes was used for Docker orchestration in HA mode with multiple masters, making the cluster highly available in case of any base node failures.
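To make the idea concrete, a minimal sketch of an HA-friendly workload on such a cluster is shown below: multiple replicas with anti-affinity so they spread across nodes, meaning the loss of a single base node does not take the service down. The service name, image and replica count are illustrative assumptions, not taken from the bank's actual setup.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api              # illustrative service name
spec:
  replicas: 3                    # survive the loss of any single node
  selector:
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      affinity:
        podAntiAffinity:         # prefer spreading replicas across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: payment-api
              topologyKey: kubernetes.io/hostname
      containers:
      - name: payment-api
        image: registry.example.com/payment-api:1.0.0   # placeholder registry
        ports:
        - containerPort: 8080
```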


Application, pod monitoring and logging were set up to give the support team a single view of the current status of the infrastructure.


DEPLOYMENT AUTOMATION 

CI/CD Process – For production deployment, a Docker registry was configured to hold versioned Docker images. Jenkins jobs were configured to build a Docker image from the code repository, and to automatically pull the required image and deploy it to the Kubernetes cluster.
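The process described above can be sketched as a declarative Jenkinsfile. This is only an illustration of the build → push → deploy flow; the registry address, image name and deployment name are placeholder assumptions, not the bank's actual configuration.

```groovy
pipeline {
    agent any
    environment {
        REGISTRY = 'registry.example.com'               // placeholder private registry
        IMAGE    = "${REGISTRY}/bank-app:${env.BUILD_NUMBER}"  // versioned image tag
    }
    stages {
        stage('Build image') {
            steps {
                // build a versioned image from the checked-out code repository
                sh 'docker build -t $IMAGE .'
            }
        }
        stage('Push to registry') {
            steps {
                sh 'docker push $IMAGE'
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                // roll the new image out to the running Deployment
                sh 'kubectl set image deployment/bank-app bank-app=$IMAGE'
            }
        }
    }
}
```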


THE BENEFITS

The bank was able to benefit from a microservices architecture with a streamlined CI/CD process. By moving to microservices in a containerised environment, they were no longer dependent on any proprietary platform. By using Jenkins and a private Docker registry, they were able to restrict deployment to authorised users only, with approval. This led to a more streamlined and controlled CI/CD process with minimum downtime and zero human intervention.


STACK USED
1. Docker
2. Kubernetes orchestration
3. etcd database
4. ELK
5. Prometheus/Grafana
6. Haproxy
7. Keepalived


For more, visit www.techpartner.in or contact us at info@techpartner.in.

CarWale migration to AWS

Executive Summary
CarWale is India's largest auto media platform. It provides an online space for automobile owners and buyers to research, sell, buy or simply discuss their vehicles.


The Client’s Challenge

The client had multiple challenges in ensuring uptime of their resources. They were completely dependent on the colo provider for their infrastructure needs, including build deployment, security and networking. They also had severe scaling problems, forcing them to size the infrastructure for peak traffic. On top of this, a tech refresh cycle had to be carried out regularly, resulting in the allocation of extensive resources. And last but not least, the TCO was not favourable.


Insight to Action

Techpartner spent time understanding the whole setup. After a complete study of the application, network and database stacks, the Techpartner team analysed the problem statements thoroughly and came up with a plan to migrate the setup to AWS. The TCO analysis clearly weighed in favour of AWS, as infrastructure management was taken completely off the client's plate.

Techpartner also showcased automation using various AWS and open source tools, which improved productivity, including CI/CD.

Techpartner also showcased the advantages of migrating from MSSQL to an open source database.


Impact

With AWS, the client has the flexibility to scale out or scale up as and when needed. The elasticity helped them handle 8X traffic during the Bharat Stage IV standards roll-out and, more recently, GST. With rich IAM features, even developers are given the freedom to own and use instances programmatically. The client saved a lot by migrating from MSSQL to an open source database, with no compromise on performance or scale.
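The programmatic freedom mentioned above is typically granted through IAM policies. A hypothetical example of such a policy is shown below; the actions and region are illustrative assumptions, not the client's actual permissions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeveloperInstanceAccess",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:DescribeInstances"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "ap-south-1" }
      }
    }
  ]
}
```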


For more, visit www.techpartner.in or contact us at info@techpartner.in.

Smart messaging- IDE

EXECUTIVE SUMMARY
A hi-tech messaging company wanted to set up an IDE where users could write their own bots and test them online, then integrate them with Facebook, Telegram, WhatsApp etc.


THE CHALLENGE
  • Standalone application with multiple web servers on a single system
  • Improper architecture with multiple SPOFs (single points of failure)
  • No deployment automation
  • No visibility into application logs
  • Scaling was not handled properly

INSIGHT ACTION
  • Did an in-depth analysis of the application's functional components to identify which were loosely and which were tightly coupled
  • Worked with the dev team to separate the components into microservices
  • Configured and installed Docker, with Mesos as the management server for all these applications
  • Identified the SPOFs and helped create an HA architecture to eliminate them at every level
  • Implemented an automated CI/CD solution to handle all application deployments with Jenkins jobs, using AMIs as the base
  • Implemented a centralised logging server for the engineering team, so that no direct access to production systems was needed
  • Identified and projected the all-round needs of increased growth, and configured Auto Scaling to handle the scale
  • Used Terraform to templatise the environment
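A minimal sketch of how an auto-scaled tier might be templatised in Terraform is shown below. The variable, resource names, AMI ID and sizes are placeholders for illustration, not the client's actual values.

```hcl
# Illustrative sketch: templatising an auto-scaled web tier per environment.
variable "environment" {
  type    = string
  default = "staging"
}

resource "aws_launch_template" "web" {
  name_prefix   = "web-${var.environment}-"
  image_id      = "ami-12345678"        # baked application AMI (placeholder)
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "web" {
  name                = "web-${var.environment}"
  min_size            = 2
  max_size            = 10              # headroom for traffic spikes
  desired_capacity    = 2
  vpc_zone_identifier = ["subnet-aaaa", "subnet-bbbb"]   # placeholder subnets

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
```

The same template can then be applied with different variable values to stamp out dev, staging and production footprints.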

THE IMPACT
  • Container-based applications could take any type of load without affecting cost
  • Growth within the company was planned, with its needs anticipated and met
  • CI/CD implementation reduced the turnaround time for deployments
  • Centralised logging was appreciated during audits
  • The team was able to concentrate on development rather than infrastructure issues

STACK USED

For the success of the project, Techpartner used the services below:

  • Docker was used for the microservices architecture
  • Ansible was used for configuration management (patching, security updates etc.) of the servers
  • Git was used as the code repository
  • Jenkins was used with a webhook that constantly checked Git for changes, automatically executed the test cycle and deployed successful builds to the dev/staging/production environments
  • HAProxy was used for load balancing between the Docker microservices
  • Mesos was used for Docker management
  • Terraform was used to templatise the infrastructure footprint

For more, visit www.techpartner.in or contact us at info@techpartner.in.

Enterprise messaging- robust infra solution

EXECUTIVE SUMMARY
The client is India's largest enterprise messaging platform provider, catering to top corporates in India and abroad. They were looking for an IT infrastructure that could scale from a few users to millions of users with 99.99% uptime.


THE CLIENT’S CHALLENGE
The client was coming up with a new product addressed to prospective customers worldwide. They wanted to get started quickly and scale as they grew, with no shortcomings. And last but not least, the TCO had to be very favourable.


INSIGHT TO ACTION
Techpartner spent time understanding the whole setup. After a complete study, Techpartner proposed the AWS model. We quickly got AMIs ready and set up a full-blown infrastructure including VPC, VPN etc. A load balancer was selected and purchased from the AWS Marketplace based on our study and analysis.

The entire infrastructure was initially based on dedicated instances, as we wanted to understand the usage trend.

The whole setup has now been migrated to reserved (1-year heavy utilisation) instances, helping the client reduce the monthly cost from around 6,000 USD to 1,290 USD, with a one-time cost of around 20,000 USD.
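The figures quoted above can be sanity-checked with a quick break-even calculation. This is only a sketch based on the rounded numbers stated; actual AWS reserved-instance pricing has more moving parts.

```python
# Reserved-instance break-even check using the figures quoted above.
on_demand_monthly = 6000   # USD per month before the change
reserved_monthly = 1290    # USD per month on 1-year heavy reserved instances
upfront = 20000            # one-time reservation cost, USD

monthly_saving = on_demand_monthly - reserved_monthly     # 4,710 USD/month
break_even_months = upfront / monthly_saving              # ~4.2 months

year_on_demand = 12 * on_demand_monthly                   # 72,000 USD
year_reserved = upfront + 12 * reserved_monthly           # 35,480 USD
saving_pct = 100 * (1 - year_reserved / year_on_demand)   # ~50.7%

print(f"break-even after {break_even_months:.1f} months")
print(f"first-year saving: {saving_pct:.1f}%")
```

In other words, the one-time cost pays for itself in roughly four months, and the first-year bill is roughly halved.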


IMPACT
With AWS, the client has the flexibility to scale out or scale up as and when needed. With rich IAM features, even developers are given the freedom to own and use instances programmatically.


For more, visit www.techpartner.in or contact us at info@techpartner.in.