
ADITYA KASALA

https://www.linkedin.com/in/aditya-kasala-a45bb4ab/
[email protected] | 2098908247

SUMMARY:

 Over 14 years of experience as a Build, Release, Deployment, Configuration Management (CM), and DevOps
Engineer, specializing in AWS, DevOps, Kubernetes, and Terraform. Expertise spans configuration
management, automated build and deployment processes, and release management.
 Experienced in various AWS services, including IAM, S3, EC2, EKS, Lambda, API Gateway, Route 53, VPC,
subnets, route tables, CloudWatch, and CloudTrail.
 Proficient in managing and administering AWS RDS, DynamoDB, and DocumentDB, including tasks such as
provisioning, configuring, and optimizing database instances for high availability, disaster recovery, and
performance.
 Extensive experience in managing and automating cloud infrastructure using Terraform and
CloudFormation.
 Experience in managing Kubernetes clusters for scalable and resilient container orchestration.
 Experienced in utilizing EKS Fargate for serverless compute, enabling scalable containerized applications
without managing servers.
 Expertise in implementing the GitOps deployment model for EKS clusters.
 Expertise in provisioning and managing cloud resources consistently and repeatably using IaC tools such as
Terraform and CloudFormation.
 Experienced in continuous integration and continuous delivery tools such as GitLab and Jenkins.
 Designed and deployed serverless functions (AWS Lambda) to reduce operational overhead and improve
scalability.
 Implemented monitoring solutions for insights into application performance, resource utilization, and
system health using Datadog, Prometheus, and Grafana.
 Experienced in setting up the ELK Stack for centralized logging, enabling efficient troubleshooting and log
analysis.
 Experienced in Deploying and managing microservices using Docker containers for portability and
consistency.
 Experienced in implementing cost-effective solutions in the AWS cloud using AWS Cost Explorer and AWS
Trusted Advisor.
 Extensive experience in setting up service mesh tools such as Istio and Linkerd in Kubernetes clusters.
 Strong problem-solving skills with a track record of delivering efficient, reliable, and scalable systems.
 Experienced in extensive Python scripting in DevOps environments to automate deployment processes;
proficient in creating custom Python scripts for continuous integration and delivery pipelines, enabling
efficient and scalable software delivery practices.
 Strong experience in monitoring & observability using Splunk & Datadog.
SKILL SET:

Source Code Management Subversion, Perforce, ClearCase, Git


Configuration Management Ansible, Chef, Puppet
Build/Release Management Ant, Maven, Gradle, UCD, CruiseControl, AnthillPro
Change/Defect Management ClearQuest, JIRA, Bugzilla
Scripts Perl, Python, Unix shell scripting, Go
Web/Application Servers Tomcat, WebLogic
Languages C, C++, Java, XML, HTML, CSS
Operating Systems Windows 2003 Server, UNIX (Solaris), Windows XP/NT/2000/9x, Linux,
MS-DOS
Development Tools Eclipse, various IDEs
Databases Oracle, MySQL, Cassandra
Cloud Computing Platform AWS
AWS Services EC2, S3, Route 53, VPC, RDS, IAM, CloudWatch, DynamoDB,
DocumentDB
CI/CD GitLab, GitHub Actions, ArgoCD, Jenkins
GitOps Tools ArgoCD, Flux
Container Orchestration Kubernetes, AWS ECS, OpenShift
Serverless API Gateway, Lambda, Beanstalk, Fargate
Messaging RabbitMQ, Kafka, SNS, SQS
Networking VPC, Subnets, Route Tables, NAT Gateways, Transit
Gateways, Direct Connect
Monitoring & Observability Splunk, Datadog, Grafana, Prometheus
IaC Terraform, CloudFormation

Education:

• Bachelor's in Instrumentation Engineering, JNTU, India - 2007


• Master's in Electrical Engineering, Northwestern Polytechnic University, California - 2010

Certifications:

 Certified Kubernetes Administrator


https://www.credly.com/badges/983cce3a-bed3-49eb-be7f-c1cfd93e1024/public_url

 AWS Certified Solutions Architect – Professional


https://www.credly.com/badges/0245a3c9-8aaa-4cda-982a-9337bbdea631/public_url
PROFESSIONAL WORK EXPERIENCE:

Client: CHARTER Communications May’2015 – Present


Role: Principal DevOps Engineer
Charter Communications is an American cable telecommunications company offering services to consumers
and businesses under the Charter Spectrum brand. Serving 5.9 million customers in 29 states, it is the
fourth-largest cable operator in the United States by subscribers, behind Comcast, Time Warner Cable, and
Cox Communications, and the tenth-largest telephone provider by residential subscriber lines.

Responsibilities:
 Designed and implemented scalable and resilient infrastructure on AWS using EKS and Fargate.
 Utilized Terraform for infrastructure as code (IaC) to create various resources in AWS; also created
Terraform templates to spin up EKS clusters and install add-ons.
 Implemented Karpenter for optimizing Kubernetes cluster resource usage and cost-effective scaling.
 Developed and maintained CI/CD pipelines using GitLab CI for automated build, test, and deployment
processes.
 Integrated ArgoCD for continuous delivery, enabling seamless application deployment and version control
in Kubernetes.
 Deployed RabbitMQ as a StatefulSet in EKS, and configured and maintained RabbitMQ for reliable
messaging between microservices.
 Deployed serverless functions using AWS Lambda to reduce operational overhead and improve scalability.
 Configured API Gateway to create and manage APIs, providing secure and scalable access to backend
services.
 Implemented monitoring solutions in EKS using Datadog for insights into application performance,
resource utilization, and system health.
 Implemented Fluentd in Kubernetes clusters to stream container logs to Splunk, and created
dashboards in Splunk to visualize application health.
 Provisioned EKS Kubernetes clusters using Cluster API (CAPI), Cluster API Provider for AWS (CAPA), and
ArgoCD in a declarative GitOps way.
 Deployed Helm charts to Kubernetes clusters in a GitOps manner using ArgoCD.
 Configured ReplicaSets, DaemonSets, Deployments, and Services on Kubernetes clusters to host
containerized microservices.
 Provisioned EKS clusters using Terraform modules.
 Deployed and configured Istio and Linkerd as service meshes to enhance security, observability, and traffic
management within Kubernetes clusters.
 Implemented security best practices, including network policies and role-based access controls (RBAC)
within Kubernetes.
 Collaborated with cross-functional teams to gather requirements, design solutions, and ensure successful
project delivery.
 Documented infrastructure designs, deployment procedures, and operational guidelines to support
knowledge sharing and onboarding of new team members.
 Implemented Canary and Blue/Green Deployment strategies in AWS cloud.
 Troubleshot various pod-related and networking (Calico) issues in Kubernetes clusters.
 Created A, CNAME, and ALIAS records in Route 53 and configured routing policies such as simple, failover,
weighted, and latency-based routing.
 Implemented a robust open-source monitoring stack leveraging Prometheus, Grafana, and Alertmanager to
obtain comprehensive insights into the health and performance of Kubernetes clusters and applications.
 Configured Prometheus to collect metrics from Kubernetes deployments, pods, nodes, and the API server
using specific scraping targets.
 Created Prometheus alerting rules to monitor Kubernetes resources, including high CPU/memory usage,
pod restarts, and container crashes.
 Designed detailed Grafana dashboards to visualize essential Kubernetes metrics (e.g., CPU/memory
utilization, request latency, deployment rollouts) for effective monitoring.
 Integrated Alertmanager with Prometheus to route and group alerts based on severity and source.
 Administered MongoDB and DocumentDB for NoSQL database solutions, focusing on data storage,
retrieval, and performance tuning.
 Managed and administered AWS RDS instances for relational databases such as MySQL, PostgreSQL, and
SQL Server, ensuring high availability, scalability, and performance optimization.
 Implemented backup and recovery strategies for AWS RDS databases using automated snapshots and
maintained disaster recovery plans.
 Monitored and troubleshot AWS RDS performance using Amazon CloudWatch metrics, Enhanced
Monitoring, and Performance Insights.
 Implemented DynamoDB Streams for real-time data processing and change capture, enabling event-driven
architectures.
 Leveraged Python for scripting and automation tasks, including infrastructure management, deployment
automation, and the development of custom DevOps tools.
 Installed Datadog agents and ensured log forwarding from CloudWatch to Datadog.
 Designed API monitoring metrics using service container logs in Datadog.
 Set up dashboards in Datadog, including APM metrics, system metrics, monitoring, and alerts.
 Integrated Datadog alerts with xMatters.
 Built custom Datadog dashboards based on business requirements.

Environment: EC2, S3, IAM, VPC, CloudWatch, CloudFormation, Terraform, SNS, SQS, EBS, Route 53, ELB, ALB,
Ansible, shell scripting, Docker, Maven, Ant, Jenkins, GitLab CI, Helm, Node.js, AppDynamics, Instana, Splunk,
Zuul router, Eureka discovery services, Kubernetes, Linkerd, Calico, Amazon Linux, CentOS Linux, Rancher,
EKS, Karpenter, Datadog, Prometheus, Grafana, Kafka, Python, RDS.

Client: Lowe's Companies, Inc. Oct’2014 – Apr’2015


Role: Sr. DevOps Engineer
Lowe's Companies, Inc., often shortened to Lowe's, is an American retail company specializing in home
improvement. Headquartered in Mooresville, North Carolina, the company operates a chain of retail stores in
the United States and Canada. As of February 2021, Lowe's and its related businesses operate 2,197 home
improvement and hardware stores in North America.
Lowe's is the second-largest hardware chain in the United States (the largest until surpassed by The Home
Depot in 1989) behind rival The Home Depot and ahead of Menards. It is also the second-largest hardware
chain in the world, behind The Home Depot but ahead of European retailers Leroy Merlin, B&Q, and OBI.
Responsibilities:
 Implemented and maintained a highly automated build and deployment process.
 Created an automated build process for Rational Team Concert using Python and Selenium scripts.
 Integrated the automated build scripts on Jenkins for driving daily and nightly builds.
 Installed the UCD server and agents on the required boxes to automate the deployment process.
 Automated the deployment process using UCD (UrbanCode Deploy) on various environments (Dev, IST,
QA, and Perf).
 Created component-process and application-process templates, making them reusable for other
applications.
 Created Ant targets to publish artifacts to UCD's CodeStation for the components required for
deployment.
 Worked on the complete automation process with various teams, such as the WebSphere middleware
team, to understand manual deployment processes and drive them through automation.
 Created UCD components that publish and copy artifacts to multiple servers, and made them generic,
reusable components for any application.
Environment: Jenkins, Selenium, Python, UCD, Ant

Client: JPMorgan Chase Dec’2010 –Sep’2014


Role: SCM, Build/Release Engineer
JPMorgan Chase & Co. is an American multinational banking corporation offering securities, investment, and
retail banking services. It is the largest bank in the United States by assets and market capitalization. It is a
major provider of financial services, with assets of $2 trillion, and according to Forbes magazine is the world's
largest public company based on a composite ranking.

Responsibilities:
 Implemented and maintained a highly automated build and deployment process.
 Led application teams in adopting best practices for source code management and traceability.
 Assisted with supporting source code management tools and automating builds with Maven.
 Ensured proper management of the product release life cycle.
 Developed deployment plans and schedules for the Change Review meeting.
 Responsible for maintaining and managing the software configuration on various environments.
 Responsible for maintaining the integrity between development, test & production environments.
 Maintained Dev/QC/PROD application environments to ensure all business rules, print logic and
compliance issues are well-managed and documented prior to pushing to production.
 Worked with the development team to resolve code and integration issues while maintaining the integrity
of the various environments.
 Created deployment notes with the development team and released the deployment
instructions to Application Support.
 Maintained Defect Fix Deployments and documented the deployed files in the appropriate Environment
Migration log.
 Automated the build and deploy process using the Jenkins continuous integration tool.
 Worked with other dependency teams to resolve environment and configuration issues.

Environment: Subversion, Maven, Ant, Jenkins, W2k/NT, Windows 2003, UNIX, Sun Solaris, HP-UX,
Agile, Mercury Quality Center, Apache Tomcat, Java, WebLogic, Oracle.
