
Sr. DevOps Engineer/SRE

PROFESSIONAL SUMMARY:

A technology professional with over 12 years of experience in roles such as DevOps Engineer and Cloud Engineer, with a focus on Cloud Resource Utilization, Source Code Management, Infrastructure Automation, Continuous Integration, Continuous Delivery, and Continuous Deployment on the Azure, AWS, and GCP platforms, while actively contributing to research and innovation.

 Experience with AWS services such as EC2, S3, RDS, ELB, EBS, VPC, Auto Scaling groups, Route53, CodeDeploy, CodeCommit, CloudWatch, Security Groups, CloudTrail, CloudFront, Lambda, Snowball, EMR, Glacier, and IAM for instantiating, configuring, and managing various Amazon images for migrating physical servers into the cloud.
 Configured and networked Virtual Private Clouds (VPCs) and used Glacier for storage and backup on AWS. Wrote CloudFormation templates and deployed AWS resources with them. Created S3 buckets and managed S3 bucket policies (a minimal provisioning sketch follows this summary).
 Experience working on Migration from a physical data center environment to AWS.
 Configured AWS Multi-Factor Authentication to enable two-step authentication for user access using the Google Authenticator and AWS Virtual MFA apps.
 Provide mentorship to a growing SRE team on core SRE principles and tools.
 Implemented, maintained, and monitored production and corporate servers/storage with CloudWatch alerting; managed Ubuntu Linux and Windows virtual servers on AWS EC2 instances.
 Built customized Amazon Machine Images (AMIs) and deployed these images based on requirements.
 Experience in designing and implementing cloud solutions using Azure services such as Azure Virtual Machines, Azure Storage, Azure SQL Database, Azure Functions, and Azure Active Directory.
 Proficient in managing and optimizing Azure resources to ensure high availability, scalability, and performance.
 Skilled in configuring and deploying virtual networks, network security groups, and VPN gateways to enable secure connections between Azure resources and on-premises infrastructure.
 Knowledge in monitoring and troubleshooting Azure services using tools such as Azure Monitor, Azure Log Analytics, and Azure Service Health.
 Able to implement and manage Azure AD identities and groups to enable RBAC, including creating and managing users, groups, and service principals.
 Coordinate/assist developers with establishing and applying appropriate branching, labelling/ naming
conventions using Subversion (SVN), Perforce and Git source control.
 Implemented Continuous Integration and Deployment using CI tools such as Jenkins, Hudson, Bamboo, Chef, Puppet, and Sonatype Nexus. Experience with source control tools such as Git, GitHub, Subversion (SVN), TFS (Microsoft Visual Studio), and Perforce.
 Expertise in OOP for SRE and DevOps to improve code organization, scalability, and maintainability.
 Experienced with Apache Tomcat; configuration of the FMW suite, clusters, load balancers, and HTTP web servers; deployment of enterprise e-commerce applications; and support of Production, Staging, Test, and Development environments.
 Perform periodic on-call duty as part of the SRE team.
 In-depth knowledge of DevOps management methodologies and production deployment, including compiling, packaging, deploying, and application configuration.
 Experience installing and configuring OpenShift and maintaining a high-availability solution, including configuring masters and maintaining nodes.
 Experience in Installation and Configuration of the Nexus repository manager for sharing the artifacts within
the company.
 Experience on Linux based server solutions like Apache, Tomcat, FTP, DHCP, DNS, NIS and NFS.
 Created and maintained users, profiles, security rights, disk space, LVMs, and process monitoring; worked with Red Hat Package Manager (RPM) and YUM; scheduled jobs using cron.
 Monitoring and logging with Prometheus, Thanos, Grafana, and Splunk.
 Knowledge of DAS, NAS, SAN, and OpenLDAP; experience managing the LAMP stack and networking with LANs, WANs, routers, and gateways.
 Experience with DHCP, DNS, IP, subnets, VPNs, VLANs, network routing, firewalls, LAN/WAN switching, backup & recovery, file & print servers, IIS (web server), FTP, and Terminal Server.
 Strong understanding of networking protocols such as the OSI model, TCP/IP, UDP, firewalls, SMTP, POP3, HTTP, DNS, DHCP, and socket programming.
 Installed, configured, and managed monitoring tools such as Splunk, Nagios, and Grafana for resource, network, and log trace monitoring.
 Established infrastructure and service monitoring using Prometheus and Grafana.
 Conducted performance testing of Apache and Tomcat management services.
 Day-to-day administration of Development, Test, and Production environment systems, with 24x7 on-call support.
 Day-to-day work included, but was not limited to, handling tickets, monitoring, troubleshooting, and maintenance.
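As a brief illustration of the S3 and CloudFormation work referenced above, the following is a minimal boto3 sketch; the bucket name, account ID, and IAM role are hypothetical placeholders rather than details from any engagement.

    # Minimal sketch: create an S3 bucket and attach a bucket policy with boto3.
    # Bucket name, region, account ID, and role below are hypothetical placeholders.
    import json
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    bucket = "example-devops-artifacts"          # placeholder name
    s3.create_bucket(Bucket=bucket)              # us-east-1 needs no LocationConstraint

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowReadFromCIRole",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/ci-deploy"},  # placeholder
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

In practice a policy like this would normally live in the CloudFormation or Terraform code that creates the bucket, so it is version-controlled with the rest of the infrastructure.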

TECHNICAL SKILLS:

Cloud Platforms: AWS, Microsoft Azure, OpenStack, PCF
Containerization Tools: Docker, Kubernetes, OpenShift, Mesos
Virtualization Platforms: Oracle VM VirtualBox, Vagrant, Hyper-V, VMware
Application/Web Servers: Amazon AWS, Apache Tomcat, JBoss, WebSphere, VMware
Scripting Languages: Perl, Python, Ruby, Bash/Shell Scripting, PowerShell, YAML, PHP, JSON
Build Tools: ANT, Maven, Gradle
Configuration Management Tools: Ansible, Chef, Puppet
Continuous Integration Tools: Bamboo, Hudson, Jenkins
Operating Systems: UNIX, Linux, Windows, Solaris, Ubuntu
Logging & Monitoring Tools: Nagios, Splunk, CloudWatch, ELK, Dynatrace
Databases: MS Access, MS SQL Server, Oracle 8/10.0, MongoDB, DynamoDB
Networking: TCP/IP, DHCP, DNS, SNMP, SMTP, Ethernet, NFS, LDAP
Issue Tracking Tools: JIRA, Remedy, ClearQuest, I-Track

CERTIFICATIONS:

 AWS Certified Developer Associate
 Certified Kubernetes Administrator
 Certified Site Reliability Engineering (SRE) Foundation
 Docker Certified Associate
 Microsoft Azure Administrator

WORK EXPERIENCE:

Client: WellCare Health Plans, Tampa, FL Nov 2023 to date


Role: Sr. DevOps Engineer
Responsibilities:
Overview: Mainly involved in production support and monitoring activities, ranging from PROD CR approvals to deployments using CI/CD tools. Work closely with developers and stakeholders of payment portals, claims-processing portals, and many other applications on UAT and non-prod code deployments. Work on daily operations tickets covering CI/CD issues, DR requests, pipeline modifications, server builds, application upgrades, monitoring, and reporting. Run the monthly Major Release communications bridge with all stakeholders and provide 24/7 on-call support when needed. Manage version control systems and repositories and stage code for deployments. Involved in the setup, installation, configuration, and monitoring of Windows, Unix, and Linux UAT and PROD boxes.

 Used ITIL best practices to support affected business units by managing, directing, coordinating, and
communicating across multiple technical and non-technical teams which include application,
infrastructure, third party suppliers, and business units.
 Used agile methodology throughout the project. Involved in weekly and daily release management.
 Responsible for Change Management activities, including new business planning, leading regular change
control meetings, prevention, and detection of unauthorized migration to production environments.
 Implemented and executed major incident management processes, including invocation, ownership, escalation, communication, and restoration of service. Led and/or followed up with support team personnel on investigations of critical, cross-functional problems in the IT environment.
 Worked with Release Management tools like ServiceNow, Plutora for managing release and developing
release reports etc.
 Extensively used Plutora and ServiceNow for creating change request tickets/service tickets for production
releases and defining the software release lifecycle from planning through approvals and implementation.
 Prepared documentation and reporting for executive team on a weekly, monthly, and quarterly basis using
ServiceNow tools and PowerPoint presentations. Verified all incidents were logged in a timely manner and
accurately updated.
 Prepared post incident review documents and attended problem management review meetings to ensure
determination of root cause; prepared accurate, appropriate, and timely communication to internal and
external stakeholders.
 Provided timely feedback to senior management regarding issues affecting quality of service to clients;
facilitated teleconference meetings and weekly staff meetings, coordinating with all time zones to ensure
timely communication.
 Experience with Nagios/Observium/New Relic and Dynatrace monitoring and alerting services for servers,
switches, applications, and services.
 Deployed application code using CI/CD pipelines with Azure DevOps in the Azure cloud and scaled VM build automation using Azure DevOps VM agent plug-ins. Performed container clustering with Docker and Kubernetes.
 Built and managed a large deployment of Ubuntu Linux instances with Chef automation; wrote recipes, tools, shell scripts, and monitoring checks. Developed scripts for build, deployment, maintenance, and related tasks using Jenkins, TFS, Docker, Maven, Python, and Bash.
 Responsible for troubleshooting, installing and configuration of Splunk.
 Perform load testing in the Cloud environment using Apache JMeter or LoadRunner to analyze and evaluate
traffic and generate reports for future optimization.
 Ensured Splunk environment continuously meets specifications in terms of business requirements (SLA),
app design and infrastructure performance.
 Used Azure DevOps services like Azure Repos, Azure Boards, and Azure Test Plans to plan work and
collaborate on code development and deploy applications.
 Debug issues in Azure WebApps developed using different languages like .NET, C#, Java etc.
 Used application monitoring tools such as Dynatrace to fetch real-time performance metrics for applications and services.
 Worked with Azure Application Gateway to configure application load balancing.
 Used Guidewire Suite for administration tasks, managing claims etc. Utilized modules for deployment
management and configuration.
 Acted as build and release engineer, deployed services by VSTS (Azure DevOps) pipeline. Created and
maintained pipelines to manage the IAC of all applications using several Azure DevOps services.
 Worked closely with developers and managers to resolve the issues that were raised during the
deployments in different environments.
 Configured and installed the monitoring tools Grafana, Kibana, Logstash, and Elasticsearch on the servers.
 Used a microservice architecture with Spring Boot based services interacting through a combination of REST and the Apache Kafka message broker.
 Implemented a centralized logging system using Logstash configured as an ELK stack (Elasticsearch, Logstash, and Kibana).
 Integrated Sonar with Jenkins for build code analysis.
 Worked on implementing new versions of many DevOps tools on RHEL Environment for better
performance, to remove vulnerabilities.
 Worked with the Nexus artifact repository to store and deploy build artifacts.
 Experience in deploying .NET applications through IIS using built-in tools and other tools.
 Defined branching, labeling, and merge strategies for all applications in Git.
 Used Kubernetes to deploy, scale, load balance, and manage Docker containers with multiple namespaced versions (a minimal scaling sketch follows this list).
 Installed, configured, and customized services such as Sendmail, Apache, and FTP servers to meet user needs and requirements.
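As referenced in the Kubernetes bullet above, the following is a minimal sketch of scaling a Deployment with the official Kubernetes Python client; the deployment name, namespace, and replica count are hypothetical placeholders.

    # Minimal sketch: scale a Kubernetes Deployment with the official Python client.
    # Deployment name, namespace, and replica count are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()                      # uses the local kubeconfig context
    apps = client.AppsV1Api()

    # Patch only the replica count of an existing Deployment.
    apps.patch_namespaced_deployment(
        name="payments-portal",                    # placeholder
        namespace="uat",                           # placeholder
        body={"spec": {"replicas": 4}},
    )

The same patch can be driven from a pipeline step, which keeps scaling changes auditable alongside other deployment actions.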

Environment: Azure Active Directory, Azure Storage, IIS, Azure Resource Manager, Azure Blob Storage, Azure VM, SQL Database, Azure Functions, Azure CLI, Helm Charts, Docker, Kubernetes, Terraform, GitHub Actions, YAML, SQL Server, MySQL, Oracle Database, Ansible, Tableau, Power BI, Git, Bitbucket, Maven, GitLab CI, Jenkins, Dynatrace, Datadog, QRadar, CloudTrail, CloudWatch, ELB, Selenium, Bash Shell Scripting, Chef

Client: Toshiba GCS, NC Jan 2022 – Oct 2023


Role: Sr. Cloud Engineer
Responsibilities:

 Migrated VMware VMs to AWS and managed services such as EC2, S3, Route53, ELB, and EBS with Opscode Chef cookbooks and recipes.
 Worked on AWS CloudFormation templates to create custom-sized VPCs, subnets, EC2 instances, ELBs, and security groups. Worked on a tagging standard for proper identification and ownership of EC2 instances and other AWS services such as CloudFront, CloudWatch, OpsWorks, RDS, ELB, EBS, S3, Glacier, Route53, SNS, SQS, KMS, CloudTrail, and IAM.
 Automated AWS EC2, VPC, S3, SNS, and Redshift based infrastructure through Terraform, Chef, Python, and Bash scripts; managed security groups on AWS and custom monitoring using CloudWatch.
 Implemented and maintained Chef and OpsWorks configuration management across several environments in VMware and the AWS cloud, and created Chef cookbooks using the Ruby programming language.
 Wrote Chef cookbooks and recipes in Ruby to automate the installation of middleware infrastructure such as Apache Tomcat and the JDK, and configuration tasks for new environments.
 Wrote Ansible playbooks with Python SSH as the wrapper to manage configuration of nodes, tested playbooks on AWS instances using the Python SDK, and automated various infrastructure activities such as continuous deployment, application server setup, and stack monitoring using Ansible playbooks.
 Created inventories and job templates and scheduled jobs using Ansible Tower; downloaded and managed Ansible roles from Ansible Galaxy to automate the infrastructure.
 Created Ansible Playbooks to provision Apache Web servers, Tomcat servers, Nginx, Apache Spark and
other applications.
 Configured AWS IAM and Security Group in Public and Private Subnets in VPC. Created AWS Route53 to
route traffic between different regions.
 Implemented rolling upgrades and patches for Kafka brokers with minimal disruption to services, ensuring
continuous delivery pipelines remained operational.
 Configured and maintained Jenkins to implement the CI process and integrated the tool with Maven to
schedule the builds. Took the sole responsibility to maintain the CI Jenkins server.
 Utilized GitHub Advanced Security features, such as CodeQL and Dependabot, to ensure security in the
machine learning (MLOps) codebase and prevent vulnerabilities from impacting model deployments and
production environments.
 Leveraged GitHub Packages to store and version machine learning models (MLOps), datasets, and Docker
images, ensuring that dependencies, containerized models, and other resources are properly managed and
versioned within the project
 Experience creating alarms and notifications for EC2 instances using CloudWatch (a minimal alarm sketch follows this list). Automated AWS components such as EC2 instances, security groups, ELB, RDS, and IAM through AWS CloudFormation templates.
 Installed Jenkins on Linux, created master and slave configurations, and drove all microservice builds out to the Docker registry. Built scripts using Maven in Jenkins to move from one environment to another.
 Implemented a centralized logging system using Logstash configured as an ELK stack (Elasticsearch, Logstash, and Kibana) to monitor system logs, VPC Flow Logs, CloudTrail events, changes in S3, etc.
 Engineered Splunk to build, configure and maintain heterogeneous environments and have in-depth
knowledge of log analysis generated by various systems including security products.
 Designed and implemented GIT metadata including elements, labels, attributes, triggers and hyperlinks and
performed necessary day to day GIT support for different projects.
 Developed Dev, Test and Prod environments of different applications on AWS by provisioning Kubernetes
clusters on EC2 instances using Docker, Ruby/Bash, Chef, and Terraform
 Responsible for writing the Terraform scripts to install the Kubernetes cluster and its dashboard on to the
OpenStack cloud.
 Implemented AWS CloudWatch which monitors Live Traffic, logs, Memory utilization, Disk utilization and
various other factors which are important for deployment.
 Setup Datadog monitoring across different servers and AWS services. Created Datadog dashboards for apps
and monitored real-time and historical metrics.
 Implemented Spark using Scala, utilizing DataFrames and the Spark SQL API for faster testing and processing of data.
 Automated Kafka deployment and configuration using Ansible, Terraform, and Helm charts to ensure
reproducible and consistent environment setups.
 Integrated Kafka into CI/CD pipelines to enable automated testing, deployment, and monitoring of Kafka-
based microservices and streaming applications.
 Implemented rolling upgrades for Kafka brokers, ensuring zero downtime during upgrades and patches.
 Configured the ELK stack in conjunction with AWS, using Logstash to output data to AWS S3.
 Involved in AWS EC2/VPC/S3/SQS/SNS based automation through Terraform, Ansible, Python, Bash
Scripts.
 Administered, deployed, and managed Red Hat, Ubuntu, Windows, and CentOS servers. Used Perl, Bash, and shell scripting; automated log backups using the Python Boto3 API; and managed batch jobs in Linux for automated data import/export and system automation programming.
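As referenced in the CloudWatch bullet above, the following is a minimal boto3 sketch of an EC2 CPU alarm; the instance ID, SNS topic, and thresholds are hypothetical placeholders.

    # Minimal sketch: a CloudWatch CPU alarm for an EC2 instance using boto3.
    # Instance ID, SNS topic ARN, and thresholds are hypothetical placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="ec2-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,                      # 5-minute datapoints
        EvaluationPeriods=2,             # alarm after two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder
    )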

Environment: Jenkins, Git, GitHub, AWS, JIRA, EC2, S3, RDS, EBS, Elastic Load Balancer, EKS, IAM, Lambda, CloudFront CDN, VPC, Elastic Beanstalk, Route53, Auto Scaling Groups, Docker, Docker Registry, Terraform, Shell Scripting, Ansible, Ansible Tower, Chef, Chef Client, Chef Provisioning, Nagios, ELK Stack, Datadog, CloudWatch, Nexus, Python, Bash Scripts, AppDynamics, Splunk, Kafka, Logstash, Elasticsearch, Azure, Kubernetes, Puppet, Azure Web Apps, Azure DNS, Azure Traffic Manager, Azure Virtual Machines, Grafana, Prometheus, Azure Cosmos DB, API Management.

Client: Elevance Health, VA July 2020 – Dec 2021


Role: Cloud Engineer
Responsibilities:

 Hands-on experience creating Azure Key Vaults to hold certificates and secrets, designing inbound and outbound traffic rules, and linking them with subnets and network interfaces to filter traffic to and from Azure resources (a minimal Key Vault access sketch follows this list).
 Deployed Azure Cloud services (PAAS role instances) into secure VNETS, subnets, and built Network
Security Groups (NSGs) to govern Inbound and Outbound access to Network Interfaces (NICs), VMs, and
Subnets.
 Well-versed in automating Infrastructure using Azure CLI, monitoring and troubleshooting Azure
resources with Azure App Insights, and accessing subscriptions with PowerShell.
 Configured and maintained Azure Storage firewalls and virtual networks, which use Virtual Network service endpoints to let administrators define network rules that allow traffic only from specific VNets and subnets, creating a secure network boundary for their data.
 Implemented and provided Single Sign-On (SSO) access to users using Software as a Service (SAAS)
applications such as Dropbox, Slack, and Salesforce.com using Azure Active Directory (AAD) in Microsoft
Azure.
 Using Git as an SCM tool with Azure DevOps, I built a local repo, cloned the repo, added, committed, and
pushed changes to the local repo, recovered files, set tags, and viewed logs.
 Developed continuous integration and deployment pipelines that automated builds and deployments to
many environments using VSTS/TFS in the Azure DevOps Project.
 Focused on using Terraform Templates to automate Azure IAAS VMs and delivering Virtual Machine Scale
Sets (VMSS) in a production environment using Terraform Modules.
 Involved in creating Jenkins pipelines that build all microservices into Docker images, store them in the Docker registry, and deploy them to Kubernetes; created and managed Pods with Kubernetes and ran Jenkins jobs for deployments using Ansible playbooks and Bitbucket.
 Involved in integrating Docker container-based test infrastructure into the Jenkins CI test flow and setting
up a build environment integrating with GIT and JIRA to trigger builds using WebHooks and Slave
Machines.
 Wrote Ansible playbooks, the entry point for Ansible provisioning, in which the automation is described through tasks in YAML format; built a Continuous Delivery (CD) pipeline and executed Ansible scripts to provision development servers.
 Managed Web apps, Configuration Files, mounting points, and packages, launching Azure Virtual Machines
(VMs) with Ansible Playbooks.
 Managed and used the GitHub Enterprise code management repo to import and manage various corporate
apps.
 Splunk experience includes installing, configuring, and troubleshooting the software, and monitoring server application logs with Splunk to detect production issues.
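As referenced in the Key Vault bullet above, the following is a minimal sketch of reading a secret with the Azure SDK for Python (azure-identity and azure-keyvault-secrets); the vault URL and secret name are hypothetical placeholders.

    # Minimal sketch: read a secret from Azure Key Vault with the Azure SDK for Python.
    # Vault URL and secret name are hypothetical placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    credential = DefaultAzureCredential()              # env vars, managed identity, or az login
    vault_url = "https://example-kv.vault.azure.net"   # placeholder vault
    secrets = SecretClient(vault_url=vault_url, credential=credential)

    db_password = secrets.get_secret("db-password")    # placeholder secret name
    print(db_password.name, "retrieved; value length:", len(db_password.value))

Reading secrets at runtime this way avoids committing credentials to pipelines or configuration files.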

Environment: Azure, Jenkins, WebLogic, Nexus, JIRA, Ansible, Oracle, Terraform, Kubernetes, Prometheus, Python, Maven, Java, Git, GitHub, Linux, ELK, LDAP, NFS, Splunk, PowerShell Scripts, Shell Scripts, Docker.

Client: Robert Bosch, India Sept 2017 – June 2020


Role: AWS DevOps Engineer/SRE
Responsibilities:

 Identified, analyzed, coordinated, and resolved environment and infrastructure issues to ensure the smooth running of applications.
 Configured, updated, and managed monitoring of the data center and clusters, as well as application and log monitoring.
 Built and configured Jenkins slaves and executors for parallel job execution.
 Installed and configured Jenkins and Rundeck for continuous integration and performed continuous deployments.
 Performed memory analysis on every build to resolve memory redundancy.
 Worked with Nexus, created central storage, and provided access to artifacts and their metadata, exposing build outputs to consumers such as other projects and developers.
 Established processes and tools to maintain code base integrity, including check-in validation rules and
branch/merge processes.
 Created Terraform scripts to move existing on-premises applications to the cloud, and used Terraform templates along with Ansible modules to configure EC2 instances.
 Worked with JIRA for monitoring projects, assigning permissions to users and groups for the project &
Created Mail handlers and notification schemes.
 Provide regular reports and dashboards to the engineering staff and Senior Leadership on the efficiency of
core systems and SRE response/resolution times.
 Configured, set up, and built interfaces, modifying flows to allow for reusability, scalability, and functionality.
 Responsible for User Management, Plugin Management and END-END automation of Build and Deploy
using Jenkins.
 Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, EC2, S3 buckets, and application configuration; these scripts create stacks or single servers, or join web servers to stacks (a minimal stack-creation sketch follows this list).
 Involved in writing various custom Ansible Playbooks for deployment orchestration and developed Ansible
Playbooks to simplify and automate day-to-day server administration tasks.
 Installed and configured the Chef server and workstation, and bootstrapped nodes using Knife. Also wrote Chef cookbooks and recipes to manage server configurations.
 Integrated Chef cookbooks into Jenkins jobs for CD framework and created roles, environments using Chef
handlers for different auto kickoff requirement jobs.
 Implemented Chef Recipes for deploying build on internal Data Centre Servers. Also, re-used and modified
same Chef Recipes to create a deployment directly into Amazon EC2 instances.
 Involved in building and maintaining highly available, secure, multi-zone AWS cloud infrastructure utilizing Chef with AWS CloudFormation and Jenkins for continuous integration.
 Assist with the development and implementation of DevOps SRE solutions for large-scale distributed web
applications across multiple tiers and data centers.
 Used Ansible to manage web applications, environment configuration files, users, mount points and
packages.
 Created a continuous integration system with full automation using Git, Maven, Jenkins, and Chef.
 Used Ant and Puppet scripts with Ivy to build and deploy the application.
 Configuration and administration of the Apache web server and SSL certificates.
 Incorporated the code quality tool FindBugs into Maven and Java projects.
 Set up Nagios for monitoring the infrastructure and used Nagios handlers, which act on service status with pre-defined steps/scripts.
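As referenced above, the following is a minimal boto3 sketch of creating a CloudFormation stack from a local template; the stack name, template file, and parameter values are hypothetical placeholders.

    # Minimal sketch: launch a CloudFormation stack from a local template with boto3.
    # Stack name, template file, and parameter values are hypothetical placeholders.
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("webserver-stack.yaml") as f:        # placeholder template file
        template_body = f.read()

    cfn.create_stack(
        StackName="web-tier-dev",                  # placeholder
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],     # needed if the template creates IAM resources
    )

    # Block until the stack finishes creating.
    cfn.get_waiter("stack_create_complete").wait(StackName="web-tier-dev")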

Environment: AWS EC2, Jenkins 1.5x, Rundeck 2.0.x, Red Hat 6.x/5.x, JIRA 6.x, Apache, ANT, Git, Chef, Nexus, Docker, Nagios 4.0.x/4.1.x.

Client: HCL, India April 2015 – Aug 2017


Role: Build and Release Engineer
Responsibilities:

● Maintained Git-based repositories, focusing on effective version control and branching models that enable
simultaneous code development across teams.
● Engineered automated build systems using CI/CD platforms like Jenkins and GitLab CI, aiming for quick
and consistent code compilation.
● Employed configuration management utilities such as Ansible and Chef to standardize and automate
environment setups across the development lifecycle.
● Architected automated deployment pathways, streamlining code releases from development stages right
through to production.
● Oversaw the establishment and maintenance of multiple development, testing, and production settings, along with associated rollback protocols.
● Integrated quality assurance platforms like SonarQube into CI/CD flows to uphold coding standards and
best practices.
● Instituted semantic versioning protocols, making software history transparent and rollback operations straightforward (a minimal version-bump sketch follows this list).
● Governed project dependencies through package managers such as Maven and npm to ensure build
consistency.
● Compiled exhaustive guides for build and release cycles, which serve as invaluable resources for team
members and newcomers alike.
● Conducted continuous surveillance of build and deployment activities, while delivering analytical reports
to enhance system efficiency.
● Utilized SonarQube for ongoing code quality monitoring, achieving a noticeable improvement in coding
standards.
● Orchestrated the full cycle of development, staging, and production setups to assure seamless code
deployments.
● Established automated test routines within CI pipelines, enhancing the frequency and reliability of code
integrations.
● Employed Grafana and Prometheus for real-time monitoring of application and infrastructure metrics,
sustaining a near-perfect uptime.
● Managed project dependencies with precision using specialized tools like Artifactory and NPM.
● Formulated contingency measures for reversing code deployments, significantly reducing downtime during
unforeseen deployment issues.
● Used Ajax on every page to dynamically display data without a page refresh.
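As referenced in the semantic versioning bullet above, the following is a minimal standard-library Python sketch of a version-bump step; the VERSION file name and layout are hypothetical assumptions.

    # Minimal sketch: bump a semantic version string (MAJOR.MINOR.PATCH) for tagging
    # releases. Standard-library only; the VERSION file is a hypothetical placeholder
    # containing a single line such as "1.4.2".
    import re
    from pathlib import Path

    VERSION_FILE = Path("VERSION")                 # placeholder

    def bump(version: str, part: str) -> str:
        """Return the next version for part in {'major', 'minor', 'patch'}."""
        major, minor, patch = (int(x) for x in
                               re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version).groups())
        if part == "major":
            return f"{major + 1}.0.0"
        if part == "minor":
            return f"{major}.{minor + 1}.0"
        return f"{major}.{minor}.{patch + 1}"

    current = VERSION_FILE.read_text().strip()
    VERSION_FILE.write_text(bump(current, "patch") + "\n")

A step like this typically runs in the CI pipeline just before tagging and publishing the build artifact.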

Environment: Docker, Cassandra, Sonar, SVN, Git, ANT, Maven, RHEL, Solaris, Unix/Linux, Jenkins, shell scripting.

Client: Alcatel-Lucent, India April 2013 – Mar 2015


Role: Systems Engineer
Responsibilities:

● Took the lead in developing RESTful APIs and optimizing databases, achieving a 20% performance boost.
● Conducted regular code reviews to maintain code quality and best practices, reducing bugs by 25%.
● Designed and maintained relational database schemas, ensuring data integrity and reducing redundancy.
● Proficiently profiled and optimized both application code and SQL queries, improving system
responsiveness.
● Worked closely with frontend developers and designers to provide seamless integration solutions.
● Implemented OAuth and JWT for API security, reducing unauthorized data access by 50% (a minimal token sketch follows this list).
● Created and managed unit and integration tests using frameworks like JUnit, leading to a 15% reduction in
bugs during the development phase.
● Used tools like New Relic to monitor system health, improving system uptime to 99%.
● Managed source code repository using Git, following a branching strategy that improved release cycles.
● Authored technical documents and API documentation, facilitating smoother onboarding for new
engineers.
● Streamlined Jenkins-based CI/CD pipelines, reducing the deployment cycle by 40% and halving production
errors.
● Spearheaded cybersecurity initiatives, resulting in a 90% reduction in security incidents.
● Engineered a resilient cloud architecture that reduced system downtime by 50% and improved operational
efficiency by 25%.
● Authored automation scripts for deployment and monitoring, reducing manual tasks by 60% and boosting
system stability by 30%.
● Managed the transition from on-premises to cloud, achieving an 80% reduction in system downtime and a
40% increase in performance.
● Mentored a team of five junior DevOps engineers, leading to a 25% improvement in team productivity and
a 15% reduction in production issues.
● Achieved and maintained a system uptime of 99.99%, setting new standards for reliability.
● Designed automated cloud server provisioning systems, reducing manual intervention by 70%.
● Optimized backend services like Tomcat and MongoDB, improving system performance by 30%.
● Enabled seamless integration with Google Cloud services, enhancing system reliability by 20%.
● Ensured 100% PCI DSS compliance, maintaining a flawless security record.
● Implemented CI/CD strategies that cut time-to-market in half, enabling more agile responses to business
needs.
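As referenced in the OAuth/JWT bullet above, the following is a minimal PyJWT sketch of issuing and verifying a short-lived HS256 token; the signing key and claims are hypothetical placeholders.

    # Minimal sketch: issue and verify a short-lived JWT with PyJWT (HS256).
    # The signing key and claims are hypothetical placeholders.
    import datetime
    import jwt  # PyJWT

    SECRET = "replace-with-a-real-secret"          # placeholder signing key

    def issue_token(user_id: str) -> str:
        claims = {
            "sub": user_id,
            "exp": datetime.datetime.now(datetime.timezone.utc)
                   + datetime.timedelta(minutes=15),
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def verify_token(token: str) -> dict:
        # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
        return jwt.decode(token, SECRET, algorithms=["HS256"])

    token = issue_token("user-123")
    print(verify_token(token)["sub"])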
