Unit 1 - DO

The document provides an overview of DevOps, emphasizing its importance in improving software delivery speed, reliability, and quality through enhanced collaboration between development and operations teams. It outlines the principles of DevOps culture, the adoption process, benefits, and compares it with Agile methodologies. Additionally, it introduces AWS as a cloud service provider, detailing its advantages, features, and various services that support DevOps practices.

PSNA COLLEGE OF ENGINEERING AND TECHNOLOGY, DINDIGUL – 624622.

(An Autonomous Institution Affiliated to Anna University, Chennai)

Material Courtesy of Mr. M.Kamaraj, AP

CS2V27 - Dev-Ops
UNIT 1 – INTRODUCTION TO DEVOPS
DEVOPS ESSENTIALS:
 DevOps refers to a combination of two processes, i.e., DEVelopment and
OPerationS, in a typical software company.
 DevOps has become an essential practice in software companies to improve their
software delivery speed, reliability, and quality.
 DevOps promotes a set of processes and methods from three departments:
 Development
 Operations
 Quality Assurance
 These departments communicate and collaborate for the development of software systems.

Definition: DevOps is the combination of practices and tools designed to increase an
organization's ability to deliver applications and services faster than traditional software
development processes.
Why DevOps is needed?
 Before DevOps, the development and operations teams worked in complete isolation.
 Testing and deployment were isolated activities done after the design and build
phases. Hence they consumed more time than the actual build cycles.
 Without DevOps, team members spend a large amount of their time on
testing, deploying and designing instead of building the project.
 Manual code deployment leads to human errors in production.
 Additionally, the coding and operations teams had separate timelines and were not
synchronized, causing further delays.
Why DevOps is important?
 Quicker deployment cycles
 Increase in product quality
 Faster innovation cycles
 Reduction in operational downtime
DevOps Culture:
 DevOps culture at its core means enhancing collaboration between development
and operations teams to increase shared responsibility and accountability when it
comes to software releases.
 There are a number of principles that make up a culture of DevOps and applying
them will help ensure that the implementation of such a culture will be truly
successful.
 Below, we will outline some of these principles:
 Start at the top.
 Invest in the right people for your team.
 Work towards common goals.
 Provide appropriate training and education.
 Embrace failure.
 Automate wherever possible.
 A key aspect of a DevOps culture is automation in order to develop and deploy
software more efficiently and reliably.
DevOps Adoption:
 Adoption of DevOps is driven by various steps.
 The steps are
 Change the culture of the work environment
 Prioritize learning
 Continuous integration
 Implement test automation
 Continuous delivery
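The continuous integration and test automation steps above can be sketched as a toy pipeline runner in Python. The stage names and the function under test here are hypothetical, for illustration only:

```python
# Minimal sketch of a CI pipeline runner (hypothetical stages).
def run_pipeline(stages):
    """Run each (name, stage) pair in order; stop at the first failure."""
    results = {}
    for name, stage in stages:
        ok = stage()
        results[name] = "passed" if ok else "failed"
        if not ok:          # fail fast, as a real CI server would
            break
    return results

# Hypothetical stages: the build always succeeds, the test checks a tiny function.
def build():
    return True

def add(a, b):
    return a + b

def test():
    return add(2, 3) == 5

if __name__ == "__main__":
    print(run_pipeline([("build", build), ("test", test)]))
```

A real CI server (such as Jenkins) does the same thing at larger scale: it runs stages in order and stops the pipeline at the first failing stage.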
Goals:
 To make simple processes increasingly programmable and dynamic.
 Fast delivery of product.
 Lower failure rate of new releases.
 Shortened lead time between fixes.
 Faster mean time to recovery.
 Increased net profit for the organization.
 To standardize the development environment.
 To reduce work in progress.
 To reduce operating expenses.
 To set up an automated environment.
Benefits:
 Faster software development cycle.
 Improved interoperability among the teams.
 Continuous releases and deployments.
 Early defect detection leading to quality software development.
 Improved customer experience and greater customer satisfaction.
 Increased productivity as a result of streamlined business processes.
 Fostering innovation within the organization.
 Greater employee satisfaction.
Comparison between Agile and DevOps:
1. Agile aims to develop software in small iterations and thus adapt to changing
customer needs. DevOps aims to deliver technology to business units in a timely
fashion and ensure the technology runs without interruption or disruption.
2. Agile adopts a rapid development approach. DevOps is not a rapid development
approach.
3. The focus of Agile development is merely on software development and release.
The focus of DevOps is not only on software development and release but also on
safe deployment in the working environment.
4. In Agile development, every team member has design, development and coding
skills; any available team member should be able to do what's required for
progress. DevOps, on the other hand, assumes there will be separate development
and operations teams that communicate on a frequent and regular basis.
5. Communication in Agile development is informal and takes the form of daily
meetings. DevOps communication involves specifications and documents, is
formal, and does not occur on a daily basis.
6. The Agile team is small in nature. Large team sizes and multiple teams are
required in DevOps.
7. Agile is about software development. DevOps is about software development and
management.
8. Documentation is not very important in Agile development. Documentation is
very important in DevOps.
9. Agile development teams may choose to use certain automation tools, but no
specific tools are required for an Agile team. DevOps absolutely depends on
automation to make everything happen smoothly and reliably; certain tools are
an integral part of DevOps.
10. Agile is less flexible. DevOps is more flexible.
11. Agile has limited scope. DevOps has a broader scope.
DevOps Tools:
1. Confluence: A knowledge hub for DevOps teams that streamlines communication
and information sharing.
2. JIRA: Enables efficient task tracking, sprint planning and issue management,
all crucial for DevOps workflows.
3. Git: Enables collaborative code management and supports seamless integration
and deployment within DevOps pipelines.
4. DC/DS (Data Center / Data Services): Ensures reliable and efficient data
management for DevOps applications.
5. Docker: Enhances portability and scalability in DevOps workflows.
6. AWS (Amazon Web Services): Provides a scalable and reliable infrastructure to
support diverse DevOps needs.
7. Chef: Ensures consistent and repeatable infrastructure setups and facilitates
streamlined DevOps operations.
8. Ansible: Promotes efficient automation across diverse IT environments in
DevOps practices.
9. Kubernetes: Orchestrates deployment, scaling and operations, providing a
robust platform for container management in DevOps workflows.
10. Datadog: Provides real-time visibility into infrastructure performance and
application behavior in DevOps environments.
11. Splunk: Allows DevOps teams to gain insights from machine-generated data for
troubleshooting and optimization.
12. Nagios: Tracks system metrics, network devices and server health, ensuring
proactive issue resolution.
13. Codeship: Enables automated testing and deployment for streamlined releases
in DevOps pipelines.
14. Jenkins: Allows customizable and automated release processes in DevOps
workflows.
15. JUnit: Ensures code reliability and facilitates test-driven development
within DevOps practices.
16. Selenium: Allows DevOps teams to conduct efficient and scalable browser-based
tests, ensuring application functionality across various platforms and browsers.
17. Maven: Manages dependencies and facilitates project management within DevOps
workflows.
18. SBT (Scala Build Tool): Provides efficient compilation, testing and dependency
management in DevOps environments.
DevOps Architecture:
 The DevOps architecture consists of the following phases:
1. Plan: In this phase, all the requirements of the project are gathered. The
schedule and cost of the project is estimated approximately.
2. Code: In this phase the code is written as per the requirements. Entire project is
divided into smaller units. Each unit can be coded as a module.

Fig: DevOps architecture


3. Build: In this phase, all the units are built using tools such as
Maven or Gradle, and the code is submitted to a common code repository.
4. Test: At this stage, all the units are tested to find whether any bugs exist in the
code. Testing can be done using tools like Selenium, JUnit and pytest. Some
important testing techniques, such as acceptance testing, security testing,
integration testing and performance testing, are carried out.
5. Integrate: In this phase, a new feature is added to the existing code and testing
is performed. Continuous Development is achieved only because of continuous
integration and testing.
6. Deploy: In this stage, the code is deployed in the client's environment. Some of
the examples of the tools used for Deployment are AWS, Docker.
7. Operate: At this stage, the version can be utilized by the users. Operations are
performed on the code if required. Examples of the tools used are
Kubernetes and OpenShift.
8. Monitor: At this stage, the version is monitored at the client's workplace.
During this phase, developers collect data, monitor each function and spot
errors such as low memory or broken server connections. The DevOps workflow is
observed at this level based on data gathered from consumer behavior,
application efficiency and other sources. Examples of the tools used for
monitoring are Nagios and the Elastic Stack.
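The Test phase above can be illustrated with Python's built-in unittest module, a rough stand-in for tools like JUnit and pytest mentioned in the text; the function under test is hypothetical:

```python
import unittest

# Hypothetical unit under test: one small module of the project.
def discount_price(price, percent):
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    # Run the tests without exiting the interpreter.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscountPrice)
    unittest.TextTestRunner().run(suite)
```

In a DevOps pipeline, a test runner like this is executed automatically on every commit, so bugs are found in the Test phase rather than in production.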
DevOps Lifecycle:
 The DevOps lifecycle phases are as follows-
1. Continuous Development: In this phase, the planning and coding of the software is
done. A version control mechanism is used during this phase.
2. Continuous Integration: In this phase, developers are required to commit
changes in the source code frequently. The code supporting new functionality is
continuously integrated with the existing code. Therefore, there is continuous
development of software.
3. Continuous Testing: In this phase, the software is continuously tested for bugs.
Automated testing is often preferred.
4. Continuous Monitoring: By continuous monitoring, we can get notified before
anything goes wrong. We can gather many performance measures, including
CPU and memory utilization, network traffic, application response times, error
rates and others.
5. Continuous Feedback: In this DevOps stage, the software automatically sends
out information about performance and issues experienced by the end-user. It's
also an opportunity for customers to share their experiences and provide
feedback.
6. Continuous Deployment: In this phase, the code is deployed to the production
servers. Also, it is essential to ensure that the code is correctly used on all the
servers. The deployment process takes place continuously in this DevOps life
cycle phase.
7. Continuous Operations: This is the last phase. It involves automating the
application's release and all its updates, which helps keep cycles short and
gives developers more time to focus on development.
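The continuous-monitoring phase above, which gathers measures such as CPU and memory utilization and error rates, can be sketched as a simple threshold check over metric samples; the thresholds and readings here are made up for illustration:

```python
# Sketch of a monitoring check: compare metric samples against alert thresholds.
# Threshold values and sample readings are hypothetical.
THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0, "error_rate": 0.05}

def check_metrics(samples, thresholds=THRESHOLDS):
    """Return the list of metric names that exceed their alert threshold."""
    return [name for name, value in samples.items()
            if name in thresholds and value > thresholds[name]]

if __name__ == "__main__":
    reading = {"cpu_percent": 92.5, "memory_percent": 71.0, "error_rate": 0.01}
    print(check_metrics(reading))  # only cpu_percent exceeds its threshold
```

Real monitoring tools such as Nagios apply the same idea continuously, raising alerts before a resource problem turns into an outage.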
INTRODUCTION TO AWS:
Basics:
 Cloud computing is a term that refers to storing and accessing data over the
internet.
 It does not store any data on the hard disk of our personal computer.
 In cloud computing, we can access data from a remote server.
 AWS stands for Amazon Web Services. It is a comprehensive cloud service
platform provided by Amazon.
 It includes services for storage, databases, analytics, networking, mobile,
development tools and enterprise applications.
 AWS manages and maintains hardware and infrastructure, saving organizations
and individuals the cost and complexity of purchasing and running resources
on-site. These resources may be accessed for free or on a pay-per-use basis.
 It provides different services of cloud such as Infrastructure as a Service (IaaS),
Platform as a Service (PaaS) and packaged Software as a Service (SaaS).

 IaaS: Infrastructure as a service means delivering computing infrastructure
on demand. Under this service, the user purchases the cloud infrastructure
including servers, networks, operating systems and storage using
virtualization technology. These services are highly scalable. IaaS is used by
network architects.
Examples: AWS, Microsoft Azure.
 PaaS: Platform as a Service is a model in which a third-party
provider supplies both hardware and software tools to the clients. It
provides elastic scaling of applications, which allows developers to build
applications and services over the internet; the deployment models
include public, private and hybrid. PaaS is used by developers.
Examples: Google App Engine, Heroku.
 SaaS: Software as a Service is a model that hosts software to make it available
to clients. It is used by end users.
Examples: Google Apps.
Advantages of AWS:
 AWS allows organizations to use familiar programming models, any
operating systems, any kind of databases and architectures.
 AWS allows the capacity of resources to be increased automatically as per
requirements so that the application is always available.
 It is a cost-effective service that allows you to pay only for what you use.
 AWS requires no upfront investment, no long-term commitment and minimal expense
compared to traditional IT infrastructure, which requires a huge investment.
 AWS provides end-to-end security and privacy to customers.
Features of AWS:
1. Flexibility:
 The AWS always allows the user to use the operating system, programming
languages and web application platforms that the user is comfortable with.
 Flexibility means that migrating legacy applications to the cloud should be easy.
 Instead of re-writing the applications to adopt new technologies, we just need to
move the applications to the cloud and tap into advanced computing capabilities.
2. Cost effective:
 Instead of purchasing and creating our own expensive servers, we can use AWS
where we need to pay only for the tools and services that we use.
 AWS offers a pay-as-you-go pricing method, which means that we only pay for
the services that are needed and have been used for a period of time.
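The pay-as-you-go idea can be illustrated with a small cost calculation; the hourly rates below are invented for the example and are not real AWS prices:

```python
# Hypothetical pay-as-you-go bill: pay only for hours actually used.
RATES_PER_HOUR = {"small_vm": 0.02, "large_vm": 0.08}  # made-up rates

def monthly_cost(usage_hours, rates=RATES_PER_HOUR):
    """Sum the cost over all resources, given hours of use per resource."""
    return round(sum(rates[r] * h for r, h in usage_hours.items()), 2)

if __name__ == "__main__":
    # A VM used for 100 hours costs far less than one running the whole
    # month (about 720 hours): 2.0 versus 14.4 in this made-up pricing.
    print(monthly_cost({"small_vm": 100}))
    print(monthly_cost({"small_vm": 720}))
```

The contrast with owning a server is the point: with owned hardware the 720-hour cost is paid whether or not the machine is used.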
3. Scalable and elastic:
 AWS is scalable because the AWS auto scaling service automatically increases
the capacity of constrained resources as per requirements so that the application
is always available.
 Elasticity is one of the AWS advantages. Both upsizing and downsizing of resources
are possible with AWS.
4. Secure:
 AWS maintains confidentiality, integrity and availability of the user's data.
 Each service provided by the AWS cloud is secure.
 Personal and business data can be encrypted to maintain data privacy.

5. High performance:
 High performance computing is the ability to process massive amounts of data at
high speed.
 AWS offers a high-performance computing service so that the companies need
not worry about the speed.
Companies using AWS:
 Netflix
 Instagram
 Linkedin
 Dropbox
 Johnson & Johnson
 Capital One
 Adobe
 Airbnb
 AOL
 Hitachi
Applications of AWS:
 Building mobile and social applications.
 Storage, backup and disaster recovery.
 Academic computing.
 Website hosting.
 Media sharing such as image or video.
 Content delivery and media distribution.
 Social networking.
 AWS has been serving many gaming studios.
Services offered by AWS:
1. Amazon Elastic Compute Cloud (EC2):
 EC2 stands for Elastic Compute Cloud.
 Amazon EC2 is one of the most used and most basic services on Amazon.
 EC2 is a machine with an operating system and hardware components of the
developer's choice. But the difference is that it is totally virtualized. The
developer can run multiple virtual computers on a single physical machine.
 EC2 enables on-demand, scalable computing capacity in the AWS cloud.
 Only pay for what we use.
 EC2 is secure.
2. Amazon Simple Storage Service (S3):
 S3 stands for Simple Storage Service.
 Amazon simple storage service is a scalable, high speed, web-based cloud
storage service.
 Data can be transferred to S3 over the public internet via access to S3
Application Programming Interface (API).
 S3 is used to store and retrieve any amount of data from anywhere on the web.
 It is object-based storage, which means we can store images, Word files, PDF
files, etc.
 The files stored in S3 can range from 0 bytes to 5 TB.
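The size limit above (objects from 0 bytes to 5 TB) can be expressed as a small validation helper; this function is our own sketch, not part of any AWS SDK:

```python
# S3 allows objects from 0 bytes up to 5 TB (per the text above);
# we assume binary terabytes here (1 TB = 1024**4 bytes).
MAX_OBJECT_BYTES = 5 * 1024 ** 4

def fits_in_s3_object(size_bytes):
    """Return True if a payload of this size fits in a single S3 object."""
    return 0 <= size_bytes <= MAX_OBJECT_BYTES

if __name__ == "__main__":
    print(fits_in_s3_object(0))                     # empty objects are allowed
    print(fits_in_s3_object(MAX_OBJECT_BYTES + 1))  # over the 5 TB cap
```

A check like this would run client-side before an upload, so oversized payloads are split or rejected early instead of failing mid-transfer.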

3. Amazon Virtual Private Cloud(VPC):
 VPC stands for Virtual Private Cloud.
 Amazon VPC provides a logically isolated area of the AWS cloud which can be
used as a private network, but it is virtual. It resembles a traditional
network.
 By using Amazon's VPC service we can have complete control over our virtual
networking environment, including the selection of an IP address range, the
creation of subnets and the configuration of route tables and network gateways.
 Additionally, we can easily customize the networking configuration for a VPC.
For instance, we can create a subnet for our web servers which is accessible
from the internet.
 Amazon provides multiple layers of security to the VPC.
 Each VPC is isolated from another VPC.
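The IP-range and subnet-creation ideas above can be sketched with Python's standard ipaddress module; the 10.0.0.0/16 range is an example private address block for a hypothetical VPC:

```python
import ipaddress

# Example private address range chosen for a hypothetical VPC.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC range into /24 subnets; the first could host the web servers.
subnets = list(vpc.subnets(new_prefix=24))
web_subnet = subnets[0]

if __name__ == "__main__":
    print(len(subnets))   # 256 /24 subnets fit inside a /16
    print(web_subnet)     # 10.0.0.0/24
```

This is exactly the kind of address planning a VPC configuration requires: pick a range, then divide it into subnets for different tiers of the application.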
4. Amazon Relational Database Services(RDS):
 RDS stands for Relational Database Services.
 AWS RDS is a relational database cloud service using which we can store data as
tables, records and fields on cloud.
 It also helps in relational database management tasks like data migration,
backup, recovery and patching
 Amazon RDS supports PostgreSQL, MySQL, Maria DB, Oracle, SQL Server and
Amazon Aurora.
5. Amazon CloudFront:
 Amazon CloudFront is a fast Content Delivery Network (CDN) service.
 AWS CloudFront is a globally distributed network which securely transfers
content such as software, SDKs, videos, etc. to clients with high transfer
speed.
 It provides high security with the 'content privacy' feature.
 When a user requests content that is being served with CloudFront, the request
is routed to the nearest edge location that gives the lowest latency. Edge
locations are the network of data centers distributed worldwide through which
content is delivered. This helps the user to access content with the best possible
performance. The Amazon cloud architecture functions as follows:
 If the content is already cached in the edge location, CloudFront delivers it
immediately with the lowest latency possible.
 If the content is not present in the edge location, CloudFront retrieves it
from the origin.
 Amazon's CDN offers a simple, pay-as-you-go pricing model with no upfront fees
or required long-term contracts.
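The routing logic described above, serve from the edge cache on a hit and otherwise fetch from the origin and cache the result, can be modeled with plain dictionaries; this is a toy model, not real CloudFront behavior:

```python
# Toy model of CloudFront edge behavior: cache hit versus fetch from origin.
def get_content(path, edge_cache, origin):
    """Return (content, source): 'edge' on a cache hit, 'origin' on a miss."""
    if path in edge_cache:          # already cached at the edge location
        return edge_cache[path], "edge"
    content = origin[path]          # miss: retrieve from the origin server
    edge_cache[path] = content      # cache it for subsequent requests
    return content, "origin"

if __name__ == "__main__":
    origin = {"/video.mp4": b"..."}
    edge = {}
    print(get_content("/video.mp4", edge, origin)[1])  # first request: origin
    print(get_content("/video.mp4", edge, origin)[1])  # now cached: edge
```

The second request is the latency win: it is answered entirely from the nearby edge cache without a round trip to the origin.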
Global Infrastructure of AWS:

Fig: Components of AWS global infrastructure

 Amazon Web Services is used globally by millions of users. The global
infrastructure is divided into small geographical areas.
 The AWS cloud spans 99 Availability Zones within 31 geographic regions around the
world, with announced plans for 15 more Availability Zones and 5 more AWS
Regions in Canada, Israel, Malaysia, New Zealand and Thailand.
1. Region:
 It is a physical location or geographical area.
 A region is a collection of data centers which are completely isolated from other
regions.
 A region consists of two or more availability zones connected to each other
through links.
 For instance, the Mumbai region consists of two availability zones.
2. Availability zones:
 An Availability Zone (AZ) is a grouping of one or more discrete data centers that
provide applications and services in an AWS region.
 Availability zones are connected through redundant and isolated metro fibers.

 If one zone fails to function correctly, the other zone will operate fine without any
effect.
 All availability zones in an AWS region are connected through low-latency and
high-throughput networking channels.
 Availability zones provide customer application and database operating
environments that are more scalable and fault tolerant.
3. Edge locations:
 Edge locations are the endpoints for AWS used for caching content.
 Edge location is not a region but a small location that AWS has.
 It is used for caching the content.
 Edge locations host CloudFront, Amazon's Content Delivery Network (CDN).
 Edge locations are mainly located in most major cities to distribute
content to end users with reduced latency.
 For example, if some user accesses your website from New York City, then this
request would be redirected to the edge location closest to New York where
cached data can be read.
 Currently, there are over 150 edge locations.
4. Regional edge cache:
 A regional edge cache has a larger cache than an individual edge location.
 Regional edge cache lies between CloudFront Origin servers and the edge
locations.
 When a user requests data that is no longer available at the edge
location, the edge location retrieves the cached data from the regional edge
cache instead of the origin servers, which have higher latency.
INTRODUCTION TO GCP:
 GCP stands for Google Cloud Platform.
 Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing
services that runs on the same infrastructure that Google uses internally for its end-
user products, such as Google Search, Gmail, Google Drive and YouTube.
 Google Cloud Platform provides infrastructure as a service, platform as a service and
serverless computing environments.
 In April 2008, Google announced App Engine, a platform for developing and hosting
web applications in Google-managed data centers.
 Google Cloud Platform is a set of cloud computing services provided by Google that
allows us to store, manage and analyze data.
 It is used for developing, deploying and scaling applications on Google's
environment.
 The services of GCP can be accessed by software developers, cloud administrators
and IT professionals over the internet or through a dedicated network connection.
Features:
1. Tools and services:
 Google has many innovative tools for handling cloud data.
 Google Cloud Platform offers the same core data storage and virtual machine
functionality of AWS and Azure or any other cloud provider. Google's strength
lies in big data processing tools, Artificial Intelligence (AI) and machine learning
initiatives and container support.
 Google's BigQuery and Dataflow bring strong analytics and processing
capabilities for companies that work heavily with data, while Google's
Kubernetes container technology allows for container cluster management.
 GCP offers a number of physical, logistical and human-resource-related
components, such as wiring, routers, switches, firewalls and load-balancing services.
2. Speed:
 Google Cloud provides its Google Cloud and Google App customers network
speeds of up to 10 Tbps.
 The network has connections throughout the world in the United States, Europe,
main cities in Japan, major hubs in Asia and much more.
3. Cost effective:
 Google Cloud offers a monthly pricing plan. GCP's pricing model
allows you to pay only for what you use.
 Google Cloud pricing also provides discounts.
4. Security:
 GCP provides multi-level security options to protect resources, such as assets,
network and OS-components.
Advantages:
 GCP offers a first-rate security level. A big advantage is that the company constantly
releases updates and adds security improvements.
 In comparison with competitors like Amazon Web Services, DigitalOcean and
Microsoft Azure, GCP's advantage is that you do not pay any extra money for
unused services. Google enables users to get Google Cloud hosting at the cheapest
rates.

 Once the account is configured on GCP, it can be accessed from anywhere. That
means that the user can use GCP across different devices from different places.
 Google always keeps a backup of the user's data with built-in redundant backup
integration.
 Migrating to Google's Cloud Platform eliminates the need to buy on-premises
infrastructure.
 There is Global Presence of Data Centers, Networks and Cloud Services by GCP.
Services using GCP:
1. Compute:
 Compute engine allows us to create Virtual Machines (VM), allocate CPU and
memory and choose from SSD (Solid State Drives) or HDD (Hard Disk Drive)
storage. It allows you to create a workstation or computer virtually and handle
all its details.
 The various services under this are as follows:
 App Engine
 Compute Engine
 Kubernetes Engine
 Cloud Functions
 Cloud Run
2. Networking:
 GCP Provides Networking services tools to manage and scale your network
easily.
 It offers following suit of services -
 Virtual Private Cloud (VPC): VPC provides the flexibility to scale and
control how workloads connect regionally and globally. By using VPC you
can bring your own IP addresses to Google's network across all regions.
 Cloud DNS: Google Cloud DNS (Domain name system) is scalable, reliable
and managed authoritative DNS service. Cloud DNS is also programmable.
 Premium Tier Network: Premium Tier is recommended for high
performance routing. Users can also take advantage of the global fiber
network, with the globally distributed Points of Presence (PoP).
 Cloud Load Balancing: Cloud Load Balancing is used to distribute workload
across different computing resources to balance the overall system
performance.
3. Storage:
 Google Cloud Storage is unified object storage. The GCP has Buckets, Objects
and various storage classes. It is an online data storage web service that Google
provides to its users to store and access data from anywhere.
 Cloud SQL: It is a web-service that enables users to create, manage and use
relational databases stored on Google Cloud servers.
 Cloud BigTable: BigTable is a fully managed wide-column and key-value NoSQL
database service for large analytical and operational workloads.
4. Big Data:
 Big data is the massive amount of data available to organizations that because of
its volume and complexity is not easily managed or analyzed by many business
intelligence tools.

 BigQuery is a data warehouse that processes and analyzes large data sets using
SQL queries. These services can capture and examine streaming data for real-
time analytics.
 Google Cloud Dataproc is a managed Apache Hadoop and Spark service for
batch processing, querying, streaming and machine learning.
 Cloud Dataflow is a serverless stream and batch processing service. Users can
build a pipeline to manage and analyze data in the cloud, while Cloud Dataflow
automatically manages the resources.
5. Cloud AI:
 Google Cloud Machine Learning Engine is useful to run complex ML models. It is
scalable across different machines. It supports the increased performance of
linear algebra computations.
 AutoML enables developers with limited machine learning expertise to build
custom ML models in minutes. It is a cloud-based ML platform and uses a
no-code approach with a set of prebuilt models available via a set of APIs.
INTRODUCTION TO AZURE:
 Microsoft's cloud computing platform, Azure, was introduced at the Professional
Developers Conference (PDC) in October 2008 under the codename “Project Red
Dog”.
 Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud
computing platform.
 It provides a range of capabilities, including Software as a Service (SaaS), Platform as
a Service (PaaS), and Infrastructure as a Service (IaaS).
 It provides a broad range of cloud services, including compute, analytics, storage
and networking.
Features:
1. Unique storage system: Azure has multiple delivery points and data centers, due to
which content delivery to the user is faster. Also, Azure allows users to store data
in a fast and reliable environment.
2. Flexibility: Azure is flexible. That means it provides a wide range of services,
including virtual machines, storage and networking, allowing businesses to build
custom solutions that meet their specific needs. This flexibility allows organizations to
create and deploy applications quickly and easily, improving time to market and
providing a competitive advantage.
3. Scaling: Azure allows us to effortlessly scale our infrastructure up or down as per
our requirements.
4. Security: Data security and compliance are top priorities for any business and
Microsoft Azure provides a range of features and services to help organizations
protect their data in the cloud.
5. Cost effective: With Azure, there are no upfront costs for hardware or infrastructure,
and organizations only pay for the services they use. This means that businesses
can reduce their capital expenditures and operational expenses, while still having
access to the latest technology and features.
6. Strong support in analytics: Microsoft Azure is equipped with built-in support that
analyzes data and key insights. The service provides features such as Cortana
Analytics, Stream Analytics, Machine Learning, and SQL services. These features will
help to enhance customer service and make informed decisions.

Services using Azure:
1. Compute domain:
 The Azure compute domain provides the hosting model for the computing
resources on which our applications run.
App Service:
 This service allows hosting web apps, mobile apps with back ends,
automated processes and RESTful APIs.
 It is a managed PaaS offering.
Virtual Machine:
 It is an IaaS service.
 Virtual Machines, or VMs, are software emulations of physical computers.
They include a virtual processor, memory, storage and networking
resources. They host an operating system (OS) of our choice and run
software just like a physical computer, and by using a remote desktop
client, we can use and control the virtual machine.
Functions:
 It is a serverless computing service.
 It is a service that allows developers to quickly create and deploy small
pieces of back-end code, or “functions”, without having to worry about the
underlying infrastructure.
 Users can upload applications with multiple functions.
 Azure Functions can be written in a variety of languages, including C#, F#,
Node.js, Python, and PowerShell.
2. Storage:
 Microsoft Azure provides highly available, durable, scalable and redundant
storage. To use any storage type in Azure, we first have to create an account in
Azure. After creating an account, we can transfer data to or from services in your
storage account. We can create a storage account to store up to 500 TB of data
in the cloud. Following are some storage options used in Azure-
Blob Storage:
 BLOB is an acronym for Binary Large Object.
 Azure Blob storage is a service that stores unstructured data in the cloud
as objects/blobs.
 Blob storage can store any type of text or binary data, such as a document,
media file, or application installer.
 Blob storage is also referred to as object storage.
Queue Storage:
 Queue storage in Azure is a fully managed, cloud-based service for storing
and retrieving large numbers of messages.
 It is used to build asynchronous, loosely coupled, scalable and reliable
applications by allowing communication between microservices.
 A single queue message can be up to 64 KB in size, and a queue can
contain millions of messages, up to the total capacity limit of a storage
account.
File Storage:
 File Storage in Azure is a fully managed, cloud-based file storage service
that allows applications to access and store files using the standard Server
Message Block (SMB) protocol.
 It provides a fully managed network file share that can be accessed from
anywhere and by multiple concurrent clients.
 An account can contain an unlimited number of shares and a share can
store an unlimited number of files, up to the 5 TB total capacity of the file
share.
3. Database:
 Using the various database services from Azure, users can easily deploy one or
more databases to virtual machines or applications. The core database services
available in Microsoft Azure revolve around the following five offerings:
Cosmos DB, MySQL, PostgreSQL, MS SQL and SQL Managed Instance.
Cosmos DB:
 Azure Cosmos DB is a globally distributed, multi-model database service
that supports various data models and query languages.
 It is designed for building highly scalable, globally distributed applications
that require low latency and high availability.
 It is a NoSQL data store that is available in Azure.
MySQL:
 Azure Database for MySQL is a relational database service powered by the
MySQL community edition.
 Azure Database for MySQL is cost effective and easy to set up, operate and
scale.
MSSQL:
 It is a Microsoft SQL service in the cloud.
 It is a fully managed PaaS database engine that handles the tasks of
upgrading, patching, backups and monitoring.
 MS SQL has artificial intelligence features that are updated often.
4. Networking:
 The two important features in the network services of Azure are:
 Auto scaling is used in the cloud to scale the number of virtual machine
instances out or in according to some condition. For instance, if CPU usage
is heavy we can increase the number of virtual machine instances, and if it
drops below some limit we can reduce them.
 Load balancing is a feature that balances the traffic between different
servers. It divides the traffic between the servers, and if one of the
machines stops responding it directs that machine's traffic to a different
server. In the cloud, we can use the load balancer to balance the traffic
between our virtual machines.
Comparison among AWS, GCP and Azure:
1. Vendor: AWS is launched by Amazon; GCP is launched by Google; Azure is launched by Microsoft.
2. Object storage service: Amazon Simple Storage Service (S3) in AWS; Google Cloud Storage in GCP; Blob Storage in Azure.
3. Virtual networking: Amazon Virtual Private Cloud (VPC) in AWS; Virtual Private Cloud in GCP; Virtual Networks (VNets) in Azure.
4. Instance purchase models: On-demand, Reserved and Spot in AWS; On-demand and sustained-use in GCP; On-demand and short-term commitments in Azure.
5. Billing: AWS charges its users per hour on a pay-as-you-go model; GCP charges its customers per minute; Azure charges its customers per minute.
6. Search service: Amazon CloudSearch in AWS; GCP has no similar service; Azure Search in Azure.
7. Platform as a Service: Elastic Beanstalk in AWS; Google App Engine in GCP; Cloud Services in Azure.
8. Cloud data warehouse: Redshift in AWS; BigQuery in GCP; SQL Data Warehouse in Azure.
9. Serverless computing: Lambda in AWS; Cloud Functions in GCP; Azure Functions in Azure.
10. Analytics: Amazon Kinesis in AWS; Cloud Dataflow and Cloud Dataprep in GCP; Azure Stream Analytics in Azure.
11. Website: aws.amazon.com; cloud.google.com; azure.microsoft.com
VERSION CONTROL SYSTEM:
A version control system (VCS) is a tool that helps manage changes to files,
enabling collaboration and keeping track of different versions of a project.
GIT:
 Linus Torvalds created Git in 2005 for the development of the Linux kernel.
 Git is a version control system.
 Git helps you keep track of code changes.
 Git is used to collaborate on code.
Features of Git:
 It is a free and open source tool.
 It keeps track of any addition, modification and deletion of any file in the project
repository.
 It creates backups.
 Creating branches and making changes on particular branches is easy.
 This tool is used for project management in distributed environments.
 It is scalable.
 Git allows users from all over the world to perform operations on a project remotely.
Thus it helps in non-linear development.
 It is secure. Git keeps a record of all the commits done by each collaborator
in the developer's local copy.
Git Workflow:
 There are two types of files in git-
1. Untracked file: Files that are in your working directory, but not added to the
repository
2. Tracked file: These files are those that Git knows about and are added to the
repository
 When we add files to the empty repository they are all untracked. To get Git
to track them, we need to stage them, or add them to the staging environment. Then,
using the git commit command, we can add them to the repository. Let us
understand these terms.
 Working Directory: The working directory is a directory which we create for
storing the files in our local file system. We can add, modify or delete the files.
On completion of the desired tasks, these files are added to the
staging area before being posted to the repository.
 Staging Area: Before sending the file to the repository, the file needs to be stored
in the staging area. Using the git add command the file is stored in the staging
area. Staged files are files that are ready to be committed to the repository.
 Repository: In Git, a repository is a data structure used by Git to store
metadata for a set of files and directories. It contains the collection of the files as
well as the history of changes made to those files, so that any change to the
files is recorded. Version control and management of the files and
directories take place in the repository. For sending the files from the staging
area to the repository, the git commit command is used.
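The three areas can be traced in one short Git Bash session. This is a minimal sketch; the directory and the file name notes.txt are placeholders, not from the notes above.

```shell
# Working directory -> staging area -> repository, observed via git status.
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.name "Demo"              # placeholder identity for the sketch
git config user.email "demo@example.com"

echo "hello" > notes.txt                 # exists only in the working directory
git status --short                       # "?? notes.txt" -> untracked

git add notes.txt                        # moved to the staging area
git status --short                       # "A  notes.txt" -> staged

git commit -q -m "add notes.txt"         # recorded in the repository
git status --short                       # no output -> working tree clean
```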
Installing Git:
 To use Git, we have to install it on our computer.
 To download the Git installer, visit Git's official site and go to the download page.
 The link for the download page is https://git-scm.com/downloads.
 Choose the appropriate download option based on your operating system.
 Installation procedure is typical, just click on Next button by accepting default
options.
 On completing the Git setup, you can launch Git Bash: Git Bash is an application
that provides Git command line experience on the operating system.
 Basically, it is a command-line shell for enabling Git with the command line in the
system.
 We can check the version of a Git using the command
$ git --version
 Then the version of installed Git will be displayed.
Basic Commands:
1. Configuration:
 After installing the Git, the first step is to configure it with user name and email
address.
 Following commands are used on Git Bash to configure the Git.
$ git config --global user.name "your_name"
$ git config --global user.email "your_email"
 Following screenshot illustrates these commands
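A runnable form of the two configuration commands, with placeholder values supplied (the name and email shown are illustrative; HOME is redirected to a scratch directory so the real global configuration is left untouched):

```shell
# Set the global Git identity, then read it back.
export HOME=$(mktemp -d)                 # sandbox the global config for this sketch
git config --global user.name "Jane Doe"
git config --global user.email "jane@example.com"

git config --global user.name            # prints: Jane Doe
git config --global user.email           # prints: jane@example.com
```

Running the same commands without a value, as here, prints the currently configured value rather than setting it.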
2. Creating a Repository:
 Repository means a storage space or a directory in which we can keep our project
files.
 We will use the git init command to create a repository.
 It is also called a repo.
Step 1: Create a folder with some suitable name. I have created a folder GitDemo
using command prompt.
$ mkdir GitDemo
Step 2: Now switch to GitDemo directory using the command
$ cd GitDemo
Step 3: Issue the init command as
$ git init
 The empty .git directory will be created.
 Following screenshot illustrates it
 Thus we have created our first git repository.
 Now this folder named GitDemo will be tracked by Git.
 Git creates a hidden folder .git to keep track of the changes.
 Note that by default the branch named master is created.
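The three steps can be replayed end to end; here the GitDemo folder is created under a temporary directory so the sketch leaves no traces behind.

```shell
# Steps 1-3: make a folder, enter it, initialise an empty Git repository.
base=$(mktemp -d)
cd "$base"
mkdir GitDemo
cd GitDemo
git init -q

ls -a        # the hidden .git folder is listed
```

Note that on newer Git versions the default branch may be named main instead of master, depending on the init.defaultBranch setting.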
3. Adding File:
 We have created the first local Git repo.
 But note that it is empty.
 Now we will add some files to this repository using some local text editor.
 For instance, we will add a file named welcome.html
welcome.html
<html>
<head>
<title >First File</title>
</head>
<body>
<h1>Welcome!!!</h1>
</body>
</html>
 We can go to the terminal and list the files in our current working directory using ls
command.
 The git status command displays the state of the working directory.
 Hence we will use the git status command.
 From the above screenshot, we can clearly observe that it is aware of our
welcome.html file.
 But this file is an untracked file.
4. Checking Status:
 The git status command allows you to see which files are staged, modified but not
yet staged and completely untracked.
 We can add the file welcome.html into the staging area using the git add
command.
$ git add welcome.html
 We can then using the git status command to check the status of welcome.html
file.
 Following screenshot illustrates it.
 Thus the file welcome.html file is staged.
 We can add more files to the staging area.
 For instance, let us create one more file named contact.html as follows-
contact.html
<html>
<head>
<title >Second File </title>
</head>
<body>
<p>Phone No: 1234567890</p>
<p>Email: [email protected] </p>
</body>
</html>
 Similarly, we can make changes to our welcome.html file as follows
welcome.html
<html>
<head>
<title >First File </title>
</head>
<body>
<h1>Welcome!!!</h1>
<h3> We are happy to see you here </h3> (This line is added)
</body>
</html>
 Now we can add more than one file at a time to the staging area using the command:
$ git add --all
 Then using the git status command we can check the status of these files.
 Both the contact.html and welcome.html files are added to the staging area.
 Instead of git add --all, we can use the command git add -A.
 Here is a screenshot.
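The whole staging step can be replayed as one session. The HTML bodies are shortened to one line each; the point is that git add --all (or git add -A) stages both files at once:

```shell
# Stage two new files in one step and inspect the result.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"

echo "<h1>Welcome!!!</h1>"         > welcome.html
echo "<p>Phone No: 1234567890</p>" > contact.html

git add --all        # short form: git add -A
git status --short   # prints "A  contact.html" and "A  welcome.html"
```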
5. Commit:
 Finally, we have to add these files to a repository so that these files can be tracked
on addition, deletion or modification.
 We use the commit command, to add these files to the repository.
 The commit command should be along with some short message.
 The commit command performs a commit, and the -m “message” adds a message.
$ git commit -m "This is first commit"
Commit without stage
 Sometimes we make very small changes to a file.
 Hence first sending the file to the staging area and then to the repository is a waste
of time.
 So we can perform commits directly without a stage.
 We use -a tag.
 The command is -
$ git commit -a -m "updated contact.html file"
Commit Log
 To view the history of commits done for the repository, we use log command.
$ git log
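Both commit styles and the log can be sketched together; the commit messages are the ones used above, everything else is placeholder:

```shell
# A normal stage-then-commit, then a direct `-a` commit, then the log.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"

echo "v1" > contact.html
git add contact.html
git commit -q -m "This is first commit"

echo "v2" > contact.html                        # edit an already-tracked file
git commit -q -a -m "updated contact.html file" # stage + commit in one step

git log --oneline                               # two commits, newest first
```

Note that -a only picks up changes to files Git already tracks; brand-new files still need git add first.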
6. Working with Git Branches:
 Initially when we create a repository, a branch named master is created by default.
 Apart from this master branch we can create any number of branches.
 We can edit the code directly using this new branch without impacting the original
code in our main branch. After successful editing, we can merge this new branch
with our original branch.
 The branches allow us to work on different parts of a project without impacting the
main branch.
 We can also create different branches and work on different projects at a time.
 Switching among these branches is very simple and easy.
 Thus the branching in Git is lightweight and easy.
 For creating a branch, we use the command git branch
$git branch new_branch
 Here we have created another branch named new_branch.
 We can confirm the creation of this branch using the command git branch.
 We can see that there are two branches: master and new_branch.
 The * beside master specifies that we are currently on that branch.
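A branch can only be created once the repository has at least one commit, so the sketch below seeds one first (file name and messages are placeholders):

```shell
# Create a branch and list branches; the * marks the branch we are on.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "hi" > welcome.html
git add . && git commit -q -m "initial commit"

git branch new_branch   # create the branch
git branch              # lists both branches; * stays on the original one
```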
7. Switching between the Branches:
 We can switch between branches using the command checkout.
 The command can be
$ git checkout new_branch
 Note that in the above screenshot.
 We have switched to another branch new_branch.
 Then using the git branch command, we can check which branch is currently
active. Note that the * lies beside new_branch.
 Now we can take the file welcome.html (which is present in the main branch),
make the necessary changes to it on this branch, and this copy becomes
different from the main branch file.
 Suppose we have added one image to this file.
 Following boldfaced line shows the modifications made in welcome.html
welcome.html
<html>
<head>
<title >First File </title>
</head>
<body>
<h1>Welcome!!!</h1>
<h3> We are happy to see you here </h3>
<div>
<img src="hand.jpg">
</div>
</body>
</html>
 Now, let us check status using git status.
 Note that, the file welcome.html is changed, we have also added one jpg file in the
current working directory.
 But we have not added this file in the staging area for commit.
So the modified welcome.html and the new hand.jpg file are not tracked yet.
 Let us now add both the hand.jpg and welcome.html files to the staging area on
the new_branch and then make a commit.
$ git add --all
 Now using git status we check the status of these files.
 Note that the files hand.jpg and welcome.html files are added to the staging area.
 Finally, we will commit these files using the command git commit.
 Thus now branch new_branch is different from master branch.
8. Merging branches:
 We can merge these branches using the git merge command.
 We can now delete new_branch using the command
$ git branch -d new_branch
 Note that the branch named new_branch is deleted.
 Only the master branch is alive.
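Sections 6-8 can be chained into one runnable sketch: create a branch, change a file on it, merge it back, then delete it. The default branch name is read at runtime because it is master or main depending on the Git version; file contents are placeholders.

```shell
# Branch, switch, commit on the branch, merge back, delete the branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "<h1>Welcome!!!</h1>" > welcome.html
git add . && git commit -q -m "initial commit"
main=$(git branch --show-current)     # master or main

git checkout -q -b new_branch         # create and switch in one step
echo '<img src="hand.jpg">' >> welcome.html
git commit -q -a -m "added image to welcome.html"

git checkout -q "$main"
git merge -q new_branch               # fast-forward merge into the main branch
git branch -d new_branch              # safe to delete once merged
git branch                            # only the main branch remains
```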
GITHUB:
 GitHub is a web-based hosting service for Git repositories. It makes Git more user-
friendly and also provides a platform for developers to share code with others.
 Additionally, GitHub repositories can be open to the public. Developers worldwide can
interact and contribute to one another's code, modifying or improving it, making GitHub a
networking site for web professionals. This process of interaction and contribution is
also called social coding.
What is a repository?
 Repository means a storage space or directory that holds all the project's files. Each
file has a revision history. It is also called a repo.
 The repository helps to communicate and manage the project's work systematically.
 We can keep files and images inside our repository.
Features:
1. It makes the project management smooth by keeping track of all the updates made
in the project.
2. We can push the package privately and secure the projects from being public.
Similarly, we can distribute the work publicly if we want to do so.
3. GitHub helps the team members to review, develop and propose new code. It allows
users to share the changes in the code among themselves.
4. Code hosting is easy using GitHub.
5. It helps in streamlining the project workflow.
6. It allows effective team management.
7. It is intuitive and easy to learn.
Hosting Service for Git Repository:
 To use GitHub, we have to sign up on the web page https://github.com/
 Use the same email ID which you have used for Git.
 Fill up the required information, set up the user name and password for GitHub.
 Sign in using user name/email and password to GitHub.
1. Create Repository:
 The first step is to create a repository on github.
 For that, we just click on Create Repository to create a new repository.
 Just specify the name of the repository, make it public or private, add or do not add
the README file, just create it the way you want.
 I have created a repository by the name my_project:
 Copy the repository URL by clicking on the Code button.
 Now open the Git bash window and add the command git remote add origin URL
as
$ git remote add origin https://github.com/mkraja77/my_project.git
 Here git remote add origin URL specifies that you are adding a remote repository,
with the specified URL, as an origin to your local Git repo.
 Now let us push our master branch to the origin url and set it as the default remote
branch.
$ git push --set-upstream origin master
 This command will prompt you for a username and password and will authenticate you.
 Only then will it push our master branch to the origin URL and set it as the default
remote branch.
 Now go back to GitHub and we can note that the master branch is now updated.
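The same remote-and-push flow can be exercised offline by letting a local bare repository stand in for the GitHub https URL; everything below is a self-contained sketch with placeholder names, and with GitHub the only differences are the real URL and the authentication prompt.

```shell
# A local bare repository plays the role of the GitHub remote.
remote=$(mktemp -d)/my_project.git
git init -q --bare "$remote"

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "# my_project" > README.md
git add . && git commit -q -m "first commit"
branch=$(git branch --show-current)

git remote add origin "$remote"              # with GitHub: the https URL instead
git push -q --set-upstream origin "$branch"
git ls-remote --heads origin                 # the branch is now on the remote
```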
 We can add the README file by editing the code inside this file.
 Add some changes to the code and then commit the changes. For now, we will
"commit directly to the master branch".
2. Fetch, Merge and Pull:
Fetch
 Git Fetch is the command that tells the local repository that there are changes
available in the remote repository without bringing the changes into the local
repository.
 Fetching is what you do when you want to see what everybody else has been
working on.
 The syntax of the fetch command is:
git fetch <remote>
Fetches all of the branches from the repository. This also downloads all of the
required commits and files from the other repository.
git fetch <remote> <branch>
Same as the above command, but only fetches the specified branch.
git fetch --all
Fetches branches from all registered remotes.
 We use the following fetch command:
$ git fetch origin
 Following screenshot illustrates it.
 We can check the status of the changes made using the command git status.
 Note that on the GitHub repository we have made changes in the README file, hence
there is 1 commit.
 We can also verify by showing the differences between our local master and
origin/master using git diff command.
Merge
 The merge combines the current branch with a specified branch.
 The command is
$git merge origin/master
Pull
 The pull is a combination of fetch and merge.
 It is used to pull all changes from a remote repository into the branch you are
working on.
 Go to GitHub repository and add some files to it.
 For instance, I have created an about.html file and committed it to GitHub's
my_project repository.
about.html
<html>
<head>
<title >About Us File</title>
</head>
<body>
<h1>Quality is our BEST POLICY!!!</h1>
</body>
</html>
 It can be represented by following screenshot
 Now, Use pull to update our local Git:
 Open the repository on your local machine and you will see the newly added
about.html file in your local repository.
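Fetch, merge and pull can be demonstrated offline with one shared bare repository and two working copies: developer A pushes about.html, developer B first fetches (the file is not visible yet), then merges (the file appears). Identities and paths are placeholders; running git pull in B's copy would collapse its last two Git steps into one.

```shell
# Seed a shared bare remote with an initial commit.
remote=$(mktemp -d)/shared.git
git init -q --bare "$remote"
seed=$(mktemp -d)
cd "$seed"
git init -q
git config user.name "Seed" && git config user.email "seed@example.com"
echo "seed" > README.md
git add . && git commit -q -m "initial commit"
branch=$(git branch --show-current)
git push -q "$remote" "$branch"

# Clone B exists before A's change, so it must fetch the change later.
b=$(mktemp -d) && git clone -q "$remote" "$b"

# Developer A adds about.html and pushes it.
a=$(mktemp -d) && git clone -q "$remote" "$a"
cd "$a"
git config user.name "A" && git config user.email "a@example.com"
echo "<h1>Quality is our BEST POLICY!!!</h1>" > about.html
git add . && git commit -q -m "add about.html"
git push -q origin "$branch"

# Developer B: fetch downloads the commit, merge applies it.
cd "$b"
git fetch -q origin
test ! -f about.html && echo "fetched, not yet merged"
git merge -q "origin/$branch"
test -f about.html && echo "merged: about.html present"
```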
3. Push:
 The push command is used to push the changes which we have made in the local
Git repository to the GitHub repository.
Step 1: Change the about.html file in the local folder. I have added an image inside
this HTML file.
about.html
<html>
<head>
<title >About Us File</title>
</head>
<body>
<h1> Quality is our BEST POLICY!!!</h1>
<div>
<img src="hand.jpg">
</div>
</body>
</html>
Step 2: Commit the changes using Git bash.
$ git commit -a -m "Modified about.html, added an image"
Step 3: Now, push the changes to remote origin using the command.
$ git push origin
Step 4: Go to GitHub and confirm that the repository has a new commit:
4. Creating Branch:
 On GitHub we can create a branch by clicking on the master branch.
 We have to type the name of the branch to create a new branch.
 On a new branch we can update any desired file and create a separate version of
that file without disturbing the original file.
Difference between Git and GitHub:
1. Git is a command-line tool; GitHub is a graphical user interface (GUI).
2. Git is a software; GitHub is a service.
3. Git is installed locally on the PC; GitHub is hosted on the web.
4. Git is maintained by the Linux community; GitHub is maintained by Microsoft.
5. Git is a version control system to manage source code history; GitHub is a hosting service for Git repositories.
6. Git has no user management feature; GitHub has a built-in user management feature.
7. Git is open-source licensed; GitHub has a free tier and a pay-for-use tier.
8. Git has minimal external tool configuration; GitHub has an active marketplace for tool integration.