Advance Web Development

The document provides a comprehensive overview of HTML5 and CSS3, detailing their structure, elements, and functionalities. It covers essential HTML tags, text formatting, multimedia elements, and form controls, along with CSS3 features like Grid Layout for responsive design. The content serves as a foundational guide for web development, emphasizing the importance of both markup and styling in creating modern web pages.


Prepared by: Anmol Singh

Advance Web Development


BCA 6th SEMESTER
UNIT 1
 Introduction to HTML5:
HTML stands for HyperText Markup Language. It is the standard language
used to create and design web pages on the internet. It uses tags to define
elements such as headings, paragraphs, links, images, and other
components.

 Basic Structure of HTML:

<!DOCTYPE html>

<html>
<head>
<title>HTML, Hello World</title>

</head>
<body>
<h1>Welcome To My Page</h1>

<p>Hello World</p>
</body>
</html>

1) <!DOCTYPE html> – This is the document type declaration (not
technically a tag). It declares the document to be an HTML5 document.
The doctype declaration is not case-sensitive.
2) <html> – This is called the HTML root element. All other elements are
contained within it.
3) <head> – The head tag contains the “behind the scenes” elements for a
webpage. Elements within the head aren’t visible on the front end of a
webpage. HTML elements used inside the <head> element include:
4) <style> – This HTML tag allows us to insert styling into our web pages and
make them appealing to look at with the help of CSS.
5) <title> – The title is what is displayed on the top of your browser when
you visit a website and contains the title of the webpage that you are
viewing.
6) <base> – It specifies the base URL for all relative URLs in a document.
7) <noscript> – Defines a section of HTML that is inserted when the
scripting has been turned off in the user’s browser.
8) <script> – This tag is used to add functionality to the website with the
help of JavaScript.
9) <meta> – This tag encloses the metadata of the website, which is
loaded every time the website is visited. For example, the charset
metadata lets you declare the standard UTF-8 character encoding for
your page, so that browsers render its text correctly in any language.
It is a self-closing tag.
10) <link> – The <link> tag defines a relationship between the HTML
document and an external resource, most commonly a CSS stylesheet. It is
self-closing.
11) <body> – The body tag is used to enclose all the visible content of a
webpage. In other words, the body content is what the browser will
show on the front end.
 HTML Text Formatting Tag:

Element name Description

<b> This is a physical tag, which is used to bold the text written between it.

<strong> This is a logical tag, which tells the browser that the text is important.

<i> This is a physical tag which is used to make text italic.

<em> This is a logical tag which is used to display content in italic.

<u> This tag is used to underline text written between it.

<sup> It displays the content slightly above the normal line.

<sub> It displays the content slightly below the normal line.

<del> This tag is used to display the deleted content.

<ins> This tag is used to display inserted content.

<big> This tag is used to increase the font size by one conventional unit.

<small> This tag is used to decrease the font size by one unit from base font size.
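The formatting tags above can be seen together in a short page; this snippet is an illustrative sketch, not taken from the notes:

```html
<!DOCTYPE html>
<html>
<body>
  <p><b>Bold</b> and <strong>important</strong> text.</p>
  <p><i>Italic</i> and <em>emphasized</em> text.</p>
  <p><u>Underlined</u>, x<sup>2</sup> (superscript), H<sub>2</sub>O (subscript).</p>
  <p><del>Old price</del> <ins>New price</ins></p>
  <p><big>Bigger</big> and <small>smaller</small> text.</p>
</body>
</html>
```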

 HTML5 Text Formatting Tag:


Tag        Description
<mark>     Highlights important text to draw attention.
<time>     Represents a date, time, or duration for better understanding by browsers and search engines.
<abbr>     Defines an abbreviation or acronym, usually displaying the full meaning on hover.
<cite>     Specifies the source of a quotation or reference, often used for book titles, articles, or research papers.
<dfn>      Defines a term being introduced or explained in the content.
<strong>   Indicates strong importance, making the text bold by default.
<em>       Represents emphasized text, usually displayed in italics for added stress.
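A small illustrative sketch using the HTML5 semantic formatting tags above (the date and abbreviation are made-up examples):

```html
<p>The exam is on <time datetime="2025-03-10">10 March 2025</time>.</p>
<p><mark>Deadline extended!</mark></p>
<p><abbr title="HyperText Markup Language">HTML</abbr> is described in
   <cite>the course notes</cite>.</p>
<p>A <dfn>flex container</dfn> is an element with display set to flex.</p>
```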
 Structural Elements in HTML4:

1) <div> Tag - The <div> tag is a block-level element used to group larger
sections of content. It creates a distinct section on a web page that can be
styled with CSS or manipulated with JavaScript.

 Characteristics:
a) It starts on a new line and takes up the full width available.
b) Commonly used for layout purposes or to wrap sections of a page,
such as headers, footers, or main content areas.

<div class="container">
<h1>Welcome to My Website</h1>
<p>This is a paragraph inside a div.</p>
</div>

2) <span> Tag - The <span> tag is an inline element used to group small
portions of text or other inline elements. It does not create a new line and
only takes up as much width as its content.

 Characteristics:
a) It is useful for styling or applying behavior to a specific part of text
without affecting the flow of the surrounding content.
b) Often used for highlighting text or adding styles to parts of a
sentence.

<p>This is a <span class="highlight">highlighted text</span> within a paragraph.</p>
 Structural Elements in HTML5:
1) <header> – This is the top section of a webpage or a part of a page. It
usually has a logo, a title, and navigation links. You can have multiple
<header> elements if different sections need their own headers.
2) <footer> – This is the bottom section of a page or a part of a page. It often
contains copyright information, contact details, social media links, or extra
navigation. Like <header>, you can use multiple <footer> elements.
3) <section> – This is used to divide a webpage into different parts. Each
section focuses on a specific topic and usually has a heading (like <h1> or
<h2>). A blog post, for example, may have sections for the introduction,
main content, and conclusion.
4) <article> – This is for content that can stand on its own, like a blog post,
news article, or forum post. Unlike <section>, an <article> is a complete
piece of content by itself.
5) <nav> – This is used for navigation menus. It contains links that help users
move around the website. Only main navigation links should go inside
<nav>, not every link on the page.
6) <aside> – This is for content that is related to, but not part of, the main
content. It is often used for sidebars, ads, extra links, or author details.
Inside an article, it might contain notes or references.
7) <figure> – This is a container for images, diagrams, videos, or other media.
It keeps these elements organized and linked with captions.
8) <figcaption> – This is used inside <figure> to provide a caption or
description for the image or media. It helps explain what the image is
about.
9) <main> – This holds the main content of the webpage. It should not include headers,
footers, sidebars, or navigation menus—only the important content that the page is
about.

10) <address> – The <address> tag is used to provide contact information for the author,
organization, or owner of a webpage or article. It typically includes details like an
email address, phone number, physical address, or website link.
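The structural elements above fit together in a typical page skeleton; this is an illustrative sketch (file names and text are placeholders):

```html
<!DOCTYPE html>
<html>
<body>
  <header>
    <h1>My Blog</h1>
    <nav>
      <a href="/">Home</a>
      <a href="/about">About</a>
    </nav>
  </header>
  <main>
    <article>
      <section>
        <h2>Introduction</h2>
        <p>Opening paragraph of the post.</p>
      </section>
      <figure>
        <img src="chart.png" alt="Traffic chart">
        <figcaption>Monthly traffic.</figcaption>
      </figure>
    </article>
    <aside>
      <p>Related links and author details.</p>
    </aside>
  </main>
  <footer>
    <address>Contact: author@example.com</address>
  </footer>
</body>
</html>
```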
 HTML5 Multimedia and Graphics Elements
1) <audio> – This element allows embedding audio files into a webpage,
enabling users to listen to music, podcasts, or other sound clips directly
from the browser. It supports multiple formats like MP3, Ogg, and WAV,
and provides controls for play, pause, and volume.
2) <video> – Similar to <audio>, this element is used to embed video
content, supporting formats like MP4, WebM, and Ogg. It includes built-in
controls for playback, volume, fullscreen mode, and subtitles, ensuring a
seamless user experience.
3) <source> – This tag is used within <audio> or <video> to define multiple
media sources. By listing different formats, browsers can select the best-
supported option, ensuring compatibility across devices.
4) <track> – Enhancing accessibility, the <track> element allows the addition
of captions, subtitles, or metadata to videos. It supports different kinds of
text tracks, such as subtitles for different languages or descriptions for
visually impaired users.
5) <canvas> – A versatile drawing surface that enables the creation of
graphics, animations, charts, and game visuals using JavaScript. It provides
a blank space where developers can dynamically render 2D or 3D graphics
via WebGL.
6) <svg> – Unlike <canvas>, which is pixel-based, <svg> is a markup-based
format for scalable vector graphics. It allows the creation of high-quality,
resolution-independent images, making it ideal for logos, icons, and
interactive visuals.
7) <figure> – This semantic container groups media elements like images,
videos, and charts, helping to associate them meaningfully with
surrounding content. It improves structure and accessibility in web design.
8) <figcaption> – Used within a <figure>, this element provides a descriptive
caption or explanation for the enclosed media. It enhances readability and
helps users understand the significance of the visual content.
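The multimedia elements above can be combined as follows; the media file names and the subtitles file are placeholders:

```html
<audio controls>
  <source src="song.mp3" type="audio/mpeg">
  <source src="song.ogg" type="audio/ogg">
  Your browser does not support the audio element.
</audio>

<video controls width="640">
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.webm" type="video/webm">
  <track src="subs-en.vtt" kind="subtitles" srclang="en" label="English">
  Your browser does not support the video element.
</video>
```

The browser plays the first <source> format it supports, which is why multiple formats are listed.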
HTML Form:
HTML forms are used to collect data from users. A form contains
interactive controls and various input types such as text, number, email,
password, radio buttons, checkboxes, and buttons. Forms appear on many
sites, including registration forms, customer feedback forms, and online
surveys.

 HTML4 Form Tag:


1) <form> - The <form> tag defines an HTML form used to collect user
input. It contains form controls like text fields, checkboxes, radio
buttons, and more, where users can enter or select data. Forms are
commonly used for user registration, login, and survey submissions.
2) <input> - The <input> tag defines an input field where users can enter
data. The type of data can vary based on the type attribute, which can
include text, password, email, checkbox, radio button, and more.
3) <textarea> - The <textarea> tag defines a multi-line text input, allowing
users to input more than one line of text, which is useful for comments
or detailed descriptions.
4) <select> - The <select> tag defines a drop-down list, which allows users
to choose one option from a set of predefined choices.
5) <optgroup> - The <optgroup> tag is used to group related options
within a <select> element, allowing for better organization of long lists
by categorizing similar options.
6) <option> - The <option> tag defines individual items in a drop-down
list, allowing users to select one from the available options.
7) <button> - The <button> tag defines a clickable button, which can be
used to submit a form, reset form data, or trigger other actions based
on its type attribute.
8) <keygen> - The <keygen> tag defines a key-pair generator, used in
forms for secure transactions by generating public/private key pairs.
This tag is now deprecated and no longer widely supported.
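The form tags above (except the deprecated <keygen>) can be sketched in one registration form; the action URL and field names are illustrative:

```html
<form action="/register" method="post">
  Name: <input type="text" name="name"><br>
  Comments: <textarea name="comments" rows="3" cols="30"></textarea><br>
  Country:
  <select name="country">
    <optgroup label="Asia">
      <option value="in">India</option>
      <option value="jp">Japan</option>
    </optgroup>
    <optgroup label="Europe">
      <option value="fr">France</option>
    </optgroup>
  </select><br>
  <button type="submit">Register</button>
</form>
```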

 HTML5 Form Tag:


1) <label> - The <label> tag defines a label for an input element, helping to
describe what input is expected. It improves form accessibility by
associating a text label with a specific form element.
2) <fieldset> - The <fieldset> tag is used to group related elements in a
form. It helps to logically organize form fields that are connected in
some way, such as personal information or payment details.
3) <legend> - The <legend> tag defines a caption for the <fieldset>
element. It provides a title or heading to describe the grouped section
of form elements.
4) <datalist> - The <datalist> tag specifies a list of predefined options for
an <input> field. It offers suggestions to users as they type, enhancing
input accuracy and convenience.
5) <output> - The <output> tag defines a region where the result of a
calculation or user action can be displayed, commonly used for dynamic
form calculations or output after submission.
 HTML5 INPUT Types/Form Control:
1) type="email" - Defines an input field for entering email addresses. It
ensures that the input is in the correct email format and often provides
basic validation.
2) type="url" - Defines a field for entering website URLs. It ensures that
the input is in the correct URL format and typically performs basic
validation.
3) type="date" - Defines an input field for selecting a date from a date
picker, allowing users to input date values in a specific format.
4) type="time" - Defines an input field for selecting a time from a time
picker, allowing users to input time values in a specific format.
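The four input types above can be tried in a minimal form; field names are placeholders:

```html
<form>
  <label>Email: <input type="email" name="email" required></label><br>
  <label>Website: <input type="url" name="site"></label><br>
  <label>Date: <input type="date" name="dob"></label><br>
  <label>Time: <input type="time" name="appt"></label><br>
  <input type="submit">
</form>
```

On submission, the browser blocks invalid email or URL values automatically, with no JavaScript required.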
 Introduction to CSS3:
CSS3 (Cascading Style Sheets Level 3) is the third major version of the CSS
(Cascading Style Sheets) language used to style and layout web pages. It builds
on the earlier versions (CSS1 and CSS2) by introducing new features and
improvements that make web design more flexible and powerful.

Grid Layout:
CSS Grid Layout is used to design responsive web layouts with rows and
columns. It allows you to place items exactly where you want, making it
easier to create flexible and modern web pages.
Unlike older methods like floats, CSS Grid gives more control over alignment
and adapts well to different screen sizes.

 Grid Property:
1) display - This fundamental property transforms an HTML element into a
grid container, enabling all grid functionality for that element and its
direct children. Example: display: grid;.
2) grid-template-columns - This defines the structure of the grid’s
columns by specifying their widths and how many columns there are.
You can use units like px, %, fr (fractional units), or auto. For example,
grid-template-columns: 1fr 2fr 1fr; creates three columns where the
middle one is twice as wide as the others.
3) grid-template-rows - This works the same way as grid-template-
columns, but controls the height and number of rows. You can set row
sizes in px, %, fr, or auto. For example, grid-template-rows: 100px auto
1fr; creates three rows with different heights.
4) grid-template-areas - This allows you to visually map out your grid
using named areas, making the layout easier to read and manage. You
assign names to sections like "header", "sidebar", and "content", and
then place items into these areas using grid-area. This approach works
especially well for page layouts.
5) grid-template - This is a shorthand that lets you define grid-template-
rows, grid-template-columns, and grid-template-areas all at once. This
is helpful for combining structure and named areas in one place, but it
can get long for complex layouts.
6) column-gap - This property defines the amount of space between
columns in the grid. For example, column-gap: 20px; leaves 20 pixels of
space between each column, making it easy to add consistent spacing
between columns.
7) row-gap - This property defines the space between rows in the grid. For
example, row-gap: 15px; would leave 15 pixels between each row. This
is useful for adding spacing without extra margins.
8) gap - This is a shorthand for setting both row-gap and column-gap in
one go. You can set them independently like gap: 15px 20px; (rows
then columns) or just gap: 20px; to apply the same spacing in both
directions.
9) grid-auto-columns - When your grid adds new columns automatically (if
there are more items than defined columns), this property defines how
wide those extra columns should be. For example, grid-auto-columns:
100px; makes all auto-added columns 100px wide.
10) grid-auto-rows - Similar to grid-auto-columns, this defines the height
of automatically created rows when grid items are placed outside the
explicitly defined grid structure. For example, grid-auto-rows: 80px;
would make all such rows 80 pixels tall.
11) grid-auto-flow - This controls how items are automatically placed into
the grid, either filling in row by row (row) or column by column
(column). The dense keyword tries to fill all gaps by backfilling smaller
items into empty spaces, which can change the visual order.
12) grid - This is a super shorthand that lets you set both explicit (template
rows, columns, areas) and implicit (auto-rows, auto-columns, auto-
flow) grid properties in a single declaration. It’s powerful but often less
readable than using individual properties.
13) grid-column-start - This property defines where a grid item starts
horizontally, by referring to grid lines (the vertical dividers between
columns). For example, grid-column-start: 2; places the item starting at
the second column line.
14) grid-column-end - This defines where a grid item ends horizontally,
again using grid lines. For example, grid-column-end: 4; would make the
item stretch to the fourth column line.
15) grid-row-start - This defines where a grid item starts vertically, based
on row lines. For example, grid-row-start: 1; places the item at the first
row line.
16) grid-row-end - This defines where a grid item ends vertically, also
using row lines. For example, grid-row-end: 3; would make the item
span from row 1 to row 3.
17) grid-column - This is a shorthand for defining both grid-column-start
and grid-column-end in one property. For example, grid-column: 2 / 4;
places the item across columns 2 and 3, ending at line 4.
18) grid-row - This is a shorthand for defining both grid-row-start and grid-
row-end together. For example, grid-row: 1 / 3; spans the item across
two rows.
19) grid-area - This versatile property can either assign a name to an item
(for use with grid-template-areas) or define all four positioning values
(row start, column start, row end, column end) at once. Example: grid-
area: 1 / 2 / 3 / 4;.
20) justify-content - This property horizontally aligns the entire grid inside
its container when the grid is smaller than the container. Values include
start, end, center, stretch, space-around, space-between, and space-
evenly.
21) align-content - Similar to justify-content, this property vertically aligns
the entire grid when the grid’s height is smaller than the container. It
uses the same alignment values like start, end, center, and stretch.
22) place-content - This is a shorthand for setting both align-content and
justify-content at once. For example, place-content: center space-
between; centers vertically and spaces items evenly horizontally.

23) align-items - This property vertically aligns the content inside all grid
items within their individual cells. Items can align to start, end, center,
or stretch to fill the whole cell.
24) justify-items - This property horizontally aligns the content inside all
grid items within their cells. It uses the same values as align-items like
start, end, center, or stretch.
25) align-self - This overrides align-items for a specific grid item, letting
you individually control vertical alignment for just that item. Example:
align-self: center; centers it vertically.
26) justify-self - This overrides justify-items for a specific grid item, letting
you control its horizontal alignment independently. Example: justify-
self: end; pushes the item to the right side of its cell.
27) place-self - This shorthand sets both align-self and justify-self for a
specific item in one go. For example, place-self: start center; vertically
aligns to the top and horizontally centers the content.

 EXAMPLE:

<!DOCTYPE html>
<html>
<head>
<style>
.grid-container {
display: grid;
grid-template-columns: 1fr 1fr 1fr; /* Three equal columns */
grid-template-rows: 100px 200px;    /* Two rows with specific heights */
gap: 20px;                          /* Space between all grid items */
background-color: #f0f0f0;
padding: 20px;
}

.grid-item {
background-color: #3498db;
color: white;
border-radius: 5px;
padding: 20px;
font-size: 20px;
text-align: center;
}

/* Make the first item span two columns */


.item1 {
grid-column: 1 / 3;
background-color: #e74c3c;
}

/* Make the last item span two rows */


.item4 {
grid-row: 1 / 3;
grid-column: 3 / 4;
background-color: #2ecc71;
}
</style>
</head>
<body>
<div class="grid-container">
<div class="grid-item item1">Item 1 (spans 2 columns)</div>
<div class="grid-item item4">Item 4 (spans 2 rows)</div>
<div class="grid-item">Item 2</div>
<div class="grid-item">Item 3</div>
</div>
</body>
</html>
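The example above places items by line numbers; the named-areas approach (grid-template-areas, point 4 above) can be sketched like this, with illustrative class names:

```css
.page {
  display: grid;
  grid-template-columns: 200px 1fr;
  grid-template-rows: auto 1fr auto;
  grid-template-areas:
    "header  header"
    "sidebar content"
    "footer  footer";
  gap: 10px;
}
/* Each child is assigned to a named area */
.page > header { grid-area: header; }
.page > aside  { grid-area: sidebar; }
.page > main   { grid-area: content; }
.page > footer { grid-area: footer; }
```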
 Feature of Grid:
1) Two-dimensional layout control - CSS Grid gives you the ability to
control both rows and columns at the same time, unlike Flexbox, which
is mainly designed for arranging items in a single row or single column
at a time.
2) Grid lines and track sizing - You can set exact widths and heights for
your rows and columns using any CSS unit (like pixels, percentages, or
ems). You can also use the fr unit, which divides the available space into
flexible fractions, making your grid more adaptable.
3) Gap control - With gap, row-gap, and column-gap, you can add
consistent spacing between grid items without adding extra space at
the edges of the grid container, keeping your layout clean.
4) Item placement - You can position items exactly where you want them
within the grid using line numbers, line names, or span values.
Properties like grid-column and grid-row make this precise control
possible.
5) Named template areas - The grid-template-areas property allows you
to map out your layout directly in your CSS, using names like "header",
"sidebar", and "main", which makes your layout easier to read and
understand.
6) Auto-placement algorithm - Items that don’t have specific placements
will automatically be positioned by CSS Grid, following rules you set
with grid-auto-flow, which controls the flow direction and order.
7) Alignment control - You can align the whole grid and also individual
grid items using properties like justify-content (horizontal alignment),
align-items (vertical alignment), and justify-self (alignment for a single
item).
8) Overlapping capability - Grid items are not locked into their own boxes
— they can overlap each other if needed, giving you flexibility for more
creative layouts.
9) Responsive design support - With features like minmax(), auto-fill, and
auto-fit, you can build grids that automatically adjust to different
screen sizes, sometimes without needing any media queries.
10) Implicit grid handling - If your content exceeds the defined grid (for
example, you have more items than expected), CSS Grid can
automatically create new rows or columns based on rules you set,
ensuring nothing gets cut off.
11) Order independence - You can completely change the visual order of
your grid items without changing their actual order in the HTML. This
makes it easier to create complex layouts while keeping your HTML
organized.
12) Nested grids - Grid containers can be placed inside grid items,
meaning you can nest grids within grids to create more complex,
layered layouts.
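Feature 9 above (minmax() with auto-fit) is worth a concrete sketch; this illustrative gallery adds or removes columns as the viewport resizes, without media queries:

```css
.gallery {
  display: grid;
  /* As many columns as fit; each at least 150px, sharing leftover space */
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  gap: 10px;
}
```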
CSS Flexbox:
CSS Flexbox (Flexible Box Layout) is a one-dimensional layout method
designed for arranging items in rows or columns. It was introduced to
provide a more efficient way to distribute space and align items within a
container, even when their size is unknown or dynamic.
 Flexbox Properties:
1) Parent Properties:
a) display - This property defines the element as a flex container, which
means all its direct children automatically become flex items. Without
setting display: flex (or inline-flex), none of the other flexbox
properties will work.
b) flex-direction - This property sets the main axis direction, which
controls whether the flex items are placed horizontally in a row,
vertically in a column, or reversed in either direction. It can be set to
row, row-reverse, column, or column-reverse.
c) flex-wrap - This property controls whether the flex items stay on a
single line or if they are allowed to wrap onto multiple lines when
there’s not enough space in the container. Options are nowrap (all
items on one line), wrap (items wrap onto new lines if needed), or
wrap-reverse (similar to wrap, but in reverse order).
d) flex-flow - This is a shorthand property that combines flex-direction
and flex-wrap into a single line. For example, flex-flow: row wrap
means the items will be arranged in a row and allowed to wrap onto
new lines when necessary.
e) justify-content - This property controls how flex items are spaced along
the main axis (set by flex-direction). It defines how extra space is
distributed, with options like flex-start (items bunched at the start),
center (items in the center), space-between (items spread out with
space only between them), and more.
f) align-content - This property controls how multiple rows or columns of
flex items are spaced along the cross axis (perpendicular to the main
axis). It only applies when flex items wrap into more than one line,
allowing control over the spacing between lines.
g) align-items - This property controls the vertical alignment in a row
layout (or horizontal alignment in a column layout) of all flex items
within the container. Options like stretch, center, and flex-start allow
you to easily control how items align within each line.

2) Children/Flex-items Properties:
a) order - This property changes the visual order of an item within the flex
container without changing the order in the HTML itself. Items with
lower order values appear first, and higher values appear later, giving
you control over layout independent of source order.
b) flex-grow - This property allows a flex item to grow and take up extra
space when there is unused space in the flex container. Items with a
higher flex-grow value take up more space compared to items with a
lower value.
c) flex-shrink - This property controls how much a flex item shrinks when
there isn’t enough space in the container. Items with higher flex-shrink
values shrink more than those with lower values when space is limited.
d) flex-basis - This property defines the initial size of a flex item before any
growing or shrinking happens. It can be set using any size unit (like px,
%, rem, or auto), and serves as the item’s starting size.
e) flex - This shorthand property combines flex-grow, flex-shrink, and flex-
basis into a single, convenient line. For example, flex: 1 0 auto means
the item can grow, won't shrink, and starts at its natural size.
f) align-self - This property lets you override the align-items setting for an
individual item, allowing it to have a different alignment along the cross
axis than the other items in the container. It’s useful when you want
one item to stand out or align differently from the rest.
 EXAMPLE:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Flexbox Example</title>
<style>
.container {
display: flex; /* Enables Flexbox */
flex-direction: row; /* Items placed in a row (horizontal) */
justify-content: space-between; /* Items spread out with space between */
align-items: center; /* Items vertically centered */
height: 200px; /* Just to give container height */
border: 2px solid black; /* Border to see the container */
padding: 10px;
}

.item {
background-color: lightblue; /* Background color for items */
padding: 20px;
border: 1px solid blue; /* Border to see each item */
text-align: center; /* Center text in items */
flex-grow: 1; /* Each item can grow equally */
margin: 0 10px; /* Add spacing between items */
}

/* Optional: To control size for one specific item */


.item:nth-child(2) {
flex-grow: 2; /* Middle item grows twice as much */
background-color: lightgreen; /* Different color to stand out */
}
</style>
</head>
<body>

<div class="container">
<div class="item">Item 1</div>
<div class="item">Item 2</div>
<div class="item">Item 3</div>
</div>

</body>
</html>
 Features of CSS Flexbox:
1) One-dimensional layout - Flexbox is designed to lay out items along a
single axis at a time — either a horizontal row or a vertical column. It’s
perfect for simpler layouts like navbars, lists, or small sections where you
only need to control one direction.
2) Flexible sizing - Flex items can grow to fill extra space or shrink to avoid
overflow, depending on the available space in the container. This allows
your layout to automatically adjust to different screen sizes without
manually setting widths or heights.
3) Easy alignment - Flexbox makes it easy to align items along both the main
axis (horizontal or vertical) and the cross axis (the other direction). You
can center items, push them to the edges, or spread them evenly with
just a few properties.
4) Space distribution - Flexbox gives you control over how space is
distributed between items. With properties like justify-content, you can
easily spread items evenly, add space only between items, or center
everything inside the container.
5) Automatic wrapping - If there are too many items to fit in one line,
Flexbox can wrap items onto additional lines automatically (when you use
flex-wrap), making it useful for responsive designs where space is limited.
6) Reordering without HTML changes - With the order property, you can
change the visual order of flex items without changing their actual order
in the HTML file. This makes it easier to rearrange content for different
screen sizes or layouts.
7) Alignment flexibility for individual items - Flexbox allows individual items
to have different alignments from the rest using align-self, so you can
fine-tune the positioning of special items without affecting others.
8) Space-efficient design - Flexbox automatically ensures that available
space is efficiently used. Items can grow to fill space or shrink to fit tighter
areas, giving you better control over spacing without fixed dimensions.
9) Built-in responsiveness - Flexbox makes responsive design much easier
compared to older techniques like floats. By combining flex-grow, flex-
shrink, and flex-basis, items automatically adjust to different screen sizes,
reducing the need for media queries.
10) Shorthand properties for simplicity - Flexbox includes handy shorthand
properties like flex (for growth, shrink, and basis) and flex-flow (for
direction and wrapping), which let you write cleaner and shorter CSS
code.
 CSS Preprocessor:
CSS preprocessors are tools that extend the functionality of regular CSS. They
allow developers to write styles using programming-like features such as
variables, nesting, functions, and reusable blocks of code. Preprocessors make
CSS easier to manage, especially for large projects, by adding structure and
reducing repetition. However, browsers cannot read preprocessor code directly,
so it must be compiled into regular CSS before being used on websites.

 Some Popular CSS Preprocessor:


1) Sass (Syntactically Awesome Stylesheets) - Sass is the most popular and
widely used CSS preprocessor. It introduces features like variables, nesting,
mixins, functions, and partials, which make CSS much more powerful and
easier to organize. Sass supports two syntaxes: SCSS (which is very similar
to regular CSS, just with added features) and the indented syntax (which
drops curly braces and semicolons in favor of indentation).
2) LESS - LESS is another popular CSS preprocessor that offers similar features
to Sass, such as variables, mixins, nesting, and functions. LESS was designed
to be simple and easy to learn, especially for developers already familiar
with CSS. One of its strengths is how well it integrates with JavaScript
environments, which made it a good fit for older versions of Bootstrap.
3) Stylus - Stylus is a highly flexible and customizable preprocessor that allows
developers to write styles with or without semicolons, colons, and braces.
This gives developers freedom in writing styles the way they prefer. Stylus
supports advanced features like functions, conditionals, and loops, making
it very powerful.
4) PostCSS - PostCSS is a post-processor but can also function like a
preprocessor when used with certain plugins. Instead of providing built-in
features like variables and nesting, PostCSS relies on plugins to add specific
functionality.
Introduction to SASS:
Sass is the most popular and widely used CSS preprocessor. It introduces
features like variables, nesting, mixins, functions, and partials, which make
CSS much more powerful and easier to organize. Sass supports two syntaxes:
SCSS (which is very similar to regular CSS, just with added features) and the
indented syntax (which drops curly braces and semicolons in favor of
indentation). Sass is used in large projects and frameworks like Bootstrap
because of its flexibility and power.

 Features of SASS:

1) Operators in Sass - In Sass, operators allow you to perform mathematical
and logical operations directly within your styles. This helps you
dynamically calculate widths, margins, colors, and more, making your
stylesheets smarter and more flexible.

Type Operator Example

Arithmetic + (Addition) width: 100px + 50px;
- (Subtraction) margin: 100% - 20%;
* (Multiplication) height: 50px * 2;
/ (Division) font-size: (16px / 2);
% (Modulus) width: 10px % 3;

Comparison == (Equal) @if $width == 100px
!= (Not equal) @if $color != red
> (Greater than) @if $width > 100px
< (Less than) @if $width < 200px
>= (Greater than or equal) @if $width >= 100px
<= (Less than or equal) @if $width <= 200px

Logical and @if $theme == dark and $isMobile
or @if $theme == dark or $isMobile
not @if not $isMobile

2) Variables in Sass - In Sass, variables let you store values (like colors,
fonts, or sizes) and reuse them throughout your stylesheet. This makes
your CSS more manageable, consistent, and easier to update.

EXAMPLE:

// Declare variable
$primary-color: #4CAF50;

button {
// Use variable
background-color: $primary-color;
color: white;
}
3) Nesting in Sass - In Sass, nesting allows you to write CSS rules inside
other rules, following the same structure as your HTML. This makes your
stylesheets more organized and easier to read, especially when dealing
with nested elements like menus, cards, or forms.
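The original shows a style.scss file compiled to style.css side by side; a minimal sketch of that idea (selectors are illustrative):

```scss
// style.scss - nested rules mirror the HTML structure
nav {
  background: #333;

  ul {
    list-style: none;

    li {
      display: inline-block;
    }
  }
}

// Compiles to the equivalent flat CSS:
// nav { background: #333; }
// nav ul { list-style: none; }
// nav ul li { display: inline-block; }
```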

4) Mixins in Sass - Mixins in Sass are reusable chunks of styles that you can
define once and use anywhere. They make it easy to apply the same set
of styles across multiple selectors, with the flexibility to accept
parameters (like arguments in a function).

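The original shows a style.scss/style.css pair here; a minimal sketch of a parameterized mixin (mixin and class names are illustrative):

```scss
// Define the styles once, with an optional parameter
@mixin rounded($radius: 5px) {
  border-radius: $radius;
  border: 1px solid #ccc;
}

// Reuse them anywhere with @include
.card {
  @include rounded;       // uses the default 5px
}

.avatar {
  @include rounded(50%);  // passes an argument
}
```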
5) Functions in Sass - Sass functions are reusable blocks of logic that return
a value. Unlike mixins (which output CSS rules), functions are used inside
properties to calculate or generate values.

EXAMPLE:

// Declare function
@function calculateSpacing($base, $multiplier) {

$spacing: $base * $multiplier; // Statement 1 - Calculation


@if $spacing > 100px { // Statement 2 - Conditional logic
$spacing: 100px; // Statement 3 - Limit maximum spacing
}
@return $spacing; // Final Statement - Return value
}

// Use function
.container {
padding: calculateSpacing(20px, 4); // This will return 80px
}

.large-container {
padding: calculateSpacing(30px, 5); // This will return 100px (capped by the
if statement)
}
6) Imports & Partials in Sass - Sass offers a cleaner way to organize large
stylesheets by splitting them into smaller files, which can be combined
using @import (or in modern Sass, @use and @forward).

A partial is just a Sass file whose filename starts with an underscore (_)
for example: _buttons.scss (Partial), _header.scss (Partial).

The @import rule in Sass allows you to include the content of one file
into another. It helps break a big stylesheet into smaller, manageable
pieces.

_buttons.scss:

button {
background-color: blue;
color: white;
}

_header.scss:

header {
background-color: grey;
padding: 20px;
}

Main.scss:

// main.scss
@import 'buttons';
@import 'header';

body {
font-family: Arial, sans-serif;
}
7) Inheritance in Sass - Inheritance in Sass allows you to share styles
between selectors using the @extend directive. This is useful when
multiple elements share common styles — you can define the common
styles once and then "extend" them into other selectors.

EXAMPLE:

body {
font-family: Arial, sans-serif;
}

// Base class (parent)


.base-button {
padding: 10px 20px;
border: none;
cursor: pointer;
}

// New class (inherits from base-button)


.primary-button {
@extend .base-button;
background-color: blue;
color: white;
}

// Another class (inherits from base-button)


.secondary-button {
@extend .base-button;
background-color: gray;
color: black;
}
 Functions of SCSS:

1) Color Functions - These help you modify colors - lighten, darken, etc.

Function Example Result

lighten() lighten(#3498db, 20%) Makes color 20% lighter

darken() darken(#3498db, 20%) Makes color 20% darker

adjust-hue() adjust-hue(#3498db, 45deg) Rotates color hue

mix() mix(#3498db, #2ecc71, 50%) Mixes two colors

rgba() rgba(#3498db, 0.5) Adds transparency

EXAMPLE:
button {
background-color: lighten(#3498db, 10%);
}

2) String Functions - Useful for handling and manipulating text.

Function Example Result

quote() quote(Hello) "Hello"

unquote() unquote("Hello") Hello

str-length() str-length("Hello") 5

to-upper-case() to-upper-case(hello) HELLO

EXAMPLE:
p{
content: quote("Welcome");
}
3) Numeric Functions - These handle numbers - useful for calculations.

Function Example Result

percentage() percentage(0.5) 50%

round() round(4.3) 4

ceil() ceil(4.3) 5

floor() floor(4.8) 4

min() min(10px, 15px) 10px

EXAMPLE:

div {
width: calc(percentage(0.5) - 10px); // You can calculate values using
percentage()
}

4) List Functions - Lists in SASS are collections of values, like font-stack:
Arial, sans-serif.

Function Example Result

length() length(1px solid red) 3

nth() nth(1px solid red, 2) solid

append() append(1px solid, red) 1px solid red

join() join(a b, c d) a b c d


Example:
$list: 10px 15px 20px; //list

div{
padding: nth($list, 2); // 15px
}
5) Map Function - A map in SCSS (SASS) is a collection of key-value pairs. In
the example below, primary and secondary are keys, and each key points to
a value (a color in this case).

Function Example Result

map-get() map-get($map, key) Value for that key

map-has-key() map-has-key($map, key) true/false

map-keys() map-keys($map) List of keys

map-values() map-values($map) List of values

Example:
$colors: (
primary: #3498db,
secondary: #2ecc71
);

button {
background-color: map-get($colors, primary); // #3498db
}

6) Introspection Functions - These check things like type and if values exist.

Function Example Result

type-of() type-of(10px) number

unit() unit(10px) px

unitless() unitless(10px) false

function-exists() function-exists("lighten") true
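As a sketch (mixin and class names are illustrative), these introspection functions are often used to validate mixin arguments before emitting styles:

```scss
@mixin safe-padding($value) {
  // Only emit padding when the argument is a px number
  @if type-of($value) == number and unit($value) == px {
    padding: $value;
  } @else {
    @warn "safe-padding expects a px value";
  }
}

.box {
  @include safe-padding(12px); // compiles to: padding: 12px;
}
```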


 Introduction to JavaScript ES6+:
JavaScript ES6+ refers to ECMAScript 6 and later versions of JavaScript.
ECMAScript (often abbreviated as ES) is the official standard that defines how
the JavaScript language works. ES6 (ECMAScript 2015) was a major update to
JavaScript. It introduced a lot of new features that make writing JavaScript
cleaner, more powerful, and easier to work with. ES6+ means ES6 and all the
versions released after it (ES7, ES8, ES9, and so on).

JavaScript ES6+ Concept:

1) Introduction to Promises:
In JavaScript, Promises are used to handle asynchronous operations, which
are tasks that take time to complete, like fetching data from a server or
reading a file. The main reason we use promises is to avoid "callback hell" -
a messy situation where you have callbacks inside callbacks, making the
code hard to read and maintain.
A promise is like a guarantee that some work will either, Complete
successfully or Fail In both cases, the subscriber (your code) will be notified
so you can react to the result.

Syntax:

let promise = new Promise(function(resolve, reject) {


// Code to do something (like fetching data)
// If successful: resolve(result)
// If failed: reject(error)
});

resolve() is called when the job is done successfully.
reject() is called when there’s an error.
 Properties of a Promise:
a) State: Initially pending then changes to either "fulfilled" when resolve
is called or "rejected" when reject is called.
b) Result: Initially undefined then changes to value if resolved or error
when rejected.
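These internal slots cannot be read directly from JavaScript code, but the sketch below tracks them manually to illustrate the transitions (the state/result variable names are illustrative, mirroring the description above):

```javascript
// Track a promise's state/result by hand, since JavaScript
// does not expose these internal slots directly.
let state = "pending"; // initial state
let result;            // initially undefined

const task = new Promise((resolve) => {
  setTimeout(() => resolve("done"), 50);
});

task.then((value) => {
  state = "fulfilled"; // resolve() was called
  result = value;      // result becomes the resolved value
});

console.log(state); // still "pending" - the timer has not fired yet

// A handler registered later runs after the one above,
// so it observes the updated state.
task.then(() => console.log(state, result)); // "fulfilled" "done"
```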

 Handling the Promise - .then() and .catch():

a) .then() - .then() runs when the promise is fulfilled (resolved). It takes a
callback function, and the value passed to resolve() is given to that
callback.

b) .catch() - .catch() runs when the promise is rejected (failed). It also
takes a callback function, and the error passed to reject() is given to
that callback.

EXAMPLE:
let promise = new Promise((resolve, reject) => {
let success = 0;
if (success) {
resolve("Task completed!");
} else {
reject("Task failed!");
}
});

promise.then((message) => {
console.log("Success:", message);
});

promise.catch((error) => {
console.error("Error:", error);
});
Rather than using the promise variable separately for .then and .catch, you
can also chain them, which is called chaining .then and .catch, like this:
promise
.then((message) => {
console.log("Success:", message);
})
.catch((error) => {
console.error("Error:", error);
});

 Promises Chaining:

We can chain promises and make them pass the resolved values to one
another. The idea is to pass the result through the chain of .then
handlers. Here is the flow of execution:
1. The initial promise resolves in 1 second (Assumption).
2. The next .then() handler is then called, which returns a new promise
and gets the result of the previous one.
3. The next .then() gets the result of the previous one and this keeps on
going.
Every call to .then() returns a new promise whose value is passed to the
next one and so on.
We can even create custom promises inside .then().
Example:

let fetchData = new Promise((resolve, reject) => {


setTimeout(() => resolve("Data fetched"), 1000);
});

fetchData
.then((data) => {
console.log(data); // "Data fetched"
return "Processing data";
})
.then((processMessage) => {
console.log(processMessage); // "Processing data"
return "Data processed";
})
.then((finalMessage) => {
console.log(finalMessage); // "Data processed"
})
.catch((error) => {
console.error("Error:", error); // Catch any error in the chain
});
 Promise API:

There are 6 static methods of Promise class:

1. Promise.all(promises) - Promise.all() waits for all the promises in the
array to finish. If all promises succeed, it returns an array of all results.
But if even one promise fails, the whole Promise.all() immediately fails
with that error, and all other results are discarded. It’s useful when
you want to run several tasks together and need all of them to
succeed.
Example:

let p1 = new Promise((resolve, reject) => {


setTimeout(() => {
resolve("Value 1");
}, 1000);
});

let p2 = new Promise((resolve, reject) => {


setTimeout(() => {

resolve("Value 2");
}, 2000);
});

let p3 = new Promise((resolve, reject) => {


setTimeout(() => {
resolve("Value 3");
}, 3000);
});

let promise_all = Promise.all([p1, p2, p3])


promise_all.then((value) => {
console.log(value)
})
2. Promise.allSettled(promises) - Promise.allSettled() also waits for all
the promises to finish, but it doesn’t stop if a promise fails. Instead, it
collects the result of each promise — whether it fulfilled or rejected. It
returns an array of objects where each object has the status
(fulfilled/rejected) and the corresponding value or reason. This is
useful when you want to know the outcome of all promises, even if
some failed.
EXAMPLE:
let p1 = new Promise((resolve, reject) => {
setTimeout(() => {
resolve("Value 1");
}, 1000);
});

let p2 = new Promise((resolve, reject) => {


setTimeout(() => {
// resolve("Value 2");
reject(new Error("Error"));
}, 2000);
});

let p3 = new Promise((resolve, reject) => {


setTimeout(() => {
resolve("Value 3");
}, 3000);
});

let promise_all = Promise.allSettled([p1, p2, p3])


promise_all.then((value) => {
console.log(value)
})
3. Promise.race(promises) - Promise.race() waits for the first promise to
either resolve or reject, and immediately uses that result (or error). It
doesn’t matter if other promises are still running — the first one to
finish determines the final result. This is useful when you want to take
the result of the fastest task and ignore slower ones.
EXAMPLE: (change part only here rest of code same as previous)

let promise_all = Promise.race([p1, p2, p3])


promise_all.then((value) => {
console.log(value)
})

4. Promise.any(promises) - Promise.any() waits for the first promise to
succeed (resolve), ignoring any promises that fail (reject). If at least
one promise succeeds, Promise.any() succeeds with that value.
However, if all promises fail, it returns a special AggregateError listing
all the errors. This is useful when you want at least one successful
result, no matter which one.
EXAMPLE: (change part only here rest of code same as previous)

let promise_all = Promise.any([p1, p2, p3])


promise_all.then((value) => {
console.log(value)
})
5. Promise.resolve(value) - Promise.resolve() creates a promise that is
already resolved with the given value. It’s useful when you want to
quickly wrap a normal value into a promise, so it can be used in
promise chains or functions that expect a promise.
EXAMPLE: (change part only here rest of code same as promise.all)

let promise_all = Promise.resolve(6)


promise_all.then((value) => {
console.log(value)
})

6. Promise.reject(error) - Promise.reject() creates a promise that is
already rejected with the given error. This is useful when you want to
immediately return a failed promise, often in cases where some
condition fails and you want to pass the error to a .catch() handler.
EXAMPLE: (change part only here rest of code same as promise.race)

let promise_all = Promise.reject(new Error("Hey"))


promise_all.then((value) => {
console.log(value)
})
2) Async/Await keyword:
There is a special syntax to work with promises in JavaScript.
async keyword - A function can be made async by using async keyword
like this. An async function always returns a promise. Other values are
wrapped in a promise automatically. We can do something like this:

EXAMPLE:

async function harry() {


return 7;
}

harry().then(alert);

So, async ensures that the function returns a promise and wraps non-
promises in it.

await keyword - There is another keyword called await that works only
inside async functions. The await keyword makes JavaScript wait until the
promise settles and returns its value. It’s just a more elegant syntax of
getting the promise result than .then(), but it’s easier to read and write.

EXAMPLE:
let value = await promise;
Complete Code async/await:
async function harry() {
let delhiWeather = new Promise((resolve, reject) => {
setTimeout(() => {
resolve("27 Deg")
}, 2000)
})

let bangaloreWeather = new Promise((resolve, reject) => {


setTimeout(() => {
resolve("21 Deg")
}, 5000)
})

// delhiWeather.then(alert)
// bangaloreWeather.then(alert)
console.log("Fetching Delhi Weather Please wait ...")
let delhiW = await delhiWeather
console.log("Fetched Delhi Weather: " + delhiW)
console.log("Fetching Bangalore Weather Please wait ...")
let bangaloreW = await bangaloreWeather
console.log("Fetched Bangalore Weather: " + bangaloreW)
return [delhiW, bangaloreW]
}

const cherry = () => {
console.log("Hey I am cherry and I am not waiting ")
}

console.log("Welcome to weather control room")


let a = harry()
let b = cherry()

a.then((value)=>{
console.log(value)
})
Arrow Function:
An arrow function is a shorter syntax for writing functions in JavaScript.
Introduced in ES6, arrow functions allow for a more concise and readable
code, especially in cases of small functions. Unlike regular functions, arrow
functions don’t have their own this, but instead, inherit it from the
surrounding context.
a) Arrow functions are written with the => symbol, which makes them
compact.
b) They don’t have their own this. They inherit this from the surrounding
context.
c) For functions with a single expression, the return is implicit, making the
code more concise.
d) Arrow functions do not have access to the arguments object, which is
available in regular functions.

 Different Ways to use arrow function:


1) Arrow Function without block body - An arrow function without a block
body is a shorthand way to write a function with a single expression.
When you omit the curly braces ({}), the return is implicit — meaning the
expression's result is automatically returned.
EXAMPLE:
const greet = () => console.log('Hello!');
greet();

2) Arrow Function with Block body - An arrow function with a block body
uses curly braces {} to define the function body. When using a block body,
you must use the return keyword explicitly if you want to return a value.
EXAMPLE:
const sum = (a, b) => {
const result = a + b;
return result;
};
console.log(sum(5,5));
3) Arrow Function without Parameters - An arrow function without
parameters is defined using empty parentheses (). This is useful when
you need a function that doesn’t require any arguments.
EXAMPLE:
const gfg = () => {
console.log( "Hi from GeekforGeeks!" );
}

gfg();

4) Arrow Function with Single Parameters - If your arrow function has a
single parameter, you can omit the parentheses around it.

EXAMPLE:
const square = x => x * x;
console.log(square(4)); // 16

5) Arrow Function with Multiple Parameters - Arrow functions with
multiple parameters, like (param1, param2) => { }, simplify writing
concise function expressions in JavaScript, useful for functions requiring
more than one argument.
EXAMPLE:
const add = (x, y, z) => {
console.log(x + y + z)
}

add(10, 20, 30);


6) Arrow Function with Default Parameters - Arrow functions support
default parameters, allowing predefined values if no argument is passed,
making JavaScript function definitions more flexible and concise.

EXAMPLE:
const con = (x, y, z = 30) => {
console.log(x + " " + y + " " + z);
}

con(10, 20);

7) Arrow Function With this Keyword –


EXAMPLE:
const x = {
name: "Anmol",
role: "Js Developer",
exp: 20,
show: function() {

setTimeout(() => {
console.log(`The name is ${this.name}\nThe role is ${this.role}`)
}, 2000)
}
}

console.log(x.name, x.exp)
x.show()

In this code, the show method is part of the object x, so when x.show() is
called, this correctly refers to x. Inside show, you have a setTimeout, and
inside it, you used an arrow function. This is important because arrow
functions do not have their own this — they inherit this from the outer
function (in this case, show). That means this inside the setTimeout arrow
function still refers to x, so this.name is "Anmol" and this.role is "Js
Developer". If you used a regular function instead of an arrow function
inside setTimeout, this would default to the global object (like window),
and this.name and this.role would be undefined. This is the key benefit of
using arrow functions in cases like this — they preserve the correct this
without needing .bind(), making the code cleaner and easier to
understand.
JavaScript Template Literal:
A JavaScript template literal is a way to create strings that allows for
embedded expressions, multi-line strings, and string interpolation using
backticks (``). Unlike regular strings, template literals can dynamically insert
variables or expressions directly into the string using ${} syntax.

 Use Cases - JavaScript Template Literals:

1) Multi-line Strings - Template literals allow you to write multi-line strings
directly, without needing to use special characters like \n or string
concatenation. With backticks, the string can span multiple lines, and the
line breaks are preserved exactly as they are written in the code.

EXAMPLE:
const poem = `Roses are red,
Violets are blue,
JavaScript is awesome,
And so are you!`;
console.log(poem);

2) Dynamic Expressions - One of the biggest benefits of template literals is
that they allow embedding variables and expressions directly into strings
using the ${} syntax. This makes it easy to combine text with dynamic
values, such as numbers, user input, or calculated results, without having
to break the string and use concatenation. It helps create cleaner, more
readable dynamic strings.
EXAMPLE:
const a = 5, b = 10;
const result = `Sum of ${a} and ${b} is ${a + b}.`;
console.log(result);
3) HTML Templates - Template literals are commonly used when
generating HTML content dynamically in JavaScript. They allow you to
insert things like page titles, user names, or dynamically generated
content into elements like <div>, <p>, or <h1>. This is especially useful
when working with client-side rendering frameworks or templating
engines.
EXAMPLE:
const title = "Welcome";
const html = `<h1>${title}</h1>`;
console.log(html);

4) Conditionals in Templates - You can also embed conditional logic directly
inside template literals using the ternary operator (condition ? trueValue
: falseValue). This allows you to display different parts of the string based
on a condition, such as showing "Admin" if a user is an admin, or "Guest"
if they are not. This avoids writing extra if-else logic outside the string
and keeps the template compact.
EXAMPLE:
const isAdmin = true;
const userRole = `User role: ${isAdmin ? "Admin" : "Guest"}.`;
console.log(userRole);

5) Loops with Templates - Template literals work seamlessly with loops and
array methods, especially methods like map(), to generate dynamic lists
directly inside the template. For example, you can loop over an array of
items and format each one with a prefix or bullet point, then combine all
of them into a single string. This is particularly helpful for creating lists,
menus, or data tables dynamically.
EXAMPLE:
const items = ["apple", "banana", "cherry"];
const list = `Items: ${items.map(item => `\n- ${item}`).join("")}`;
console.log(list);
6) Embedding Functions - You can also call functions directly inside a
template literal. This allows you to process or format values on the fly
before they are inserted into the final string. For example, you can apply
functions to capitalize text, format dates, or transform user input right at
the point of insertion. This keeps the code concise and expressive, since
all the logic is directly within the template.
EXAMPLE:
const toUpper = str => str.toUpperCase();
const s = `Shouting: ${toUpper("hello")}`;
console.log(s);
Destructuring in JavaScript:
Destructuring Assignment is a JavaScript expression that allows to unpack of
values from arrays, or properties from objects, into distinct variables data
can be extracted from arrays, objects, and nested objects, and assigned to
variables.

 Use Cases-Destructing in JavaScript:


1) Array Destructuring - Array destructuring allows you to unpack values
from arrays into separate variables in a concise way. Instead of accessing
each array element individually, destructuring lets you assign values to
variables in a single line by matching their positions in the array.
Example:
// Create an Array
const fruits = ["Bananas", "Oranges", "Apples", "Mangos"];

// Destructuring
let [fruit1, fruit2] = fruits;
console.log(fruit1, fruit2); // Output: Bananas Oranges

2) Skipping Array Values - With array destructuring, you can skip values you
don’t need by leaving empty commas in the destructuring pattern. This is
useful when you only care about certain elements in the array and want
to ignore the rest.
EXAMPLE:
// Create an Array
const moreFruits = ["Bananas", "Oranges", "Apples", "Mangos"];

// Destructuring
let [fruit3, , , fruit4] = moreFruits; // Skipping some values
console.log(fruit3, fruit4); // Output: Bananas Mangos
3) The Rest Property - The rest property (...rest) in array destructuring lets
you capture all remaining items into a new array after specific elements
have been destructured. This is useful when you need to separate the
first few items from the rest of the array.
EXAMPLE:
// Create an Array
const numbers = [10, 20, 30, 40, 50, 60, 70];

// Destructuring
const [a, b, ...rest] = numbers;
console.log(a, b); // Output: 10 20
console.log(rest); // Output: [30, 40, 50, 60, 70]

4) Swapping JavaScript Variables - Destructuring provides a simple way to
swap two variables without needing a temporary variable. By
destructuring a two-element array where the values are swapped, you
can reassign variables in a single step.
EXAMPLE:
// Swapping JavaScript Variables
let firstName = "John";
let lastName = "Doe";

// Destructing
[firstName, lastName] = [lastName, firstName];
console.log(firstName, lastName); // Output: Doe John

5) Object Destructuring - Object destructuring allows you to extract
properties from an object into variables with matching names. This
makes it easier to work with object data without repeatedly accessing
properties via dot notation. You can also rename variables while
destructuring and set default values.
EXAMPLE:
// Create an Object
const person = {
firstName: "John",
lastName: "Doe",
age: 50
};

// Destructuring
let { firstName: fName, lastName: lName } = person;
console.log(fName, lName); // Output: John Doe
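The renaming shown above can be combined with default values; in this sketch (object and property names are illustrative), the default is used only when the property is missing:

```javascript
const settings = { theme: "dark" };

// fontSize is not present in settings, so the default 14 is used
const { theme, fontSize = 14 } = settings;
console.log(theme, fontSize); // dark 14
```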

6) String Destructuring - String destructuring allows you to unpack
individual characters from a string into separate variables. This is useful if
you need to access the first few characters directly or treat a string like
an array of characters.
EXAMPLE:
// Create a String
let name = "W3Schools";

// Destructuring
let [a1, a2, a3, a4, a5] = name;
console.log(a1, a2, a3, a4, a5); // Output: W 3 S c h
UNIT  2

 Frontend Framework:
Introduction to ReactJS:
ReactJS is a free and open-source front-end JavaScript library which is used to
develop various interactive user-interfaces. It is a simple, feature rich and
component based UI library.
When we say component based, we mean that React develops applications
by creating various reusable and independent codes. ReactJS can be used to
develop small applications as well as big, complex applications.

 Feature of ReactJS:
1) Component Based − ReactJS makes use of multiple components to build
an application. These components are independent and have their own
logic which makes them reusable throughout the development process.
This will drastically reduce the application's development time.
2) Better and Faster Performance − ReactJS uses Virtual DOM. Virtual DOM
compares the previous states of components of an application with the
current states and only updates the changes in Real DOM. Whereas,
conventional web applications update all components again. This helps
ReactJS in creating web applications faster.
3) Extremely Flexible − React allows developers and teams to set their own
conventions that they deem are best suited and implement it however
they see fit, as there are no strict rules for code conventions in React.
4) Creates dynamic applications easily − Dynamic web applications require
less coding while offering more functionality. Thus, ReactJS can create
them easily.
5) Develops Mobile Applications as well − Not only web applications, React
can also develop mobile applications using React Native. React Native is
an open-source UI software framework that is derived from React itself. It
uses React Framework to develop applications for Android, macOS, Web,
Windows etc.
6) Debugging is Easy − The data flow in React is unidirectional, i.e., while
designing an app using React, child components are nested within parent
components. As the data flows in a single direction, it gets easier to
debug errors and spot bugs.

 How does React work?


React operates by creating an in-memory virtual DOM rather than directly
manipulating the browser’s DOM. It performs necessary manipulations
within this virtual representation before applying changes to the actual
browser DOM.

1) Virtual DOM is like a copy of the real webpage - React creates a
lightweight version of the webpage (Virtual DOM) using JavaScript
objects. This copy is faster to work with than the actual webpage.
2) When something changes (like a button click) - React creates a new
version of the Virtual DOM with the updated content. It compares the
new version with the old version to see what changed (this is called
"diffing").
3) React updates only the changed parts - Instead of rebuilding the whole
webpage, React updates only the part that changed. This makes it fast
because React avoids doing extra work.
 React Environment Setup:
Step 1: Navigate to the folder where you want to create the project and
open it in terminal

Step 2: In the terminal of the application directory type the following
command

npx create-react-app <<Application_Name>>

Step 3: Navigate to the newly created folder using the command

cd <<Application_Name>>

Step 4: A default application will be created with the following project
structure and dependencies. It will install some packages by default which
can be seen in the dependencies in package.json.

Step 5: To run this application type the following command in terminal

npm start

Step 6: The following output will be displayed in the browser


You can modify the application according to your preferences and change
the code accordingly.
App.js:

import './App.css';
function App() {
const pstyle = {
backgroundColor: 'purple',
fontSize: '20px',
color: 'yellow',
textAlign:'left',
};

return (
<div className="App">
<h1 className="text-center">Hello World</h1>
<p style={pstyle}>Lorem ipsum dolor sit amet consectetur adipisicing elit.
Quae praesentium sunt veritatis harum illo. Facere cupiditate saepe asperiores
illum earum?</p>
</div>
);
}
export default App;

Index.js:

import React from 'react';


import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);

reportWebVitals();
 Map() method:
map() method in React is used to loop through an array and create a new
array of elements. In React, you use map() to generate lists of
components or HTML elements dynamically. React needs a unique key for
each item to track changes efficiently.

map_method.js:
import React from 'react';

export const MapMethod = () => {


const myArray = ['apple', 'banana', 'orange'];

const myList = myArray.map((item) => <p key={item}>{item}</p>);

return (
<div>
<h2>My Fruits:</h2>
{myList}
</div>
);
};

App.js:
import './App.css';
import {HelloWorld} from "./MyComponent/hello-world";
import {MapMethod} from "./MyComponent/map_method";

function App() {
return (
<div className="App">
<h1 className="text-center">Hello World</h1>
<HelloWorld />
<MapMethod />
</div>
);
}
export default App;
 React Render HTML:
React's goal is in many ways to render HTML in a web page. React
renders HTML to the web page by using a function called createRoot()
and its method render().

1) The createRoot Function - The createRoot() function takes one
argument, an HTML element. The purpose of the function is to define
the HTML element where a React component should be displayed.
2) The render Method - The render() method is then called to define the
React component that should be rendered. There is another folder in
the root directory of your React project, named "public". In this folder,
there is an index.html file. There is a single <div> in the body of this file.
This is where our React application will be rendered.

App.js:
import './App.css';
function App() {
return (
<div className="App">
<h1 className="text-center">Hello World</h1>
</div>
);
}
export default App;

index.html:
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>

</body>
 JSX (JavaScript XML):
JSX (JavaScript XML) is a feature in React that allows you to write HTML-
like code directly within JavaScript. It makes it easier to create and
structure components by combining HTML and JavaScript in one place.
Instead of using complex JavaScript code to create elements, you can
write them in a format that looks like HTML, and React will convert it into
JavaScript behind the scenes.

1) Expressions in JSX - With JSX you can write expressions inside curly
braces { }. The expression can be a React variable, or property, or any
other valid JavaScript expression. JSX will execute the expression and
return the result:

const myElement = <h1>React is {5 + 5} times better with JSX</h1>;

2) Inserting a Large Block of HTML - To write HTML on multiple lines, put


the HTML inside parentheses:

const myElement = (
<ul>
<li>Apples</li>
<li>Bananas</li>
<li>Cherries</li>
</ul>
);

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(myElement);
3) One Top Level Element - The HTML code must be wrapped in ONE top
level element. So if you like to write two paragraphs, you must put
them inside a parent element, like a div element.

const myElement = (
<div>
<p>I am a paragraph.</p>
<p>I am a paragraph too.</p>
</div>
);

Alternatively, you can use a "fragment" to wrap multiple lines. This will
prevent unnecessarily adding extra nodes to the DOM. A fragment
looks like an empty HTML tag: <></>.
4) Elements Must be Closed - JSX follows XML rules, and therefore HTML
elements must be properly closed.

const myElement = <input type="text" />;

5) Attributes in JSX - JSX supports HTML-like attributes. All HTML tags and
their attributes are supported, but attributes have to be specified using
the camelCase convention (following the JavaScript DOM API) instead of
the normal HTML attribute names. For example, the class attribute in HTML
has to be written as className. The following are a few other examples:

a) htmlFor instead of for


b) tabIndex instead of tabindex
c) onClick instead of onclick

import React from 'react';


import ReactDOM from 'react-dom/client';

const myElement = <h1 className="myclass">Hello World</h1>;

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(myElement);
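Under the hood, a JSX tag like the one above is compiled (typically by Babel) into a React.createElement call that returns a plain JavaScript object, a "React element". A simplified sketch of that object (the real element also carries fields such as key and ref):

```javascript
// Simplified sketch of what JSX compiles to.
// <h1 className="myclass">Hello World</h1> becomes roughly:
const element = {
  type: "h1",
  props: {
    className: "myclass",
    children: "Hello World",
  },
};

console.log(element.type); // "h1"
console.log(element.props.className); // "myclass"
```

React walks this object tree when rendering, which is why JSX must always produce one top-level element.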
 React Components:

Components are independent and reusable bits of code. They serve the
same purpose as JavaScript functions, but work in isolation and return
HTML. Components come in two types, Class components and Function
components; this tutorial concentrates on Function components.
1) Class Component - A class component must include the extends
React.Component statement. This statement creates an inheritance to
React.Component, and gives your component access to
React.Component's functions. The component also requires a render()
method, this method returns HTML.

class Car extends React.Component {


render() {
return <h2>Hi, I am a Car!</h2>;
}
}

2) Function Component - Here is the same example as above, but created


using a Function component instead. A Function component also
returns HTML, and behaves much the same way as a Class component,
but Function components can be written using much less code, are
easier to understand, and will be preferred in this tutorial.

function Bike() {
return <h2>Hi, I am a Bike!</h2>;
}

3) Rendering a Component - Now your React application has a


component called Car, which returns an <h2> element. To use this
component in your application, use similar syntax as normal HTML:
<Car />

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(<Car />);
4) Props - Components can be passed as props, which stands for
properties. Props are like function arguments, and you send them into
the component as attributes.

function Cycle(props) {
return <h2>I am a {props.color} Cycle!</h2>;
}

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(<Cycle color="red"/>);

5) Components in Components - We can refer to components inside


other components:

function Airplane() {
return <h2>I am an Airplane and I live in a Garage!</h2>;
}

function Garage() {
return (
<>
<h1>Who lives in my Garage?</h1>
<Airplane />
</>
);
}

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(<Garage />);

6) Components in Files - React is all about re-using code, and it is


recommended to split your components into separate files. To do that,
create a new file with a .js file extension and put the code inside it:
Cycle.js:
export function Cycle(props) {
return <h2>I am a {props.color} Cycle!</h2>;
}
App.js:
import './App.css';
import {Cycle} from "./MyComponent/component";
function App() {
return (

<div>

<Cycle color='Blue'/>

</div>
);
}
export default App;

 Props Component:
Props are arguments passed into React components. Props are passed to
components via HTML attributes. props stands for properties. React
Props are like function arguments in JavaScript and attributes in HTML. To
send props into a component, use the same syntax as HTML attributes:

Cycle.js:
export function Cycle(props) {
return <h2>I am a {props.color} Cycle!</h2>;
}

App.js:
import './App.css';
import {Cycle} from "./MyComponent/component";
function App() {
return (

<div>

<Cycle color='Blue'/>

</div>
);
}
export default App;
 Pass Data Using Props:

1) Using Variable - If you have a variable to send, and not a string as in the
example above, you just put the variable name inside curly brackets:

function Car(props) {
return <h2>I am a { props.brand }!</h2>;
}

function Garage() {
const carName = "Ford";
return (
<>
<h1>Who lives in my garage?</h1>
<Car brand={ carName } />
</>
);
}

2) Using Object:

function Car(props) {
return <h2>I am a { props.brand.model }!</h2>;
}

function Garage() {
const carInfo = { name: "Ford", model: "Mustang" };
return (
<>
<h1>Who lives in my garage?</h1>
<Car brand={ carInfo } />
</>
);
}
 Prop Validation:
React community provides a special package, prop-types to address the
properties type mismatch problem. prop-types allows the properties of the
component to be specified with type through a custom setting (propTypes)
inside the component.

PropType                   Description                                 Example
PropTypes.string           A string                                    name: PropTypes.string
PropTypes.number           A number                                    age: PropTypes.number
PropTypes.bool             A boolean value                             isActive: PropTypes.bool
PropTypes.func             A function                                  onClick: PropTypes.func
PropTypes.array            An array                                    items: PropTypes.array
PropTypes.object           An object                                   data: PropTypes.object
PropTypes.node             Anything that can be rendered               children: PropTypes.node
                           (string, number, element, etc.)
PropTypes.element          A React element                             component: PropTypes.element
PropTypes.oneOf([])        One of a specific set of values             size: PropTypes.oneOf(['small', 'medium', 'large'])
PropTypes.oneOfType([])    One of several types                        value: PropTypes.oneOfType([PropTypes.string, PropTypes.number])
PropTypes.arrayOf()        An array of a specific type                 items: PropTypes.arrayOf(PropTypes.string)
PropTypes.objectOf()       An object with values of a specific type    data: PropTypes.objectOf(PropTypes.number)
PropTypes.any              Any value                                   value: PropTypes.any
App.js:
import React from 'react';
import PropTypes from 'prop-types';

export const UserAge = ({ age }) => {


return <div>User age: {age}</div>;
};

UserAge.propTypes = {
age: PropTypes.number.isRequired, // Ensures 'age' is a required number prop
};
 ReactJS Styling:
There are different ways to apply CSS in ReactJS; the most
commonly used are as follows:
1) Inline Styling - Inline Styling is one of the safest ways to style the React
component. It declares all the styles as JavaScript objects using DOM
based css properties and set it to the component through style attributes.
Since the inline CSS is written in a JavaScript object, properties with
hyphen separators, like background-color, must be written with camel
case syntax:

App.js:
import './App.css';
function App() {

return (
<div className="App">
<h1 className="text-center">Hello World</h1>
<p style={{ color: 'yellow', backgroundColor: 'purple'}}>Lorem ipsum dolor
sit amet consectetur adipisicing elit.
</p>
</div>
);
}
export default App;

2) JavaScript Object - You can also create an object with styling information,
and refer to it in the style attribute:

App.js:
import './App.css';
function App() {
const pstyle = {
backgroundColor: 'purple',
fontSize: '20px',
color: 'yellow',
};

return (
<div className="App">
<h1 style={pstyle}>Hello World</h1>
</div>
);
}
export default App;
3) CSS Modules - CSS Modules provide one of the safest and easiest ways to
define styles. They use a normal CSS stylesheet with normal syntax. When
the styles are imported, CSS Modules convert them into locally scoped
class names, so name conflicts cannot happen.
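A minimal sketch of the idea (the file names Car.module.css and Car.js are assumptions for illustration; a bundler such as the one in Create React App is required to process the .module.css import):

```
Car.module.css:
.bottom {
  margin-top: 25px;
  font-size: 25px;
}

Car.js:
import styles from './Car.module.css';

export function Car() {
  // styles.bottom resolves to a unique, locally scoped class name
  // generated at build time, so it cannot clash with other files.
  return <h1 className={styles.bottom}>Hello Car!</h1>;
}
```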
 Conditional Statement:
1) if Statement - The if statement allows you to choose which component
to show based on a condition. If the condition is true, one component is
rendered; if false, another component is shown. This is useful when you
need more complex logic before deciding what to display.

2) && Operator - The && operator shows a component only if the


condition is true. If the condition is false, nothing is displayed. This is
helpful for simple cases where you only need to display content when a
condition is met.

3) Ternary Operator (? :) - The ternary operator allows you to choose


between two components based on a condition. If the condition is true,
it shows the first component; if false, it shows the second component.
It’s useful for quick "if-else" situations inside JSX.
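The three patterns can be sketched in plain JavaScript outside JSX (the strings below stand in for components like <Dashboard /> and are purely illustrative):

```javascript
// Plain-JavaScript sketch of the three conditional-rendering patterns.
function renderPage(isLoggedIn, unreadCount) {
  // 1) if statement: pick a whole branch before returning
  let page;
  if (isLoggedIn) {
    page = "<Dashboard />";
  } else {
    page = "<Login />";
  }

  // 2) && operator: show the badge only when the condition is true,
  //    otherwise nothing (false renders as nothing in JSX)
  const badge = unreadCount > 0 && `(${unreadCount} unread)`;

  // 3) ternary operator: quick inline if-else
  const greeting = isLoggedIn ? "Welcome back!" : "Please sign in";

  return { page, badge, greeting };
}

console.log(renderPage(true, 3));
// → { page: '<Dashboard />', badge: '(3 unread)', greeting: 'Welcome back!' }
```

In real JSX these expressions appear inside curly braces, e.g. {isLoggedIn ? <Dashboard /> : <Login />}.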
 React Lists:
In React, a list is a collection of items (like an array) that can be rendered
dynamically using JavaScript loops. Lists are commonly used to display data
sets, such as an array of objects or strings, in a structured format like a list
or table.

1) map() Method - The map() method creates a new array by applying a


function to each element in the original array. In React, map() is often
used to generate a list of components because it allows you to write clean
and concise code.

2) for Loop - You can use a for loop to create lists in React by manually
pushing elements into an array and then returning that array in JSX. This
approach is useful when you need more control over the loop execution.

3) while Loop - A while loop can also be used to generate lists, but it is less
common. Similar to the for loop, you need to push elements into an array
and return the array in JSX.

List.js:
import React from 'react';

function Car(props) {
return <li>I am a {props.brand}</li>;
}

export function CarList() {


const cars = ['Ford', 'BMW', 'Audi'];
const carList1 = [];
const carList2 = [];
let j = 0;

for (let i = 0; i < cars.length; i++) {


carList1.push(<Car key={i} brand={cars[i]} />);
}

while (j < cars.length) {


carList2.push(<Car key={j} brand={cars[j]} />);
j++;
}
return (
<>
<h1>Who lives in my garage? (example using map) </h1>
<ul> {cars.map((car, index) => <Car key={index} brand={car} />)}</ul>
<h1>Who lives in my garage? (example using for loop) </h1>
<ul>{carList1}</ul>

<h1>Who lives in my garage? (example using while loop) </h1>


<ul>{carList2}</ul>
</>
);
}

 Keys:
Keys help React identify which elements have changed, been added, or
removed. Each key should be unique among siblings to allow React to
efficiently update and render the list without re-rendering unchanged
elements.

EXAMPLE:

function Garage() {
const cars = [
{id: 1, brand: 'Ford'},
{id: 2, brand: 'BMW'},
{id: 3, brand: 'Audi'}
];
return (
<>
<h1>Who lives in my garage?</h1>
<ul>
{cars.map((car) => <Car key={car.id} brand={car.brand} />)}
</ul>
</>
);
}
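The matching React performs with keys can be sketched in plain JavaScript (the lists and the reuse/create split below are illustrative, not React's actual reconciliation code):

```javascript
// Sketch of why keys matter: React matches old and new list items by key,
// so it can tell an insertion apart from a complete change of the list.
const oldList = [
  { key: 1, text: "Ford" },
  { key: 2, text: "BMW" },
];
const newList = [
  { key: 3, text: "Audi" }, // inserted at the front
  { key: 1, text: "Ford" },
  { key: 2, text: "BMW" },
];

// Items whose keys already existed are reused, not re-created:
const oldKeys = new Set(oldList.map((item) => item.key));
const reused = newList.filter((item) => oldKeys.has(item.key));
const created = newList.filter((item) => !oldKeys.has(item.key));

console.log(reused.map((i) => i.text)); // [ 'Ford', 'BMW' ]
console.log(created.map((i) => i.text)); // [ 'Audi' ]
```

Using the array index as a key breaks this matching when items are inserted or reordered, which is why stable ids (like car.id above) are preferred.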
 State in ReactJS:
State in ReactJS is an object that holds a component's dynamic data and
controls how the component behaves. When the state changes, React
automatically updates and re-renders the component to show the latest
data. This helps create interactive user interfaces that respond to user
actions.
a) Creating State Object - Creating a state is essential to building dynamic
and interactive components. We can create a state object within the
constructor of the class component.
b) Updating State in React - In React, a State object can be updated using
setState() method. React may update multiple setState() updates in a
single go. Thus using the value of the current state may not always
generate the desired result.
c) Managing Complex State - In React class components, state can be more
than just primitive values like numbers or strings. You can manage
complex state objects or arrays, but you need to be careful when
updating them to avoid directly mutating the state
Syntax:
this.state = { objectName: { property1: value1, property2: value2 } };
this.setState((prevState) => ({ objectName: { ...prevState.objectName,
updatedProperty: newValue } }));

 Initialize state as an object in the constructor, then use setState() to
update the object, preserving previous values with the spread operator.
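The spread-merge pattern that setState relies on can be shown in plain JavaScript (the user object here is illustrative):

```javascript
// Sketch of updating one property of an object immutably:
// copy the old properties with the spread operator, then override one.
const prevState = { user: { name: "Asha", age: 21, city: "Delhi" } };

const nextState = {
  user: { ...prevState.user, age: 22 },
};

console.log(nextState.user); // { name: 'Asha', age: 22, city: 'Delhi' }
console.log(prevState.user.age); // 21 (the original object is untouched)
```

Because a brand-new object is produced instead of mutating the old one, React can detect the change by comparing references and re-render correctly.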
EXAMPLE:
import React from "react";

export class StateComponent extends React.Component {


//create state
constructor(props) {
super(props);
this.state = {
count: 0,
};
}
//change state
increment = () => {
this.setState((prevState) => ({
count: prevState.count + 1,
}));
};
//change state
decrement = () => {
this.setState((prevState) => ({
count: prevState.count - 1,
}));
};

render() {
return (
<div>
<h1>
The current count is :{" "}
{this.state.count}
</h1>
<button onClick={this.increment}>
Increase
</button>
<button onClick={this.decrement}>
Decrease
</button>
</div>
);
}
}
 Event in ReactJS:

Events are just some actions performed by a user to interact with any
application. They can be the smallest of actions, like hovering a mouse
pointer on an element that triggers a drop-down menu, resizing an
application window, or dragging and dropping elements to upload them etc.
Events in React are divided into three categories:
a) Mouse Events − onClick, onDrag, onDoubleClick
b) Keyboard Events − onKeyDown, onKeyPress, onKeyUp
c) Focus Events − onFocus, onBlur

 Event Management:
1) Adding Events in React - In React, events are written in camelCase
format. For example, instead of onclick used in HTML, React uses onClick.
Event handlers in React are written inside curly braces {}.
Example Comparison:
In React: <button onClick={shoot}>Take the Shot!</button>
In HTML: <button onclick="shoot()">Take the Shot!</button>

export function Football() {


const shoot = () => {
alert("Great Shot!");
}

return (
<button onClick={shoot}>Take the shot!</button>
);
}

2) Passing Arguments to Event Handlers - To pass an argument to an event


handler, you can use an arrow function. This allows you to pass data or
values directly to the event handler.
export function Cricket() {
const shoot = (a) => {
alert(a);
}

return (
<button onClick={() => shoot("Six!")}>Take the shot!</button>
);
}

3) React Event Object - The React Event Object is a special object that React
creates when an event (like a click) happens. It wraps the native browser
event and gives it a consistent structure across all browsers. It helps React
work the same way on different browsers. It gives you extra information
about the event, like:
a) event.target – The element that triggered the event.
b) event.type – The type of event (e.g., "click").
c) event.preventDefault() – Stops the default action (like stopping a
form from submitting).
d) event.persist() - If you want to use the event data after the handler
finishes(like a setTimeout), you need to store it using
event.persist().

export function Win() {
  const shoot = (event) => {
    alert(event.type);
    event.persist(); // keep the event data usable after the handler finishes

    setTimeout(() => {
      alert("Hurray, Won The Match! 🎉");
    }, 3000);
  }

  return (
    <button onClick={(event) => shoot(event)}>Take the shot!</button>
  );
}
 React Forms:
React forms allow users to interact with the webpage, similar to how forms
work in HTML. You can create forms in React using the <form> element,
<input> fields, <textarea>, <select>, and other form components.

1) Adding Forms in React - Forms in React are added like any other HTML
element. However, in React, forms are usually controlled by the state
rather than letting the browser handle the data. This gives more control
over user input and how the form behaves.
2) Handling Forms - In HTML, the DOM usually handles form data directly.
However, in React, form data is managed through the component’s state.
This allows you to control the data and update the state in response to
user input using the onChange event handler.
3) Using useState to Manage Form State - React's useState hook is used to
manage form data. When an input changes, you can update the state
using setState or setName. This creates a "single source of truth" where
the input value reflects the state value.
4) Submitting Forms - You can prevent the default form submission (which
reloads the page) using event.preventDefault() inside an onSubmit
handler. This allows you to handle the form submission manually, such as
validating the data or sending it to a server.
5) Handling Multiple Input Fields - To handle multiple inputs in one state
object, you can use the name attribute to identify the input field. You can
then update the state dynamically using [event.target.name] to set the
value of the corresponding field in the state object.
EXAMPLE:
import React, { useState } from 'react';
import './Component.css';

export const FormState = () => {


const [formData, setFormData] = useState({
name: '',
amount: '',
date: '',
category: ''
});
// Handle input changes
const handleChange = (e) => {
const { name, value } = e.target;
setFormData({
...formData,
[name]: value
});
};

// Handle form submission


const handleSubmit = (e) => {
e.preventDefault();

// Simple validation
if (!formData.name || !formData.amount || !formData.date ||
!formData.category) {
alert('Please fill out all fields!');
return;
}

console.log('Form Data Submitted:', formData);

// Clear form after submission
setFormData({ name: '', amount: '', date: '', category: '' });
};

return (
<div id="expenseForm">
<form onSubmit={handleSubmit}>
<label htmlFor="name">Title</label>
<input
type="text"
id="name"
name="name"
placeholder="Enter expense title"
value={formData.name}
onChange={handleChange}
/>

<label htmlFor="amount">Amount</label>
<input
type="number"
id="amount"
name="amount"
placeholder="Enter expense amount"
value={formData.amount}
onChange={handleChange}
/>

<label htmlFor="date">Spend Date</label>


<input
type="date"
id="date"
name="date"
value={formData.date}
onChange={handleChange}
/>
<label htmlFor="category">Category</label>
<select
id="category"
name="category"
value={formData.category}
onChange={handleChange}
>
<option value="">Select</option>
<option value="Food">Food</option>
<option value="Entertainment">Entertainment</option>
<option value="Academic">Academic</option>
</select>

<input type="submit" value="Submit" />


</form>
</div>
);
};

 React Animation:

https://www.tutorialspoint.com/reactjs/reactjs_animation.htm
 React Router in Web Applications:

Routing in web applications refers to binding a web URL to a specific


resource or component within the application. In React, routing allows you
to map URLs to components, enabling dynamic navigation between
different views.

 Installing React Router - To set up routing in a React project, you need to


install the react-router-dom package using npm:

cd /path/to/your/project
npm install react-router-dom

 React Router Components:

1) Router - The Router component is the foundation of React Router. It


wraps the entire application and provides the routing context, enabling
route management and navigation. It allows the use of history-based
navigation and route matching. Common types include BrowserRouter
(which uses the browser’s history API) and HashRouter (which uses the
hash portion of the URL).
2) Link - The Link component is used to create navigation links in a React
app. It works like an HTML <a> tag but prevents a full-page reload by
using React Router’s internal navigation system. This enables faster and
smoother client-side navigation while maintaining the application state.
3) Route - The Route component is responsible for rendering a specific
component when the URL matches a defined path. It supports static
and dynamic paths (e.g., /product/:id) and can be nested to create
more complex routing structures.
4) Outlet - The Outlet component is used to render child routes within a
parent route. It acts as a placeholder where the matching child
component is displayed. This makes it easy to create nested routes and
maintain consistent layout structures.
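Conceptually, what Route matching does can be sketched in plain JavaScript (the routes table, the matchRoute helper, and the regex conversion below are hypothetical illustrations, not React Router's implementation):

```javascript
// Sketch of URL-to-component matching with dynamic ":param" segments.
const routes = [
  { path: "/", component: "Home" },
  { path: "/about", component: "About" },
  { path: "/product/:id", component: "Product" }, // dynamic segment
];

function matchRoute(url) {
  for (const route of routes) {
    // Turn "/product/:id" into the pattern ^/product/([^/]+)$
    const pattern = new RegExp(
      "^" + route.path.replace(/:[^/]+/g, "([^/]+)") + "$"
    );
    const match = url.match(pattern);
    if (match) return { component: route.component, params: match.slice(1) };
  }
  return { component: "NotFound", params: [] };
}

console.log(matchRoute("/product/42")); // { component: 'Product', params: [ '42' ] }
```

React Router performs this kind of matching on every navigation and renders the matched component (and, for nested routes, its children through Outlet).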
 Hook in React:
A hook in React is a special function that lets you use state and lifecycle
features in functional components without needing a class
component.We use hooks to make React code simpler and easier to
manage. They let us add features like saving data (state) and running
code at certain times (like when the page loads) without using complex
class components.

1) UseState hook - The useState hook in React lets you create and
manage data (state) in a function component. It allows your
component to remember values (like user input or a counter) and
update them when needed.
We use useState because it makes it easy to update the component
automatically when the state changes, without needing complex class
components.

a) initialValue - Initial value of the state. state can be specified in any


type (number, string, array and object).
b) state - Variable to represent the value of the state.
c) setState - Function variable to represent the function to update the
state returned by the useState.
EXAMPLE:
import React, { useState } from 'react';

export function UseStateCounter() {


const [count, setCount] = useState(0);

return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
<button onClick={() => setCount(count - 1)}>Dicrement</button>
<button onClick={() => setCount(0)}>Reset</button>
</div>
);}
2) useEffect hook - The useEffect hook in React lets you run code
at specific times, like when the component loads, updates, or
gets removed. We use useEffect to handle side effects, such as
fetching data, updating the DOM, or setting up timers, without
needing lifecycle methods from class components.

a) Update function - Update function is the function to be


executed after each render phase. This corresponds to
componentDidMount and componentDidUpdate events
b) Dependency - Dependency is an array with all the variables on
which the function is dependent. Specifying the dependency is
very important to optimize the effect hook.

1. Run Code After Initial Render (ComponentDidMount) - Runs


once after the component renders for the first time when the
dependency array [] is empty.

useEffect(() => {
alert('Component mounted');
}, []); // Empty array = Runs only once (on mount)

2. Run Code After State or Props Change


(ComponentDidUpdate) - Runs every time the specified state
or prop in the dependency array changes.

export function UseEffectCounter() {


const [count, setCount] = useState(0);
useEffect(() => {
alert(`Count changed to: ${count}`);
}, [count]); // Runs whenever 'count' changes
3. Cleanup on Unmount (ComponentWillUnmount) - Returns a
cleanup function that runs when the component unmounts,
useful for clearing intervals, unsubscribing from events, etc.

useEffect(() => {
const timer = setInterval(() => {
console.log('Running every second');
}, 1000);

// Cleanup when component unmounts


return () => clearInterval(timer);
}, []);

3) useRef - The useRef hook in React lets you create a reference to a


DOM element or store a value that doesn’t trigger a re-render when
it changes. We use useRef to access or modify a DOM element
directly (like focusing an input) or to keep a value that persists
between renders without causing updates.

a) val is the initial value to be set for the returned mutable object,
refObj.
b) refObj is the object returned by the hook.

Modify DOM element directly:


//useRef
export function UseRefAccessChangeColor() {
const inputRef = useRef(null);

const handleChangeColor = () => {
inputRef.current.style.backgroundColor = 'red'; // Access the input and
change its color
};

return (
<div>
<br /><br /><input ref={inputRef} type="text" placeholder="Type
something..." />
<br /><br /><br /><button onClick={handleChangeColor}>Change Color</button>
</div>
);
}
Store value that does not change on re-render:
export function UseRefRerenderWithoutResetValue() {
const a = useRef(0);
const [count, setCount] = useState(0);

const handleClick = () => {


a.current = a.current + 1;
console.log(`Value of a is = ${a.current}`); // Updates without causing
re-render
setCount(count + 1); // Triggers a re-render
};

return (
<div>
<p>State Count: {count}</p>
<button onClick={handleClick}>Increment</button>
</div>
);
}

4) useContext - The useContext hook in React lets you access data (like
a theme or user info) from a central place without passing it down
through props. We use useContext to make it easier to share data
between components, avoiding "prop drilling" (passing props through
many layers).

Parent.jsx:
import React, { createContext } from 'react';
import { ChildA } from './Child_A'; // Import named export

// Create Contexts
const data = createContext();
const data1 = createContext();

export function UseContextParent() {


const name = "Yoshita";
const gender = "Female";

return (
<data.Provider value={name}>
<data1.Provider value={gender}>
<ChildA />
</data1.Provider>
</data.Provider>
);
}

export { data, data1 }; // Export contexts properly


Child_C.jsx:
import React, { useContext } from 'react';
import { data, data1 } from './Parent'; // Fix context import

export function ChildC() {


const FirstName = useContext(data);
const gender = useContext(data1);

return (
<>
<h1>Hi my name is {FirstName} and my gender is {gender}</h1>
</>
);
}

Child_A.jsx:
import React from "react";
import { ChildB } from "./Child_B"; // Importing named export

function ChildA() {
return <ChildB />;
}

export { ChildA }; // Exporting as a named export

Child_B.jsx:
import React from "react";
import { ChildC } from "./Child_C"; // Importing named export

export function ChildB() {


return <ChildC />;
}
5) Custom Hook - A custom hook is a reusable function in React that starts
with use and lets you share stateful logic between components. It helps
reduce code duplication, keeps code cleaner, and makes logic easier to
manage and reuse.

CustomHook.jsx:
import React from 'react';
import {useCounter} from './CustomCounter';

export function Counter() {


const { count, increment, decrement, reset } = useCounter();

return (
<div>
<h1>Count: {count}</h1>
<button onClick={increment}>Increment</button>
<button onClick={decrement}>Decrement</button>
<button onClick={reset}>Reset</button>
</div>
);
}

CustomCounter.jsx:
import { useState } from 'react';

export function useCounter(initialValue = 0) {


const [count, setCount] = useState(initialValue);

const increment = () => setCount(count + 1);


const decrement = () => setCount(count - 1);
const reset = () => setCount(0);

return { count, increment, decrement, reset };


}
 State Management Using Redux:
Redux is a state management library that helps you manage the state (data)
of your application in a single place, making it easier to track and update.
Instead of passing data between components manually, Redux stores all the
state in one central store. When you want to update the state, you send an
action (a description of what should change), and a reducer (a function)
decides how to update the state based on that action. This makes state
changes predictable and easier to debug, especially in large applications.
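The core action-to-reducer cycle can be sketched in plain JavaScript without the Redux library (the action type strings and initial state here are illustrative):

```javascript
// A reducer is a pure function: (current state, action) -> next state.
function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case "counter/increment":
      return { ...state, value: state.value + 1 };
    case "counter/decrement":
      return { ...state, value: state.value - 1 };
    default:
      return state; // unknown actions leave the state unchanged
  }
}

// Dispatching actions one after another, as a store would:
let state = counterReducer(undefined, { type: "@@INIT" });
state = counterReducer(state, { type: "counter/increment" });
state = counterReducer(state, { type: "counter/increment" });
state = counterReducer(state, { type: "counter/decrement" });
console.log(state); // { value: 1 }
```

Redux Toolkit's createSlice (used in counterSlice.jsx below) generates exactly this kind of reducer and the matching action creators for you.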

Index.js:
import React from 'react';
import ReactDOM from 'react-dom/client';
import { Provider } from 'react-redux';
import {store} from './Store';
import App from './App';

const root = ReactDOM.createRoot(document.getElementById('root'));


root.render(
<Provider store={store}>
<App />
</Provider>
);

App.js
import './App.css'
import Navbar from './Components/Navbar'
import { useSelector, useDispatch } from 'react-redux'
import { decrement, increment, multiply } from './Counter/counterSlice'

function App() {
const count = useSelector((state) => state.counter.value)
const dispatch = useDispatch()

return (
<>
<Navbar />
<div>
<button onClick={() => dispatch(decrement())}>-</button>
Currently count is {count}
<button onClick={() => dispatch(increment())}>+</button>
<button onClick={() => dispatch(multiply())}>*</button>
</div>

</>
)
}

export default App


Store.js
import { configureStore } from '@reduxjs/toolkit';
import counterReducer from './Counter/counterSlice';

export const store = configureStore({


reducer: {
counter: counterReducer,
},
})

Navbar.js
import React from 'react'
import { useSelector} from 'react-redux'

const Navbar = () => {


const count = useSelector((state) => state.counter.value)

return (
<div>
I am a navbar and counter is {count}
</div>
)
}

export default Navbar

counterSlice.jsx
import { createSlice } from '@reduxjs/toolkit'

const initialState = {
value: 0,
}

export const counterSlice = createSlice({


name: 'counter',
initialState,
reducers: {
increment: (state) => {
state.value += 1
},
decrement: (state) => {
state.value -= 1
},
multiply: (state)=>{
state.value *=2
}
},
})

// Action creators are generated for each case reducer function


export const { increment, decrement, multiply } = counterSlice.actions

export default counterSlice.reducer


Advanced Back-End Development:

 For Express, Middleware, and OAuth, refer to the previous semester's
notes; JWT is not covered here.

 Introduction to GraphQL:
GraphQL is a query language for APIs that allows clients to request
exactly the data they need, and nothing more. This contrasts with REST,
where fixed endpoints often return too much or too little data.
It works by defining a schema (like a blueprint) that describes the data
and how to access it. This makes it faster and more efficient because you
can combine multiple data requests into one query, reducing network
calls and improving performance. It's popular for building modern,
responsive web and mobile apps.

 GraphQL Architecture:

1) GraphQL Server with Connected Database - In this setup, the GraphQL


server is directly connected to a database. When the client sends a
query, the server reads the request, fetches the data from the
database, and returns it to the client in the correct format. This
approach is useful for new projects because it creates a simple and
direct connection between the server and the database.

2) GraphQL Server Integrating Existing Systems - This setup is used when


a company has existing infrastructure like microservices, legacy
systems, or third-party APIs. The GraphQL server acts as a middle layer
between the client and these systems. When a client makes a request,
the GraphQL server gathers data from different sources and combines
it into a single response.

3) Hybrid Approach - In the hybrid approach, the GraphQL server can


handle data from both a connected database and existing systems. It
decides where to get the data based on the type of request. This
approach offers more flexibility because it combines the benefits of
direct database connections and integration with existing systems.
 GraphQL Type System:
GraphQL is a strongly typed language, which means that the data you
work with must follow specific rules and structures. The type system in
GraphQL helps define how the data looks, how you can query it, and how
you can modify it.
1) Scalar Type - Scalar types are the basic types in GraphQL used to store
simple values like numbers, text, or true/false values. Common scalar
types are Int for numbers, Float for decimal values, String for text,
Boolean for true/false, and ID for unique identifiers. Scalars help
define the basic format of the data.

2) Object Type - An object type is used to define a group of related fields
that represent a real-world object. For example, a Student object can
have fields like id, name, and age. Objects allow you to combine
multiple pieces of information into a single structured unit.

3) Query Type - A query type is used to fetch data from the server. It
defines how a client can request specific information. Instead of
getting all the data, the client can ask for only the fields it needs,
making data retrieval more efficient.
4) Mutation Type - A mutation type is used to modify data on the server,
such as adding, updating, or deleting records. It works like POST, PUT,
or DELETE in REST APIs, allowing the client to send data to the server
and receive a response.

5) Enum Type - An enum type defines a fixed list of possible values for a
field. It’s useful when you want to restrict the values to specific
options, like days of the week or user roles. This ensures that only
valid values are used.

6) List Type - A list type allows a field to return multiple values of the
same type. Instead of getting just one result, you can get an array of
items, like a list of students or products. This makes it easy to handle
large sets of data.

7) Non-Nullable Type - A non-nullable type ensures that a field cannot
be empty or null. If you define a field as non-nullable, it must always
return a value, helping to prevent errors caused by missing data.
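All of the concepts above can appear together in one schema. The following typeDefs string is an illustrative sketch; the type and field names are assumptions made for this example, not part of any fixed API:

```javascript
// Illustrative SDL string combining the type-system concepts described above.
const typeDefs = `
  enum Role {              # Enum: fixed list of allowed values
    ADMIN
    STUDENT
  }

  type Student {           # Object type: related fields grouped together
    id: ID!                # Scalar ID, non-nullable (the trailing !)
    name: String!          # Scalar String, non-nullable
    age: Int               # Scalar Int, may be null
    role: Role             # Enum field
  }

  type Query {
    students: [Student!]!  # List type: a non-null list of non-null Students
  }

  type Mutation {
    addStudent(name: String!): Student
  }
`;

console.log(typeDefs.includes("enum Role")); // true
```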
 Component of GraphQL:

1) Server-Side Components - GraphQL server-side components handle
client requests and respond with the requested data. The GraphQL
server processes the queries sent by the client, validates them against
the schema, and executes resolver functions to return the correct data.
Apollo Server is a popular tool used to set up a GraphQL server.

a) Schema - The schema defines the structure of the data and the
operations allowed in GraphQL. It acts as a contract between the
client and the server, specifying what data can be requested and
how it is organized.

Parameter Description
typeDefs This is a required argument. It represents the GraphQL schema as a UTF-8 string.
resolvers This is an optional argument (empty object by default). This has the functions that handle the query.
logger This is an optional argument and can be used to print errors to the server console.
parseOptions This is an optional argument and allows customization of parse when specifying typeDefs as a string.
allowUndefinedInResolve This is true by default. When set to false, causes your resolve functions to throw errors if they return undefined.
resolverValidationOptions This is an optional argument and accepts an object with Boolean properties.
inheritResolversFromInterfaces This is an optional argument and accepts a Boolean argument to check resolvers object inheritance.

Syntax:
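The parameters in the table are the options accepted by makeExecutableSchema from graphql-tools. A minimal sketch of the call shape follows; the require line is commented out so the snippet stands alone, and the values are illustrative:

```javascript
// Options object for makeExecutableSchema; only typeDefs is required.
// const { makeExecutableSchema } = require('@graphql-tools/schema');

const options = {
  typeDefs: `type Query { greeting: String }`,   // required: schema as a UTF-8 string
  resolvers: {                                   // optional ({} by default)
    Query: { greeting: () => "Hello GraphQL" },
  },
  logger: { log: (err) => console.error(err) },  // optional: print errors to the console
  allowUndefinedInResolve: true,                 // true by default
};

// const schema = makeExecutableSchema(options);
console.log(Object.keys(options).join(", "));
```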

b) Query - A query is a request from the client to fetch specific data
from the server. It allows the client to define the exact shape and
fields of the data it needs, reducing unnecessary data transfer.
c) Resolver - Resolvers are functions that handle the logic for fetching
or modifying data. When a query or mutation is received, the
resolver executes the necessary steps to get the correct data and
send it back to the client.
Arguments Description
root The object that contains the result returned from the resolver on the parent field.
args An object with the arguments passed into the field in the query.
context An object shared by all resolvers in a particular query.
info Contains information about the execution state of the query, including the field name and the path to the field from the root.
Syntax:
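A resolver is a plain function that receives the four arguments from the table above, in that order. A stand-alone sketch with made-up data:

```javascript
// Standard resolver signature: (root, args, context, info).
const USERS = [
  { id: "1", name: "Asha" },
  { id: "2", name: "Ravi" },
];

const resolvers = {
  Query: {
    // args holds the arguments passed into the field in the query
    getUser: (root, args, context, info) =>
      USERS.find((u) => u.id === args.id),
  },
};

// Calling it directly, the way a GraphQL server would for getUser(id: "2"):
const user = resolvers.Query.getUser(null, { id: "2" }, {}, {});
console.log(user.name); // Ravi
```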

d) Mutation - A mutation is used to modify data on the server, such as
adding, updating, or deleting records. While queries are used to
fetch data, mutations are used to change data and return the
updated result to the client.

2) Client-Side Components - Client-side components are responsible for
sending requests and handling responses from the server. They define
how the client interacts with the GraphQL server and display the
fetched data to the user.

a) GraphiQL - GraphiQL is a browser-based tool used to test and edit
GraphQL queries and mutations. It provides a user-friendly
interface for developers to experiment with and debug their
GraphQL operations.
b) Apollo Client - Apollo Client is a library used to create GraphQL
client applications. It helps manage data fetching, caching, and
state management in front-end applications. It integrates well with
popular JavaScript frameworks like React.
 Environment setup and First project:
1) Step 1: Setup the Project - Create a folder named Backend and initialize
a new Node.js project using npm init; this creates a package.json file
to manage dependencies. Then create a file named index.js in that folder.
2) Step 2: Install Required Dependencies - Now that Express is
configured, the next step is to download the following dependencies:

"dependencies": {
"@apollo/server": "^4.11.3",
"axios": "^1.8.3",
"body-parser": "^1.20.3",
"cors": "^2.8.5",
"express": "^4.21.2",
"graphql": "^16.10.0"
}

3) Step 3: Define the Schema - A GraphQL schema defines what kind of
objects can be fetched from a service and what fields they have. The
schema can be defined using the GraphQL Schema Definition Language.
Now, add the following code snippet in the index.js file:

typeDefs: `
type User {
id: ID!
name: String!
username: String!
email: String!
phone: String!
website: String!
}

type Todo {
id: ID!
title: String!
completed: Boolean
user: User
}

`,}
4) Step 4: Define Query - A GraphQL Query consists of fields that define
how the response would look like. The Query is sent to the GraphQL
server which returns the response in the required format. Now, add
the following code snippet in the index.js file in typeDefs:
type Query {
getTodos: [Todo]
getAllUsers: [User]
getUser(id: ID!): User
}
5) Step 5: Create a Resolver - The next step is to add code that processes
the requests for the fields defined in the schema. This is specified in
a resolver. The structure of the resolver functions must match the
schema. Add the following code snippet in the index.js file:

resolvers: {
Todo: {
// match the user's id to the todo's userId field
user: (todo) => USERS.find((e) => e.id === todo.userId),
},

6) Step 6: Define Routes to Fetch Data from ReactJS/GraphiQL
Application - Add the following code snippet in the index.js file in the
resolver:

Query: {
getTodos: () => TODOS,
getAllUsers: () => USERS,
getUser: async (parent, { id }) => USERS.find((e) => e.id === id),
},
},

7) Step 7: Rest of the code - Finally, set up the middleware, add the
GraphQL endpoint, and start the Apollo and Express servers:

app.use(bodyParser.json());
app.use(cors());

await server.start();

app.use("/graphql", expressMiddleware(server));
app.listen(3000, () => console.log("Server Started at PORT 3000"));
}

8) Step 8: Start the Application - Execute index.js using Node.js with the
command: node index.js

 React Integration:
index.js (entry point; it wraps the app in ApolloProvider):
import React from "react";
import ReactDOM from "react-dom/client";
import "./index.css";
import App from "./App";
import reportWebVitals from "./reportWebVitals";
import { ApolloClient, InMemoryCache, ApolloProvider } from
"@apollo/client";

const client = new ApolloClient({


uri: "http://localhost:3000/graphql",
cache: new InMemoryCache(),
});

const root = ReactDOM.createRoot(document.getElementById("root"));


root.render(
<React.StrictMode>
<ApolloProvider client={client}>
<App />
</ApolloProvider>
</React.StrictMode>
);
reportWebVitals();

UNIT  3

Database Management:

Introduction to Database:

A database is a structured collection of data that is stored and managed in


a way that allows for easy access, retrieval, and manipulation. Databases
are used to organize information so that it can be efficiently managed,
updated, and queried.

 Components of a Database
1) Data: Data is the actual information stored within the database. It
includes various types of information such as names, addresses,
product details, or transaction records. The quality and accuracy of this
data are crucial for the effective functioning of any application that
relies on it.
2) Database Management System (DBMS): A DBMS is the software that
enables users to interact with the database. It provides tools and
functionalities for creating, retrieving, updating, and deleting data.
Popular examples of DBMS include MySQL, PostgreSQL, Oracle, and
Microsoft SQL Server. The DBMS acts as a mediator between the user
and the database, ensuring data integrity and security.
3) Schema: The schema defines the structure of the database, outlining
how data is organized. It includes the design of tables, fields (columns),
data types, and relationships between different data entities. The
schema serves as a blueprint for the database, ensuring consistency
and clarity in how data is stored and accessed.
4) Queries: Queries are commands used to request specific data from the
database. They allow users to retrieve, manipulate, and analyze the
information stored within. SQL (Structured Query Language) is the most
commonly used language for writing queries, enabling user to perform
operations like filtering, sorting and joining data from different table.
 Types of Databases:
1) Relational Databases: Relational databases store data in structured
tables composed of rows and columns. Each table has a defined
schema, and relationships between tables are established using foreign
keys, allowing for complex queries and data integrity. They use SQL
(Structured Query Language) for data manipulation. Examples: MySQL,
PostgreSQL.
2) NoSQL Databases: NoSQL databases are designed to handle
unstructured or semi-structured data, offering flexibility in storing
various data types and structures. They are particularly useful for big
data applications, as they can efficiently manage large volumes of data.
NoSQL databases often prioritize scalability and performance over strict
consistency. Examples: MongoDB, Cassandra.
3) Object-Oriented Databases: Object-oriented databases store data as
objects, similar to the way programming languages like Java or C++
manage data. This model allows for complex data representations and
relationships, making it easier to work with data in applications that
require object-oriented programming principles. Object-oriented
databases are less common but can be beneficial in certain specialized
applications. Examples: db4o, ObjectDB.
4) Hierarchical Databases: Hierarchical databases organize data in a tree-
like structure, where records have parent-child relationships. Each
parent can have multiple children, but each child can only have one
parent. This structure is simple and efficient for certain types of data
relationships but can be inflexible for complex queries. Example: IBM
Information Management System (IMS).
5) Network Databases: Network databases improve upon hierarchical
databases by allowing multiple relationships between records, forming
a graph structure. This flexibility enables more complex relationships
and connections, making it suitable for applications where data
interconnections are vital. Examples: Integrated Data Store (IDS),
TurboIMAGE.
SQL, or Structured Query Language, for Relational Databases:

SQL, short for Structured Query Language, is a standardized programming
language used for managing and manipulating relational databases. It
enables users to perform various operations such as querying data, updating
records, inserting new data, and deleting existing data. SQL is widely used for
database management in applications ranging from small systems to large
enterprise solutions.

 SQL Language Components:


1) Data Definition Language (DDL): DDL is the short name for Data
Definition Language, which deals with database schemas and
descriptions of how the data should reside in the database.

a) CREATE: to create a database and its objects (tables, indexes, views,
stored procedures, functions, and triggers)
b) ALTER: alters the structure of the existing database
c) DROP: delete objects from the database
d) TRUNCATE: remove all records from a table, including all spaces
allocated for the records are removed
e) COMMENT: add comments to the data dictionary
f) RENAME: rename an object
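As a quick illustration, the main DDL commands look like this in plain SQL (the table and column names are made up for the example):

```sql
-- CREATE: define a new table
CREATE TABLE Students (
  rollNo INT NOT NULL,
  name   VARCHAR(100) NOT NULL
);

-- ALTER: change the existing structure
ALTER TABLE Students ADD COLUMN email VARCHAR(100);

-- RENAME: rename an object
RENAME TABLE Students TO Pupils;

-- TRUNCATE: remove all records but keep the table
TRUNCATE TABLE Pupils;

-- DROP: delete the object entirely
DROP TABLE Pupils;
```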
2) Data Manipulation Language (DML): DML is the short name for Data
Manipulation Language, which deals with data manipulation and
includes the most common SQL statements such as SELECT, INSERT,
UPDATE, DELETE, etc. It is used to store, modify, retrieve, delete and
update data in a database. Data Query Language (DQL) is the subset of
Data Manipulation Language; its most common command is
the SELECT statement, which retrieves data from a table without
changing anything in the table.

a) SELECT: retrieve data from a database


b) INSERT: insert data into a table
c) UPDATE: updates existing data within a table
d) DELETE: Delete all records from a database table
e) MERGE: UPSERT operation (insert or update)
f) CALL: call a PL/SQL or Java subprogram
g) EXPLAIN PLAN: interpretation of the data access path
h) LOCK TABLE: concurrency Control
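The core DML commands in plain SQL, reusing the illustrative Students table from this section:

```sql
-- SELECT: retrieve data from a database
SELECT name, email FROM Students WHERE class = '12A';

-- INSERT: insert data into a table
INSERT INTO Students (rollNo, name, class) VALUES (5, 'Ravi', '12A');

-- UPDATE: update existing data within a table
UPDATE Students SET class = '12B' WHERE rollNo = 5;

-- DELETE: delete records from a table
DELETE FROM Students WHERE rollNo = 5;
```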

3) Data Control Language (DCL): DCL is short for Data Control Language,
which acts as an access specifier to the database (basically to grant and
revoke permissions to users in the database):

a) GRANT: grant permissions to the user for running DML(SELECT,


INSERT, DELETE,…) commands on the table
b) REVOKE: revoke permissions to the user for running DML(SELECT,
INSERT, DELETE,…) command on the specified table
4) Transactional Control Language (TCL): TCL is short for Transactional
Control Language, which acts as a manager for all types of transactional
data and all transactions. Some of the commands of TCL are:

a) ROLLBACK: Used to cancel or undo changes made in the database
b) COMMIT: It is used to apply or save changes in the database
c) SAVEPOINT: It is used to save the data on a temporary basis in the
database
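A sketch of the TCL commands working together in one MySQL transaction (the table and values are illustrative):

```sql
START TRANSACTION;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT after_debit;          -- temporary save point

UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- ROLLBACK TO after_debit;     -- would undo only the second update

COMMIT;                         -- apply and save all changes
```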

5) Data Query Language (DQL): Data Query Language (DQL) is the subset
of Data Manipulation Language. The most common command of DQL
is the SELECT statement. The SELECT statement helps us retrieve
data from a table without changing or modifying the table.
DQL is very important for retrieving essential data from a database.
 MySQL DBMS:
MySQL is a popular open-source relational database management system
(RDBMS) that uses Structured Query Language (SQL) for managing and
manipulating data. It allows users to create, read, update, and delete data
in databases. MySQL is widely used for web applications, data warehousing,
and various enterprise applications due to its reliability, performance, and
ease of use.

 MySQL Working With Express (CRUD):

1) Connecting App to Database: Here are the simple steps to establish a
connection between a Node.js application and a MySQL database using
Sequelize:

Step 1: First, download the MySQL database and MySQL Workbench on
your system by visiting the MySQL downloads page:

https://www.mysql.com/downloads/

Step 2: After successfully installing MySQL, install the mysql, express,
and sequelize modules so you can use the database with Node.js; if the
mysql module does not work, you can install mysql2 instead:

Step 3: Now, to use Sequelize, first import the sequelize module in your
db.js file:

const { Sequelize } = require('sequelize');

Step 4: After completing all the above steps, you are ready to connect to
your database using Sequelize:

// db.js
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('student', 'root', '2005', {


host: 'localhost',
dialect: 'mysql',
});

module.exports = sequelize;

Step 5: You can check whether the connection is established by writing
this code in your index.js file:

// index.js
const express = require('express');
const sequelize = require('./db');

const app = express();


const PORT = process.env.PORT || 3000;

// Middleware to parse JSON


app.use(express.json());

// Test database connection


sequelize.authenticate()
.then(() => console.log('Database connected...'))
.catch(err => console.log('Error: ' + err));

// Sync models (optional, for development)


sequelize.sync({ force: false })
.then(() => console.log('Database synced...'))
.catch(err => console.log('Sync Error: ' + err));

app.get('/', (req, res) => res.send('Express + Sequelize'));

app.listen(PORT, () => console.log(`Server running on port ${PORT}`));


Step 6: Finally, start the server to verify that the connection is established:

2) Create Database: Sequelize does not provide any way to create a database;
we have to create it manually by running the CREATE DATABASE <database name>
query in MySQL Workbench or the command prompt.

3) Create Table: To create a table in MySQL using Sequelize in your Express
app, you first define a model. A model is like a template that tells
Sequelize what your table should look like: what columns it has, and
what kind of data each column stores.
Once the model is ready, Sequelize will take care of actually creating the
table for you in the database. This happens when you sync the model
with the database. So, you don’t need to write the SQL code yourself —
Sequelize does it automatically based on the model you defined.

const { DataTypes } = require('sequelize');


const sequelize = require('./db');

const Student = sequelize.define('Student', {


rollNo: {
type: DataTypes.INTEGER,
allowNull: false,
unique: true,
},
name: {
type: DataTypes.STRING,
allowNull: false,
},
email: {
type: DataTypes.STRING,
allowNull: false,
unique: true,
validate: {
isEmail: true,
},
},
class: {
type: DataTypes.STRING,
allowNull: false,
},
}, {
tableName: 'Students',
timestamps: true,
});

module.exports = Student;

 Primary Key: A primary key is a unique identifier used to identify each
row in a table. It helps in searching and updating records. A primary key
is like a student's roll number within one class (e.g., 10th): two students
in the same class can share a name, but each has a unique roll number.
The value of a primary key can never be null or duplicated. We can mark
an attribute as the primary key while creating the table, and if the table
already exists we can add a primary key using an ALTER TABLE query:

a) While creating table:

rollNo: {
type: DataTypes.INTEGER,
allowNull: false,
primaryKey: true, // Set rollNo as the primary key
},
b) After creating table:

// Function to add primary key


async function addPrimaryKey() {
const queryInterface = sequelize.getQueryInterface();
try {
// Add primary key to rollNo
await queryInterface.addConstraint('Students', {
fields: ['rollNo'],
type: 'primary key',
name: 'students_rollno_pk',
});
console.log('Primary key added to rollNo');
} catch (err) {
console.error('Error adding primary key:', err.message);
}
}

4) Insert: The INSERT query is used to insert values into the table. We can
insert values into a table in two ways:

a) Insert one record at once:

// Sync models and insert one student


sequelize.sync({ force: false })
.then(() => {
console.log('Database synced...');

// Insert one student record


return Student.create({
name: "Ravi Kumar",
email: "[email protected]",
class: "12A"
});
})
.then(student => {
console.log("Student inserted:", student.toJSON());
})
.catch(err => console.log('Sync/Insert Error:', err));
b) Insert multiple records:

// Insert multiple Student records


return Student.bulkCreate([
{ rollNo: 2, name: "Anuj Kumar", email: "[email protected]",
class: "12A" },
{ rollNo: 3, name: "Anjali Sharma", email:
"[email protected]", class: "11B" },
{ rollNo: 4, name: "Mohit Verma", email: "[email protected]",
class: "10C" }
]);

})
5) Select From: The SELECT query is used to read fields from the table.
There are many different ways to select the fields of a table, some of
which follow:

a) Select specific columns:

.then(() => {
// Select only name and class columns
return Student.findAll({
attributes: ['name', 'class']
});
})

b) Select the whole table:

.then(() => {
// Fetch all students
return Student.findAll();
})
.then(students => {
console.log("All Students:");
students.forEach(student => console.log(student.toJSON()));
})
.catch(err => {
console.error('Error:', err);
});
6) Where Clause: In MySQL, the WHERE clause can be used in SELECT,
DELETE and UPDATE queries. The WHERE clause allows you to specify a
search condition for the rows returned by a query.

Operator Description
[Op.eq] Equal
[Op.gt] Greater than
[Op.lt] Less than
[Op.gte] Greater than or equal
[Op.lte] Less than or equal
[Op.ne] Not equal
[Op.or] Logical OR
[Op.like] Search for a pattern
[Op.and] Logical AND

These operators come from Sequelize's Op object: const { Op } = require('sequelize');

Example:
.then(() => {
// 🎯 Students NOT in class 10C
return Student.findAll({attributes: ['name', 'class'],where: {class:
{[Op.ne]: '10C'}}});
})

.then(students => {
console.log("Students NOT in class 10C:");
students.forEach(student => console.log(student.toJSON()));
})

7) Order By: Use the ORDER BY statement to sort the result in ascending or
descending order. The ORDER BY keyword sorts the result ascending by
default. To sort the result in descending order, use the DESC keyword.
Example:

Student.findAll({
attributes: ['rollNo', 'name', 'class'],
order: [['rollNo', 'DESC']]
})
8) Delete: DELETE query helps in removing one or more rows from a table.
Example:
Student.destroy({where: {rollNo: 2}})
.then(() => {
console.log('Student with rollNo 2 deleted');
})
.catch(err => {
console.error('Deletion error:', err);
});

9) Drop: Using DROP you can delete an entire table, and (with a raw query)
the database itself:
Drop Table:
Student.drop()
.then(() => console.log("Student table dropped"))
.catch(err => console.error("❌ Drop error:", err));

Drop database:
// Sequelize models cannot drop a database; run a raw SQL query instead
sequelize.query("DROP DATABASE student")
.then(() => console.log("student database dropped"))
.catch(err => console.error("❌ Drop error:", err));

10) Update: You often need to modify one or more records stored in a
MySQL database. With Sequelize, this is done with the Model.update()
method, passing the new values and a where condition.
Example:
Student.update(
{ class: '12B' }, // What to update
{ where: { rollNo: 1 } } // Condition
)
.then(() => {
console.log('📝 Student updated');
})
.catch(err => {
console.error('❌ Update error:', err);
});
11) Limit clause: The LIMIT clause can be used with SELECT, UPDATE and
DELETE queries to restrict the operation to a specified number of rows.
For example, DELETE ... LIMIT 5 deletes only the first 5 records in the
given order, SELECT ... LIMIT 2 selects only two rows, and the same
applies to UPDATE.
Example:
Student.findAll({
limit: 2
})
.then(students => {
console.log("📋 Limited students:");
students.forEach(student => console.log(student.toJSON()));
})
.catch(err => {
console.error("❌ Error:", err);
});

12) Join: A JOIN in SQL is used to combine rows from two or more tables,
based on a related column between them. There are different types of
JOINs in SQL, each serving a distinct purpose.

 Types of Joins:

a) INNER JOIN: Returns only rows with matching values in both


tables.
b) LEFT JOIN (LEFT OUTER JOIN): Returns all rows from the left table,
and matching rows from the right table. If there’s no match, NULL
is returned for columns from the right table.
c) RIGHT JOIN (RIGHT OUTER JOIN): Returns all rows from the right
table, and matching rows from the left table. If there’s no match,
NULL is returned for columns from the left table.
d) FULL JOIN (FULL OUTER JOIN): Returns all rows from both tables. If
there’s no match, NULL is returned for missing matches from
either side.
e) CROSS JOIN: Produces a Cartesian product of both tables,
combining every row from the first table with every row from the
second.
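The same joins in plain SQL, using the Students and StudentProfile tables from this example. Note that MySQL itself has no FULL OUTER JOIN; it is usually emulated with a UNION of a LEFT and a RIGHT join:

```sql
-- INNER JOIN: only students that have a matching profile
SELECT s.name, p.bio
FROM Students s
INNER JOIN StudentProfile p ON p.rollNo = s.rollNo;

-- LEFT JOIN: every student; profile columns are NULL when no match
SELECT s.name, p.bio
FROM Students s
LEFT JOIN StudentProfile p ON p.rollNo = s.rollNo;

-- CROSS JOIN: every student paired with every profile (Cartesian product)
SELECT s.name, p.bio
FROM Students s
CROSS JOIN StudentProfile p;
```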
 Example:

Step 1: Create another Table (profile.js):


const { DataTypes } = require('sequelize');

module.exports = (sequelize) => {


const StudentProfile = sequelize.define('StudentProfile', {
rollNo: {type: DataTypes.INTEGER,allowNull: false,primaryKey: true,},

bio: {type: DataTypes.STRING,},

hobbies: {type: DataTypes.STRING,},

// Foreign Key reference to Student table


StudentId: {type: DataTypes.INTEGER,allowNull: false,
references: {
model: 'Students',
key: 'rollNo'
}
},
}, {
tableName: 'StudentProfile',
timestamps: true,
});

return StudentProfile;
};

Step 2: Associate students and profile table (index.js):


const StudentModel = require('./Student')(sequelize);
const StudentProfileModel = require('./profile')(sequelize);

// Setup associations
StudentModel.hasOne(StudentProfileModel, {foreignKey: 'rollNo', sourceKey:
'rollNo'});
StudentProfileModel.belongsTo(StudentModel, {foreignKey: 'rollNo', targetKey:
'rollNo'});
Step 3: Select the columns in the profile table that you want to join with
the student table (index.js):
// Find all students with associated profiles
return StudentModel.findAll({
include: [{
model: StudentProfileModel,
attributes: ['bio', 'hobbies']
}]
});
})

Step 4: Join and print the result (index.js):


.then(students => {
students.forEach(student => {
console.log({
name: student.name,
class: student.class,
profile: student.StudentProfile // joined profile data
});
});
})
.catch(err => {
console.error('Error:', err);
});
NoSQL Database:
NoSQL (Not Only SQL) databases are a class of database management
systems (DBMS) that differ from traditional relational databases in their
design, data storage, and data retrieval mechanisms. They are designed to
handle a wide variety of data models, large-scale data, high-velocity data,
and modern use cases such as web and mobile applications. NoSQL
databases are highly scalable and flexible, making them suitable for big data,
real-time applications, and environments requiring fast performance.
 Key Concepts of NoSQL in DBMS:
1. Non-relational Data Models: Unlike relational databases (which organize
data in structured tables with predefined schemas), NoSQL databases
store data in non-relational formats. These formats include:

 Key-Value Stores: Data is stored as key-value pairs (e.g., Redis,


Amazon DynamoDB).

 Document Stores: Data is stored in documents (usually JSON or


BSON) where each document contains fields and values (e.g.,
MongoDB, CouchDB).

 Column-Family Stores: Data is stored in columns and rows but in a


more flexible way than relational tables (e.g., Apache Cassandra,
HBase).

 Graph Databases: Data is represented as nodes (entities) and edges


(relationships) (e.g., Neo4j, ArangoDB).
2. Schema Flexibility: NoSQL databases are schema-less or have flexible
schemas, meaning you don’t need to define a strict schema upfront. This
allows applications to evolve more quickly, especially when dealing with
unstructured or semi-structured data.
3. Horizontal Scalability: NoSQL databases are designed for horizontal
scaling, meaning they can scale out by adding more servers rather than
scaling up (adding more power to a single server). This is crucial for
handling large amounts of data and traffic, as in modern web
applications.
4. High Availability and Partitioning: Many NoSQL databases implement
partitioning (sharding/dividing), where data is divided and distributed
across multiple servers to improve performance and reliability.
5. Handling Big Data: NoSQL databases are often used in big data
environments where the data volume is too large, too fast, or too diverse
for traditional relational databases to handle effectively. This is especially
common in applications like IoT, social media, analytics, and machine
learning.
6. Performance and Caching: Many NoSQL databases offer high
performance for read/write operations, especially in large-scale
environments. Key-value stores like Redis or Memcached are often used
as in-memory caches to speed up performance.
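To make the non-relational models in point 1 concrete, here is a small sketch (the names and values are made up): a Redis-style key-value entry and a MongoDB-style document.

```javascript
// Key-value store: an opaque value looked up by its key (Redis-style).
const kvStore = new Map();
kvStore.set("user:42", JSON.stringify({ name: "Asha" }));

// Document store: nested, schema-less JSON (MongoDB-style).
const userDoc = {
  _id: "42",
  name: "Asha",
  hobbies: ["chess", "cricket"],  // arrays need no predefined column
  address: { city: "Pune" },      // nested objects are fine too
};

console.log(JSON.parse(kvStore.get("user:42")).name); // Asha
console.log(userDoc.address.city);                    // Pune
```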

 Popular NoSQL Databases:

a) MongoDB: A widely used document-oriented NoSQL database.


b) Apache Cassandra: A highly scalable column-family database.
c) Neo4j: A popular graph database.
d) Redis: A fast, in-memory key-value store.
e) Amazon DynamoDB: A key-value and document database service
offered by Amazon Web Services (AWS).
 Firebase DBMS:
Firebase is a cloud-based NoSQL Database Management System (DBMS)
that provides two main database options: Cloud Firestore and Realtime
Database. Unlike traditional SQL databases, Firebase stores data in
flexible, scalable formats like documents and JSON trees, allowing real-
time synchronization and offline support for web and mobile applications.
Firebase is a platform by Google; it simplifies backend development by
offering seamless integration with authentication, hosting, storage, and
serverless functions, making it ideal for modern, real-time, full-stack
applications.
 Firebase Working With Express (CRUD):

1) Create Database:

Step 1: Setup Project - Begin by visiting


https://console.firebase.google.com and clicking the "Add Project"
button. Follow the guided steps to name your project, optionally
enable or skip Google Analytics, and finish creating your Firebase
project. Once complete, you’ll be redirected to your Firebase Console
dashboard.

Step 2: Enable Firestore Database - Inside your Firebase project,


locate the left-hand menu and click on the "Build" section (it looks
like stacked blocks). Select "Firestore Database" from the options and
click "Create Database". Choose the "Start in test mode" option for
easier development and pick your preferred cloud region, then click
"Enable" to create and initialize your Firestore database.
Step 3: Setup Express + Firebase Project Locally - On your
local machine, create a new folder for your project and
initialize a Node.js project using npm init -y. After that, install
the required dependencies, namely express for your server
and firebase for connecting to Firestore. This sets up your
backend environment for development.

Step 4: Get Firebase Config - Go back to your Firebase


Console (home page) and click the gear icon to open Project
Settings. Scroll down to the “Your apps” section and click the
web (</>) icon to register a new web app. Give your app a
name, click "Register app", and Firebase will provide a
configuration object containing your API keys and identifiers.
This config will be used to connect your Express app to
Firebase.
Firebase-config.js:
// Import the functions you need from the SDKs you need
const { initializeApp } = require('firebase/app');
const { getFirestore } = require('firebase/firestore');

// Your web app's Firebase configuration

const firebaseConfig = {
apiKey: "AIzaSyAtlq8TUuMjv1N7SaOGi1T6rDd-lIABB1A",
authDomain: "nosql-database-35667.firebaseapp.com",
projectId: "nosql-database-35667",
storageBucket: "nosql-database-35667.firebasestorage.app",
messagingSenderId: "449559563292",
appId: "1:449559563292:web:b8d4072764eb6ea83a63d9"
};

// Initialize Firebase and Firestore, and export db for use in index.js
const app = initializeApp(firebaseConfig);
const db = getFirestore(app);

module.exports = { db };
2) Connect with Database:
To connect to your database, first ensure the database was created
successfully and the configuration is done properly, then write this
code in your index.js file:
const express = require('express');
const { db } = require('./firebase-config.js');

const app = express();


const PORT = 3000;

app.get('/', (req, res) => {


res.send('Connected to Firebase!');
});

app.listen(PORT, () => {
console.log(`Server is running at http://localhost:${PORT}`);
});

3) Create Collection:
The next step is to create a collection. Firestore does not store data in
tables of rows and columns; it stores data in documents grouped into
collections, which makes it possible to store any kind of data. A
collection appears in the Firebase console only after you insert its first
document. Here is how you can create a collection:
const express = require('express');
const { db } = require('./firebase-config');
const { collection, addDoc } = require('firebase/firestore');

const app = express();
const PORT = 3000;

app.use(express.json());

// Route to create a collection and add a document
app.post('/add-user', async (req, res) => {
  try {
    const { rollNo, name, email } = req.body;

    // Add document to 'Students' collection
    const docRef = await addDoc(collection(db, 'Students'), {
      rollNo,
      name,
      email,
    });

    console.log("Collection created successfully!");
    res.status(201).json({ id: docRef.id });
  } catch (err) {
    console.error("Error creating collection:", err.message);
    res.status(500).json({ error: err.message });
  }
});

app.listen(PORT, () => {
  console.log(`🚀 Server running at http://localhost:${PORT}`);
});

4) Insert values:
We could also insert values by integrating a frontend, which we study in the next part; here we work with the backend only. There are two approaches: send a POST request (through a URL or Postman), or insert values directly from code:

a) Using Postman:

b) Insert directly:
 Insert One:
// Function to directly insert a student
async function insertStudent(rollNo, name, email) {
  try {
    // Add document to Firestore
    const docRef = await addDoc(collection(db, 'Students'), {
      rollNo,
      name,
      email,
    });

    console.log(`Student added: ${name} (ID: ${docRef.id})`);
  } catch (err) {
    console.error('Error adding student:', err.message);
  }
}

// Example usage
insertStudent('201', 'John Doe', '[email protected]');

 Insert Multiple:
// Function to insert multiple students
async function insertMultipleStudents(students) {
  try {
    for (const student of students) {
      const { rollNo, name, email } = student;

      const docRef = await addDoc(collection(db, 'Students'), {
        rollNo,
        name,
        email,
      });

      console.log(`Student added: ${name} (ID: ${docRef.id})`);
    }
  } catch (err) {
    console.error('Error adding students:', err.message);
  }
}

// Example usage with multiple student objects
const studentList = [
  { rollNo: '202', name: 'Alice Smith', email: '[email protected]' },
  { rollNo: '203', name: 'Bob Johnson', email: '[email protected]' },
  { rollNo: '204', name: 'Charlie Brown', email: '[email protected]' },
];

insertMultipleStudents(studentList);
5) Fetch Data from Database:
You can fetch data from Firestore as follows:

const { getDocs } = require('firebase/firestore');

// Function to fetch all students
async function getAllStudents() {
  try {
    const querySnapshot = await getDocs(collection(db, 'Students'));

    querySnapshot.forEach((doc) => {
      console.log(`${doc.id}:`, doc.data());
    });
  } catch (err) {
    console.error('Error fetching students:', err.message);
  }
}

getAllStudents();

6) Where Clause:
const { getDocs, query, where } = require('firebase/firestore');

async function getStudents() {
  try {
    // Example: Fetch students where rollNo is '202'
    const q = query(collection(db, 'Students'), where('rollNo', '==', '202'));

    const querySnapshot = await getDocs(q);

    querySnapshot.forEach((doc) => {
      console.log(`${doc.id}:`, doc.data());
    });
  } catch (err) {
    console.error('Error fetching students:', err.message);
  }
}
getStudents();
Operator | Description | Example
-------- | ----------- | -------
== | Checks if the field equals the specified value. | where('age', '==', 25)
> | Checks if the field is greater than the specified value. | where('age', '>', 18)
< | Checks if the field is less than the specified value. | where('age', '<', 30)
>= | Checks if the field is greater than or equal to the specified value. | where('age', '>=', 18)
<= | Checks if the field is less than or equal to the specified value. | where('age', '<=', 30)
in | Checks if the field value is one of the specified values (array of values). | where('age', 'in', [25, 30, 35])
not-in | Checks if the field value is not one of the specified values. | where('age', 'not-in', [25, 30])
!= | Checks if the field is not equal to the specified value. (Added to Firestore in 2020; older SDKs required a workaround using in with all values except the ones to exclude.) | where('age', '!=', 25)
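These operators behave like ordinary value comparisons. As a rough in-memory illustration (plain JavaScript only; the `matches` helper below is our own sketch, not part of the Firestore SDK), this is how each operator decides whether a record passes the filter:

```javascript
// Hypothetical helper mimicking the semantics of Firestore's where() operators.
function matches(value, op, target) {
  switch (op) {
    case '==':     return value === target;
    case '>':      return value > target;
    case '<':      return value < target;
    case '>=':     return value >= target;
    case '<=':     return value <= target;
    case 'in':     return target.includes(value);
    case 'not-in': return !target.includes(value);
    default:       throw new Error(`Unknown operator: ${op}`);
  }
}

const students = [
  { name: 'Alice', age: 25 },
  { name: 'Bob',   age: 30 },
  { name: 'Carol', age: 17 },
];

// Equivalent of where('age', '>=', 18)
const adults = students.filter((s) => matches(s.age, '>=', 18));
console.log(adults.map((s) => s.name)); // [ 'Alice', 'Bob' ]

// Equivalent of where('age', 'in', [25, 30, 35])
const picked = students.filter((s) => matches(s.age, 'in', [25, 30, 35]));
console.log(picked.map((s) => s.name)); // [ 'Alice', 'Bob' ]
```

In a real query, Firestore evaluates these comparisons server-side using its indexes, so only the matching documents are transferred to the client.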

7) Limit Clause:
The limit clause in Firestore allows you to restrict the number of
documents returned by a query. It's useful when you only need a
subset of the documents, rather than all the documents in a
collection.
// Fetch the first 3 students (query and limit are imported from 'firebase/firestore')
const q = query(collection(db, 'Students'), limit(3));

8) Update document values:
const { updateDoc, doc } = require('firebase/firestore');

async function updateStudentDetails(rollNo, newDetails) {
  try {
    // Find the document reference
    const studentDocRef = doc(db, 'Students', rollNo); // Assuming rollNo is the document ID

    // Update multiple fields at once
    await updateDoc(studentDocRef, newDetails);

    console.log(`Student details updated successfully for rollNo: ${rollNo}`);
  } catch (err) {
    console.error('Error updating student details:', err.message);
  }
}

// Example usage:
const newDetails = {
  name: 'John Doe Updated',
  email: '[email protected]',
};
updateStudentDetails('201', newDetails);

9) Delete data:
If you want to delete data (i.e., delete specific documents or fields)
from a Firestore collection, you can use the deleteDoc() function to
delete a document or updateDoc() to delete specific fields from a
document.
a) Delete a Specific Document: To delete an entire document from a
collection, use the deleteDoc() method. This will remove the
document and all its data from the Firestore database.
const { deleteDoc, doc } = require('firebase/firestore');

async function deleteStudent(rollNo) {
  try {
    // Reference to the document you want to delete
    const studentDocRef = doc(db, 'Students', rollNo); // Assuming rollNo is the document ID

    // Delete the document
    await deleteDoc(studentDocRef);

    console.log(`Student with rollNo: ${rollNo} deleted successfully.`);
  } catch (err) {
    console.error('Error deleting student:', err.message);
  }
}

// Example usage:
deleteStudent('201');
b) Delete Specific Fields in a Document: If you only want to delete
specific fields from a document (instead of deleting the entire
document), you can use the updateDoc() method with
FieldValue.delete() to remove a field.
const { updateDoc, doc, deleteField } = require('firebase/firestore');

async function deleteStudentFields(rollNo) {
  try {
    // Reference to the document
    const studentDocRef = doc(db, 'Students', rollNo);

    // Remove both 'email' and 'name' fields from the document
    await updateDoc(studentDocRef, {
      email: deleteField(),
      name: deleteField(),
    });

    console.log(`Fields 'email' and 'name' deleted successfully for rollNo: ${rollNo}`);
  } catch (err) {
    console.error('Error deleting student fields:', err.message);
  }
}

// Example usage:
deleteStudentFields('201');

c) Drop collection - In Firestore, there is no direct method to drop a collection as there is in traditional SQL databases. A collection is removed automatically once it is empty (i.e., when all documents within that collection have been deleted).
10) Sort Data:
In Firestore, you can sort query results using the orderBy() function. This allows you to order documents based on one or more fields in ascending or descending order.
const { orderBy } = require('firebase/firestore');
const q = query(collection(db, 'Students'), orderBy('name', 'asc'));
 Full-Stack Development:

Integrating Frontend:
 React app:
React handles the user interface of the website. First, set up the React app, which involves running Create React App and installing the required packages (such as axios).

Now the main part begins: writing the actual code. The code below belongs to App.js:
Step 1: Import require module -
import { useEffect, useState } from "react";
import axios from "axios";
import "./App.css"; // Make sure your styles are imported

const API_URL = "http://localhost:5000/api/users";

Step 2: Fetch the users -
function App() {
const [users, setUsers] = useState([]);
const [form, setForm] = useState({ rollNo: "", email: "", name: "" });

const fetchUsers = async () => {
  try {
    const res = await axios.get(API_URL);
    setUsers(res.data);
  } catch (error) {
    console.error("Fetch error:", error.message);
  }
};

useEffect(() => {
fetchUsers();
}, []);
Step 3: Handle the user Input –
const handleSubmit = async (e) => {
e.preventDefault();
try {
await axios.post(API_URL, form);
setForm({ name: "", email: "", rollNo: "" });
fetchUsers();
} catch (error) {
console.error("Submit error:", error.message);
}
};

const handleDelete = async (id) => {
  try {
    await axios.delete(`${API_URL}/${id}`);
    fetchUsers();
  } catch (error) {
    console.error("Delete error:", error.message);
  }
};

const handleUpdate = async (id) => {
  const newName = prompt("Enter new name:");
  const newRoll = prompt("Enter new roll number:");
  if (newName && newRoll) {
    try {
      await axios.put(`${API_URL}/${id}`, { name: newName, rollNo: newRoll });
      fetchUsers();
    } catch (error) {
      console.error("Update error:", error.message);
    }
  }
};
Step 4: Create Form -
return (
<div>
<h1>User Management</h1>

<form onSubmit={handleSubmit}>
<input
value={form.rollNo}
onChange={(e) => setForm({ ...form, rollNo: e.target.value })}
placeholder="Roll No"
required
/>
<input
value={form.name}
onChange={(e) => setForm({ ...form, name: e.target.value })}
placeholder="Name"
required
/>

<input
value={form.email}
onChange={(e) => setForm({ ...form, email: e.target.value })}
placeholder="Email"
required
/>
<button type="submit">Add User</button>
</form>

<ul className="user-list">
{users.map((user) => (
<li key={user.id} className="user-item">
<span>
{user.name} ({user.email}) - Roll No: {user.rollNo}
</span>
<div className="user-actions">
<button className="edit" onClick={() =>
handleUpdate(user.id)}>Edit</button>
<button className="delete" onClick={() =>
handleDelete(user.id)}>Delete</button>
</div>
</li>
))}
</ul>
</div>
);
}

export default App;

 If you want, you can also add CSS; the code below belongs to the App.css file:
.container {
max-width: 800px;
margin: 0 auto;
padding: 20px;
}

h1 {
text-align: center;
}

.error {
color: red;
text-align: center;
}

form {
display: flex;
flex-direction: column;
gap: 10px;
margin-bottom: 20px;
}

input {
padding: 8px;
font-size: 16px;
}

button {
padding: 8px;
cursor: pointer;
background-color: rgb(48, 171, 74);
color: white;
border: none;
border-radius: 4px;
}

button:hover {
background-color: #0056b3;
}

.edit {
background-color: #28a745;
}

.edit:hover {
background-color: #218838;
}

.delete {
background-color: #dc3545;
}

.delete:hover {
background-color: #c82333;
}

.user-list {
list-style: none;
padding: 0;
}

.user-item {
display: flex;
justify-content: space-between;
align-items: center;
padding: 10px;
border-bottom: 1px solid #ddd;
}

.user-actions {
display: flex;
gap: 10px;
}

div {
  background-color: #007bff;
}
Backend Building:
 Server.js:
This file is responsible for the backend: it starts and runs the backend server and acts as the gateway for all backend requests:
const express = require("express");
const cors = require("cors");
const dotenv = require("dotenv");
const sequelize = require("./db");
const userRoutes = require("./user-routes");

dotenv.config();
const app = express();
app.use(cors());
app.use(express.json());

app.use("/api/users", userRoutes);

sequelize.sync().then(() => {
app.listen(process.env.PORT, () =>
console.log(`Server started on port ${process.env.PORT}`)
);
});

 db.js:
This file connects the backend to the database, whether it is a structured (SQL) or unstructured (NoSQL) database:
const { Sequelize } = require("sequelize");

const sequelize = new Sequelize('student', 'root', '2005', {
  host: 'localhost',
  dialect: 'mysql',
});

module.exports = sequelize;
 user.js:
Since we use a MySQL database, which has a schema structure, user.js defines the schema (model) of our table:
const { DataTypes } = require("sequelize");
const sequelize = require("./db");

const User = sequelize.define("Users", {
  rollNo: {
    type: DataTypes.INTEGER,
    allowNull: false,
    unique: true,
  },
name: {
type: DataTypes.STRING,
allowNull: false,
},
email: {
type: DataTypes.STRING,
unique: true,
allowNull: false,
},
});

module.exports = User;

 User-routes.js:
Finally, Express handles the CRUD operations using its Router module:
const express = require("express");
const User = require("./user");
const router = express.Router();

// CREATE
router.post("/", async (req, res) => {
try {
const user = await User.create(req.body);
res.json(user);
} catch (err) {
res.status(500).json({ error: err.message });
}
});

// READ ALL
router.get("/", async (req, res) => {
const users = await User.findAll();
res.json(users);
});

// UPDATE
router.put("/:id", async (req, res) => {
try {
const user = await User.findByPk(req.params.id);
if (!user) return res.status(404).send("User not found");
await user.update(req.body);
res.json(user);
} catch (err) {
res.status(500).json({ error: err.message });
}
});

// DELETE
router.delete("/:id", async (req, res) => {
try {
const result = await User.destroy({ where: { id: req.params.id } });
res.json({ deleted: result });
} catch (err) {
res.status(500).json({ error: err.message });
}
});

module.exports = router;
UNIT  4
 Performance Optimization:
Code splitting:
Code splitting is a technique used to break your application into smaller,
separate chunks of code that can be loaded independently - Instead of
sending one large JavaScript bundle to the browser, your app only loads the
code it needs at a given time.
 Feature:
1) Faster Initial Load - By loading only the essential code required for the
first screen, code splitting reduces the initial download size, making the
app load faster for the user. This is especially helpful for users on slow
networks or mobile devices.
2) Reduces Bandwidth Waste - Users don’t download code for parts of the
app they may never visit, which saves data and speeds up loading on
slower connections. It leads to a leaner and more efficient delivery of
resources.
3) Improves Caching - When you update your app, only the changed chunks
need to be re-downloaded, allowing the rest of the app to stay cached in
the browser. This helps avoid unnecessary re-downloads and speeds up
repeat visits.
4) Smoother User Experience - With less code to parse and execute at once,
the browser performs better, resulting in a more responsive and snappy
experience. It reduces memory usage and JavaScript execution time.
5) Better Scalability - As your app grows, code splitting helps manage
complexity by keeping the size of each chunk under control, making the
app easier to maintain and extend. It keeps large codebases more modular
and maintainable.
6) Lazy Loading Support - Code splitting pairs perfectly with lazy loading,
allowing components or routes to be loaded only when they’re actually
needed, which further improves speed and efficiency. This makes the app
feel faster and more dynamic for the user.
 Approach to implement Code Splitting:
1) Using Lazy Loading:
Lazy loading is a performance optimization technique where certain parts of your application are loaded only when they're needed, rather than during the initial page load. This helps make your app faster and more efficient.

a) React.lazy(): In the `App.js` file, we use `React.lazy()` to lazily load the `LazyComponent`. The `import()` function is passed as an argument to `React.lazy()`, which dynamically imports the component. It returns a `Promise` for the module containing the component.
b) Suspense: We wrap the `<LazyComponent />` with the `<Suspense>`
component. Suspense allows us to specify a loading indicator (fallback)
while the lazily loaded component is being fetched. In this example, we
display a simple "Loading..." message as the fallback.
c) Asynchronous Loading: When the `App` component mounts, React
starts fetching the code for `LazyComponent` asynchronously. While the
component is being loaded, the `Suspense` component displays the
loading indicator.
d) Rendering LazyComponent: Once the code for `LazyComponent` is
fetched and loaded, React renders the `LazyComponent` in place of the
`<Suspense>` component.
App.js
// App.js
import React, { Suspense } from 'react';

const LazyComponent =
  React.lazy(() => import('./MyComponent/CodeSpllitingUsingLibraries'));

const App = () => (
  <div>
    <h1>React Code Splitting Example</h1>
    <Suspense fallback={<div>Loading...</div>}>
      <LazyComponent />
    </Suspense>
  </div>
);
export default App;

CodeSpllitingUsingLibraries.jsx:
// LazyComponent.js
import React from 'react';

const LazyComponent = () => (
  <div>
    <h2>Lazy Loaded Component</h2>
    <p>
      This component was loaded
      asynchronously using code splitting.
    </p>
  </div>
);

export default LazyComponent;

2) Using Component Based Lazy Loading:
Component-level lazy loading means loading individual components or widgets only when they're actually needed in the UI. This helps reduce the main bundle size and improves performance, especially for rarely-used or heavy components.
App.js
import React, { useState, Suspense, lazy } from 'react';

const ChartWidget = lazy(() => import('./ChartWidget'));

function App() {
const [showChart, setShowChart] = useState(false);

return (
<div>
<h1>Dashboard</h1>
<button onClick={() => setShowChart(true)}>Load Chart</button>

{showChart && (
<Suspense fallback={<div>Loading Chart...</div>}>
<ChartWidget />
</Suspense>
)}
</div>
);
}
export default App;

UsingComponentBasedLazyLoading.jsx:
import React from 'react';

const ChartWidget = () => {
  return <div>📈 This is a heavy chart component!</div>;
};

export default ChartWidget;

3) Route-based Splitting - Route-based code splitting means loading different chunks of code depending on which route (page) the user navigates to. Instead of loading all components upfront, each route lazily loads only the component it needs when it's accessed.
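To see the mechanics without any framework, here is a plain-JavaScript sketch (the route table and `navigate` function are our own illustration; in a real app each loader would be a dynamic `import()` returning a Promise for the page's chunk):

```javascript
// Route table: each entry is a loader that returns the page "module".
// In a real app the loader would be `() => import('./pages/Home.js')`
// and webpack/Vite would emit a separate chunk per route.
const routes = {
  '/':      () => ({ render: () => 'Home page' }),
  '/about': () => ({ render: () => 'About page' }),
};

const loadedChunks = new Set(); // tracks which "chunks" were fetched

function navigate(path) {
  const loader = routes[path];
  if (!loader) return '404 Not Found';
  loadedChunks.add(path);       // a chunk is fetched only when visited
  return loader().render();
}

console.log(navigate('/'));              // Home page
console.log(loadedChunks.has('/about')); // false - never downloaded
```

The key point is that the `/about` loader never runs (and its chunk is never downloaded) until the user actually navigates there.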

React Router in Web Applications:


Caching:
 Caching Strategies in React:
1) Browser Caching - React relies on the browser’s native caching
mechanism to store assets like images, CSS, and JavaScript files.
However, caching dynamic data (such as API responses) often requires
more custom solutions.
2) Using Local Storage / Session Storage - One of the simplest caching
mechanisms is using the browser’s localStorage or sessionStorage. This
works well for caching small amounts of data that persist between
sessions. This method is easy to implement but isn’t ideal for large
datasets or data that frequently changes, as the storage size is limited,
and manual cache invalidation is required.


// Store data in localStorage
localStorage.setItem('userData', JSON.stringify(data));

// Retrieve data
const cachedData = JSON.parse(localStorage.getItem('userData'));

3) State Management Libraries (Redux with Redux-Persist) - When using global state management libraries like Redux, caching can be handled by middleware such as redux-persist, which automatically saves and rehydrates the state to localStorage or sessionStorage. This approach is useful when dealing with large-scale applications where certain pieces of state (e.g., user sessions) need to be preserved across page reloads or sessions.

import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage

const persistConfig = {
  key: 'root',
  storage,
};

const persistedReducer = persistReducer(persistConfig, rootReducer);


4) SWR and React Query (Best for API Caching) - For caching server
responses, libraries like SWR and React Query provide powerful caching
and data-fetching solutions. These libraries fetch data, cache it, and
automatically update stale data in the background. SWR (Stale-While-
Revalidate) is simple and effective for API caching, while React Query
offers advanced features like background refetching, query invalidation,
and pagination.

import useSWR from 'swr';

const fetcher = (url) => fetch(url).then((res) => res.json());

function App() {
  const { data, error } = useSWR('/api/user', fetcher);

  if (error) return <div>Failed to load</div>;
  if (!data) return <div>Loading...</div>;

  return <div>Hello {data.name}</div>;
}

5) Service Workers for Offline Caching - Using Service Workers is an advanced approach that allows caching assets and API requests for offline use. With service workers, you can cache dynamic data at the network level, and it works seamlessly with Progressive Web Apps (PWA).
6) Memoization with useMemo and useCallback - While not traditional
caching, React’s built-in memoization hooks (useMemo and
useCallback) prevent unnecessary recalculations or function re-
creations during renders. This can save computation time and improve
performance when working with expensive operations.

const cachedValue = useMemo(() => {
  return expensiveFunction(data);
}, [data]);
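Outside React, the same idea can be expressed as a generic memoization helper. This is a plain-JavaScript sketch (the `memoize` name is ours, not a React API):

```javascript
// Generic memoizer: caches results keyed by the stringified arguments,
// so an expensive function is computed only once per distinct input.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // compute once, then reuse
    }
    return cache.get(key);
  };
}

let calls = 0;
const slowSquare = (n) => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);

console.log(fastSquare(4)); // 16 (computed)
console.log(fastSquare(4)); // 16 (served from cache)
console.log(calls);         // 1
```

`useMemo` applies the same caching idea, but scoped to a component render and invalidated by the dependency array instead of by argument key.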
 Best Practices for Caching in React:
1) Cache data wisely – Don’t cache everything blindly – focus on caching
frequently accessed or expensive API calls that don't change often.
Unnecessary caching can lead to complexity and stale data.
2) Set an expiration – Cached data should not live forever. Make sure to
set time-based invalidation or manual triggers to refresh old or
outdated content, avoiding inconsistencies for users.
3) Background updates – Use smart libraries like SWR or React Query to
serve stale (cached) data immediately and then fetch updated data in
the background – this gives users fast responses and fresh info.
4) Optimize memory usage – Be mindful of what you're storing in
localStorage, sessionStorage, or state libraries like Redux. Avoid storing
large datasets or data that doesn't need to persist.
5) Combine multiple techniques – Use a layered caching approach:
browser-level caching for assets, API caching with React Query or SWR,
and memoization for computational optimization within components.
6) Clear cache when needed – Always provide a mechanism to clear or
reset cached data when it becomes invalid (e.g., user logout, version
changes) to avoid unexpected bugs or stale content.
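Practices 1, 2, and 6 above can be combined in a tiny time-based cache. The sketch below is plain JavaScript (the `TTLCache` class is our own illustration, not a library API) showing how cached entries can expire after a configurable lifetime and be cleared on demand:

```javascript
// Minimal in-memory cache with per-entry expiration (time-to-live).
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // stale - invalidate on read
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
  clear() { this.store.clear(); }       // e.g. on user logout
}

const cache = new TTLCache(60_000);     // entries live for 1 minute
cache.set('userData', { name: 'Alice' });
console.log(cache.get('userData'));     // { name: 'Alice' }
```

Libraries like SWR and React Query implement a more sophisticated version of this pattern, adding background revalidation on top of expiration.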
 Optimize Image and Assets in React:
1) Image – We can optimize images using several ways:

 Image Formats – Use the right format for the right purpose (JPEG,
PNG, SVG, WebP).
 Compression Techniques – Use tools like TinyPNG or ImageMagick
to compress images.
 Lazy Loading – Load images only when they are about to enter the
viewport.

Example:
import React from 'react';
import LazyLoad from 'react-lazyload';

const ImageComponent = () => (
  <LazyLoad height={200}>
    <img src="optimized-image.webp" alt="Optimized" />
  </LazyLoad>
);

export default ImageComponent;

2) CSS – We can optimize CSS using several ways:
 Minification – Remove unnecessary characters to reduce file size.
 Removing Unused CSS – Use tools like PurgeCSS to eliminate unused styles.
 CSS-in-JS – Use libraries like styled-components for scoped and optimized styling.

Example: (Same as above)


3) JavaScript – We can optimize JavaScript using several ways:
 Minification and Compression – Use Terser or UglifyJS to reduce JS size.
 Code Splitting – Load only the required chunks on demand.
 Tree Shaking – Remove unused code from the final bundle.

Example: (Same as above)

4) Videos – We can optimize videos using several ways:

 Choosing the Right Video Format – Use efficient formats like MP4
and WebM.
 Compression Techniques – Use HandBrake to reduce video size.
 Lazy Loading – Load videos only when needed.

Example: (same as above)

5) Optimizing Assets – We can optimize assets using several tools:
 Webpack – For module bundling and asset optimization.
 Image Optimization Tools – TinyPNG, ImageMagick, Squoosh, etc.
 Babel – Transpile JavaScript for wider browser compatibility.

Example:
const TerserPlugin = require('terser-webpack-plugin');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
optimization: {
minimize: true,
minimizer: [new TerserPlugin()],
},
plugins: [new MiniCssExtractPlugin()],
};
CI/CD Pipelines:
CI/CD (Continuous Integration / Continuous Delivery / Deployment) is a
modern approach to software development that automates the process of
building, testing, and releasing software. It replaces the slow, manual
workflows of traditional development methods with faster, repeatable, and
more reliable pipelines that allow for quick and continuous improvements.

These pipelines make it possible for teams to work on multiple versions of a product at the same time, so while one version is being developed, another might be tested or deployed. This keeps the software development cycle moving quickly and efficiently.

 Continuous Integration (CI):
Continuous Integration (CI) is the practice of regularly merging small code
changes into a shared repository, followed by automated builds and tests.
This helps detect errors early, provides quick feedback to developers, and
makes it easier to fix issues before they grow. If the build fails, developers
can quickly correct bugs and try again, keeping the development process
smooth.
 Continuous Delivery (CD):
Continuous Delivery picks up where CI ends by taking successfully tested
builds and preparing them for deployment. The focus is on ensuring that
the code can be safely and reliably released at any time. The build is
thoroughly tested in staging environments, and although it’s not deployed
automatically, it’s always ready to go live with a manual approval.
 Continuous Deployment:
Continuous Deployment is an extension of Continuous Delivery that
automatically deploys every validated build to production without human
intervention. This speeds up the release process but also requires high
confidence in test coverage and automated checks. If something breaks,
teams must be ready to roll back to a stable version quickly.

 Benefits of CI/CD:
1) Faster Development Cycles - CI/CD enables developers to push changes
frequently, reducing the time between writing code and releasing it. This
leads to quicker delivery of features, bug fixes, and updates to users.
2) Early Bug Detection - Since code is tested automatically after each
change, issues are caught early in the development process. This reduces
the cost and effort needed to fix bugs later.
3) Better Code Quality - Automated testing, code reviews, and validation at
each stage ensure that only high-quality code makes it to production.
This helps build more stable and reliable applications.
4) Reduced Manual Work - CI/CD automates repetitive tasks like testing,
building, and deployment. This saves developer time and reduces human
error in the process.
5) Easier Rollbacks - With smaller, more frequent changes, it’s easier to
identify what went wrong and roll back specific builds without affecting
the entire system.
6) Improved Collaboration - CI/CD fosters a collaborative environment
where developers, testers, and operations teams work closely. By
integrating and testing code frequently, teams gain better visibility into
each other's changes, which improves communication and reduces
integration issues.
 Challenges of CI/CD:
1) High Initial Setup Effort - Setting up a CI/CD pipeline involves configuring tools, integrating with repositories, and defining test and deployment workflows, which can be time-consuming at first.
2) Tooling Complexity - Managing different tools for integration,
testing, deployment, and monitoring can be complex. Teams
need to choose the right stack and ensure smooth integration
between tools.
3) Requires Strong Test Coverage - For CI/CD to be effective, there
must be robust automated testing in place. Without proper test
coverage, bugs can slip through the pipeline and end up in
production.
4) Security and Compliance Risks - Automating deployment can
introduce security risks if access and validation controls are not
strict. Sensitive environments need manual checks or
gatekeeping policies.
5) Maintenance Overhead - Maintaining CI/CD pipelines requires regular
updates to scripts, tools, and configurations as the project evolves.
Without proper upkeep, the pipeline can become outdated or brittle,
leading to build failures and lost development time.
6) Debugging Pipeline Failures - When builds fail in a CI/CD pipeline,
identifying the root cause can be tricky due to the complexity of
automation layers. Debugging may involve multiple tools, logs, and
environments, making it more time-consuming than local
troubleshooting.
 Stages in CI/CD Pipeline:
1) Code Commit - The pipeline begins when a developer commits code to
a version control system like Git. Each commit can trigger the pipeline
automatically, ensuring changes are integrated frequently and
continuously.
2) Build - The committed code is compiled or packaged into a deployable
artifact (e.g., JAR, Docker image). This verifies that the code builds
correctly and is free from basic errors.
3) Automated Testing - Automated tests such as unit, integration, and
regression tests run to check if the new code behaves correctly and
doesn’t break existing features.
4) Code Analysis & Validation - Tools analyze the code for style, security
issues, and potential bugs, ensuring it meets quality and organizational
standards.
5) Staging/Pre-Production Deployment - If the code passes tests, it's
deployed to a staging environment that mirrors production for final
checks like user acceptance testing.
6) Approval/Gating (for Continuous Delivery) - A manual approval step
allows QA or management to review and sign off before the code goes
to production.
7) Production Deployment - The final, validated code is deployed to the
live environment, making it available to users either automatically or
manually, depending on the process.
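As an illustrative example (not tied to any particular project), the first few stages map onto a minimal GitHub Actions workflow for a Node.js app; the script names such as `npm test` and `npm run lint` are assumptions about the project's package.json:

```yaml
# .github/workflows/ci.yml - runs on every push and pull request
name: CI

on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # Stage 1: fetch the committed code
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                      # Stage 2: build / install dependencies
      - run: npm test                    # Stage 3: automated testing
      - run: npm run lint --if-present   # Stage 4: code analysis & validation
```

Later stages (staging deployment, approval gates, production deployment) would be added as further jobs, typically guarded by branch filters or manual-approval environments.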
 Popular CI/CD Tool:
1) Jenkins - A widely-used open-source automation tool that supports
custom CI/CD pipelines through a vast plugin ecosystem, making it
highly flexible for various project needs.
2) GitHub Actions - Built directly into GitHub, it allows you to automate
development workflows using YAML configuration and is great for
seamless code-to-deploy integration.
3) GitLab CI/CD - An all-in-one DevOps platform with integrated CI/CD
features, enabling teams to build, test, and deploy within the same
environment as their version control.
4) CircleCI - A cloud-native CI/CD tool optimized for speed and scalability,
offering features like parallel builds, caching, and seamless VCS
integrations.
5) Travis CI - A simple, cloud-based CI service ideal for open-source and
private projects, offering easy setup and GitHub integration with
automated build and test processes.
Containerization with Docker:
Docker is a tool that makes it easy to create, run, and manage applications by
putting them into containers. A container is like a lightweight, portable box
that has everything the app needs to work—code, tools, libraries—so it can
run the same way anywhere, whether on your computer, a server, or in the
cloud. This helps developers avoid problems like "it works on my machine"
and makes software more consistent and easier to share.

 Installing Docker:
Docker Desktop (for Windows and macOS) and Docker Engine packages (for Linux) are available from the official Docker website; follow the official documentation for platform-specific installation steps.

 Difference B/W Docker and Virtual Machine:

Feature | Docker (Containers) | Virtual Machines (VMs)
------- | ------------------- | ----------------------
1. Architecture | Shares host OS kernel | Includes full OS with kernel
2. Boot Time | Very fast (seconds) | Slower (minutes)
3. Resource Usage | Lightweight – uses fewer resources | Heavy – uses more CPU, RAM, and storage
4. Isolation Level | Process-level isolation | Full OS-level isolation
5. Portability | Highly portable and consistent across environments | Less portable, may vary across platforms
6. Performance | Near-native performance | Slightly slower due to virtualization overhead
7. Use Case | Best for microservices, DevOps, CI/CD pipelines | Best for running different OSs or legacy software
8. Image Size | Smaller image sizes | Larger disk images
 Benefit of Using Docker:
1) Consistent Environment - Docker provides a uniform environment for
development, testing, and production, which helps eliminate the “it
works on my machine” problem and ensures predictable behavior across
platforms.
2) Lightweight and Fast - Containers don’t need a full OS to run, so they
start almost instantly and use minimal resources, allowing faster
development cycles and better performance.
3) Easy Scaling and Deployment - Docker containers can be deployed and
replicated quickly across clusters using orchestration tools like
Kubernetes or Docker Swarm, making it ideal for horizontal scaling.
4) Simplified Dependency Management - All libraries, runtimes, and tools
your app needs are included in the container image, so there’s no need
to install anything manually on the host machine.
5) Version Control and Rollbacks - You can tag Docker images with versions
and easily revert to a previous one, which adds a safety net during
updates or deployments.
6) Isolation and Security - Each container runs in isolation from the host
system and other containers, reducing the risk of conflicts and improving
overall system security.
 Docker Hub:
Docker Hub is a cloud-based service where people can store and share
Docker images. Think of it like GitHub, but for Docker. It has a huge
collection of ready-made images for popular software like Ubuntu,
MySQL, Nginx, etc., which you can download and use right away. You can
also upload your own images to Docker Hub so others—or your team—
can pull and use them easily from anywhere.

 Benefit:

1) Centralized Repository − Docker Hub allows you to search, access, and


share containerized apps and services. It acts as a single source of truth
thanks to the central repository for Docker container images.
2) Vast Library of Images − It provides access to a huge library of pre-
built Docker images. This includes popular web servers, databases,
programming languages, and frameworks, among other software and
services. You don't have to start from scratch. You can just find and
select images based on your unique requirements in this vast
collection.
3) Open Collaboration − Docker Hub promotes an environment of open
collaboration. It allows developers to share their own Docker images
with the community. You can build upon and improve each other's
work. This promotes knowledge sharing and speeds up development
cycles.
4) Automation Tools − It offers tools for automating the build, test, and
deployment of Docker images. This includes functions like integration
with CI/CD pipelines for smooth continuous integration and delivery
workflows. Moreover, it provides support for automated builds, which
start builds automatically whenever changes are pushed to a
repository.
5) Versioning and Tagging − Docker Hub allows the versioning and
tagging of Docker images. This simplifies the management and tracking
of various iterations of a service or application over time. This makes it
easier to roll back to earlier versions if necessary and guarantees
consistency and reproducibility across various environments.
6) Access Control and Permissions − Docker Hub has some powerful
features for managing access control and permissions. This allows
businesses to regulate who can view, edit, and share Docker images.
This is especially beneficial for teams working on confidential or
proprietary applications as it helps guarantee the security and integrity
of containerized deployments.
7) Scalability and Performance − Docker Hub, a cloud-based service,
provides high-performance infrastructure and scalability for hosting
and distributing Docker images. This guarantees dependable and quick
access to container images irrespective of the repository's size or level
of popularity.
8) Integration with Docker Ecosystem − It offers a unified platform for
developing, launching, and overseeing containerized applications from
development to production. It does this by integrating seamlessly with
the larger Docker ecosystem, which includes Docker Engine, Docker
Compose, and Docker Swarm.
 How to Create a Docker Hub Repository:
Sign in at hub.docker.com, select "Create Repository", give the repository a name, and choose public or private visibility. You can then upload images to it using the docker tag and docker push commands covered in the Docker image commands section.

 Docker Architecture:
Docker uses a client-server architecture. The Docker client communicates
with the docker daemon, which does the heavy work of creation, execution,
and distribution of docker containers. The Docker client operates alongside
the daemon on the same host, or we can link the Docker client and daemon
remotely. The docker client and daemon communicate via REST API over a
UNIX socket or a network.
 Component of Docker:
1) Docker Engine – The core part of Docker, responsible for building and
running containers. It consists of three parts: the Docker CLI, REST API, and
Docker Daemon (dockerd). The daemon manages images, containers,
volumes, and networks and can communicate with other daemons in
clusters like Docker Swarm.
2) Docker Registries – Storage systems where Docker images are stored and
retrieved. These include public options like Docker Hub or Docker Cloud and
private registries used in organizations. They allow pushing, pulling, and
managing image repositories through commands like docker push and
docker pull.
3) Docker Images – Read-only templates used to create containers. Built using
Dockerfiles, they contain all instructions and dependencies needed to run
applications. Images are made of layers, making them efficient and
lightweight. You can build images using docker build.
4) Docker Containers – Running instances of Docker images. Containers can
be created, started, stopped, and deleted using CLI or API. They are isolated
but configurable in terms of storage, network, and resource usage.
Example: docker run -it ubuntu /bin/bash starts a container with an
interactive shell.

5) Docker Networking – Provides communication between containers. There


are four main network drivers:

a) Bridge – Default driver, used for internal communication between


containers on a single host.
b) Host – Uses the host’s network stack directly, suitable for high-
performance setups.
c) Overlay – Connects containers across multiple Docker hosts, useful in
swarm mode.
d) macvlan – Assigns MAC addresses to containers, allowing them to
appear as physical devices on the network.

6) Docker Storage – Manages persistent data in containers. Options include:

a) Volumes – Preferred method for persistent data, can be reused and


shared across containers.
b) Volume Containers – Dedicated containers that host volumes, shared
with other containers.
c) Directory Mounts – Mounts local directories into containers from the
host.
d) Storage Plugins – Integrates with external storage systems like GlusterFS
or cloud platforms for advanced storage needs.
 Docker Layers:
Docker images are built using a layered architecture, where each layer
represents an instruction in the Dockerfile. These layers are stacked on top of
one another to form the final image. When a container is created from an
image, Docker uses these layers to run the application efficiently and
consistently.

1) Base Layer - This is the foundational layer of a Docker image. It typically


contains the core operating system files like Ubuntu, Alpine, or Debian. All
other layers are built on top of this base layer. It provides the minimal
system environment needed to run applications.
2) Intermediate Layers - Each command in the Dockerfile such as RUN,
COPY, ADD, or ENV creates a new intermediate layer. These layers add
specific functionality or changes to the image. For example, installing a
package or copying application files.
3) Read-Only Layers - All layers in a Docker image are read-only. When you
create and run a container from an image, Docker adds a top writable
layer. Any changes made to the container, like writing files or modifying
content, occur only in this writable layer without affecting the original
image layers.
4) Copy-on-Write Mechanism - When a file from a lower read-only layer is
modified inside a container, Docker copies that file to the writable layer
and applies the changes there. This avoids changing the original file and
ensures efficient use of space and resources.
5) Reusable and Cached Layers - Docker caches layers during the build
process. If a layer hasn't changed, Docker will reuse the existing cached
version instead of rebuilding it. This drastically reduces build times and
makes the development process more efficient.
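The layered build can be illustrated with a short Dockerfile sketch; each instruction below produces one layer (the base image, file names, and start command are illustrative):

```dockerfile
# Base layer: minimal OS plus the Node.js runtime
FROM node:20-alpine
# Intermediate layers: each instruction below adds a new read-only layer
WORKDIR /app
COPY package.json .
RUN npm install          # cached and reused if package.json is unchanged
COPY . .
# Metadata: the command run when a container starts from this image
CMD ["node", "server.js"]
```

Because layers are cached top-down, putting rarely-changing steps (like dependency installation) before frequently-changing ones (like copying source code) keeps rebuilds fast.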
 Docker Daemon:
The Docker daemon is the background service that runs on your computer
and does all the heavy lifting for Docker. It listens for Docker commands (like
build, run, pull) and then manages images, containers, networks, and
storage.
You can think of it as the engine behind Docker — when you type a Docker
command in the terminal, the Docker client sends that request to the Docker
daemon, and the daemon makes it happen.
 Docker Shell:
Shells are essential in Docker containers; they act as interfaces through
which the execution of commands in the container occurs. Usually, when a
container is started, it has to run a shell to interpret and execute the
commands either described in the Dockerfile or passed when the container is
run. The shell performs several vital functions −
1) Command Execution − Shells interpret and execute commands authored
in scripts or entered interactively by users. This includes software
installation, environment configuration, and application execution.
2) Script Automation − One of the biggest roles shell scripts play in
Dockerfiles is the automation of the container environment setup. It
ensures that all essential steps are automatically and consistently
executed.
3) Interactive Access − Shells enable interactive access to containers,
allowing developers and admins to debug, manage, and examine the
container's environment. This is done interactively by running commands,
for instance, docker exec, against a running container to have a shell
session opened inside it.
 Docker registries:
Docker registries are places where Docker images are stored and shared.
Think of them like online libraries for app blueprints. After creating an image,
you can push it to a registry so others (or your servers) can pull and use it
anytime. Docker Hub is the most common public registry, but you can also
have private ones to keep your images secure within your team or company.
 Types of Docker Registries (Public vs. Private):
Docker registries can be generally categorized into two types –
1) Public Registries − They are made publicly available and usually contain
many pre-built images. The most popular public registry is Docker Hub,
which contains thousands of official and community-contributed images.
2) Private Registries − Those that are hosted on your own infrastructure or
cloud, offering limited or restricted access. These are useful for storing
proprietary images, sensitive data, or images with unique compliance
requirements. Some popular private registry options include Docker
Registry (open source), Harbor (open source), Nexus Repository, and
JFrog Artifactory.
Feature | Public Repositories | Private Repositories
Visibility | Accessible to anyone | Only for authorized users
Access Control | No detailed control | Fine-grained control (teams, roles)
Cost | Free | May need a subscription
Usage | Great for open-source projects | Best for private or sensitive projects
Privacy | Images are public | Images are hidden from public
Security | Less secure access control | Enhanced security with restricted access
Collaboration | Open to everyone | Limited to chosen collaborators
Scalability | Limited free space | More storage with paid plans
Docker Commands:
1) Docker Image - A Docker image is like a snapshot or blueprint of an
application and everything it needs to run—like the code, tools, libraries,
and settings. It’s a read-only package created using a Dockerfile, and you
can use it to launch containers. Think of it as a recipe, and when you run
it, you get a working app inside a container, always consistent no matter
where you run it.
 Component of Docker Image:
1) Base Image – The foundational layer your Docker image builds upon.
It could be a simple OS like Alpine or a pre-configured environment
like Node.js or Python. It provides the essential environment your
app needs to run.
2) Layers – Docker images are made of multiple layers, each created
from a line in the Dockerfile (like installing dependencies or copying
files). Layers are cached, so Docker can rebuild images faster by
reusing unchanged parts.
3) Metadata – This includes information like tags, environment
variables, and labels. Metadata helps identify the image version,
control app behavior, and organize containers during deployment.
4) Dependencies – These are packages or libraries your app needs (e.g.,
Flask for a Python app or Nginx for a web server). They're installed
into the image so your app can run consistently anywhere.
5) Application Code – This is your actual source code and related files.
Docker copies this into the image so that your app is included and can
be executed inside the container.
6) Entrypoint / CMD – These define what command should run when a
container starts. For example, starting a web server or launching your
app script. It ensures the container knows what to do once it’s up.
 Docker Image Commands:
a) Listing all Docker Images - To see a list of all the Docker images that
are present on your local computer, you can use the "docker images"
command. It gives important details like the size, creation time,
image ID, tag, and repository name.

b) Pulling Docker Images - To download Docker images to your local


computer from a registry, use the Docker pull command. Docker will

automatically pull the "latest" version of the image if no tag is


specified.
c) Building Docker Images from Dockerfile - The docker build command
creates a Docker image from a Dockerfile placed at the provided
path. During the build process, Docker follows the instructions in the
Dockerfile to generate layers and assemble the final image. This
command is essential for creating customized images that are
tailored to your application's specific needs.
d) Docker tag command - The docker tag command gives a new name
or label to an existing Docker image. Think of it like adding a nickname
or version to a file, so you can organize or share it more easily.

docker tag my-app:latest yourusername/my-app:v1

e) Pushing Docker Images - The docker push command transfers a


Docker image from your local machine to a registry, such as Docker
Hub or a private one. Before pushing an image, ensure that you have
signed in to the registry with the "docker login" command.

f) Removing Docker Images - The docker rmi command removes one or


more Docker images from your local machine. You can provide either
the image name or the image ID. This command deletes images and
their associated layers permanently, so use it with caution.
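The image commands above fit together into a typical workflow; this sketch assumes a Dockerfile in the current directory and a placeholder account name youruser:

```shell
docker pull nginx:latest                     # download an image from a registry
docker images                                # list local images
docker build -t my-app:latest .              # build an image from the local Dockerfile
docker tag my-app:latest youruser/my-app:v1  # add a registry-ready name and version
docker login                                 # authenticate with the registry
docker push youruser/my-app:v1               # upload the image
docker rmi my-app:latest                     # remove the local image
```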
2) Docker container:
A Docker container is a running instance of a Docker image—basically, it’s
like launching an app from a blueprint. It includes everything needed to
run the app, so it works the same no matter where it runs. Containers are
lightweight, fast, and isolated from each other, which makes them great
for testing, development, or running apps in the cloud.

 Key Concepts of Docker Containers:

a) Container – A lightweight, standalone, and executable package that


includes everything needed to run a piece of software - code,
dependencies, and runtime.
b) Image vs Container – An image is a static file with app code and
environment setup, while a container is a running instance of that
image - think of an image as a blueprint and the container as the live
object.
c) Isolation – Containers run in isolated environments, meaning they
don’t interfere with each other or the host system - this ensures
consistency and security.
d) Port Binding – Containers can expose specific ports so they can
communicate with the outside world - for example, exposing port
3000 for a web app.
e) Volumes – Volumes are used to persist data outside of a container’s
lifecycle - they help store things like database files even after the
container is stopped or removed.
f) Networking – Docker containers can talk to each other using custom
networks - this is useful when building multi-container apps like web
servers with separate databases.
 Important Docker Container Commands:

a) Listing all Docker Containers - The Docker host's running containers


can be listed using the docker ps command. You can use the -a or --all
flag to show all containers, including stopped ones, as it only shows
running containers by default.

b) Creating and Starting Containers - The primary command for creating
and starting Docker containers is docker run. If the image isn't already
available locally, Docker pulls it from a registry when you run this
command. It then creates and starts a fresh container instance based on
that image.

c) Stopping a Docker Container - A container can be gracefully stopped


by using the docker stop command, which signals the container's
main process with a SIGTERM.
d) Pausing a Running Container - A running container's processes can
be momentarily suspended, or its execution paused, with the docker
pause command. This can be helpful for temporarily freeing up
system resources, debugging, and troubleshooting problems.
e) Resuming a Docker Container - When a container is paused, its
processes can be carried out again by using the docker unpause
command. By using this command, the container returns to its initial
state and undoes the effects of the docker pause command.

f) Restarting a Container - One easy way to quickly stop and restart an


operating container is with the docker restart command. It is
frequently used to force a container to reinitialize after experiencing
problems or to apply changes to the configuration of a running
container.

g) Removing a Docker Container - To remove a Docker container or


containers, you can use the docker rm command. The container(s)
whose ID or name you wish to remove can be specified. This
command only removes stopped containers by default; to forcefully
remove running containers, you can use the -f or --force flag.
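A sketch of the container lifecycle using the commands above (the container name web and the nginx image are illustrative):

```shell
docker run -d --name web -p 8080:80 nginx   # create and start a container in detached mode
docker ps -a                                # list all containers, including stopped ones
docker pause web                            # suspend the container's processes
docker unpause web                          # resume them
docker restart web                          # stop and start the container again
docker stop web                             # gracefully stop it (sends SIGTERM)
docker rm web                               # remove the stopped container
```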
3) Docker Compose:
Docker Compose is a tool specifically designed to simplify the
management of multi-container Docker applications. It uses a YAML file in
which the services, networks, and volumes that an application requires are
defined. In the docker-compose.yml file, we specify the configuration for
each container: build context, environment variables, ports to be exposed,
and the relationships between services.
 Key Elements of YAML:
1) File Version − This defines the format of the Docker Compose file so
that it ensures compatibility with different Docker Compose features.
2) Services − Contains lists of all services (containers) composing the
application. Each service is described with uncounted configuration
options.
3) Networks − It will specify custom networks for inter-container
communication and may specify the configuration options and
network drivers.
4) Volumes − Declares shared volumes that are used to allow persistent
storage. Volumes can be shared between services or used to store
data outside the container's lifecycle.
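A minimal docker-compose.yml sketch showing these elements together (service names, images, ports, and passwords are illustrative):

```yaml
version: "3.8"

services:
  web:
    build: .                 # build from the Dockerfile in this directory
    ports:
      - "3000:3000"          # expose port 3000 to the host
    environment:
      - NODE_ENV=production
    depends_on:
      - db                   # start the database first
    networks:
      - app-net
  db:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=example   # placeholder password
    volumes:
      - db-data:/var/lib/mysql        # persistent database storage
    networks:
      - app-net

networks:
  app-net:

volumes:
  db-data:
```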
 Important Docker Compose Commands:
a) Docker Compose Up - Command The docker-compose up command
brings up and runs the entire application, as defined in the docker-
compose.yml file while creating and starting all the services,
networks, and volumes. In addition, if images of this service have
never been built, it builds the necessary Docker images.

b) Docker Compose Down Command - The command `docker-compose
down` stops and removes all the containers, networks, and volumes
defined in the `docker-compose.yml` file. This command cleans up the
resources that your application was using.
c) Docker Build Command - This command is used to build or rebuild
Docker images for services defined in the docker-compose.yml file. It
runs when changes are made in a Dockerfile or source code; new
images need to be created.

d) Docker Compose Start, Stop, Restart Commands -
 `docker-compose start` - starts the already created containers
without recreating them, bringing up previously stopped services.
 `docker-compose stop` - stops the currently running containers
without discarding them, so the services can be restarted later.
 `docker-compose restart` - is useful if you've made changes to the
environment or configuration and want the services restarted.
e) Docker Compose Status Command - The docker-compose ps
command shows the status of all services defined in the docker-
compose.yml file, pointing out containers' statuses, their names,
states, and ports. This command is used to inspect the current state
of the services.

f) Docker Compose Logs Command - The command `docker-compose
logs` fetches and displays the logs of all the services defined in
`docker-compose.yml`. It is essential for debugging and monitoring
the application, since it shows the output produced by the running
containers.
4) Container Linking:
Docker container linking is a compelling feature that enables containers
to communicate with one another in a very secure and effective way. It
creates a secure tunnel between containers when you link them so that
one container can access services running within another.

These options are typically combined in a docker run command, for example:

docker run -d --name recipient_container --link source_container:alias image_name:tag

a) -d − Runs the container in detached mode.
b) --name recipient_container − Specifies a name for the recipient
container.
c) --link source_container:alias − Links the source container to the
recipient container under the given alias.
d) image_name:tag − Specifies the Docker image to use for creating the
recipient container.
5) Docker Volume:
A Docker volume is a way to store data outside of a container, so the data
doesn’t get lost when the container stops or is deleted.

Think of it like a separate storage space that Docker can use to save
files—like databases, logs, or uploaded files—even after the container is
gone. Volumes are stored on your system, not inside the container, which
makes them great for persistent data and sharing data between
containers.

 How does a Docker Volume Work:


Docker volumes are independent file systems that exist outside a
container's life cycle. Such a separation makes sure that data persists
and that it can easily be shared between more than one container. The
way Docker volumes work is as follows –
a) Creation − Volumes can be explicitly created by using the docker
volume create command, and it can also be done implicitly when a
container is started with a volume mount.
b) Storage − Volumes are stored in a particular directory on the host,
under /var/lib/docker/volumes by default on Linux.
c) Mounting − Volumes are mounted to specific paths in containers:
data written to those paths are stored in the volume, not in the
container's writable layer.
d) Persistence − Even stopping, updating, or deleting a container won't
affect the data within the volume. Hence, this makes it relatively easy
to have persistence across different instances of containers.
e) Sharing − Volumes can be shared by many containers simultaneously.
It enables access to, or modification of, the very same data by
different containers.
f) Management − You can easily manage volumes with Docker
commands such as listing, inspecting, and removing them.
Additionally, you can use volume drivers to store volumes on remote
hosts or cloud providers.
 Docker Volume Commands:
a) Creating a Docker Volume - If you want to create a new Docker
volume, you can use the docker volume create command. The
volumes created using this command can be used by one or more
containers.

b) Listing Docker Volumes - Listing the Docker volumes on the local host
machine is one of the most frequent tasks when managing volumes. To
list all the Docker volumes on your system, use the docker volume ls
command.

c) Inspecting a Docker Volume - When you want to fetch detailed
information about a particular volume, you can use the docker
volume inspect command. This command needs the name or ID of the
volume, which you can get using the volume list command. It provides
details such as the volume's location on the host and its configuration.

d) Removing a Docker Volume - If you want to clean up your local


machines and remove Docker volumes that are no longer required,
you can use the docker volume rm command. It allows you to remove
only those volumes that are not currently in use by any containers.
e) Removing Unused Volumes - The Docker volume remove command
only allows you to remove one volume at a time. But if you want to
remove all unused Docker volumes to free up some space in your
machine, you can use the docker volume prune command. When you
run this command, it will prompt you for a confirmation before
deleting all dangling volumes.

f) Backing Up a Docker Volume - It's always useful to back up the data
stored in volumes, because if you accidentally delete a volume, a backup
lets you restore the lost data. You can archive a volume's contents into a
tarball file by running a temporary container that tars the volume and
writes the output to the host. An illustrative command for a volume
called my_volume (the backup path is an example):

docker run --rm -v my_volume:/data -v $(pwd):/backup busybox tar czf /backup/my_volume.tar.gz -C /data .
g) Restoring a Docker Volume from Backup - Once your data is backed up
in a tarball file, a similar command restores it into a Docker volume. An
illustrative command to restore a volume called my_volume from a
backup file:

docker run --rm -v my_volume:/data -v $(pwd):/backup busybox tar xzf /backup/my_volume.tar.gz -C /data
6) Docker Network:
Docker Network allows containers to communicate with each other and
with the outside world. It creates virtual networks so containers can talk
safely, just like apps on a regular network.

 There are different types of Docker networks, like:

a) bridge (default for standalone containers),


b) host (shares the host machine’s network),
c) none (no network),
d) overlay (for multi-host communication in Docker Swarm).

 Important Docker Network Commands:


a) Create a Network - You can use the docker network create command
to create a Docker network. It allows you to specify the driver and
options for the network.

b) List Networks - If you want to list all the networks on your host, you
can use the docker network ls command. The output of this command
will be a list of all the networks along with their names and drivers.

c) Connect a Container to a Network - If you already have a network


created and you want to connect it to a running container, you can
use the docker network connect command. This will allow a container
to communicate with other containers on the network mentioned in
the command.
d) Disconnect a Container from a Network - If a network is associated
with a container and you want to disconnect it from the container,
you can use the docker network disconnect command. It will remove
the container's connection to the specified network.

e) Remove a Network - If a network is no longer needed, you can


remove it from the system using the docker network rm command.
With the help of this command, you can remove only those networks
that are not currently in use by any containers.

f) Prune Unused Networks - If you want to remove all the unused
Docker networks to free up resources, you can use the docker
network prune command. Before removing the networks, this
command prompts you for confirmation.
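The network commands above in a typical sequence (the network name my-net and container name web are illustrative):

```shell
docker network create --driver bridge my-net  # create a user-defined bridge network
docker network ls                             # list all networks
docker network connect my-net web             # attach a running container named "web"
docker network disconnect my-net web          # detach it again
docker network rm my-net                      # remove the network once no container uses it
docker network prune                          # remove all unused networks (asks for confirmation)
```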
DevOps and Monitoring:
 Infrastructure as code:
Infrastructure as Code (IaC) is a way to manage and set up servers,
databases, and other infrastructure using code instead of doing it
manually. Just like you write code to build software, with IaC you write
scripts that tell the cloud exactly what resources to create and how to
configure them. This makes the process faster, repeatable, and less error-
prone. Tools like Terraform and Ansible help automate this setup, so you
can launch and manage infrastructure with just a few lines of code.
1) Terraform:
Terraform is a tool that helps you set up cloud stuff (like servers,
storage, databases) using code instead of clicking buttons in the cloud
dashboard (like AWS, Azure, or GCP). You write what you want in a file,
and Terraform creates it for you automatically.
 How Terraform Works – Step-by-Step:

a) Step 1: Write Code in a Terraform File (.tf)

 provider "aws": You're telling Terraform to use AWS.


 region: You choose the AWS region (like selecting the city where
your server lives).
 resource "aws_instance": You are creating a virtual server (called an
EC2 instance).
 ami: Think of it like the OS image (e.g., Ubuntu or Windows).
 instance_type: This defines how powerful the server is (t2.micro is
small & free-tier).
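The bullets above correspond to a main.tf file roughly like this sketch (the AMI ID is a placeholder; the region and resource names are examples):

```hcl
provider "aws" {
  region = "us-east-1"                       # the AWS region where resources are created
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"    # placeholder OS image ID
  instance_type = "t2.micro"                 # small, free-tier eligible size
}
```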
b) Step 2: Initialize Terraform in Your Project Folder - run terraform init
to download the provider plugins and prepare the working directory.
c) Step 3: Preview What Will Be Created - run terraform plan to see
exactly which resources Terraform will create, update, or delete.
d) Step 4: Apply and Create the Resource - run terraform apply to
actually create the resources; Terraform asks for confirmation before
making changes.

Command | What It Does (Simple)
terraform init | Sets up Terraform in your folder
terraform plan | Shows what will be created/updated/deleted
terraform apply | Actually creates or updates the resources
terraform destroy | Deletes everything Terraform created (if you want to stop or reset)
2) Ansible:
Ansible is a tool that helps you automate the setup of servers. Instead
of logging into a server and typing commands like “install Node.js” or
“start Nginx”, you write those steps in a file, and Ansible runs them for
you. It’s like writing a to-do list for your server — and Ansible makes
sure everything on the list gets done.
a) Step 1: You write a Playbook (YAML file) - A playbook is a file
where you define what you want Ansible to do.

 hosts: webserver – This tells Ansible which server(s) to run on.


 become: yes – Run commands with admin (sudo) access.
 tasks: – These are the steps to perform, like:
o Install nginx
o Start nginx service

b) Step 2: Inventory File (List of servers) - Ansible needs to know
where to run those tasks, so you create a simple file called
inventory.ini. This tells Ansible there is a server at IP 192.168.1.100
and that it should log in with the user ubuntu.
c) Step 3: Run Ansible Command - You run this in the terminal:

ansible-playbook -i inventory.ini nginx-setup.yml

This tells Ansible to use the inventory file (-i inventory.ini) and run the
instructions in the playbook file (nginx-setup.yml). Ansible logs in to the
server and runs all the commands for you; no manual typing needed.
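Putting the three steps together, here is a sketch of the playbook described above (the package and service names follow the nginx example):

```yaml
# nginx-setup.yml -- illustrative playbook from Step 1
- hosts: webserver
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Start nginx service
      service:
        name: nginx
        state: started
```

The inventory file from Step 2 (inventory.ini, INI format) would contain:

[webserver]
192.168.1.100 ansible_user=ubuntu

and the run command from Step 3 is: ansible-playbook -i inventory.ini nginx-setup.yml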
 Monitoring and Logging:
Monitoring is the process of continuously tracking the performance,
health, and behavior of your application, servers, or infrastructure to
ensure everything is running smoothly. It helps detect problems like slow
response times, crashes, or high server usage before they affect users.
Tools like Prometheus and Grafana are commonly used to collect and
display real-time data (like CPU usage, memory, or errors) through
dashboards, so developers and system admins can quickly spot and fix
issues.
 Importance of Monitoring:

a) Early Problem Detection – Monitoring helps detect issues like server


crashes, high memory usage, or failed services as soon as they occur.
This allows the development or DevOps team to take quick action
before these issues impact users or cause downtime.
b) System Health Check – It continuously tracks the overall health and
status of your infrastructure, including CPU load, memory
consumption, disk space, and network usage. This ensures that
everything is functioning within acceptable limits and helps prevent
unexpected failures.
c) Alerts and Notifications – Monitoring tools like Prometheus or
Grafana can be set up to send real-time alerts through email, SMS, or
chat when something unusual happens, such as a server going offline
or a spike in traffic. This enables a quick response to minimize damage
or downtime.
d) Easier Debugging – When something goes wrong, monitoring provides
detailed logs, error reports, and graphs that help pinpoint exactly
what caused the issue. This makes the debugging process much faster
and more accurate, especially in large or complex systems.
e) Performance Optimization – By reviewing metrics such as response
time, server load, and database performance, you can identify slow
areas or bottlenecks in your system. This helps you make targeted
improvements to enhance speed and efficiency.
f) Planning and Scalability – Long-term monitoring data reveals trends
in resource usage and user behavior, which helps in making informed
decisions about scaling your infrastructure, upgrading hardware, or
preparing for high-traffic periods.
1) Prometheus:
Prometheus is an open-source monitoring tool used to collect, store,
and query performance data (called metrics) from servers, applications,
and services.
It works by scraping (pulling) data from targets like servers, containers,
or apps at regular intervals. These targets expose their data in a specific
format (usually on a /metrics endpoint), and Prometheus fetches that
information and stores it in a time-series database.
 Types of Metrics Prometheus Can Collect:
a) CPU Usage – Measures how much of the processor’s capacity is
currently being used. High CPU usage over time might indicate heavy
computation, inefficient code, or overloaded servers that need
optimization or scaling.
b) Memory Usage – Tracks how much RAM is being used, available, or
cached. Monitoring memory helps detect memory leaks, applications
consuming too much memory, or systems that need more RAM to
run smoothly.
c) Disk Space – Shows used, free, and total disk capacity. Monitoring
disk space is crucial because if a disk fills up, it can cause the system
to crash or fail to save important data and logs.
d) Network Traffic – Monitors the amount of incoming and outgoing
data on the server. It helps detect issues like network congestion,
unusually high traffic (which might be a DDoS attack), or bandwidth
limits being reached.
e) App-specific metrics – These are custom metrics like API request
counts, response times, error rates, and user activity. They provide
insight into how your application is performing from a business and
user perspective, not just at the system level.
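A note on where these numbers come from: each target serves them as plain text on its /metrics endpoint, in the Prometheus text exposition format. A minimal sketch (the metric name, labels, and values below are invented for illustration):

```shell
# Illustrative /metrics payload; a real one is served over HTTP by the target.
PAYLOAD='# HELP http_requests_total Total HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="500"} 3'
printf '%s\n' "$PAYLOAD"
```

Each sample line is a metric name, optional {label="value"} pairs, and the current number; Prometheus parses exactly this text on every scrape.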
 Steps How Prometheus Collects Data:
a) Install Prometheus on your system – This sets up the main
monitoring tool that will handle data collection, storage, and analysis
for your infrastructure or application.
b) Configure targets (like a server or app that exposes metrics) –
Targets are the systems or apps Prometheus will monitor. You define
them in a config file so Prometheus knows where to fetch data from.
c) Prometheus scrapes the data from these targets at regular intervals
(every few seconds/minutes) – Instead of waiting for data to be
sent, Prometheus pulls it actively on a schedule to ensure fresh and
consistent monitoring data.
d) Metrics are stored in Prometheus's time-series database – All
collected data is stored in a way that tracks how values change over
time, making it easy to analyze trends or past issues.
e) You can query the data using PromQL (Prometheus Query
Language) – PromQL lets you ask specific questions like CPU usage
over the last 5 minutes or number of failed requests, helping with
analysis and troubleshooting.
f) Data is usually visualized using Grafana dashboards – Grafana
connects to Prometheus and turns raw metrics into easy-to-read
graphs and dashboards, so you can quickly understand the system’s
health and performance.
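Steps b) and c) are controlled by a small YAML file. A minimal sketch of a prometheus.yml, assuming one target exposing metrics on port 9100 (the job name and address are placeholders, not taken from the text):

```yaml
global:
  scrape_interval: 15s          # step c: pull metrics every 15 seconds

scrape_configs:
  - job_name: "my-server"       # placeholder name for this group of targets
    static_configs:
      - targets: ["192.168.1.10:9100"]   # placeholder host serving /metrics
```

For step e), a standard PromQL expression such as rate(http_requests_total[5m]) would then report the per-second growth of that counter over the last five minutes.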
2) Grafana:
Grafana is an open-source analytics and interactive visualization web
application. It allows users to query, visualize, alert on, and explore
metrics from a wide range of data sources. Its primary use is to turn
time-series data into insightful and customizable dashboards that help
teams monitor systems, infrastructure, and applications.
 Key Features:
a) Data Source Agnostic - Grafana supports a wide variety of data
sources including Prometheus, InfluxDB, Graphite, Elasticsearch,
MySQL, PostgreSQL, and many others. This flexibility allows users to
centralize their metrics and logs from multiple systems in a single
dashboard interface.
b) Custom Dashboards - Users can build dashboards that consist of
multiple panels, each visualizing different metrics or data in various
formats. These panels can be configured to suit specific needs such as
tracking performance metrics, infrastructure health, or business KPIs.
c) Alerting - Grafana includes a powerful alerting system that enables
users to define thresholds and conditions for metrics. Alerts can be
configured to trigger notifications via email, Slack, or other channels
when certain criteria are met, helping teams respond proactively.
d) Templating - Dashboard templating allows users to create dynamic
and reusable dashboards using variables. These variables can be used
in queries, making it easy to change the scope of the data being
visualized without duplicating dashboards.
e) Authentication - Grafana supports multiple authentication backends
including LDAP, OAuth, and basic authentication. This ensures secure
access control and integration with enterprise identity management
systems.
 Creating Dashboards – Step by Step:
a) Add a Data Source - Begin by navigating to Configuration > Data
Sources. Select the appropriate data source (such as Prometheus)
and connect it by entering the necessary details like the URL and
authentication credentials. Once configured, Grafana can query data
from this source.
b) Create a New Dashboard - To create a new dashboard, click the plus
icon and choose "Dashboard." This will open a blank canvas where
you can begin adding visualizations to represent your data.
c) Build a Panel - Add a panel to the dashboard and use the built-in
query editor to fetch data. The editor interface varies depending on
the data source. Choose a visualization type such as a graph, gauge,
bar gauge, table, or heatmap. You can further customize each panel
by adjusting legends, axes, thresholds, and color schemes.
d) Use Variables (Templating) - To make dashboards dynamic, define
variables by going to Dashboard Settings > Variables. These variables
can then be used within queries and panel titles, allowing users to
filter data or switch views without modifying the dashboard
structure.
e) Set Alerts - To set up monitoring alerts, go to the Alert tab in the
panel configuration. Define the alert conditions, specify threshold
levels, and configure notification channels. This enables proactive
monitoring and helps teams address issues in real time.
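Step a) can also be automated: Grafana reads data sources from provisioning files at startup. A sketch of such a file, assuming Prometheus on its default port (the file path and URL are common defaults, not taken from the text):

```yaml
# Typically placed under /etc/grafana/provisioning/datasources/
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090   # Prometheus's default listen address
    isDefault: true
```

With this file in place, the dashboard and panel steps that follow work the same way, but the data source no longer has to be clicked together in the UI on every new Grafana instance.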
 Basic Logging: Understanding Error Logs and System Logs
Logging is the process of recording events, messages, or other types of
information generated by an application, operating system, or service. It
plays a critical role in software development and system administration by
enabling debugging, performance monitoring, and ongoing system
maintenance.

1) Error Logs:
Error logs are dedicated to capturing problems or unexpected behavior
in applications and systems. They are particularly useful for identifying
why a process failed or diagnosing application errors.
 Key Components of an Error Log Entry:
a) Timestamp - Each log entry includes a timestamp to indicate exactly
when the error occurred. This is essential for correlating the issue with
user activity, system changes, or other logged events.
b) Error Level - The log will often indicate the severity of the error, using
levels such as ERROR, WARNING, or CRITICAL. These levels help
prioritize which issues need immediate attention and which can be
reviewed later.
c) Error Message - The message gives a brief description of what went
wrong. It may include details such as the type of error, the affected
component, or the reason the operation failed.
d) Stack Trace (optional) - For application errors, especially in
development environments, a stack trace may be included. This shows
the sequence of function calls leading to the error, helping developers
quickly locate the root cause in the source code.
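Putting those four parts together, one (entirely made-up) error log entry could look like this:

```
2025-04-12 14:03:27 ERROR [payment-service] Database connection failed: timeout after 30s
    at Pool.connect (db.js:42)
    at processOrder (orders.js:17)
```

First the timestamp, then the ERROR severity level, then the message, then an optional stack trace pointing at the code path that failed.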
 Why It Matters:
 Assists developers in identifying and fixing bugs during development
and in production.
 Provides system administrators with insight into failures or
malfunctions.
 Can be used by monitoring tools to trigger alerts when specific types
of errors are detected.
2) System Logs:
System logs offer a broader view of events related to the operating
system and its core services. They are critical for understanding the
general state and stability of the environment in which applications
run.
 Common Sources of System Logs:
a) Linux: System logs are typically located in the /var/log/ directory.
Common files include /var/log/syslog, /var/log/messages, and
/var/log/auth.log, each serving different logging purposes such as
general system activity, kernel events, and authentication logs.
b) Windows: System logs are accessed through the Event Viewer.
Logs are categorized into sections such as System, Application,
and Security, each containing entries relevant to that context.
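Because Linux system logs are plain text files, the usual command-line tools can search them. A sketch that filters failed logins, run against a made-up excerpt instead of the real /var/log/auth.log so it is safe to try anywhere:

```shell
# Made-up auth.log excerpt; real entries live in /var/log/auth.log.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
Apr 12 10:00:01 web1 sshd[1042]: Accepted password for ubuntu from 10.0.0.5
Apr 12 10:05:42 web1 sshd[1311]: Failed password for root from 203.0.113.9
EOF

# The same grep would be pointed at the real log file on a server.
MATCHES=$(grep "Failed password" "$SAMPLE")
printf '%s\n' "$MATCHES"
rm -f "$SAMPLE"
```

Only the second line matches, so the output is the single failed-login entry; swapping the pattern (for example to "Accepted password") filters for successful logins instead.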
 What They Include:
a) System startup and shutdown events: Logs the beginning and
end of system sessions, which is helpful for tracing uptime or
unexpected reboots.
b) Service status changes: Records when services are started,
stopped, or fail to launch, offering insight into service availability.
c) Hardware or driver issues: Includes messages from the hardware
or kernel, such as disk errors, driver failures, or hardware
compatibility warnings.
d) User login and logout events: Useful for tracking user activity,
detecting unauthorized access, or auditing system usage.