Recap 

 DLL 
A DLL is a library that contains code and data that can be used by more than one program at the same time. By using a DLL,
a program can be modularized into separate components. For example, an accounting program may be sold by module. Each
module can be loaded into the main program at run time if that module is installed. Because the modules are separate, the
load time of the program is faster. And a module is only loaded when that functionality is requested.

 ref vs. out 


ref

The argument must be initialized before it is passed by ref.


The called method is not required to assign or initialize the value of a ref parameter before returning to the calling method.
Passing a parameter by ref is useful when the called method also needs to modify the passed parameter.
The caller can use the value of a ref argument both before and after the call.
When we use ref, data can be passed bi-directionally (into and out of the called method).

out

It is not compulsory to initialize an argument before it is passed as out.


The called method is required to assign or initialize the value of an out parameter before returning to the calling method.
Declaring out parameters is useful when multiple values need to be returned from a function or method.
The caller must not read an out argument until the called method has assigned it.
When we use out, data is passed only in one direction (from the called method to the caller).
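
A minimal sketch of the difference (the method and variable names are illustrative only):

static void AddTen(ref int value) { value += 10; }        // caller must initialize value first

static void GetDefaults(out int width, out int height)    // method must assign both before returning
{
    width = 800;
    height = 600;
}

int number = 5;             // must be initialized before passing by ref
AddTen(ref number);         // number is now 15

int w, h;                   // no initialization needed for out arguments
GetDefaults(out w, out h);  // w = 800, h = 600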

 .Equals vs. == 


==

Value type operands - compare values.


Reference type operands, with the exception of string - compare references.
String type operands - compare values (string overloads the == operator).

.Equals

Reference type operands - compare references by default; compare content when the type overrides Equals() (e.g., string).


Value type operands - compare values.
String type operands - compare values.
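
A small illustration of the difference (hedged sketch; values are made up):

string s1 = "test";
string s2 = new string(new[] { 't', 'e', 's', 't' });

Console.WriteLine(s1 == s2);        // True  - string overloads == to compare values
Console.WriteLine(s1.Equals(s2));   // True  - string overrides Equals to compare values

object o1 = s1;
object o2 = s2;
Console.WriteLine(o1 == o2);        // False - for object operands, == compares references

int a = 5, b = 5;
Console.WriteLine(a == b);          // True  - value types compare values
Console.WriteLine(a.Equals(b));     // True  - value types compare values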
 Static 
Class

Static classes cannot be instantiated.


All the fields/methods of a static class must be static.
Static classes are implicitly sealed and therefore cannot be inherited.
A static class cannot inherit from other classes.
A static class remains in memory for the lifetime of the application domain in which your program resides.

Field

A static field of a non-static class is shared across all instances.

Method

Static methods can be called without creating an object. You cannot call static methods using an object of the non-static
class. The static methods can only call other static methods and access static members. You cannot access non-static
members of the class in the static methods.
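
A brief sketch of a static class and a static field shared across instances (the names are illustrative):

static class MathHelper
{
    public static int Square(int x) { return x * x; }    // callable without an instance
}

class Counter
{
    public static int InstanceCount;                      // shared across all instances
    public Counter() { InstanceCount++; }
}

Console.WriteLine(MathHelper.Square(4));   // 16
var c1 = new Counter();
var c2 = new Counter();
Console.WriteLine(Counter.InstanceCount);  // 2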

 #define 
The #define and #undef directives influence compilation. They should appear at the top of a source file. They can adjust
compilation options for the entire file. As directives, they have no effect on runtime performance - they only declare symbols.

#define PERL

using System;

class Program
{
    static void Main()
    {
#if PERL
        Console.WriteLine(true);
#endif

#if PYTHON
        Console.WriteLine(false);
#endif
    }
}

 abstract vs. virtual 


An abstract function cannot have functionality. You're basically saying: any child class MUST provide its own version of this
method; it's too general to even try to implement in the parent class.

A virtual function is basically saying: here's the functionality, which may or may not be good enough for the child class. If it
is good enough, use this method; if not, override it and provide your own functionality.
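
A short sketch of the difference, using hypothetical classes:

abstract class Animal
{
    public abstract string Speak();                            // no body - derived classes MUST override

    public virtual string Describe() { return "An animal"; }  // default body - overriding is optional
}

class Dog : Animal
{
    public override string Speak() { return "Woof"; }          // required
    public override string Describe() { return "A dog"; }      // optional
}

class Cat : Animal
{
    public override string Speak() { return "Meow"; }          // required; Describe() keeps the default
}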
 using() 
The objects specified within the using block must implement the IDisposable interface. The framework invokes the Dispose
method of objects specified within the "using" statement when the block is exited - managing the resources.

using (var service = new EmailService())
{
    // ..actions with service
}

You can achieve the same result by putting the object inside a try block and then calling Dispose in a finally block.

var service = new EmailService();

try
{
    // ..actions with service
}
catch
{
    // ..exception handling
}
finally
{
    // ..called either way
    service.Dispose();
}

 lambda 
Lambda expressions in C# are used like anonymous functions, with the difference that in lambda expressions you don't need
to specify the type of the input value, which makes them more flexible to use.

Any lambda expression can be converted to a delegate type. If a lambda expression doesn't return a value, it can be
converted to one of the Action delegate types; otherwise, it can be converted to one of the Func delegate types.
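
For example (a minimal sketch):

Func<int, int> square = x => x * x;                                  // takes an int, returns an int
Action<string> greet = name => Console.WriteLine("Hello " + name);   // returns nothing

Console.WriteLine(square(5));   // 25
greet("World");                 // Hello World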

 LINQ 
Language-Integrated Query (LINQ) is the name for a set of technologies based on the integration of query capabilities directly
into the C# language. It is an example of a fluent API.
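
A simple query over an in-memory list, in both query and fluent syntax (illustrative data):

// requires: using System.Collections.Generic; using System.Linq;
var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

// query syntax
var evens = from n in numbers where n % 2 == 0 select n;

// method (fluent) syntax
var squaresOfEvens = numbers.Where(n => n % 2 == 0).Select(n => n * n).ToList();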

 nullable 
C# provides nullable types, to which you can assign the normal range of values as well as null.

// Nullable types

Nullable<int> a;

int? b;
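
For example (a minimal sketch):

int? b = null;

if (b.HasValue)
    Console.WriteLine(b.Value);
else
    Console.WriteLine("no value");

int c = b ?? -1;                           // null-coalescing: c becomes -1 when b is null
Console.WriteLine(b.GetValueOrDefault());  // 0 when b is null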

 throw ex vs. throw new Exception() 


throw; (rethrowing the caught exception without naming it) preserves the full stack trace of the original exception. throw ex; re-throws the same exception object but resets the stack trace to the point of the rethrow, and throw new Exception() replaces the exception entirely (the original can be preserved by passing it as the InnerException).
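
A sketch of the three options inside a catch block (DoWork is a hypothetical method that throws):

try
{
    DoWork();
}
catch (Exception ex)
{
    // throw;                              // rethrows and preserves the original stack trace
    // throw ex;                           // rethrows but resets the stack trace to this point
    throw new Exception("Wrapped", ex);    // new exception; original kept as InnerException
}
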
 REST 
REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the
web, making it easier for systems to communicate with each other.

In the REST architectural style, the implementation of the client and the implementation of the server can be done
independently without each knowing about the other.

As long as each side knows what format of messages to send to the other, they can be kept modular and separate.
Separating the user interface concerns from the data storage concerns, we improve the flexibility of the interface across
platforms and improve scalability by simplifying the server components. Additionally, the separation allows each component
the ability to evolve independently.

By using a REST interface, different clients hit the same REST endpoints, perform the same actions, and receive the same
responses.

Systems that follow the REST paradigm are stateless, meaning that the server does not need to know anything about what
state the client is in and vice versa. In this way, both the server and the client can understand any message received, even
without seeing previous messages. This constraint of statelessness is enforced through the use of resources.

All actions are done using  GET ,  POST ,  PUT ,  DELETE  methods of HTTP.

 Web API 
The .NET Web API is an extensible framework for building HTTP-based services that can be accessed by different applications
on different platforms such as web, Windows, mobile, etc. It works more or less the same way as a .NET MVC web application,
except that it sends data as the response instead of an HTML view. It is like a web service or WCF service, with the exception
that it only supports the HTTP protocol.

.NET Web API Characteristics:

an ideal platform for building RESTful services.


supports request/response pipeline
maps HTTP verbs to method names.
supports different formats of response data.
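
A minimal ASP.NET Core Web API controller sketch (the route, model, and values are hypothetical):

// using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    // GET api/products/5 - HTTP verb mapped to the method via the attribute
    [HttpGet("{id}")]
    public ActionResult<string> Get(int id)
    {
        return "Product " + id;     // returned as data (e.g. JSON), not an HTML view
    }

    // POST api/products
    [HttpPost]
    public IActionResult Create([FromBody] string name)
    {
        return Ok();
    }
}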

 HTTP Functions 
GET - is used to request data from a specified resource.
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests are only used to request data (not modify)
POST - is used to send data to a server to create/update a resource.
POST requests are never cached
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
PUT - is used to send data to a server to create/update a resource.
The difference between POST and PUT is that PUT requests are idempotent. That is, calling the same PUT request
multiple times will always produce the same result.
HEAD - HEAD is almost identical to GET, but without the response body.
In other words, if GET /users returns a list of users, then HEAD /users will make the same request but will not return
the list of users. HEAD requests are useful for checking what a GET request will return before actually making a GET
request.
DELETE - deletes the specified resource.
PATCH - applies partial modifications to a resource.
OPTIONS - describes the communication options for the target resource.

 Encapsulation 
Encapsulation is defined 'as the process of enclosing one or more items within a physical or logical package'. Encapsulation,
in object oriented programming methodology, prevents access to implementation details. Abstraction and encapsulation are
related features in object oriented programming. Abstraction allows making relevant information visible and encapsulation
enables a programmer to implement the desired level of abstraction.

Encapsulation is implemented by using access modifiers:

Public - allows a class to expose its member variables and member functions to other functions and objects. Any public
member can be accessed from outside the class.
Private - allows a class to hide its member variables and member functions from other functions and objects. Only
functions of the same class can access its private members.
Protected - allows a child class to access the member variables and member functions of its base class. This way it
helps in implementing inheritance.
Internal - allows a class to expose its member variables and member functions to other functions and objects in the
current assembly.
Protected internal - allows a class to expose its member variables and member functions to code in the same assembly, as
well as to derived classes in other assemblies.
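
A compact illustration of the access modifiers (the class and members are illustrative only):

public class Account
{
    private decimal balance;              // visible only inside Account
    protected string ownerId;             // visible in Account and derived classes
    internal string branchCode;           // visible anywhere in the same assembly
    protected internal string auditTag;   // same assembly OR derived classes in other assemblies

    public void Deposit(decimal amount)   // visible everywhere
    {
        balance += amount;
    }
}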
 Polymorphism 
The word polymorphism means having many forms. In the object-oriented programming paradigm, polymorphism is often
expressed as 'one interface, multiple functions'. Polymorphism can be static or dynamic. In static polymorphism, the response
to a function call is determined at compile time. In dynamic polymorphism, it is decided at run time.

Static

class Printdata
{
    public void print(int i) {
        Console.WriteLine("Printing int: {0}", i);
    }

    public void print(double f) {
        Console.WriteLine("Printing float: {0}", f);
    }
}

Printdata p = new Printdata();

p.print(5);

p.print(500.263);

Dynamic

class Shape
{
    protected int width, height;

    public Shape(int a = 0, int b = 0) { width = a; height = b; }

    public virtual int Area() { return 0; }
}

class Rectangle : Shape
{
    public Rectangle(int a = 0, int b = 0) : base(a, b) { }

    public override int Area() { return (width * height); }
}

class Triangle : Shape
{
    public Triangle(int a = 0, int b = 0) : base(a, b) { }

    public override int Area() { return (width * height / 2); }
}

class Caller
{
    public void CallArea(Shape sh)
    {
        int a = sh.Area();
        Console.WriteLine("Area: {0}", a);
    }
}

Caller c = new Caller();

Rectangle r = new Rectangle(10, 7);

Triangle t = new Triangle(10, 5);

c.CallArea(r);

c.CallArea(t);

 Testing 
Types of tests in SE:

Unit Testing - Unit testing should start at the very beginning to assure that each block of code/unit performs its intended
manipulation of inputs into desired outputs for the next module. Tests an individual unit/component of software to validate
that each unit of the software performs as designed.
Integration Testing - Takes multiple individual units/component of the software and tests them as a group to ensure that
the unit modules connect as expected and convey data and commands throughout the system per the specifications built.
Smoke Testing - Smoke tests are a subset of test cases which test the major/critical functionalities of software in a non-
comprehensive manner, to ensure the software works well enough to move on to additional tests. They are executed before
any detailed functional or regression tests are performed on the software build.
Sanity Testing - After receiving a software build, verify that the minor changes and fixes applied to the code do not have
unexpected side effects in apparently separate parts of the system, and confirm that the reported bugs have been fixed. If
sanity tests fail, the build is rejected to save the time and cost involved in more rigorous testing.
Regression Testing - Verify that later feature additions and bug fixes (which change the code) do not adversely affect
existing features. Regression testing is the full or partial re-execution of already executed test cases to ensure that existing
functionality still works fine.
User Acceptance Testing - This is the last step before software goes live; user acceptance tests make sure it can handle the
required tasks in real-world scenarios, according to specifications. End users typically perform these tests during the beta
testing period.

 Unit Testing and Mocking 


Unit testing is a type of testing in which individual units or functions of the software are tested. Its primary purpose is to test
each unit or function in isolation. A unit is the smallest testable part of an application. It typically has one or a few inputs and
produces a single output. To isolate a unit, mocks are often required. Mock objects fill in for missing or external parts of a
program: for example, a function may depend on objects that have not been created yet, or that are expensive to use in a
test, so mock objects are created to take their place.

With unit testing, enterprises can:

Improve Quality of Code


Build Reusable and Reliable Code
Simplify Documentation
Enable Seamless Integration

White-Box testing

Also referred to as glass-box or transparent testing. In this type of testing, the tester is aware of the internal functionality; the
internal structure of the item or function being tested is known.

Black-Box testing

A type of testing in which the tester is not aware of the internal functionality of the system. The internal structure of the
function being tested is unknown.
Gray-Box testing

Also referred to as semi-transparent testing; it is a combination of black-box and white-box testing. The tester is aware of the
internal functionality of a method or unit, but not at as deep a level as in white-box testing. The tester is only partially aware
of the internal functionality of the system.

Tools and Frameworks: nUnit/xUnit and Moq
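
A small sketch of a unit test using xUnit and Moq (the interface and service under test are hypothetical):

// using Xunit;
// using Moq;

public interface IEmailSender { bool Send(string to, string body); }

public class NotificationService
{
    private readonly IEmailSender _sender;
    public NotificationService(IEmailSender sender) { _sender = sender; }
    public bool Notify(string user) { return _sender.Send(user, "Hello"); }
}

public class NotificationServiceTests
{
    [Fact]
    public void Notify_SendsEmail_ReturnsTrue()
    {
        var mock = new Mock<IEmailSender>();                     // the mock fills in the missing dependency
        mock.Setup(m => m.Send("bob", "Hello")).Returns(true);

        var service = new NotificationService(mock.Object);

        Assert.True(service.Notify("bob"));
        mock.Verify(m => m.Send("bob", "Hello"), Times.Once);    // the interaction happened exactly once
    }
}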


 Generics 
In C#, generic means not specific to a particular data type.

Class
Generic classes are defined using a type parameter in angle brackets after the class name. The following defines a generic
class.

class DataStore<T>
{
    public T Data { get; set; }
}

class KeyValuePair<TKey, TValue>
{
    public TKey Key { get; set; }

    public TValue Value { get; set; }
}

DataStore<int> intStore = new DataStore<int>();

intStore.Data = 100;

KeyValuePair<int, string> kvp1 = new KeyValuePair<int, string>();

kvp1.Key = 100;

kvp1.Value = "Hundred";

Field
A generic class can include generic fields. However, a generic field cannot be initialized with a specific value, because the concrete type is not known at the point of declaration (only default(T) is possible).

class DataStore<T>
{
    public T data;
}

Method
A method declared with the type parameters for its return type or parameters is called a generic method. A non-generic class
can include generic methods by specifying a type parameter in angle brackets with the method name.

class Printer
{
    public void Print<T>(T data)
    {
        Console.WriteLine(data);
    }
}

 Boxing and UnBoxing 


Boxing

Boxing is the process of converting a value type to the object type or any interface type implemented by this value type.
Boxing is implicit.

Why? To have a unified type system and to allow value types to have a completely different representation of their underlying
data from the way that reference types represent theirs.

int i = 10;

object o = i; //performs boxing

UnBoxing

Unboxing is the reverse of boxing. It is the process of converting a reference type to a value type. Unboxing extracts the value
from the reference type and assigns it to a value type. Unboxing is explicit, which means we have to cast explicitly. A boxing
conversion makes a copy of the value, so changing the value of one variable will not impact the other.

object o = 10;

int i = (int)o; //performs unboxing

int i = 10;

// ---

object o = i; // boxing

double d = (double)o; // runtime exception

// ---

int i = 10;

object o = i; // boxing

double d = (double)(int)o; // valid

 Delegates 
A delegate is a reference type variable that holds the reference to a method. The reference can be changed at runtime.
Delegates are especially used for implementing events and the call-back methods.

// delegate <return type> <delegate-name> <parameter list>

public delegate int MyDelegate (string s);

public delegate void printString(string s);

...

printString ps1 = new printString(WriteToScreen);

printString ps2 = new printString(WriteToFile);

Types:

Predicate: essentially Func<T, bool>; asks the question "does the specified argument satisfy the condition represented by
the delegate?" Used in things like List.FindAll.
Action: Perform an action given the arguments. Very general purpose. Not used much in LINQ as it implies side-effects,
basically. (no return value)
Func: Used extensively in LINQ, usually to transform the argument, e.g. by projecting a complex structure to one property.
(has return value)
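
For example (a minimal sketch):

Predicate<int> isEven = n => n % 2 == 0;
Action<string> log = msg => Console.WriteLine(msg);
Func<int, int, int> add = (a, b) => a + b;

var numbers = new List<int> { 1, 2, 3, 4 };
List<int> evens = numbers.FindAll(isEven);   // List<T>.FindAll takes a Predicate<T>

log("Sum: " + add(2, 3));                    // Sum: 5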
 Events 
Events are user actions such as key press, clicks, mouse movements, etc., or some occurrence such as system generated
notifications. Applications need to respond to events when they occur. For example, interrupts. Events are used for inter-
process communication.

The events are declared and raised in a class and associated with the event handlers using delegates within the same class
or some other class. The class containing the event is used to publish the event. This is called the publisher class. Some other
class that accepts this event is called the subscriber class. Events use the publisher-subscriber model.

A publisher is an object that contains the definition of the event and the delegate. The event-delegate association is also
defined in this object. A publisher class object invokes the event and it is notified to other objects.

A subscriber is an object that accepts the event and provides an event handler. The delegate in the publisher class invokes
the method (event handler) of the subscriber class.

public delegate string MyDel(string str);

class EventProgram
{
    event MyDel MyEvent;

    public EventProgram() { this.MyEvent += new MyDel(this.WelcomeUser); }

    public string WelcomeUser(string username) { return "Welcome " + username; }

    static void Main()
    {
        EventProgram obj1 = new EventProgram();

        string result = obj1.MyEvent("Tutorial");

        Console.WriteLine(result);
    }
}

 Reflection 
Reflection objects are used for obtaining type information at runtime. The classes that give access to the metadata of a
running program are in the System.Reflection namespace. The System.Reflection namespace contains classes that allow you
to obtain information about the application and to dynamically add types, values, and objects to the application.

Applications of Reflection:

It allows viewing attribute information at runtime.


It allows examining various types in an assembly and instantiating these types.

Viewing Metadata

The MemberInfo object from the System.Reflection namespace needs to be initialized for discovering the attributes associated
with a class.

System.Reflection.MemberInfo info = typeof(MyClass);

object[] attributes = info.GetCustomAttributes(true);

for (int i = 0; i < attributes.Length; i++)
{
    System.Console.WriteLine(attributes[i]);
}

//Private Method

MethodInfo dynMethod = this.GetType().GetMethod("Draw_" + itemType,

BindingFlags.NonPublic | BindingFlags.Instance);

dynMethod.Invoke(this, new object[] { methodParams });

 Thread vs. Task 


What is a Task?

A task is something you want done: a set of program instructions loaded into memory. When program instructions are loaded
into memory, this is commonly called a process or task, and the two terms are often used as synonyms. A Task will by default
use the ThreadPool, which saves resources, as creating threads can be expensive. A Task can tell you whether the work is
completed and whether the operation returns a result. You can also see a Task as a higher-level abstraction on top of threads.

What is a Thread?

A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. A thread has its own
program area and memory area. A thread of execution is the smallest sequence of programmed instructions that can be
managed independently by a scheduler. Threads are not a .NET construct; they are built into your operating system. The
Thread class from .NET is just a way to create and manage threads. Threads can themselves split into two or more
simultaneously running tasks.

Differences Between Task And Thread

Task is more abstract than a thread. It is generally advised to use tasks instead of threads, as tasks are scheduled on the
thread pool, which reuses already created threads to improve performance.
A task can return a result. There is no direct mechanism to return a result from a thread.
A task supports cancellation through the use of cancellation tokens, but a thread doesn't.
A task can be composed of multiple operations happening at the same time, while a thread can only run one thing at a time.
You can attach a task to a parent task, so the parent can wait for its child tasks to complete.
When using a thread, if we get an exception in a long-running method, it is not possible to catch the exception in the
parent function, but the same can easily be caught if we are using tasks.
You can easily build chains of tasks. You can specify when a task should start after the previous task and you can specify
if there should be a synchronization context switch. That gives you the great opportunity to run a long running task in
background and after that a UI refreshing task on the UI thread.
A task is by default a background task. You cannot have a foreground task. On the other hand a thread can be
background or foreground.
The default TaskScheduler will use thread pooling, so some Tasks may not start until other pending Tasks have
completed. If you use Thread directly, every use will start a new Thread.
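
A small sketch showing a Task that returns a result and supports cancellation (illustrative only):

using var cts = new CancellationTokenSource();

Task<int> task = Task.Run(() =>
{
    cts.Token.ThrowIfCancellationRequested();   // cooperative cancellation
    return 2 + 2;                               // a Task can return a result
}, cts.Token);

Console.WriteLine(task.Result);                 // 4 (blocks; in async code prefer 'await task')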
 IEnumerable vs. IQueryable vs. IList 
IEnumerable describes behavior, while List is an implementation of that behavior. When you use IEnumerable, you give the
compiler a chance to defer work until later, possibly optimizing along the way. If you use ToList() you force the compiler to reify
the results right away.

IEnumerable

IEnumerable can move forward only over a collection; it can't move backward or jump between items.
IEnumerable is best for querying data from in-memory collections like List, Array, etc.
IEnumerable doesn't support adding or removing items from the list.
Using IEnumerable we can find out the number of elements in the collection only after iterating over it.
IEnumerable supports deferred execution.
IEnumerable supports further filtering.

IList

IList is used to access an element at a specific position/index in a list.


Like IEnumerable, IList is also best for querying data from in-memory collections like List, Array, etc.
IList is useful when you want to add or remove items from the list.
IList can find out the number of elements in the collection without iterating over it.

IQueryable

IQueryable can move forward only over a collection; it can't move backward or jump between items.
IQueryable is best for querying data from out-of-memory sources (like a remote database or service).
When querying data from a database, IQueryable executes the SELECT query on the server side with all filters applied.
IQueryable is suitable for LINQ to SQL queries.
IQueryable supports deferred execution.
IQueryable supports custom queries using the CreateQuery and Execute methods.
IQueryable supports lazy loading, hence it is suitable for scenarios like paging.
Extension methods on IQueryable take expression objects, i.e. expression trees.
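
A brief sketch of deferred execution with IEnumerable versus immediate execution with ToList (illustrative data):

// requires: using System.Collections.Generic; using System.Linq;
var numbers = new List<int> { 1, 2, 3, 4, 5 };

IEnumerable<int> query = numbers.Where(n => n > 2);        // not executed yet (deferred)
numbers.Add(6);

Console.WriteLine(query.Count());                          // 4 - the new element is included

List<int> snapshot = numbers.Where(n => n > 2).ToList();   // executed (reified) right away
numbers.Add(7);
Console.WriteLine(snapshot.Count);                         // still 4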

 GC 
Automatic memory management is made possible by Garbage Collection in .NET Framework. When a class object is created
at runtime, certain memory space is allocated to it in the heap memory. However, after all the actions related to the object are
completed in the program, the memory space allocated to it is a waste as it cannot be used. In this case, garbage collection is
very useful as it automatically releases the memory space after it is no longer required.

Garbage collection will always work on Managed Heap and internally it has an Engine which is known as the Optimization
Engine.

Garbage Collection occurs if at least one of multiple conditions is satisfied. These conditions are given as follows:

If the system has low physical memory, then garbage collection is necessary.
If the memory allocated to various objects in the heap memory exceeds a pre-set threshold, then garbage collection
occurs.
If the GC.Collect method is called, then garbage collection occurs. However, this method is only called under unusual
situations as normally garbage collector runs automatically.
 yield 
You use a yield return statement to return each element one at a time.

The sequence returned from an iterator method can be consumed by using a foreach statement or LINQ query. Each iteration
of the foreach loop calls the iterator method. When a yield return statement is reached in the iterator method, expression is
returned, and the current location in code is retained. Execution is restarted from that location the next time that the iterator
function is called.

public int SumOfOdds(int from, int to)
{
    var array = GetOddNumbers(from, to);
    var sum = 0;
    foreach (var item in array)
        sum += item;
    return sum;
}

public IEnumerable<int> GetOddNumbers(int from, int to)
{
    for (int i = from; i < to; i++)
        if (i % 2 != 0)
            yield return i;
}

 .Net Framework vs. Core vs. Standard 


.NET Framework is a framework for building and managing Windows and web-based applications. This is the older framework
created by Microsoft and it provides an end-to-end solution. It does not support cross-platform deployment.

.NET Core is a cross-platform and open-source framework for building applications which can run on any platform, like Mac,
Linux or Windows. It is also created by Microsoft. It is not a new version of .NET Framework; it is a totally new framework
written from scratch.

.Net Standard is a specification that can be used across all .NET implementations. It is used for developing library projects
only. This means if we are creating a library in .NET Standard, we can use those in .NET Framework and .NET Core.
 WEB and DNS 
1. You Enter a URL like https://www.google.com/
2. Your Browser Uses DNS to Find the Website's IP Address
DNS translates human-friendly URLs such as https://www.google.com/ into computer-friendly IP addresses such as 123.45.67.89.
Depending on whether you've visited that website recently, your browser could find this DNS information from several
sources, including your computer or your internet service provider.
3. Your Browser Requests a Connection to the Website

Once your browser has used DNS to find the IP address of the website you want to connect to, it starts to establish a
connection. To do this, it runs through a three-step handshake process:
Your computer asks the website server if it's open to establishing new connections.
If the website can do so, it acknowledges that you are clear to connect.
Your computer then sends an acknowledgment that it received the confirmation.
4. Your Browser Downloads Website Data
Next, your browser sends a request to the website asking to download its data. This contains some additional
information about what browser you're using and what the purpose of the connection is.
The server receives this request, and then generates a response in a particular format. It sends this response back to
your browser.
Now comes the fun part! Your browser receives the response, and uses it to display the website you requested. You'll
see the page in its entirety after just a moment, and can interact with it as needed.

 HTTP vs. HTTPS 


HTTP is the Hypertext Transfer Protocol. HTTP offers a set of rules and standards which govern how information can be
transmitted on the World Wide Web. HTTP provides standard rules for web browsers and servers to communicate. HTTP is an
application layer network protocol which is built on top of TCP. HTTP uses hypertext (structured text) which establishes the
logical link between nodes containing text. It is also known as a "stateless protocol", as each command is executed separately,
without using any reference to the previously run command. It uses port 80.

HTTPS stands for Hyper Text Transfer Protocol Secure. It is a highly advanced and secure version of HTTP. It uses port 443
for data communication. It allows secure transactions by encrypting the entire communication with SSL/TLS. It is a
combination of the SSL/TLS protocol and HTTP. It provides encrypted and secure identification of a network server.

KEY DIFFERENCE

HTTP lacks security mechanism to encrypt the data whereas HTTPS provides SSL or TLS Digital Certificate to secure
the communication between server and client.
HTTP operates at the Application Layer; HTTPS is HTTP over SSL/TLS, with encryption handled by the TLS layer that sits between the application and transport layers.
HTTP by default operates on port 80 whereas HTTPS by default operates on port 443.
HTTP transfers data in plain text while HTTPS transfers data in cipher text (encrypt text).
HTTP is fast as compared to HTTPS because HTTPS consumes computation power to encrypt the communication
channel.

SSL stands for Secure Sockets Layer and, in short, it's the standard technology for keeping an internet connection secure and
safeguarding any sensitive data that is being sent between two systems, preventing criminals from reading and modifying any
information transferred, including potential personal details.
 CORS 
CORS stands for Cross-Origin Resource Sharing.

It’s a security concept built into modern browsers.

And it has one simple goal: Resources (e.g. endpoints, data) exposed by a server should not be accessible by some random
other server.

A frontend served by Server A, by default, is not able to access resources exposed by Server B.

Only frontend pages served by Server B will be able to access resources (e.g. API routes) exposed by Server B.

So, by default, frontend and backend need to have the same origin - hence the name: Cross-Origin Resource Sharing.

By default, sharing across different origins (= servers) is not allowed.

Which headers are these? Here are the three most important ones which you typically need:

Access-Control-Allow-Origin: This header controls which other domains should be allowed to access the resources. This
can be a wildcard (*) to allow all other origins to access the resources but you could also lock it down to specific domains.
Access-Control-Allow-Methods: This header controls which Http methods are supported - methods not specified here will
yield to CORS errors.
Access-Control-Allow-Headers: This header controls which extra headers the client (!) may send with its request. If other
headers are added, the request will lead to a CORS error response.

If you send a request to an API with some other tool (e.g. with Postman) you’ll have no problems getting that restricted data.
Postman simply doesn’t care about CORS headers.

So CORS is just a browser concept and not a strong security mechanism.

It allows you to restrict which other web apps may use your backend resources but that’s all. Definitely better than nothing but
not something you should use as a content-protection mechanism!
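
A hedged sketch of configuring these headers in ASP.NET Core (the policy name and origin are placeholders):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    options.AddPolicy("FrontendOnly", policy =>
        policy.WithOrigins("https://my-frontend.example")   // Access-Control-Allow-Origin
              .WithMethods("GET", "POST")                   // Access-Control-Allow-Methods
              .WithHeaders("Content-Type"));                // Access-Control-Allow-Headers
});

var app = builder.Build();
app.UseCors("FrontendOnly");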

 Abstract Class vs. Interface 


If many implementations are of the same kind and use common behavior, then it is superior to use abstract class. It can be
fully, partially or not implemented.

If many implementations only share methods, then it is superior to use Interface. It should be fully implemented.

Abstract classes provide you the flexibility to have certain concrete methods and some other methods that the derived classes
should implement. By contrast, if you use interfaces, you would need to implement all the methods in the class that extends
the interface. An abstract class is a good choice if you have plans for future expansion – i.e. if a future expansion is likely in
the class hierarchy. If you would like to provide support for future expansion when using interfaces, you’ll need to extend the
interface and create a new one.
 Sockets 
Sockets allow communication between two different processes on the same or different machines.

Socket Types

There are four types of sockets available to the users. The first two are most commonly used and the last two are rarely used.

Stream Sockets − Delivery in a networked environment is guaranteed. If you send through the stream socket three items
"A, B, C", they will arrive in the same order − "A, B, C". These sockets use TCP (Transmission Control Protocol) for data
transmission.
Datagram Sockets − Delivery in a networked environment is not guaranteed. They're connectionless because you don't
need to have an open connection as in Stream Sockets − you build a packet with the destination information and send it
out. They use UDP (User Datagram Protocol).
Raw Sockets − These provide users access to the underlying communication protocols, which support socket
abstractions. These sockets are normally datagram oriented, though their exact characteristics are dependent on the
interface provided by the protocol.
Sequenced Packet Sockets − They are similar to a stream socket, with the exception that record boundaries are
preserved. This interface is provided only as a part of the Network Systems (NS) socket abstraction, and is very important
in most serious NS applications.

SOCKET = IP_ADDRESS + PORT

IP_ADDRESS = Computer

PORT = Application (one port - one app);
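
A minimal TCP (stream socket) client sketch in C# (the host, port, and request are placeholders):

// using System.Net.Sockets; using System.Text;
using var client = new TcpClient();
client.Connect("example.com", 80);                  // SOCKET = IP_ADDRESS + PORT

NetworkStream stream = client.GetStream();
byte[] request = Encoding.ASCII.GetBytes("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n");
stream.Write(request, 0, request.Length);

byte[] buffer = new byte[1024];
int read = stream.Read(buffer, 0, buffer.Length);
Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));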

 Cryptography 
Types of algorithms

Based on input data:

Block cipher - works with blocks of data of fixed length,


Stream cipher - works with individual bytes, e.g. characters

Based on key type:

Symmetric - same key for encryption and decryption (DES, AES, 3DES),


Asymmetric - different keys for encryption and decryption (RSA)

 Hash 
A one-way function (hash function) is a method of transformation of data of arbitrary length into data of fixed length.

password -> hash_function -> hash

A -> hash_function -> tgntkjgn

A -> hash_function -> tgntkjgn
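
For example, computing a SHA-256 hash in C# (a minimal sketch):

// using System.Security.Cryptography; using System.Text;
using var sha = SHA256.Create();
byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes("password"));
// the same input always produces the same fixed-length hash
Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));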


 Cookies, Sessions and Storages 
Cookies are text files with small pieces of data — like a username and password — that are used to identify your computer as
you use a computer network. Specific cookies known as HTTP cookies are used to identify specific users and improve your
web browsing experience.

Data stored in a cookie is created by the server upon your connection. This data is labeled with an ID unique to you and your
computer.

When the cookie is exchanged between your computer and the network server, the server reads the ID and knows what
information to specifically serve to you.

HTTP cookies, or internet cookies, are built specifically for Internet web browsers to track, personalize, and save information
about each user’s session. A “session” just refers to the time you spend on a site.

Cookies are created to identify you when you visit a new website. The web server — which stores the website’s data — sends
a short stream of identifying info to your web browser.

Browser cookies are identified and read by “name-value” pairs. These tell cookies where to be sent and what data to recall.

The server only sends the cookie when it wants the web browser to save it. If you’re wondering “where are cookies stored,” it’s
simple: your web browser will store it locally to remember the “name-value pair” that identifies you.

If a user returns to that site in the future, the web browser returns that data to the web server in the form of a cookie. This is
when your browser will send it back to the server to recall data from your previous sessions.

Here's how cookies are intended to be used:

Session management. For example, cookies let websites recognize users and recall their individual login information and
preferences, such as sports news versus politics.
Personalization. Customized advertising is the main way cookies are used to personalize your sessions. You may view
certain items or parts of a site, and cookies use this data to help build targeted ads that you might enjoy.
Tracking. Shopping sites use cookies to track items users previously viewed, allowing the sites to suggest other goods
they might like and keep items in shopping carts while they continue shopping.

Session cookies are used only while navigating a website. They are stored in random access memory and are never written to
the hard drive.

When the session ends, session cookies are automatically deleted. They also help the "back" button or third-party anonymizer
plugins work. These plugins are designed for specific browsers to work and help maintain user privacy.

Persistent cookies remain on a computer indefinitely, although many include an expiration date and are automatically removed
when that date is reached.

Persistent cookies are used for two primary purposes: Authentication and Tracking.

Because HTTP is stateless, in order to associate a request to any other request, you need a way to store user data between
HTTP requests.

Cookies or URL parameters are both suitable ways to transport data between 2 or more request. However they are not good
in case you don't want that data to be readable/editable on client side.
The solution is to store that data server side, give it an "id", and let the client only know (and pass back at every http request)
that id. There you go, sessions implemented.

localStorage

Main features:

Data is shared between all tabs and windows from the same origin.
The data will not expire. It will remain even after browser restart and survive OS reboot too.

sessionStorage

The sessionStorage object is used much less than localStorage.

Properties and methods are the same, however its functionality is much more limited:

The sessionStorage exists only within the current browser tab. Another tab with the same page will have a different
session storage.
However it is shared between iframes in the same tab (assuming they come from the same origin).
The data survives page refresh, but not closing/opening the tab.

 DB: Indexes 
Indexing is a way to optimize the performance of a database by minimizing the number of disk accesses required when a
query is processed. It is a data structure technique which is used to quickly locate and access the data in a database.

Indexes are created using a few database columns.

The first column is the Search key that contains a copy of the primary key or candidate key of the table. These values are
stored in sorted order so that the corresponding data can be accessed quickly.
The second column is the Data Reference or Pointer which contains a set of pointers holding the address of the disk
block where that particular key value can be found.

There are primarily three methods of indexing:

Clustered Indexing

When more than two records are stored in the same file, this type of storage is known as cluster indexing. By using
cluster indexing we can reduce the cost of searching, since multiple records related to the same thing are stored in one
place; it also supports the frequent joining of more than two tables (records).

Clustering index is defined on an ordered data file. The data file is ordered on a non-key field. In some cases, the index is
created on non-primary key columns which may not be unique for each record. In such cases, in order to identify the
records faster, we will group two or more columns together to get the unique values and create index out of them. This
method is known as the clustering index. Basically, records with similar characteristics are grouped together and indexes
are created for these groups.
Non-clustered or Secondary Indexing

A non-clustered index just tells us where the data lies, i.e. it gives us a list of virtual pointers or references to the location
where the data is actually stored. Data is not physically stored in the order of the index; instead, data is present in leaf
nodes. Think of the contents page of a book: each entry gives us the page number or location of the information stored.
The actual data (the information on each page of the book) is not reorganized, but we have an ordered reference (the
contents page) to where the data actually lies. We can have only dense ordering in a non-clustered index, as sparse
ordering is not possible because the data is not physically organized accordingly.
With the growth of the size of the database, indices also grow. As the index is stored in main memory, a single-level
index might become too large to store without multiple disk accesses. Multilevel indexing segregates the main
block into various smaller blocks so that each can be stored in a single block. The outer blocks are divided into inner
blocks, which in turn point to the data blocks. This can easily be stored in main memory with little overhead.

 DB: Transactions 
Transactions group a set of tasks into a single execution unit. Each transaction begins with a specific task and ends when all
the tasks in the group successfully complete. If any of the tasks fail, the transaction fails. Therefore, a transaction has only two
results: success or failure.

BEGIN TRANSACTION: It indicates the start point of an explicit or local transaction.


SET TRANSACTION: Places a name on a transaction.
COMMIT: If everything is in order with all statements within a single transaction, all changes are recorded together in the
database; this is called committing. The COMMIT command saves all the transactions to the database since the last COMMIT
or ROLLBACK command.
ROLLBACK: If any error occurs with any of the SQL grouped statements, all changes need to be aborted. The process of
reversing changes is called rollback. This command can only be used to undo transactions since the last COMMIT or
ROLLBACK command was issued.
SAVEPOINT: creates points within the groups of transactions in which to ROLLBACK.

A SAVEPOINT is a point in a transaction in which you can roll the transaction back to a certain point without rolling back
the entire transaction.
RELEASE SAVEPOINT: This command is used to remove a SAVEPOINT that you have created.

Properties of a transaction:

Atomicity
Consistency
Isolation
Durability

Isolation levels define the degree to which a transaction must be isolated from the data modifications made by any other
transaction in the database system. A transaction isolation level is defined by the following phenomena –

Dirty Read – A dirty read is the situation when a transaction reads data that has not yet been committed. For example,
let's say transaction 1 updates a row and leaves it uncommitted; meanwhile, transaction 2 reads the updated row. If
transaction 1 rolls back the change, transaction 2 will have read data that is considered never to have existed.
Non-Repeatable Read – A non-repeatable read occurs when a transaction reads the same row twice and gets a different value
each time. For example, suppose transaction T1 reads data. Due to concurrency, another transaction T2 updates the
same data and commits. Now, if transaction T1 re-reads the same data, it will retrieve a different value.
Phantom Read – Phantom Read occurs when two same queries are executed, but the rows retrieved by the two, are
different. For example, suppose transaction T1 retrieves a set of rows that satisfy some search criteria. Now, Transaction
T2 generates some new rows that match the search criteria for transaction T1. If transaction T1 re-executes the
statement that reads the rows, it gets a different set of rows this time.

Based on these phenomena, The SQL standard defines four isolation levels :

Read Uncommitted – Read Uncommitted is the lowest isolation level. At this level, one transaction may read not-yet-
committed changes made by another transaction, thereby allowing dirty reads. At this level, transactions are not isolated
from each other.
Read Committed – This isolation level guarantees that any data read is committed at the moment it is read. Thus it does
not allow dirty reads. The transaction holds a read or write lock on the current row, and thus prevents other transactions
from reading, updating or deleting it.
Repeatable Read – This is a more restrictive isolation level. The transaction holds read locks on all rows it references
and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update or delete these
rows, it avoids non-repeatable reads.
Serializable – This is the highest isolation level. Serializable execution is defined as an execution of operations in which
concurrently executing transactions appear to be executing serially.
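
A hedged ADO.NET sketch of an explicit transaction with an isolation level (the connection string, table, and SQL are placeholders):

// using System.Data; using Microsoft.Data.SqlClient;
using var conn = new SqlConnection("<connection string>");
conn.Open();

using var tx = conn.BeginTransaction(IsolationLevel.ReadCommitted);
try
{
    using var cmd = new SqlCommand(
        "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", conn, tx);
    cmd.ExecuteNonQuery();

    tx.Commit();      // all statements succeed together
}
catch
{
    tx.Rollback();    // any error aborts all changes
    throw;
}
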

 DB: Concurrency 
Optimistic locking

The optimistic locking strategy performs most of an operation’s logic under the assumption that no conflict has occurred
(which explains the name “optimistic”). Then, when actually saving/committing your changes, you verify that indeed no conflict
has occurred. If you detect that there was actually a conflict, you abort your operation.

When interacting with a relational database, checking if a conflict has occurred is typically done by either comparing the actual
data for an object to the data you based yourself on or by checking a version number or timestamp that you update every time
a change is made to the object. It seems that the approach with the version number is the most commonly used one.

The typical use case for optimistic locking is retrieving an object, making some changes to it (possibly done by a user) and
then saving the result while verifying that no other changes have been made in the meantime. However, it also allows you to
lock objects that you need to stay unchanged until after you complete your operation. You can achieve this by simply saving
the object as you retrieved it. If this doesn’t yield a conflict, you know that the object hasn’t changed in the meantime.

A big benefit of optimistic locking is the flexibility it offers. You don’t have to care about when and where the “base version” of
your object was retrieved. There’s no need for it to be retrieved inside the same transaction where you save the object. This is
especially desirable when you need to wait for user input to make the changes, like in our example with users editing
descriptions. In such a case, it would be very impractical to keep a database transaction open while a user edits an item.
Typically, the user will make separate calls to retrieve the item and to save it, which may even be handled by different
instances of your application in a cluster.

Some sources claim that, when using optimistic locking, deadlocks are not possible because it never issues any database-
level locks. Further down this post, I will show that this is not true and that optimistic locking (combined with the common
transaction isolation level Read Committed) actually acquires a database-level lock from when you save data until your
transaction is committed. However, the flexibility of optimistic locking makes it straightforward to prevent deadlocks by always
saving objects in the same order (and thus always acquiring database-level locks in the same order).

An important drawback of optimistic locking is that, if a conflict does occur, you need to deal with your operation being
aborted. If your operation simply retrieves an object and immediately changes and saves it, it can make sense to retry your
entire operation (retrieve current version, change and save it).
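
A hedged sketch of optimistic locking with a version column (the table, columns, and method are made up; ADO.NET style):

// using Microsoft.Data.SqlClient;
static bool TrySaveDescription(SqlConnection conn, int itemId, long expectedVersion, string newDescription)
{
    // the UPDATE only succeeds if the row still has the version we originally read
    using var cmd = new SqlCommand(
        "UPDATE Items SET Description = @desc, Version = Version + 1 " +
        "WHERE Id = @id AND Version = @version", conn);
    cmd.Parameters.AddWithValue("@desc", newDescription);
    cmd.Parameters.AddWithValue("@id", itemId);
    cmd.Parameters.AddWithValue("@version", expectedVersion);

    // false means a conflict: someone else changed the row in the meantime - abort or retry
    return cmd.ExecuteNonQuery() == 1;
}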

Pessimistic locking, as the name implies, uses a less optimistic approach. It assumes that conflicts will occur and it actively
blocks anything that could possibly cause a conflict.

When interacting with a relational database, this is done by actively locking database rows or even database tables when you
retrieve the data you will operate on. The locks will be held until your transaction completes. There are generally two types of
locks: shared locks (read locks) and exclusive locks (write locks). Shared locks are used for reading data that you simply need
to stay the same until your operation completes. A shared lock allows others to take a shared lock as well (you still allow
others to read the object as this does not create a conflict). Exclusive locks are used for reading and updating data that you
want to change. Exclusive locks do not allow anyone else to take a lock on the same data. Any attempts to obtain a lock when
it’s not allowed at the time will block until the conflicting locks are released.

The main benefit of pessimistic locking is that you can completely prevent conflicts from occurring, meaning that you don’t
have to deal with the situation where you have a conflict. This can also make it the best-performing strategy in high-
concurrency environments where the chance of having conflicts is high.

A drawback of pessimistic locking is that you miss the flexibility of optimistic locking: when using pessimistic locking,
everything needs to happen inside a single database transaction. This makes it less applicable to situations where you need
to wait for user input or an expensive operation between retrieving and saving an object. It can also limit your options for
organizing your code.

Pessimistic locking may also unnecessarily limit concurrency, preventing possibly conflicting operations from occurring
concurrently even if there is a very low chance that there will actually be a conflict.

When using pessimistic locking, you need to take special care to prevent deadlocks. First of all, you typically acquire
database-level locks at the start of your operation and you keep them until the operation completes, making for a relatively
large time window for deadlocks to occur. Additionally, when retrieving multiple objects, the actual objects to retrieve may
depend on objects retrieved earlier. This means that you have less flexibility regarding the order in which you obtain locks. It
may be tempting to “solve” this by first retrieving an object without any locking and later retrieving it again with the required
lock. However, in that case, you need to check for changes made in the meantime, which is basically optimistic locking and
introduces the possibility of conflicts occurring.

Additionally, if you use pessimistic locking in situations where you need to wait for user input, you need to deal with situations
in which the user doesn’t provide that input in a timely fashion (e.g., the user goes for lunch or forgets about what he was
doing, blocking other users while nothing is happening).
 SOLID 

 What is SOLID? 

SOLID is a mnemonic acronym introduced by Michael Feathers for the first five principles named by Robert Martin, which
stand for five basic principles of object-oriented programming and design.

S: Single Responsibility Principle (SRP)


O: Open/Closed Principle (OCP)
L: Liskov Substitution Principle (LSP)
I: Interface Segregation Principle (ISP)
D: Dependency Inversion Principle (DIP)

 Single Responsibility Principle (SRP) 

As stated in Clean Code, "There should never be more than one reason for a class to change". It's tempting to jam-pack a
class with a lot of functionality, like when you can only take one suitcase on your flight. The issue with this is that your class
won't be conceptually cohesive and it will give it many reasons to change. Minimizing the amount of times you need to change
a class is important.

It's important because if too much functionality is in one class and you modify a piece of it, it can be difficult to understand how
that will affect other dependent modules in your codebase.

 Bad: 

class UserSettings
{
    private User User;

    public UserSettings(User user) { }

    public void ChangeSettings(Settings settings) { }

    private bool VerifyCredentials() { }
}

 Good: 

class UserAuth
{
    private User User;

    public UserAuth(User user) { }

    public bool VerifyCredentials() { }
}

class UserSettings
{
    private User User;

    private UserAuth Auth;

    public UserSettings(User user) { }

    public void ChangeSettings(Settings settings) { }
}

 Open/Closed Principle (OCP) 

As stated by Bertrand Meyer, "software entities (classes, modules, functions, etc.) should be open for extension, but closed for
modification." What does that mean though? This principle basically states that you should allow users to add new
functionalities without changing existing code.

 Bad: 

abstract class AdapterBase
{
    protected string Name;

    public string GetName() { return Name; }
}

class AjaxAdapter : AdapterBase
{
    public AjaxAdapter() { Name = "ajaxAdapter"; }
}

class NodeAdapter : AdapterBase
{
    public NodeAdapter() { Name = "nodeAdapter"; }
}

class HttpRequester : AdapterBase
{
    private readonly AdapterBase Adapter;

    public HttpRequester(AdapterBase adapter) { Adapter = adapter; }

    public bool Fetch(string url)
    {
        var adapterName = Adapter.GetName();

        return adapterName == "ajaxAdapter" ? MakeAjaxCall(url) : MakeHttpCall(url);
    }

    private bool MakeAjaxCall(string url) { }

    private bool MakeHttpCall(string url) { }
}

 Good: 

interface IAdapter
{
    bool Request(string url);
}

class AjaxAdapter : IAdapter
{
    public bool Request(string url) { }
}

class NodeAdapter : IAdapter
{
    public bool Request(string url) { }
}

class HttpRequester
{
    private readonly IAdapter Adapter;

    public HttpRequester(IAdapter adapter) { Adapter = adapter; }

    public bool Fetch(string url) { return Adapter.Request(url); }
}

 Liskov Substitution Principle (LSP) 

This is a scary term for a very simple concept. It's formally defined as "If S is a subtype of T, then objects of type T may be
replaced with objects of type S (i.e., objects of type S may substitute objects of type T) without altering any of the desirable
properties of that program (correctness, task performed, etc.)." That's an even scarier definition.

The best explanation for this is if you have a parent class and a child class, then the base class and child class can be used
interchangeably without getting incorrect results.

 Bad: 

public class Apple
{
    public virtual string GetColor()
    {
        return "Red";
    }
}

public class Orange : Apple
{
    public override string GetColor()
    {
        return "Orange";
    }
}

Apple apple = new Orange();

Console.WriteLine(apple.GetColor());

 Good: 

public abstract class Fruit
{
    public abstract string GetColor();
}

public class Apple : Fruit
{
    public override string GetColor()
    {
        return "Red";
    }
}

public class Orange : Fruit
{
    public override string GetColor()
    {
        return "Orange";
    }
}

Fruit fruit = new Orange();

Console.WriteLine(fruit.GetColor());

fruit = new Apple();

Console.WriteLine(fruit.GetColor());

 Interface Segregation Principle (ISP) 

ISP states that "Clients should not be forced to depend upon interfaces that they do not use."

A good example that demonstrates this principle is classes that require large settings objects. Not requiring clients to set up
huge numbers of options is beneficial, because most of the time they won't need all of the settings. Making them optional
helps prevent having a "fat interface".

 Bad: 

public interface IEmployee
{
    void Work();

    void Eat();
}

public class Human : IEmployee
{
    public void Work() { }

    public void Eat() { }
}

public class Robot : IEmployee
{
    public void Work() { }

    public void Eat() { }
}

 Good: 

Not every worker is an employee, but every employee is a worker.

public interface IWorkable
{
    void Work();
}

public interface IFeedable
{
    void Eat();
}

public interface IEmployee : IFeedable, IWorkable { }

public class Human : IEmployee
{
    public void Work() { }

    public void Eat() { }
}

// robot can only work

public class Robot : IWorkable
{
    public void Work() { }
}

 Dependency Inversion Principle (DIP) 

High-level modules should not depend on low-level modules. Both should depend on abstractions.

You've seen an implementation of this principle in the form of Dependency Injection (DI). While they are not identical concepts,
DIP keeps high-level modules from knowing the details of their low-level modules and from setting them up.

It can accomplish this through DI. A huge benefit of this is that it reduces the coupling between modules. Coupling is a very
bad development pattern because it makes your code hard to refactor.

 Bad: 

public abstract class EmployeeBase
{
    public virtual void Work() { }
}

public class Human : EmployeeBase
{
    public override void Work() { /* ... */ }
}

public class Robot : EmployeeBase
{
    public override void Work() { /* ... */ }
}

public class Manager
{
    // The high-level Manager depends directly on the concrete Robot and Human classes.
    private readonly Robot _robot;
    private readonly Human _human;

    public Manager(Robot robot, Human human) { _robot = robot; _human = human; }

    public void Manage() { _robot.Work(); _human.Work(); }
}

 Good: 

public interface IEmployee
{
    void Work();
}

public class Human : IEmployee
{
    public void Work() { /* ... */ }
}

public class Robot : IEmployee
{
    public void Work() { /* ... */ }
}

public class Manager
{
    // The high-level Manager depends only on the IEmployee abstraction.
    private readonly IEnumerable<IEmployee> _employees;

    public Manager(IEnumerable<IEmployee> employees) { _employees = employees; }

    public void Manage() { foreach (var employee in _employees) { employee.Work(); } }
}
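
A minimal usage sketch (the wiring shown here would normally live in a composition root or DI container):

var employees = new List<IEmployee> { new Human(), new Robot() };
var manager = new Manager(employees);
manager.Manage(); // Manager works against the abstraction, so new IEmployee types can be added later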

 Don’t repeat yourself (DRY) 

Do your absolute best to avoid duplicate code. Duplicate code is bad because it means that there's more than one place to
alter something if you need to change some logic.

 Bad: 

public void ShowDeveloperList(List<Developer> developers)
{
    foreach (var developer in developers)
    {
        var expectedSalary = developer.CalculateExpectedSalary();
        var experience = developer.GetExperience();
        var githubLink = developer.GetGithubLink();

        var data = new[] { expectedSalary, experience, githubLink };

        render(data);
    }
}

public void ShowManagerList(List<Manager> managers)
{
    foreach (var manager in managers)
    {
        var expectedSalary = manager.CalculateExpectedSalary();
        var experience = manager.GetExperience();
        var githubLink = manager.GetGithubLink();

        var data = new[] { expectedSalary, experience, githubLink };

        render(data);
    }
}

 Good: 

public void ShowList(List<Employee> employees)
{
    foreach (var employee in employees)
    {
        render(new[] {
            employee.CalculateExpectedSalary(),
            employee.GetExperience(),
            employee.GetGithubLink()
        });
    }
}

 Patterns 
Design patterns are typical solutions to common problems in software design. Each pattern is like a blueprint that you can
customize to solve a particular design problem in your code.

 Strategy 

Strategy is a behavioral design pattern that lets you define a family of algorithms, put each of them into a separate class, and
make their objects interchangeable.

Use the Strategy pattern when you want to use different variants of an algorithm within an object and be able to switch
from one algorithm to another during runtime.
Use the pattern when your class has a massive conditional operator that switches between different variants of the same
algorithm.
Use the Strategy when you have a lot of similar classes that only differ in the way they execute some behavior.

public class Person
{
    public string Name { get; set; }

    public int Age { get; set; }
}

public interface ISortStrategy
{
    List<Person> Sort(List<Person> list);
}

public class SortByName : ISortStrategy
{
    public List<Person> Sort(List<Person> list) { return list.OrderBy(p => p.Name).ToList(); }
}

public class SortByAge : ISortStrategy
{
    public List<Person> Sort(List<Person> list) { return list.OrderBy(p => p.Age).ToList(); }
}

public class CollectionOfPeople
{
    public List<Person> People { get; set; }

    // The sorting algorithm is supplied by the caller and can be swapped at runtime.
    public List<Person> Sort(ISortStrategy strategy) { return strategy.Sort(People); }
}
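
A minimal usage sketch, assuming the Sort implementations above order by the relevant property:

var people = new CollectionOfPeople { People = new List<Person>() };

var byName = people.Sort(new SortByName()); // pick one algorithm...
var byAge = people.Sort(new SortByAge()); // ...and swap to another at runtime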

 Adapter 

An adapter wraps one of the objects to hide the complexity of conversion happening behind the scenes. The wrapped object
isn’t even aware of the adapter.

Use the Adapter class when you want to use some existing class, but its interface isn’t compatible with the rest of your
code.
Use the pattern when you want to reuse several existing subclasses that lack some common functionality that can’t be
added to the superclass.

// The existing class whose interface is incompatible with the rest of the code.
public class WebServer
{
    public void SpecificRequest() { /* ... */ }
}

// The interface the client code expects.
public interface IServer
{
    void Request();
}

// The adapter translates Request() into SpecificRequest(); WebServer is unaware of it.
public class CustomServer : IServer
{
    private WebServer webServer;

    public CustomServer() { webServer = new WebServer(); }

    public void Request()
    {
        webServer.SpecificRequest();
    }
}
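
A minimal usage sketch showing the client working only with the IServer interface:

IServer server = new CustomServer();
server.Request(); // translated into WebServer.SpecificRequest() behind the scenes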

 State 

The State pattern suggests that you create new classes for all possible states of an object and extract all state-specific
behaviors into these classes.

Use the State pattern when you have an object that behaves differently depending on its current state, the number of
states is enormous, and the state-specific code changes frequently.
Use State when you have a lot of duplicate code across similar states and transitions of a condition-based state machine.

public interface IState
{
    string State { get; set; }
}

public class CloseState : IState
{
    public string State { get; set; }

    public CloseState() { State = "Closed"; }
}

public class OpenState : IState
{
    public string State { get; set; }

    public OpenState() { State = "Opened"; }
}

public class Switch
{
    public IState State { get; private set; }

    public Switch() { State = new CloseState(); }

    public void ChangeState(IState state) { State = state; }
}
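
A minimal usage sketch (the variable name lightSwitch is illustrative):

var lightSwitch = new Switch(); // starts in CloseState
lightSwitch.ChangeState(new OpenState());
Console.WriteLine(lightSwitch.State.State); // "Opened"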

 Singleton 
Singleton is a creational design pattern that lets you ensure that a class has only one instance, while providing a global
access point to this instance.

Use the Singleton pattern when a class in your program should have just a single instance available to all clients.
Use the Singleton pattern when you need stricter control over global variables.

public class PasswordManager
{
    private static PasswordManager instance;

    // A private constructor prevents direct instantiation.
    private PasswordManager() { }

    public static PasswordManager GetInstance()
    {
        if (instance == null)
        {
            instance = new PasswordManager();
        }

        return instance;
    }

    public string GetPasswordHash(string password) { /* ... */ return string.Empty; }
}

var hash = PasswordManager.GetInstance().GetPasswordHash("password");


 Command 
Command is a behavioral design pattern that turns a request into a stand-alone object that contains all information about the
request.

Use the Command pattern when you want to parametrize objects with operations.
Use the Command pattern when you want to queue operations, schedule their execution, or execute them remotely.
Use the Command pattern when you want to implement reversible operations.

public interface ICommand
{
    int Execute(int value);

    int UnExecute(int value);
}

public class MinusCommand : ICommand
{
    public int Execute(int value) { return value - 1; }

    public int UnExecute(int value) { return value + 1; }
}

public class PlusCommand : ICommand
{
    public int Execute(int value) { return value + 1; }

    public int UnExecute(int value) { return value - 1; }
}

public class Calculations
{
    private int result = 0;

    // Executed commands are kept as history so operations can be reversed and repeated.
    private Stack<ICommand> undoStack = new Stack<ICommand>();
    private Stack<ICommand> redoStack = new Stack<ICommand>();

    public void Execute(ICommand command) { result = command.Execute(result); undoStack.Push(command); }

    public void Undo() { var command = undoStack.Pop(); result = command.UnExecute(result); redoStack.Push(command); }

    public void Redo() { var command = redoStack.Pop(); result = command.Execute(result); undoStack.Push(command); }
}
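
A minimal usage sketch, assuming the stack-based history shown above:

var calculations = new Calculations();
calculations.Execute(new PlusCommand()); // result: 1
calculations.Execute(new PlusCommand()); // result: 2
calculations.Undo(); // result: 1
calculations.Redo(); // result: 2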

 Facade 
Facade is a structural design pattern that provides a simplified interface to a library, a framework, or any other complex set of
classes.

Use the Facade pattern when you need to have a limited but straightforward interface to a complex subsystem.
Use the Facade when you want to structure a subsystem into layers.

public class VideoConverter
{
    public File Convert(File file, string format) { /* ... */ return file; }
}

public class FileReader
{
    public File ReadFile(string fileName) { /* ... */ return null; }
}

public class VideoEditor
{
    public File EditVideo(File file) { /* ... */ return file; }
}

public class AudioMixer
{
    public File EditAudio(File file) { /* ... */ return file; }
}

// The facade: one simple method hides the whole reader/editor/mixer/converter subsystem.
public class VideoCreator
{
    public File Create(string fileName, string format)
    {
        var file = new FileReader().ReadFile(fileName);
        var editedVideo = new VideoEditor().EditVideo(file);
        var editedVideoWithAudio = new AudioMixer().EditAudio(editedVideo);

        return new VideoConverter().Convert(editedVideoWithAudio, format);
    }
}
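
A minimal usage sketch (the file name is illustrative):

var creator = new VideoCreator();
var video = creator.Create("holiday.avi", "mp4"); // one call instead of driving four subsystem classes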

 Proxy 
Proxy is a structural design pattern that lets you provide a substitute or placeholder for another object. A proxy controls access
to the original object, allowing you to perform something either before or after the request gets through to the original object.

Lazy initialization (virtual proxy). This is when you have a heavyweight service object that wastes system resources by
being always up, even though you only need it from time to time.
Access control (protection proxy). This is when you want only specific clients to be able to use the service object; for
instance, when your objects are crucial parts of an operating system and clients are various launched applications
(including malicious ones).
Local execution of a remote service (remote proxy). This is when the service object is located on a remote server.
Logging requests (logging proxy). This is when you want to keep a history of requests to the service object.
Caching request results (caching proxy). This is when you need to cache results of client requests and manage the life
cycle of this cache, especially if results are quite large.

public interface IService
{
    void Operation();
}

public class Service : IService
{
    public void Operation() { /* ... */ }
}

// A protection proxy: checks access before forwarding the call to the real service.
public class ServiceProxy : IService
{
    private IService service;

    public ServiceProxy(IService service) { this.service = service; }

    private bool CheckAccess() { /* ... */ return true; }

    public void Operation()
    {
        if (CheckAccess())
        {
            service.Operation();
        }
    }
}
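
A minimal usage sketch of the protection proxy:

IService service = new ServiceProxy(new Service());
service.Operation(); // the proxy checks access first, then delegates to the real Service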

 Template Method 
Template Method is a behavioral design pattern that defines the skeleton of an algorithm in the superclass but lets subclasses
override specific steps of the algorithm without changing its structure.

Use the Template Method pattern when you want to let clients extend only particular steps of an algorithm, but not the
whole algorithm or its structure.
Use the pattern when you have several classes that contain almost identical algorithms with some minor differences. As a
result, you might need to modify all classes when the algorithm changes.

public abstract class DataMiner
{
    // The template method: a fixed skeleton with two steps deferred to subclasses.
    public void Mine(string fileName)
    {
        var file = OpenFile(fileName);
        var analysis = AnalyzeData(file);
        var report = CreateReport(analysis);

        CloseFile(file);
    }

    protected abstract File OpenFile(string fileName);

    protected abstract void CloseFile(File file);

    protected Analysis AnalyzeData(File file) { /* ... */ return null; }

    protected Report CreateReport(Analysis analysis) { /* ... */ return null; }
}

public class CSVMiner : DataMiner
{
    protected override File OpenFile(string fileName) { /* CSV things */ return null; }

    protected override void CloseFile(File file) { /* CSV things */ }
}

public class XMLMiner : DataMiner
{
    protected override File OpenFile(string fileName) { /* XML things */ return null; }

    protected override void CloseFile(File file) { /* XML things */ }
}
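
A minimal usage sketch (the file names are illustrative):

DataMiner miner = new CSVMiner();
miner.Mine("data.csv"); // same skeleton, CSV-specific open/close steps

miner = new XMLMiner();
miner.Mine("data.xml"); // same skeleton, XML-specific open/close steps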

 Mediator 

Mediator is a behavioral design pattern that lets you reduce chaotic dependencies between objects. The pattern restricts
direct communications between the objects and forces them to collaborate only via a mediator object.

Use the Mediator pattern when it’s hard to change some of the classes because they are tightly coupled to a bunch of
other classes.
Use the Mediator when you find yourself creating tons of component subclasses just to reuse some basic behavior in
various contexts.

public class Component
{
    protected IMediator mediator;

    public Component(IMediator mediator) { this.mediator = mediator; }

    public virtual void Click() { mediator.Notify(this, "click"); }
}

public class Button : Component
{
    public Button(IMediator mediator) : base(mediator) { }

    public override void Click() { mediator.Notify(this, "click"); }
}

public class RadioButton : Component
{
    public RadioButton(IMediator mediator) : base(mediator) { }

    public override void Click() { mediator.Notify(this, "click"); }
}

public interface IMediator
{
    void Notify(Component sender, string eventName);
}

// The mediator: components report events to the Dialog instead of talking to each other directly.
public class Dialog : IMediator
{
    private Button button;

    private RadioButton radioButton;

    public Dialog() { button = new Button(this); radioButton = new RadioButton(this); }

    public void Notify(Component sender, string eventName)
    {
        if (sender == button && eventName == "click") { /* call actions on other elements */ }
        else if (sender == radioButton && eventName == "click") { /* call actions on other elements */ }
    }
}
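
A minimal sketch of the interaction flow (the components are private to the Dialog in this example, so the flow is shown as comments):

var dialog = new Dialog(); // the dialog wires up its own Button and RadioButton
// When the user clicks the button:
// button.Click() -> mediator.Notify(button, "click") -> the Dialog updates the other elements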

 Factory Method 

Factory Method is a creational design pattern that provides an interface for creating objects in a superclass, but allows
subclasses to alter the type of objects that will be created.

Use the Factory Method when you don’t know beforehand the exact types and dependencies of the objects your code
should work with.
Use the Factory Method when you want to provide users of your library with a way to extend its internal components.
Use the Factory Method when you want to save system resources by reusing existing objects instead of rebuilding them
each time.

public interface IButton
{
    void Click();
}

public class WindowsButton : IButton
{
    public void Click() { /* ... */ }
}

public class HTMLButton : IButton
{
    public void Click() { /* ... */ }
}

public abstract class Dialog
{
    // The factory method: each subclass decides which concrete button to create.
    public abstract IButton CreateButton();
}

public class WindowsDialog : Dialog
{
    public override IButton CreateButton() { return new WindowsButton(); }
}

public class WebDialog : Dialog
{
    public override IButton CreateButton() { return new HTMLButton(); }
}

IButton button;
Dialog dialog;

string os = "Windows";

dialog = os.Equals("Windows") ? (Dialog)new WindowsDialog() : new WebDialog();
button = dialog.CreateButton();

 Builder 

Builder is a creational design pattern that lets you construct complex objects step by step. The pattern allows you to produce
different types and representations of an object using the same construction code.

Use the Builder pattern to get rid of a “telescopic constructor”.


Use the Builder pattern when you want your code to be able to create different representations of some product (for
example, stone and wooden houses).
Use the Builder to construct Composite trees or other complex objects.

public class Car
{
    public string Company { get; set; }

    public string Model { get; set; }
}

public interface IBuilder
{
    void Reset();

    void SetCompany(string company);

    void SetModel(string model);
}

public class CarBuilder : IBuilder
{
    private Car car;

    public void Reset() { car = new Car(); }

    public void SetCompany(string company) { car.Company = company; }

    public void SetModel(string model) { car.Model = model; }

    public Car GetResult() { return car; }
}

// The director: knows which building steps produce a particular configuration.
public class Factory
{
    private IBuilder builder;

    public Factory(IBuilder builder) { this.builder = builder; }

    public void BMW320()
    {
        builder.Reset();
        builder.SetCompany("BMW");
        builder.SetModel("320");
    }
}

var builder = new CarBuilder();
var factory = new Factory(builder);

factory.BMW320();

var car = builder.GetResult();


 Next Pattern 
