
Dot Net_MQ

The document outlines the professional background and project experiences of a Microsoft .NET Developer with 10 years of experience, detailing expertise in various technologies including C#, ASP.NET, Azure, and AWS. It describes recent projects in the health and insurance domains, emphasizing the development of applications, migration to cloud services, and implementation of CI/CD pipelines. Additionally, it highlights challenges faced during project migrations and the developer's day-to-day activities within an Agile framework.


New Vendor Screening

06 December 2023 23:26

1. Intro: tell me about yourself


Version 1:
I'm <Your Name>, and I have 10 years of overall experience as a Microsoft .NET developer.
I have worked on C#, ASP.NET Web Forms, ASP.NET Core Web APIs, Windows Forms, MVC, and Web API.
On the database side, I have used SQL Server and Oracle.
On the client side, I have worked extensively with JavaScript, jQuery, and Angular.
I have also worked with Azure cloud services and DevOps operations such as CI/CD pipelines.
Along with development, I have good knowledge of writing unit tests using the NUnit and xUnit
frameworks; we achieved up to 80% code coverage in my current project.
I have worked in both Waterfall and Agile methodologies,
and used TFS, Azure DevOps, and Git repositories for source control.
I have good experience with service-testing tools such as SoapUI, Postman, and Swagger.
I have also worked with message-queue systems such as MSMQ, Azure Service Bus, and Kafka.
Along with development, I have worked closely with clients to analyze and confirm
requirements against their expectations.
I also have good experience coordinating with offshore teams: performing code reviews, explaining
requirements, and tracking status on a daily basis.

2. Tell me about your recent project (Health Domain)


My recent project is in the health domain. We are responsible for applications handling
providers, accumulators, enrollments, and medical/dental/vision claims, developed in .NET/.NET
Core. We also collect this information and generate files that internal
and external teams can consume.
We are currently modernizing one of the existing file-based systems into a REST-based
services system in Azure, migrating away from legacy on-premises infrastructure.
I have been involved in all phases of development, including design and architecture.
Designed and migrated tables from on-premises databases to Azure.
Designed and Developed applications using C#.Net, Angular, Web API, and Azure cloud
technologies.
Developed Timer-triggered, Cosmos DB-triggered, and Kafka-topic-triggered Azure
Function Apps using C#.NET.
Implemented token based and certificate based authentication.
Implemented & Configured APIs in Azure APIM with Products.
Developed applications using Azure Cosmos DB as the NoSQL database & implemented
CRUD operations. Improved performance of the Cosmos DB using indexes and deploying
them using IaC Pipelines.
Automated the testing using .Net and Selenium.
Created CI/CD pipelines using Azure Devops and implemented Fortify Scan, Whitesource
Scan, Automated unit testing, Jfrog Xray Scans for improving the code quality and code
coverage.
Developed IaC YAML pipelines using Bicep templates for creating Azure resources like
WebApp, Function App, App Insights, APIs & Products with Policies in APIM, Gateway
Policies, Key Vault, Action Groups, Alert Rules, Access Policies & IAM and deploying
them and Developed CI/CD Application pipelines.

3. Tell me about your project (Insurance Domain)


I have been working for X-Client for the last 2 years. As my client is in the insurance
domain, our project implements a B2B appointment application for ARX and non-ARX
appointments.
ARX shops are auto workshops directly connected to the client, while non-ARX shops are
agent auto shops. Whenever a customer has an accident or needs to make an insurance
claim, an appointment is booked based on availability.
The project has several modules: client registration and login, policy information,
claims and coverages, and appointment scheduling. The system connects to multiple
third-party and internal policy systems, and we implemented a number of
business-rule validations on effective and expiration dates.

I have been involved in all phases of development, including design and architecture.
Designed and migrated tables from on-premises databases to Azure.
Designed and Developed applications using C#.Net, Angular, Web API, and AWS Cloud
technologies.
Developed Timer-triggered, MongoDB-triggered, and ASB-topic-triggered Lambda
functions using C#.NET.
Implemented token based and certificate based authentication.
Implemented & Configured APIs in Azure APIM with Products.
Developed applications using Azure Cosmos DB as the NoSQL database & implemented
CRUD operations. Improved performance of the Cosmos DB using indexes and deploying
them using IaC Pipelines.
Created CI/CD pipelines using Azure Devops and implemented Sonar Scan, Whitesource
Scan, Automated unit testing, Jfrog Xray Scans for improving the code quality and code
coverage.
Developed IaC YAML pipelines using Bicep templates for creating Azure resources like
WebApp, Function App, App Insights, APIs & Products with Policies in APIM, Gateway
Policies, Key Vault, Action Groups, Alert Rules, Access Policies & IAM and deploying
them and Developed CI/CD Application pipelines.

4. Tell me about a challenging scenario in your recent project?


A few of the challenges I have faced recently:
a. Certificate-based authentication issue while migrating an application from .NET to .NET
Core:
We were migrating an existing project from .NET Framework to .NET Core. As part
of it, we exposed our public APIs through APIM and implemented certificate-based
authentication. Users were able to authenticate against our APIs with a client
certificate from within our network, but external clients could not. We
found that the application gateway was not forwarding the certificate passed with the client request.
We resolved the issue by creating an application gateway rewrite rule that passes the certificate
through server variables, so that APIM receives the client certificate originally sent.
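As a hedged sketch of the fix (the header name here is an assumption, not our actual configuration): once an Application Gateway rewrite rule forwards the client certificate in a custom header, an APIM inbound policy such as `check-header` can reject any call where that header is missing:

```xml
<!-- Hypothetical APIM inbound policy fragment: rejects requests that arrive
     without the certificate header the App Gateway rewrite rule adds. -->
<inbound>
    <base />
    <check-header name="X-Client-Cert" failed-check-httpcode="403"
                  failed-check-error-message="Client certificate not forwarded"
                  ignore-case="true" />
</inbound>
```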
b. Lift-and-shift migration issues:
In one of my previous projects, we migrated from on-premises to the cloud
using a lift-and-shift approach.
In this process, most of the business logic must not change; only
configurations, endpoint information, and infrastructure settings are
updated.
We migrated a large volume of applications this way,
and we faced several whitelisting and connection-failure issues.
To address this, we built a small tool that gathers all the endpoints from
every application, runs commands to validate connectivity on the required
ports, and generates a report once an application is successfully migrated.

5. Tell me about your day-to-day activities.

I have 10 years of experience in IT.


I started my career with .NET and have worked with ASP.NET Web Forms, MVC, and Web API, and
also developed SOA applications using WCF; I am currently working on .NET Core.
I have been using Angular, AngularJS, and React for client-side applications.
I have responsive-design experience with Bootstrap, CSS3, SASS, and HTML5.
I have worked with databases such as SQL Server and Oracle, and use Entity Framework, Dapper, and LINQ
to interact with the backend database and business models.
I have been using Agile for the last 4 years and have exposure to both Scrum and Kanban methodologies.
I have also worked with SSIS, SSRS, and Azure Data Factory for migrations and reporting.
On the cloud, most of my experience is in Azure, with App Services, Function Apps, Logic Apps, Azure
Storage, Key Vault, etc.

2nd (Health Insurance)

As the <this> team, we are responsible for collecting subscriber, accumulator, and claims
information and generating intermediate files for internal teams to consume when processing
claims. We developed and modernized the existing file-based system into an API-based system, plus a
web application to support backend operations for the new API system.

• Designed and Developed applications using C#.Net, Angular, Web API, and Azure cloud
technologies.
• Developed Timer Trigger and Kafka Topic Triggered Azure Function Apps using C#.Net.
• Implemented & Configured APIs in Azure APIM with Products.
• Developed applications using Azure Cosmos DB as the NoSQL database & implemented CRUD
operations.
• Developed IaC YAML pipelines using Bicep templates for creating Azure resources like WebApp,
Function App, App Insights, APIs & Products with Policies in APIM, Gateway Policies, Key Vault,
Action Groups, Alert Rules, Access Policies & IAM and deploying them and Developed CI/CD
Application pipelines.

6. Agile methodology or Roles & responsibilities

We are currently working with the Agile methodology, with 2-week sprints. Every sprint we
interact with business users for requirement gathering; once we have the
requirements, we prepare user stories and then estimate them
in Fibonacci points.
We use JIRA as our project-management tool. All user stories are uploaded into JIRA and
assigned to team members. Mostly I work on the development side: for a given
user story I work on the development or integration, either backend work using
Spring Boot services or frontend work in ReactJS.
Once development is done, we write unit tests and integration tests, run them, and
make sure test code coverage is at least 90%. We then deploy to the testing
environment and move on to UAT, where business users perform UAT testing before
the production deployment.
Before deploying, we give the business users a live demo of exactly what
functionality changed in that sprint, and then we deploy.
7. What is your current project application architecture?

In my current project, we have implemented a multi-layer application architecture: it is a
full-stack .NET application with Azure cloud integration and SQL Server. The front
end is one layer, implemented in Angular 16, and the backend uses .NET Core 6.0.
For the Web APIs we created an API layer that accepts inputs from the UI, validates
the request, and converts it into a service-layer request. When the UI sends a request
to the API, this layer performs authentication and authorization using Azure AD
(with client and tenant IDs) via an authentication token.
The service layer is built on gRPC and uses Entity Framework's code-first
migration approach for the models that connect to the database.
Before a call reaches the service layer, we validate the token and
prepare the domain request according to the business rules.
Within this service architecture we maintain a domain layer for domain-level requests and
responses, along with a migrations folder at the service level that converts our
data models into database objects.
For deployment, we split the application into multiple microservices: the UI is one
microservice, the API is another, and the service layer is a third.
We have implemented Azure DevOps build and release pipelines to deploy the
application to the different environments: Dev, Integration, User, Load, Stage, and
Production.
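A minimal sketch of the API-layer validation and conversion step described above (the types and member names are illustrative, not our actual code):

```csharp
using System;

// Hypothetical request types for sketching the API layer -> service layer mapping.
public record UiAppointmentRequest(string? PolicyNumber, DateTime RequestedDate);
public record ServiceAppointmentRequest(string PolicyNumber, DateTime RequestedDate);

public class AppointmentApiLayer
{
    // Validates the incoming UI request, then converts it into a
    // service-layer request before any business logic runs.
    public ServiceAppointmentRequest ToServiceRequest(UiAppointmentRequest ui)
    {
        if (string.IsNullOrWhiteSpace(ui.PolicyNumber))
            throw new ArgumentException("PolicyNumber is required.");
        if (ui.RequestedDate < DateTime.UtcNow.Date)
            throw new ArgumentException("RequestedDate cannot be in the past.");

        return new ServiceAppointmentRequest(ui.PolicyNumber.Trim(), ui.RequestedDate);
    }
}
```

In the real project this conversion sits in front of the gRPC service layer, so invalid requests are rejected before they ever reach business logic.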

8. In what cases did you write an AWS Lambda, and how did you implement it?

My current project is a global registration platform, and my client does business in the
USA, Canada, the Middle East, and the Asian region. As part of this, we have to roll out different
microservices per region, based on region-specific requirements and data
regulations.

In this process we created AWS Lambda functions with .NET Core, deployed
them successfully, and tested them.
Creating a Lambda function in AWS with .NET Core:
Step 1: Install Prerequisites
Make sure you have the following installed on your development machine:
AWS CLI and .Net Core
Step 2: Create a New .NET Core Lambda Project
Open a command prompt or terminal and run the following commands to create a new
.NET Core Lambda project:
dotnet new lambda.EmptyFunction -n YourLambdaFunctionName
cd YourLambdaFunctionName
Replace YourLambdaFunctionName with the desired name for your Lambda
function.
Step 3: Modify the Lambda Function Code
Open the generated Function.cs file in your favorite text editor or IDE and modify the
function logic as needed.
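For reference, the `lambda.EmptyFunction` template generates a handler along these lines (this sketch mirrors the template's shape; the exact generated code may differ by template version):

```csharp
using Amazon.Lambda.Core;

// Tells Lambda how to (de)serialize the JSON payload (added by the template).
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace YourLambdaFunctionName;

public class Function
{
    // Sample handler: receives a string payload and returns it upper-cased.
    // Replace this body with your own function logic.
    public string FunctionHandler(string input, ILambdaContext context)
    {
        return input.ToUpper();
    }
}
```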
Step 4: Publish the Lambda Function
Run the following command to publish the Lambda function
dotnet publish -c Release
This command compiles the code and creates a deployment package in the
bin\Release\netcoreapp3.1\publish\ directory.
Step 5: Create an IAM Role
You need to create an IAM role that Lambda can assume to execute your function and
access other AWS resources. You can use the AWS Management Console or the AWS CLI
to create the role.
Here is an example using the AWS CLI:
aws iam create-role --role-name YourLambdaRoleName --assume-role-policy-document file://trust-policy.json

Create a file named trust-policy.json with the following content:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Step 6: Attach Policies to the IAM Role


Attach policies to the IAM role to grant necessary permissions. For a basic Lambda
function, you can attach the AWSLambdaBasicExecutionRole policy:
aws iam attach-role-policy --role-name YourLambdaRoleName --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Step 7: Deploy the Lambda Function
Run the following command to deploy the Lambda function:
aws lambda create-function --function-name YourLambdaFunctionName --runtime dotnetcore3.1 --role arn:aws:iam::your-account-id:role/YourLambdaRoleName --handler YourLambdaFunctionName::YourLambdaFunctionName.Function::FunctionHandler --code S3Bucket=your-s3-bucket-name,S3Key=path/to/your/package.zip

Replace the placeholders with your actual values.


Step 8: Test the Lambda Function
You can test your Lambda function using the AWS Lambda Management Console or the
AWS CLI.

9. In what cases did you write an Azure Functions?


In my recent project we had a requirement to process records from Cosmos DB at
particular times, so we implemented timer-triggered functions to fulfil it.
For handling Kafka messages from a specific topic, we used a Kafka-triggered
function.
Where we don't want to depend on dedicated infrastructure to host APIs, we can simply
configure serverless functions; they are easy to set up, and we only have to implement
the business logic.
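A sketch of a timer-triggered function like the ones described above (in-process model; the CRON schedule and names are illustrative, not our production code):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ProcessRecordsFunction
{
    // NCRONTAB schedule: {second} {minute} {hour} {day} {month} {day-of-week}.
    // This example fires every day at 02:00 UTC.
    [FunctionName("ProcessRecords")]
    public void Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Timer fired at {DateTime.UtcNow:O}");
        // Query Cosmos DB and process the due records here.
    }
}
```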
Creating an Azure Function:
Install Prerequisites
Ensure you have the following installed on your development machine:
.net core SDK, azure function core tools.
Create a New Azure Functions Project
Open a command prompt or terminal and run the following commands to create a new
Azure Functions project:
mkdir YourFunctionProject
cd YourFunctionProject
func init --worker-runtime dotnet

Replace YourFunctionProject with the desired name for your project.


Create a New Azure Function
Run the following command to create a new Azure function:
func new --name YourFunctionName --template "HTTP trigger"

Replace YourFunctionName with the desired name for your function. Choose the
template that fits your use case; in this example, we're using an HTTP trigger.
Build and Run the Function Locally
From the project folder, run the following command to build and run the function locally:
func host start
This command starts the Azure Functions runtime locally, allowing you to test your
function.
Test the Local Function
Open a web browser or use a tool like Postman to test your function locally. By default, the
HTTP trigger template creates an endpoint like
http://localhost:7071/api/YourFunctionName. Send an HTTP request to this endpoint to
test your function.
Publish to Azure
When you are satisfied with your function locally, you can publish it to Azure. Run the
following command:
func azure functionapp publish YourAzureFunctionAppName

Replace YourAzureFunctionAppName with the name of your Azure Functions App.
Test the Azure Function
After publishing, you can test your Azure Function on the Azure portal or by sending
HTTP requests to the function's endpoint in the Azure environment.

10. What are Object Oriented Principles(OOP) in .Net? What are Object Oriented
Principles(OOP) used in your project? What is difference between Interfaces and Abstract?
What is Polymorphism?

The key concepts of OOPs are


1. Abstraction: exposing only the essential features of an entity while hiding the
implementation details.
2. Encapsulation: bundling data with the methods that operate on it, and hiding the
data from outside access (data hiding).
3. Inheritance: deriving the features of a parent class in a child class.
4. Polymorphism: providing many behaviors behind one function name (poly:
many, morphism: forms).

Here are the key features of OOP:


• Object Oriented Programming (OOP) is a programming model where programs are organized
around objects and data rather than action and logic.
• OOP allows decomposing a problem into many entities called objects and then building data
and functions around these objects.
• A class is the core of any modern object-oriented programming language such as C#.
• In OOP languages such as C#, data is represented by defining classes.
• A class is a blueprint of an object that contains variables for storing data and functions to
perform operations on that data.
• A class by itself does not occupy memory; it is only a logical representation of data,
which its objects instantiate.
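The four pillars can be shown in a few lines of C# (names are illustrative):

```csharp
// Encapsulation: Balance's setter is private, so state changes go through Deposit.
// Inheritance: SavingsAccount derives from Account.
// Polymorphism: Describe() is virtual and overridden in the child.
// Abstraction: callers use Describe() without knowing which subtype they hold.
public class Account
{
    public decimal Balance { get; private set; }               // encapsulated state
    public void Deposit(decimal amount) => Balance += amount;  // controlled access
    public virtual string Describe() => $"Account: {Balance}";
}

public class SavingsAccount : Account                          // inheritance
{
    public override string Describe() => $"Savings: {Balance}"; // polymorphism
}
```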

11. SOLID Principles?


SOLID principles are the design principles that enable us to manage most of the software design
problems.
• S: Single Responsibility Principle (SRP)
• O: Open closed Principle (OCP)
• L: Liskov substitution Principle (LSP)
• I: Interface Segregation Principle (ISP)
• D: Dependency Inversion Principle (DIP)

Single Responsibility Principle:
This means that every class, or similar structure, in your code should have only one job to do.
Everything in that class should be related to a single purpose.
O: Open/Closed Principle
The Open/closed Principle says "A software module/class is open for extension and closed for
modification".
Liskov Substitution Principle (LSP)
states that "you should be able to use any derived class instead of a parent class and have it
behave in the same manner without modification". It ensures that a derived class does not affect
the behavior of the parent class
I: Interface Segregation Principle (ISP)
The Interface Segregation Principle states "that clients should not be forced to implement interfaces
they don't use. Instead of one fat interface, many small interfaces are preferred based on groups of
methods, each one serving one submodule.".
D: Dependency Inversion Principle
The Dependency Inversion Principle (DIP) states that high-level modules/classes should not
depend on low-level modules/classes. Both should depend upon abstractions. Secondly,
abstractions should not depend upon details. Details should depend upon abstractions.
Inversion of Control
IoC is a design principle which recommends the inversion of different kinds of controls in object-
oriented design to achieve loose coupling between application classes. In this case, control refers
to any additional responsibilities a class has
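A minimal DIP/IoC sketch: the high-level `OrderService` depends on the `IMessageSink` abstraction rather than any concrete logging class, and the dependency is injected from outside (all names here are illustrative):

```csharp
using System;

public interface IMessageSink          // abstraction both layers depend on
{
    void Write(string message);
}

public class ConsoleSink : IMessageSink   // low-level detail
{
    public void Write(string message) => Console.WriteLine(message);
}

public class OrderService                 // high-level module
{
    private readonly IMessageSink _sink;

    // IoC: the dependency is supplied by the caller instead of new'ed here,
    // so OrderService never references a concrete sink type.
    public OrderService(IMessageSink sink) => _sink = sink;

    public string PlaceOrder(string id)
    {
        _sink.Write($"Order {id} placed.");
        return id;
    }
}
```

Swapping `ConsoleSink` for a file or test double requires no change to `OrderService`, which is the loose coupling DIP and IoC aim for.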

12. You are required to change the logic of a module that many other modules have
dependency on. How would you go about making the changes without impacting
dependent systems.
You first need to perform an impact analysis. Impact analysis is about being able to tell which
pieces of code, packages, modules, and projects use a given piece of code (or vice
versa), which is a very difficult thing to do.
Performing an impact analysis is not a trivial task, and there is no single tool that covers
every scenario.
a) In Visual Studio, Ctrl+Shift+F (Find in Files) can be used to search for references.
b) You can perform a general "File Search" for keywords across all projects in the workspace.
c) You can use the Notepad++ editor and select Search -> Find in Files to search for a URL
or any keyword across several files within a folder.

There are instances where you need to perform impact analysis across stored procedures, various
services, URLs, environment properties, batch processes, etc. This will require a wider analysis
across projects and repositories.

13. What is overloading and overriding and when do you use them?
When we have more than one method with the same name in a single class, but with different
parameter lists, it is called method overloading. Overriding comes into the picture with
inheritance, when we have two methods with the same signature, one in the parent class and
another in the child class. In C#, the parent method must be marked virtual (or abstract) and the
child method marked override; the override keyword makes the compiler verify that a matching
parent method actually exists.
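Both concepts in one short C# example (illustrative names):

```csharp
public class Calculator
{
    // Overloading: same method name, different parameter lists,
    // resolved by the compiler at compile time.
    public int Add(int a, int b) => a + b;
    public double Add(double a, double b) => a + b;

    // Overriding: virtual in the parent, override in the child,
    // resolved at run time based on the actual object type.
    public virtual string Name() => "Calculator";
}

public class ScientificCalculator : Calculator
{
    public override string Name() => "ScientificCalculator";
}
```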

14. Give an example where you prefer abstract class over interface?
- In C# you can only inherit from one class but implement multiple interfaces. So if you
inherit from a class, you lose your chance of inheriting from another.
- Interfaces are used to represent a capability or behavior, e.g., IRunnable, IClosable,
ISerializable. If you use an abstract class to represent a behavior, your class cannot
be both "runnable" and "closable" at the same time, because you cannot inherit from two
classes in C#; with interfaces, your class can expose multiple behaviors at once.
- In time-critical applications, an abstract class can be marginally faster than an interface.
- If there is genuine common behavior across the inheritance hierarchy that is better
coded in one place, an abstract class is the preferred choice. Sometimes an
interface and an abstract class work together: the interface defines the contract
and the abstract class provides the default functionality.
15. Difference between String and StringBuilder?
• String: a string instance is immutable. You cannot change it after it is created; any operation that
appears to change the string instead returns a new instance.
1. Lives in the System namespace
2. Immutable (read-only) instance
3. Performance degrades when the value changes continuously
4. Thread-safe
• StringBuilder: a mutable string, for cases where you construct a string piece by piece or
change it many times. It is a buffer of characters that can
be changed in place.
1. Lives in the System.Text namespace
2. Mutable instance
3. Shows better performance, since changes are applied to the existing instance
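A short sketch of the difference (illustrative helper names):

```csharp
using System.Text;

public static class StringDemo
{
    // Each += allocates a brand-new string; the original is never modified,
    // so heavy loops pay for repeated copying.
    public static string ConcatWithString(string[] parts)
    {
        string result = "";
        foreach (var p in parts) result += p;   // new string per iteration
        return result;
    }

    // StringBuilder appends into one internal buffer, so repeated appends
    // avoid allocating a new string for every operation.
    public static string ConcatWithBuilder(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (var p in parts) sb.Append(p);
        return sb.ToString();
    }
}
```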

16. What is the benefit of Generics & Generic Collections in C#?


Boxing: conversion of a value type into a reference type. When a value-type
variable is converted to a reference type, an object "box" is allocated on the
heap and the value is copied into it.
Unboxing: the opposite of boxing — the value is copied back out of the box into a
value-type variable, and it requires an explicit cast.
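Boxing and unboxing in one line each (illustrative helper):

```csharp
public static class BoxingDemo
{
    // Boxing copies the int into a heap-allocated object; unboxing copies
    // it back out and requires an explicit cast.
    public static (object Boxed, int Unboxed) RoundTrip(int value)
    {
        object boxed = value;      // boxing (implicit)
        int unboxed = (int)boxed;  // unboxing (explicit cast)
        return (boxed, unboxed);
    }
}
```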

Problem with Array and Array List


Array
• Arrays are strongly typed (meaning that you can only put one type of object into it).
• Limited to size (Fixed length).
Array List
• ArrayList is not strongly typed: it stores every item as object, so different types can
be mixed and type errors surface only at run time.
• Data can grow on an as-needed basis.
• It boxes and unboxes value types while processing, which decreases performance.
List<T> (Generic Collection)
• List<T> is strongly typed.
• Data can grow on an as-needed basis.
• Items don't incur the overhead of being converted to and from type object.

Generics:
A generic collection is strongly typed (you can store only one type of object in it),
which eliminates run-time type mismatches and improves performance by
avoiding boxing and unboxing.
Why use Generics
There are mainly two reasons to use generics:
1. Performance: non-generic collections that store items as object use boxing and
unboxing on value types, which reduces performance.
Generics avoid the boxing and unboxing, improving both performance and type safety.
2. Type Safety: with non-generic collections there is no strong type information at
compile time about what is stored in the collection; generics enforce it.
When to use Generics
• When you need the same logic to work across various data types, create a generic type.
• It is easier to write code once and reuse it for multiple types.
• If you are working with value types, boxing and unboxing
would otherwise occur; generics eliminate those
operations.

Generic collections are collections that hold data of a single type; we decide what
type of data the collection can hold.

Some advantages of generic collections: type safety, security, and reduced overhead
from type conversions.
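The points above, concretely: `ArrayList` stores everything as `object` (boxing on the way in, casts on the way out), while `List<int>` is checked at compile time and never boxes (helper names are illustrative):

```csharp
using System.Collections;
using System.Collections.Generic;

public static class GenericDemo
{
    public static int SumArrayList(ArrayList items)
    {
        int sum = 0;
        foreach (object o in items)   // each int was boxed going in...
            sum += (int)o;            // ...and must be unboxed (cast) coming out
        return sum;
    }

    public static int SumGenericList(List<int> items)
    {
        int sum = 0;
        foreach (int i in items)      // no boxing, no casts, checked at compile time
            sum += i;
        return sum;
    }
}
```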

17. How do you handle exception in .Net? What are different types of exceptions? How do you
implement custom exceptions in .Net? How do you implement exceptions in Services?

In .NET C#, an exception is an error that occurs during runtime which interrupts the normal
flow of the program and transfers control to the nearest catch block that can handle the
exception. Various factors, such as invalid input parameters, network connectivity issues,
or system resource limitations can cause it.
Exceptions are classified into two categories:
i. System exceptions: System exceptions are generated by the runtime
environment and include errors such as stack overflow, out-of-memory, or
access violation.
ii. Application exceptions: Application exceptions are generated by code in your
application and can be customized to suit your specific needs.

In C#, you handle exceptions using the following keywords:


• try – a try block encloses a section of code. When code throws an exception within
this block, the corresponding catch handles it.
• catch – when an exception happens, the code within the catch block executes. This
is where you can handle the exception, log it, or ignore it.
• finally – the finally block executes specific code regardless of whether an
exception occurred; for instance, it is the place to dispose of an object that requires disposal.
• throw – the throw keyword raises a new exception, which propagates up to the nearest
enclosing try-catch-finally block.

Examples:

WebClient wc = null;
try
{
    wc = new WebClient(); // downloading a web page
    var resultData = wc.DownloadString("http://google.com");
}
catch (ArgumentNullException ex)
{
    // code specifically for an ArgumentNullException
}
catch (WebException ex)
{
    // code specifically for a WebException
}
catch (Exception ex)
{
    // code for any other type of exception
}
finally
{
    // runs whether or not an exception occurs;
    // in this example, dispose of the WebClient
    wc?.Dispose();
}

Exception Filters
Exception filters introduced in C# 6 enable you to have even more control over your catch
blocks and further tailor how you handle specific exceptions. These features help you fine-
tune exactly how you handle exceptions and which ones you want to catch.
Before C# 6, you would have had to catch all types of WebException and handle them.
You can now select to manage them only in specific situations and allow different
situations to rise to the calling code. Here is a modified example with filters:

WebClient wc = null;
try
{
    wc = new WebClient(); // downloading a web page
    var resultData = wc.DownloadString("http://google.com");
}
catch (WebException ex) when (ex.Status == WebExceptionStatus.ProtocolError)
{
    // code specifically for a WebException ProtocolError
}
catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.NotFound)
{
    // code specifically for a WebException NotFound
}
catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.InternalServerError)
{
    // code specifically for a WebException InternalServerError
}
finally
{
    // runs whether or not an exception occurs
    wc?.Dispose();
}

Create Your Own C# Custom Exception Types


All exceptions inherit from the base System.Exception class, and you can use many common
exception types directly in your own code. Commonly, developers throw the generic
ApplicationException or Exception for custom errors; however, you can
also create your own exception types.
Creating C# custom exceptions is most beneficial when you intend to catch a particular
exception type and handle it distinctly. Custom exceptions can also help you track
a very specific type of exception that you deem critical.

Here is a simple example from our code:


Custom Exception Class:

public class ClientBillingException : Exception
{
    public ClientBillingException(string message) : base(message)
    {
    }
}

Making use of the custom exception:

public void DoBilling(int clientID)
{
    Client client = _clientDataAccessObject.GetById(clientID);
    if (client == null)
    {
        throw new ClientBillingException(string.Format("Unable to find a client by id {0}", clientID));
    }
}

18. Explain about the different types of errors in .Net?

I have used handling of different errors in my current project and I have created custom
exceptions as well in my recent project.
There are three main types of errors.
• Compilation errors – also known as syntax errors, reported at compile time.
• Runtime errors – thrown during the execution of the program.
• Logical errors – occurring when the program runs without crashing but does not
produce a correct result.

19. Explain about the different libraries/assemblies in .Net?


In my recent project, I have used both private and shared assemblies.
An assembly is a collection of classes, possibly developed in different languages, packaged as a single
DLL (Dynamic Link Library) file.
Assemblies can be either private or shared. Private assemblies are used by a single
application and are stored in the application's directory. Shared assemblies, on the other
hand, can be used by multiple applications and are stored in the Global Assembly Cache
(GAC).
The GAC is a central repository for shared assemblies that allows them to be easily
accessed and managed by multiple applications.

20. Explain why you choose .Net Core? Difference between .Net and .Net Core?
In my recent project, we used .NET Core for all new development, as it is a free,
open-source, high-performance, general-purpose development platform maintained
by Microsoft. It offers a cross-platform framework for creating modern, internet-connected,
cloud-enabled applications that can run on macOS, Linux, and Windows operating
systems.
.NET Core was written from scratch, which makes it a fast, lightweight, and modular
framework.
It speeds up execution, is easy to maintain, and reduces the memory
footprint.
It is used to develop web applications and services, Internet of Things (IoT) solutions, and mobile backends.

21. What are Microservices and where do you use them in your project?
Microservices is a variant of the service-oriented architecture (SOA) architectural style that
structures an application as a collection of loosely coupled services. In a microservices
architecture, services should be fine-grained and the protocols should be lightweight. The
benefit of decomposing an application into different smaller services is that it improves
modularity and makes the application easier to understand, develop and test. It also
parallelizes development by enabling small autonomous teams to develop, deploy and scale
their respective services independently. It also allows the architecture of an individual
service to emerge through continuous refactoring. Microservices-based architectures enable
continuous delivery and deployment.
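As a minimal sketch, a single microservice can be an ASP.NET Core minimal API (.NET 6+) owning one narrow capability; the route and payload below are illustrative only, not the project's actual service:

```csharp
// Self-contained microservice sketch using ASP.NET Core minimal APIs.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Each microservice owns one narrow business capability,
// exposed over a lightweight protocol (HTTP/JSON here).
app.MapGet("/api/claims/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Submitted" }));

app.Run();
```

Each such service is built, deployed, and scaled independently of the others, which is what enables the autonomous-team workflow described above.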

22. What are the key components of AWS that you used in your project, and for what purpose?
The key components of AWS that we used in our projects are:
• Route 53: a DNS web service.
• Simple Email Service (SES): allows sending email using a RESTful API call or via
regular SMTP.
• Identity and Access Management (IAM): provides enhanced security and identity
management for your AWS account.
• Simple Storage Service (S3): a storage service and one of the most widely used AWS
services.
• Elastic Compute Cloud (EC2): provides on-demand computing resources for
hosting applications; handy in case of unpredictable workloads.
• Elastic Block Store (EBS): offers persistent storage volumes that attach to EC2 to
allow you to persist data past the lifespan of a single Amazon EC2 instance.
• CloudWatch: monitors AWS resources; it allows administrators to view and
collect key metrics. Also, one can set a notification alarm in case of trouble.

SQL SERVER
23. How do you improve a slow-running SQL query?
First, use SQL Profiler to identify exactly where the query or stored procedure is taking
time to execute. Based on the profiler result, the following steps can be applied one by
one:
• Use views and stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client will send to server only stored
procedure or view name (perhaps with some parameters) instead of large heavy-duty
queries text. This can be used to facilitate permission management also, because you can
restrict user access to table columns they should not see.
• Try to use constraints instead of triggers, whenever possible.
Constraints are much more efficient than triggers and can boost performance. So, you
should use constraints instead of triggers, whenever possible.
• Use table variables instead of temporary tables.
Table variables require less locking and logging resources than temporary tables, so table
variables should be used whenever possible. Table variables are available in SQL
Server 2000 and later.
• Try to use the UNION ALL statement instead of UNION, whenever possible.
The UNION ALL statement is much faster than UNION, because UNION ALL does not
check for duplicate rows, whereas UNION must look for and remove duplicates whether
or not any exist.
• Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance degradation, you
should use this clause only when it is necessary.
• Try to avoid using SQL Server cursors, whenever possible.
SQL Server cursors can result in some performance degradation in comparison with select
statements. Try to use correlated sub-query or derived tables, if you need to perform row-
by-row operations.
• Try to avoid the HAVING clause, whenever possible.
The HAVING clause is used to restrict the result set returned by the GROUP BY clause.
When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the
rows into sets of grouped rows and aggregates their values, and then the HAVING clause
eliminates undesired aggregated groups. In many cases, you can write your select statement
so, that it will contain only WHERE and GROUP BY clauses without HAVING clause.
This can improve the performance of your query.
• If you need to return the total table's row count, you can use an alternative to the
SELECT COUNT(*) statement.
Because SELECT COUNT(*) performs a full table scan to return the total row count, it
can take a long time for large tables. Another way to determine the total row count is the
sysindexes system table: its ROWS column contains the total row count for each table in
your database. So you can use the following statement instead of SELECT COUNT(*):
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
This can make such queries several times faster. (On modern SQL Server versions,
sys.dm_db_partition_stats provides the same information; sysindexes is kept only for
backward compatibility.)
• Include SET NOCOUNT ON statement into your stored procedures to stop the message
indicating the number of rows affected by a T-SQL statement.
This can reduce network traffic, because your client will not receive the message indicating
the number of rows affected by a T-SQL statement.
• Try to restrict the queries result set by using the WHERE clause.
This can result in good performance benefits, because SQL Server will return to the client
only particular rows, not all rows from the table(s). This can reduce network traffic and
boost the overall performance of the query.
• Use the select statements with TOP keyword or the SET ROWCOUNT statement, if you
need to return only the first n rows.
This can improve performance of your queries, because the smaller result set will be
returned. This can also reduce the traffic between the server and the clients.
• Try to restrict the query's result set by returning only the particular columns you need
from the table, not all the table's columns.
This can result in good performance benefits, because SQL Server will return to the client
only the particular columns, not all the table's columns. This can reduce network traffic
and boost the overall performance of the query.
• Create appropriate indexes on columns used in WHERE, JOIN, and ORDER BY
clauses, so the optimizer can seek into an index instead of scanning the whole table.
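Several of the points above can be combined in one stored-procedure sketch; the table and column names below are hypothetical, for illustration only:

```sql
CREATE PROCEDURE dbo.GetRecentOrders
    @CustomerId INT
AS
BEGIN
    -- Suppress the "rows affected" message to cut network chatter.
    SET NOCOUNT ON;

    -- Return only the needed columns, restrict rows with WHERE,
    -- and cap the result set with TOP.
    SELECT TOP (50)
           OrderId, OrderDate, TotalAmount
    FROM   dbo.Orders
    WHERE  CustomerId = @CustomerId
    ORDER BY OrderDate DESC;
END;
```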
24. What is a primary key?
A primary key is a field (or combination of fields) that uniquely identifies a row. It is a special
kind of unique key with an implicit NOT NULL constraint, which means primary key values
cannot be NULL.
25. What is a unique key?
A unique key constraint uniquely identifies each record in the database. It provides
uniqueness for a column or set of columns. A primary key constraint has an automatic unique
constraint defined on it, but the reverse is not true. There can be many unique
constraints defined per table, but only one primary key constraint per table.
26. What is an Index?
An index is a performance-tuning method that allows faster retrieval of records from a table. An
index creates an entry for each value, making data retrieval faster.
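A basic index can be created as follows; the table and column names are hypothetical:

```sql
-- Non-clustered index to speed up lookups on a frequently filtered column.
CREATE NONCLUSTERED INDEX IX_Employees_LastName
    ON dbo.Employees (LastName);

-- Queries filtering on LastName can now seek the index
-- instead of scanning the whole table.
SELECT EmployeeId, FirstName, LastName
FROM   dbo.Employees
WHERE  LastName = 'Smith';
```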

27. What is the difference between DELETE and TRUNCATE commands and TRUNCATE
and DROP statements?
The DELETE command removes rows from a table, and a WHERE clause can be used to delete
conditionally. DELETE is fully logged, and COMMIT and ROLLBACK can be performed after a
DELETE statement. TRUNCATE removes all rows from the table; it is minimally logged, does
not fire DELETE triggers, and cannot take a WHERE clause. (In SQL Server, TRUNCATE can
still be rolled back if it runs inside an explicit transaction.)

The DROP command removes the table definition itself from the database along with its data;
once committed, the operation cannot be undone.
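The three statements side by side, on hypothetical tables:

```sql
-- DELETE: row-by-row, supports WHERE, fires DELETE triggers.
DELETE FROM dbo.Orders
WHERE  OrderDate < '2020-01-01';

-- TRUNCATE: removes all rows, minimally logged, no WHERE clause allowed.
TRUNCATE TABLE dbo.OrderStaging;

-- DROP: removes the table definition itself along with its data.
DROP TABLE dbo.OrderStaging;
```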

28. Name a few important DDL commands present in the SQL?


DDL (Data Definition Language) commands are used to define or change the structure of
database objects. Some of the most important ones are as follows.
- Create
- Alter
- Drop
- Rename
- Truncate

29. What is ACID property in a database?


ACID is an acronym for Atomicity, Consistency, Isolation, Durability.
- Atomicity: each transaction is all or nothing. If one part of
the transaction fails, the entire transaction fails and the database state is left
unchanged.
- Consistency: every transaction must leave the data meeting all validation
rules. In simple words, a transaction never leaves your database
in a half-completed state.
- Isolation: concurrent transactions do not interfere with each other; each behaves
as if it were running alone. The main goal of isolation is concurrency control.
- Durability: once a transaction has been committed, it
will remain so, come what may, even through power loss, crashes, or errors.
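Atomicity and durability can be sketched with a T-SQL transaction; the table and account IDs are hypothetical:

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Move money between two hypothetical accounts:
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    COMMIT TRANSACTION;   -- durability: both changes persist together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- atomicity: on any error, neither change persists
END CATCH;
```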
30. What is the difference between primary key and unique key and foreign key?
Primary key and unique key are both essential constraints in SQL, with a small
difference between them: a primary key carries unique values and its field cannot be
NULL, while a unique key also carries unique values but allows a single NULL field
(in SQL Server).
· A foreign key in one table relates to the primary key of another table. A
relationship between two tables is created by referencing the foreign key to the primary
key of the other table.
31. What are the types of join and explain each?
· JOIN is a keyword used to query data from multiple tables based on the relationship
between the fields of the tables. Keys play a major role when JOINs are used.
· There are various types of join which can be used to retrieve data, and the choice depends
on the relationship between the tables.
· Inner Join: returns rows where there is at least one match between
the tables.
· Right Join: returns the rows common to both tables plus all
rows of the right-hand table. Simply, it returns all the rows from the right-hand
table even when there are no matches in the left-hand table.
· Left Join: returns the rows common to both tables plus all rows
of the left-hand table. Simply, it returns all the rows from the left-hand table even
when there are no matches in the right-hand table.
· Full Join: returns all rows from both tables, combining matched rows
and filling NULLs on the side where there is no match.
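The join types on two hypothetical tables, dbo.Employees (DeptId) and dbo.Departments (DeptId):

```sql
-- INNER JOIN: only employees that have a matching department.
SELECT e.Name, d.DeptName
FROM   dbo.Employees e
INNER JOIN dbo.Departments d ON e.DeptId = d.DeptId;

-- LEFT JOIN: all employees; department columns are NULL when no match.
SELECT e.Name, d.DeptName
FROM   dbo.Employees e
LEFT JOIN dbo.Departments d ON e.DeptId = d.DeptId;

-- FULL JOIN: all rows from both sides, NULLs where either has no match.
SELECT e.Name, d.DeptName
FROM   dbo.Employees e
FULL JOIN dbo.Departments d ON e.DeptId = d.DeptId;
```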

ReactJS Questions

32. What are the advantages of using React?



- Use of Virtual DOM to improve efficiency


React uses virtual DOM to render the view. As the name suggests, virtual DOM is a
virtual representation of the real DOM. Each time the data changes in a react app, a
new virtual DOM gets created. Creating a virtual DOM is much faster than rendering
the UI inside the browser. Therefore, with the use of virtual DOM, the efficiency of
the app improves.
- Gentle learning curve
React has a gentle learning curve when compared to frameworks like Angular.
Anyone with little knowledge of javascript can start building web applications using
React.
- SEO friendly
React allows developers to develop engaging user interfaces that can be easily
navigated in various search engines. It also allows server-side rendering, which
boosts the SEO of an app.
- Reusable components
React uses component-based architecture for developing applications. Components
are independent and reusable bits of code. These components can be shared across
various applications having similar functionality. The re-use of components increases
the pace of development.
- Huge ecosystem of libraries to choose from
React provides you the freedom to choose the tools, libraries, and architecture for
developing an application based on your requirement.
33. What is JSX?
JSX stands for JavaScript XML. It allows us to write HTML inside JavaScript and place them in
the DOM without using functions like appendChild( ) or createElement( ).
As stated in the official docs of React, JSX provides syntactic sugar for React.createElement( )
function.
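What that syntactic sugar amounts to can be shown in plain JavaScript; the `createElement` below is a simplified stand-in for React's own, so the sketch runs without the React package (the element shape is an assumption, not React's exact internal structure):

```javascript
// Simplified stand-in for React.createElement.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// JSX:   <h1 className="title">Hello</h1>
// is compiled (by Babel or the TypeScript compiler) into a call like:
const element = createElement('h1', { className: 'title' }, 'Hello');

// The result is an ordinary object describing the desired DOM node.
console.log(element.type);            // the tag name
console.log(element.props.className); // the attributes
console.log(element.children);        // the nested content
```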

34. What are the differences between functional and class components?
Before the introduction of Hooks in React, functional components were called stateless
components and lagged behind class components in features. After the introduction of Hooks,
functional components are equivalent to class components.
Although functional components are the new trend, the react team insists on keeping class
components in React. Therefore, it is important to know how these both components differ.

35. What is the virtual DOM? How does react use the virtual DOM to render the UI?
As stated by the react team, virtual DOM is a concept where a virtual representation of the real
DOM is kept inside the memory and is synced with the real DOM by a library such as
ReactDOM.

36. What are the lifecycle methods of React?


• componentWillMount: Executed before rendering and is used for App level configuration
in your root component.
• componentDidMount: Executed after first rendering and here all AJAX requests, DOM or
state updates, and set up event listeners should occur.
• componentWillReceiveProps: Executed when particular prop updates to trigger state
transitions.
• shouldComponentUpdate: Determines if the component will be updated or not. By
default it returns true. If you are sure that the component doesn't need to render after state
or props are updated, you can return false value. It is a great place to improve performance
as it allows you to prevent a re-render if component receives new prop.
• componentWillUpdate: Executed before re-rendering the component when there are props
& state changes confirmed by shouldComponentUpdate() which returns true.
• componentDidUpdate: Mostly it is used to update the DOM in response to prop or state
changes.
• componentWillUnmount: It will be used to cancel any outgoing network requests, or
remove all event listeners associated with the component.
Note that componentWillMount, componentWillReceiveProps, and componentWillUpdate
were deprecated in React 16.3 and renamed with an UNSAFE_ prefix.

37. What are Higher-Order Components?


A higher-order component (HOC) is a function that takes a component and returns a new
component. Basically, it's a pattern that is derived from React's compositional nature.
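The pattern can be sketched in plain JavaScript by modeling components as functions from props to output, so it runs without the React package (the components and names below are hypothetical):

```javascript
// A higher-order component: a function that takes a component
// and returns a new component with extra behavior.
function withLogging(WrappedComponent) {
  return function LoggedComponent(props) {
    console.log('rendering with props:', props); // the added behavior
    return WrappedComponent(props);              // delegate rendering
  };
}

// A simple "component" modeled as a plain function.
function Greeting(props) {
  return 'Hello, ' + props.name;
}

// The HOC wraps the original component without modifying it.
const GreetingWithLogging = withLogging(Greeting);
const output = GreetingWithLogging({ name: 'Ada' }); // logs, then renders
```

Real-world examples of the same pattern include `connect` from react-redux, which wraps a component to inject store state as props.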
