Dot Net_MQ
I have been involved in all phases of development, including design and architecture.
Designed and migrated tables from the on-premises database to an Azure database.
Designed and Developed applications using C#.Net, Angular, Web API, and AWS Cloud
technologies.
Developed Timer-triggered, MongoDB-triggered, and ASB topic-triggered Lambda functions using C#.Net.
Implemented token-based and certificate-based authentication.
Implemented & Configured APIs in Azure APIM with Products.
Developed applications using Azure Cosmos DB as the NoSQL database and implemented CRUD operations (see the sketch below). Improved Cosmos DB performance by tuning indexes and deploying them through IaC pipelines.
Created CI/CD pipelines using Azure DevOps and implemented Sonar scans, WhiteSource scans, automated unit testing, and JFrog Xray scans to improve code quality and code coverage.
Developed IaC YAML pipelines using Bicep templates to create and deploy Azure resources such as Web Apps, Function Apps, App Insights, APIs and Products with policies in APIM, gateway policies, Key Vault, Action Groups, alert rules, access policies, and IAM, and developed CI/CD application pipelines.
As a team, we are responsible for collecting subscriber, accumulator, and claims information and generating intermediate files for internal teams to consume when processing claims. Developed and modernized the existing file-based system into an API-based system, along with a web application to support backend operations for the new API system.
• Designed and Developed applications using C#.Net, Angular, Web API, and Azure cloud
technologies.
• Developed Timer-triggered and Kafka topic-triggered Azure Function Apps using C#.Net (see the sketch after this list).
• Implemented & Configured APIs in Azure APIM with Products.
• Developed applications using Azure Cosmos DB as the NoSQL database & implemented CRUD
operations.
• Developed IaC YAML pipelines using Bicep templates to create and deploy Azure resources such as Web Apps, Function Apps, App Insights, APIs and Products with policies in APIM, gateway policies, Key Vault, Action Groups, alert rules, access policies, and IAM, and developed CI/CD application pipelines.
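A minimal sketch of the kind of Timer-triggered Function App mentioned above, using the .NET isolated worker model; the class name, schedule, and logic are illustrative, and a Kafka-triggered function would look similar with a KafkaTrigger binding instead:

using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class FileGenerationTimer
{
    private readonly ILogger<FileGenerationTimer> _logger;

    public FileGenerationTimer(ILogger<FileGenerationTimer> logger) => _logger = logger;

    // Runs at the top of every hour; the CRON expression and function name are placeholders.
    [Function("FileGenerationTimer")]
    public void Run([TimerTrigger("0 0 * * * *")] TimerInfo timerInfo)
    {
        _logger.LogInformation("Timer fired at {Time}", DateTime.UtcNow);
        // Business logic (for example, generating the intermediate files) would go here.
    }
}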
Currently we work with an Agile methodology, with two-week sprints. At the start of each sprint we interact with the business users for requirements gathering; once we have the requirements, we prepare the user stories and then estimate them using the Fibonacci series.
We use JIRA as the project-management tool. All of these user stories are uploaded into JIRA and assigned to team members like us. The development of user stories is mostly assigned to me, since I work primarily on the development side. For a given user story I work on the development or the integration, either backend work using Spring Boot services or frontend work in ReactJS.
Once we are done with the development, we write unit tests and integration tests, run them, and make sure the test code coverage is 90%. We then deploy to the testing environment and move on to UAT, where the business users perform UAT testing before we deploy to production.
We give a live demo to the business users showing exactly what functionality changed in that particular sprint, and then we deploy.
7. What is your current project application architecture?
8. In what cases did you write an AWS Lambda, and how did you implement it?
My current project is a global registration platform, and my client does business in the USA, Canada, the Middle East, and the Asian region. As part of this we have to roll out different microservices based on region-specific requirements and the applicable data regulations.
In this process we created AWS Lambda functions using .NET Core, deployed them successfully, and tested them as well.
Creating a Lambda function in AWS with .NET Core:
Step 1: Install Prerequisites
Make sure you have the following installed on your development machine:
the AWS CLI, the .NET SDK, and the Amazon Lambda project templates (installed with dotnet new --install Amazon.Lambda.Templates).
Step 2: Create a New .NET Core Lambda Project
Open a command prompt or terminal and run the following commands to create a new
.NET Core Lambda project:
dotnet new lambda.EmptyFunction -n YourLambdaFunctionName
cd YourLambdaFunctionName
Replace YourLambdaFunctionName with the desired name for your Lambda
function.
Step 3: Modify the Lambda Function Code
Open the generated Function.cs file in your favorite text editor or IDE and modify the
function logic as needed.
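For reference, the Function.cs generated by the lambda.EmptyFunction template looks roughly like this; the handler logic shown here is illustrative:

using Amazon.Lambda.Core;

// Register the JSON serializer so Lambda can deserialize the event payload.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace YourLambdaFunctionName;

public class Function
{
    // Entry point invoked by Lambda; the input type and logic are placeholders.
    public string FunctionHandler(string input, ILambdaContext context)
    {
        context.Logger.LogLine($"Processing input: {input}");
        return input?.ToUpper() ?? string.Empty;
    }
}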
Step 4: Publish the Lambda Function
Run the following command to publish the Lambda function
dotnet publish -c Release
This command compiles the code and creates a deployment package in the
bin\Release\netcoreapp3.1\publish\ directory.
Step 5: Create an IAM Role
You need to create an IAM role that Lambda can assume to execute your function and
access other AWS resources. You can use the AWS Management Console or the AWS CLI
to create the role.
Here is an example using the AWS CLI:
aws iam create-role --role-name YourLambdaRoleName --assume-role-policy-
document file://trust-policy.json
Replace YourLambdaRoleName with the desired role name; the trust-policy.json file defines which service (in this case Lambda) is allowed to assume the role.
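Once the role exists, the published package can be deployed. Assuming the Amazon.Lambda.Tools CLI is installed (dotnet tool install -g Amazon.Lambda.Tools), a typical deployment command, with placeholder names, is:
dotnet lambda deploy-function YourLambdaFunctionName --function-role YourLambdaRoleName
For comparison, the remaining steps below describe the equivalent local workflow for an Azure Function built with Azure Functions Core Tools: create a project with func init, add a function with func new, choose the template that fits your use case (in this example, an HTTP trigger), and replace YourFunctionName with the desired name. The generated HTTP-triggered function looks roughly like the following (in-process model; names and logic are illustrative):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class YourFunctionName
{
    // Responds to GET and POST requests at /api/YourFunctionName.
    [FunctionName("YourFunctionName")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger function processed a request.");
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name ?? "world"}");
    }
}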
Build and Run the Function Locally
Navigate to the function folder:
cd YourFunctionName
Run the following command to build and run the function locally:
func host start
This command starts the Azure Functions runtime locally, allowing you to test your
function.
Test the Local Function
Open a web browser or use a tool like Postman to test your function locally. By default, the
HTTP trigger template creates an endpoint like
http://localhost:7071/api/YourFunctionName. Send an HTTP request to this endpoint to
test your function.
Publish to Azure
When you are satisfied with your function locally, you can publish it to Azure. Run the
following command:
func azure functionapp publish YourAzureFunctionAppName
10. What are Object Oriented Principles (OOP) in .Net? Which Object Oriented Principles (OOP) are used in your project? What is the difference between Interfaces and Abstract classes? What is Polymorphism?
12. You are required to change the logic of a module that many other modules have a dependency on. How would you go about making the changes without impacting dependent systems?
You first need to perform an impact analysis. Impact analysis is about being able to tell which pieces of code, packages, modules, and projects use a given piece of code, package, module, or project (or vice versa), and doing this reliably is difficult.
Performing an impact analysis is not a trivial task, and there is no single tool that can cater to every scenario.
a) In Visual Studio, Ctrl+Shift+F (Find in Files) can be used to search across the solution, and Find All References (Shift+F12) shows where a symbol is used.
b) You can perform a general “File Search” for keywords on all projects in the workspace.
c) You can use Notepad++ editor and select Search –> Find in files. You can search for a URL
or any keyword across several files within a folder.
There are instances where you need to perform impact analysis across stored procedures, various
services, URLs, environment properties, batch processes, etc. This will require a wider analysis
across projects and repositories.
13. What is overloading and overriding and when do you use them?
When we have more than one method with the same name in a single class but with different arguments, it is called method overloading. Overriding comes into the picture with inheritance, when we have two methods with the same signature, one in the parent class and one in the child class. In C#, the parent method is marked virtual (or abstract) and the child method is marked override, so the compiler verifies that the child method really overrides a member of the parent class.
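A minimal C# sketch of both concepts (the class and method names are illustrative):

public class Calculator
{
    // Overloading: same method name, different parameter lists.
    public int Add(int a, int b) => a + b;
    public double Add(double a, double b) => a + b;

    // Overriding: the base method is virtual so derived classes can replace it.
    public virtual string Describe() => "Basic calculator";
}

public class ScientificCalculator : Calculator
{
    // The override keyword makes the compiler check that a matching virtual member exists.
    public override string Describe() => "Scientific calculator";
}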
14. Give an example where you prefer abstract class over interface?
- In C# you can inherit from only one class but implement multiple interfaces. So, if you inherit from a class, you lose the chance to inherit from another one.
- Interfaces are used to represent a capability or behavior (for example IDisposable, IComparable, IEnumerable). If you use an abstract class to represent a behavior, your class cannot expose several such behaviors at the same time, because you cannot inherit from two classes; with interfaces, a class can take on multiple behaviors at once.
- In time-critical code, calls through an abstract class can be slightly faster than calls through an interface.
- If there is genuine common behavior across the inheritance hierarchy that is better coded in one place, an abstract class is the preferred choice. Sometimes an interface and an abstract class work together, with the contract defined in the interface and the default functionality in the abstract class, as in the sketch below.
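A minimal sketch of an interface and an abstract class working together (the type names are illustrative):

using System;

// The interface defines the contract ("behavior").
public interface IReportGenerator
{
    string Generate();
}

// The abstract class supplies the shared default functionality in one place.
public abstract class ReportGeneratorBase : IReportGenerator
{
    public string Generate() => $"{Header()}\n{Body()}";

    // Common behavior shared by every report.
    protected virtual string Header() => $"Report generated {DateTime.UtcNow:u}";

    // Each concrete report supplies only its own body.
    protected abstract string Body();
}

public class ClaimsReport : ReportGeneratorBase
{
    protected override string Body() => "Claims summary...";
}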
15. Difference between String and StringBuilder?
• String: A string instance is immutable. You cannot change it after it was created. Any operation that
appears to change the string instead returns a new instance
1. Under System namespace
2. Immutable (readonly) instance
3. Performance degrades when continuous change of value occurs
4. Thread-safe
• StringBuilder (mutable string): a mutable string, such as one you are constructing piecewise or changing repeatedly. StringBuilder is a buffer of characters that can be modified in place.
1. Under System.Text namespace
2. Mutable instance
3. Shows better performance since new changes are made to an existing instance
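A short sketch contrasting the two (the loop count is arbitrary):

using System.Text;

// String concatenation in a loop allocates a brand-new string on every iteration.
string s = string.Empty;
for (int i = 0; i < 10_000; i++)
{
    s += i;                     // each += creates a new string instance
}

// StringBuilder appends into the same internal buffer, so no per-iteration copies.
var sb = new StringBuilder();
for (int i = 0; i < 10_000; i++)
{
    sb.Append(i);
}
string result = sb.ToString(); // materialize the final string once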
Generics:
A generic collection is strongly typed (it can store only one type of object), which eliminates runtime type mismatches and improves performance by avoiding boxing and unboxing.
Why use Generics
There are mainly two reasons to use generics:
1. Performance: non-generic collections store items as object, so value types are boxed and unboxed, which reduces performance. Generics avoid this overhead.
2. Type safety: with non-generic collections there is no strong type information at compile time about what is stored in the collection; a generic collection gives the compiler that information.
When to use Generics
• When the same logic must work with various data types, create a generic type.
• It is easier to write code once and reuse it for multiple types.
• If you are working with value types, boxing and unboxing operations would otherwise occur; generics eliminate those operations.
Generic collections - these are collections that hold data of a single type, and we decide what type of data the collection can hold.
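A minimal sketch of the type-safety and boxing points above (the types and names are illustrative):

using System.Collections;
using System.Collections.Generic;

// Non-generic: every int is boxed to object, and a wrong type only fails at runtime.
ArrayList untyped = new ArrayList();
untyped.Add(42);          // boxing
untyped.Add("oops");      // compiles, but fails later when cast back to int

// Generic: no boxing for value types, and the compiler rejects wrong types.
List<int> typed = new List<int>();
typed.Add(42);            // stored as int, no boxing
// typed.Add("oops");     // compile-time error

// A small generic method written once and reused for multiple types.
static T FirstOrFallback<T>(IReadOnlyList<T> items, T fallback) =>
    items.Count > 0 ? items[0] : fallback;

int firstNumber = FirstOrFallback(typed, -1);
string firstName = FirstOrFallback(new List<string> { "Asha" }, "unknown");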
17. How do you handle exception in .Net? What are different types of exceptions? How do you
implement custom exceptions in .Net? How do you implement exceptions in Services?
In .NET C#, an exception is an error that occurs during runtime which interrupts the normal
flow of the program and transfers control to the nearest catch block that can handle the
exception. Various factors, such as invalid input parameters, network connectivity issues,
or system resource limitations can cause it.
Exceptions are classified into two categories:
i. System exceptions: System exceptions are generated by the runtime
environment and include errors such as stack overflow, out-of-memory, or
access violation.
ii. Application exceptions: Application exceptions are generated by code in your
application and can be customized to suit your specific needs.
Examples:
WebClient wc = null; // WebClient lives in the System.Net namespace
try
{
wc = new WebClient(); //downloading a web page
var resultData = wc.DownloadString("http://google.com");
}
catch (ArgumentNullException ex)
{
//code specifically for a ArgumentNullException
}
catch (WebException ex)
{
//code specifically for a WebException
}
catch (Exception ex)
{
//code for any other type of exception
}
finally
{
//call this if exception occurs or not
//in this example, dispose the WebClient
wc?.Dispose();
}
Exception Filters
Exception filters introduced in C# 6 enable you to have even more control over your catch
blocks and further tailor how you handle specific exceptions. These features help you fine-
tune exactly how you handle exceptions and which ones you want to catch.
Before C# 6, you would have had to catch all types of WebException and handle them.
You can now select to manage them only in specific situations and allow different
situations to rise to the calling code. Here is a modified example with filters:
WebClient wc = null;
try
{
wc = new WebClient(); //downloading a web page
var resultData = wc.DownloadString("http://google.com");
}
catch (WebException ex) when (ex.Status == WebExceptionStatus.ProtocolError)
{
//code specifically for a WebException ProtocolError
}
catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode ==
HttpStatusCode.NotFound)
{
//code specifically for a WebException NotFound
}
catch (WebException ex) when ((ex.Response as HttpWebResponse)?.StatusCode ==
HttpStatusCode.InternalServerError)
{
//code specifically for a WebException InternalServerError
}
finally
{
//call this if exception occurs or not
wc?.Dispose();
}
I have handled different kinds of errors in my current project and have also created custom exceptions in my recent project.
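A minimal sketch of such a custom exception; the exception name and usage are illustrative:

using System;

// Application-specific exception carrying extra context for callers to handle.
public class ClaimProcessingException : Exception
{
    public string ClaimId { get; }

    public ClaimProcessingException(string claimId, string message, Exception innerException = null)
        : base(message, innerException)
    {
        ClaimId = claimId;
    }
}

// Typical usage inside a service method:
// throw new ClaimProcessingException(claim.Id, "Accumulator totals do not reconcile.");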
There are three main types of errors.
• Compilation errors- Also known as syntax errors reported at the time of compiling.
• Runtime errors- Thrown during the execution of the program.
• Logical errors- Occurring when the program runs without crashing but does not produce a correct result.
20. Explain why you chose .Net Core? What is the difference between .Net and .Net Core?
In my recent project we used .NET Core for all new development because it is a free, open-source, high-performance, general-purpose development platform maintained by Microsoft. It offers a cross-platform framework for creating modern, internet-connected, cloud-enabled applications that can run on macOS, Linux, and Windows operating systems.
.NET Core was written from scratch, which makes it a fast, lightweight, and modular framework.
It speeds up execution, is easy to maintain, and reduces the memory footprint.
It is used to develop web applications and services, Internet of Things (IoT) solutions, and mobile backends.
21. What are Microservices and where do you use them in your project?
Microservices is a variant of the service-oriented architecture (SOA) architectural style that
structures an application as a collection of loosely coupled services. In a microservices
architecture, services should be fine-grained and the protocols should be lightweight. The
benefit of decomposing an application into different smaller services is that it improves
modularity and makes the application easier to understand, develop and test. It also
parallelizes development by enabling small autonomous teams to develop, deploy and scale
their respective services independently. It also allows the architecture of an individual
service to emerge through continuous refactoring. Microservices-based architectures enable
continuous delivery and deployment.
22. What are the key components of AWS? That you used in your project and purpose
The key components of AWS that we used in our projects are,
• Route 53: a DNS web service.
• Simple Email Service (SES): allows sending email using a RESTful API call or via regular SMTP.
• Identity and Access Management (IAM): provides enhanced security and identity management for your AWS account.
• Simple Storage Service (S3): a storage service and one of the most widely used AWS services.
• Elastic Compute Cloud (EC2): provides on-demand computing resources for hosting applications; handy in case of unpredictable workloads.
• Elastic Block Store (EBS): offers persistent storage volumes that attach to EC2, allowing you to persist data past the lifespan of a single Amazon EC2 instance.
• CloudWatch: used to monitor AWS resources; it allows administrators to view and collect key metrics and logs, and you can set a notification alarm in case of trouble.
SQL SERVER
23. Explain how you improve a slow-running SQL query.
First, use SQL Server Profiler to identify exactly where the query or stored procedure is spending its time. Based on the Profiler results, the following improvements can be applied one by one:
• Use views and stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client will send to server only stored
procedure or view name (perhaps with some parameters) instead of large heavy-duty
queries text. This can be used to facilitate permission management also, because you can
restrict user access to table columns they should not see.
• Try to use constraints instead of triggers, whenever possible.
Constraints are much more efficient than triggers and can boost performance. So, you
should use constraints instead of triggers, whenever possible.
• Use table variables instead of temporary tables.
Table variables require fewer locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available in SQL Server 2000 and later.
• Try to use UNION ALL statement instead of UNION, whenever possible.
The UNION ALL statement is much faster than UNION, because UNION ALL statement
does not look for duplicate rows, and UNION statement does look for duplicate rows,
whether or not they exist.
• Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance degradation, you
should use this clause only when it is necessary.
• Try to avoid using SQL Server cursors, whenever possible.
SQL Server cursors can result in some performance degradation in comparison with select
statements. Try to use correlated sub-query or derived tables, if you need to perform row-
by-row operations.
• Try to avoid the HAVING clause, whenever possible.
The HAVING clause is used to restrict the result set returned by the GROUP BY clause.
When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the
rows into sets of grouped rows and aggregates their values, and then the HAVING clause
eliminates undesired aggregated groups. In many cases, you can write your select statement
so, that it will contain only WHERE and GROUP BY clauses without HAVING clause.
This can improve the performance of your query.
• If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement.
Because SELECT COUNT(*) makes a full table scan to return the total row count, it can take a long time for a large table. Another way to determine the total row count is the sysindexes system table: its ROWS column contains the total row count for each table in the database. So, you can use the following select statement instead of SELECT COUNT(*):
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
This can make such queries several times faster.
• Include SET NOCOUNT ON statement into your stored procedures to stop the message
indicating the number of rows affected by a T-SQL statement.
This can reduce network traffic, because your client will not receive the message indicating
the number of rows affected by a T-SQL statement.
• Try to restrict the query's result set by using the WHERE clause.
This can result in good performance benefits, because SQL Server will return to the client only the matching rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
• Use the select statements with TOP keyword or the SET ROWCOUNT statement, if you
need to return only the first n rows.
This can improve performance of your queries, because the smaller result set will be
returned. This can also reduce the traffic between the server and the clients.
• Try to restrict the query's result set by returning only the particular columns you need from the table, not all of the table's columns.
This can result in good performance benefits, because SQL Server will return to the client only those columns, not every column of the table. This can reduce network traffic and boost the overall performance of the query.
• Indexes: create appropriate indexes on the columns used in WHERE, JOIN, and ORDER BY clauses; missing or unsuitable indexes are one of the most common causes of slow queries.
24. What is a primary key?
A primary key is a combination of fields which uniquely specify a row. This is a special kind of
unique key, and it has implicit NOT NULL constraint. It means, Primary key values cannot be
NULL.
25. What is a unique key?
A unique key constraint uniquely identifies each record in the database and provides uniqueness for a column or set of columns. A primary key constraint automatically has a unique constraint defined on it and does not allow NULLs, whereas a unique key column can hold a NULL value. There can be many unique constraints defined per table, but only one primary key constraint per table.
26. What is an Index?
An index is a performance-tuning method that allows faster retrieval of records from a table. An index creates an entry for each value, so data can be retrieved faster.
27. What is the difference between the DELETE and TRUNCATE commands, and between the TRUNCATE and DROP statements?
The DELETE command removes rows from a table, and a WHERE clause can be used to delete a conditional set of rows; COMMIT and ROLLBACK can be performed after a DELETE statement. TRUNCATE removes all rows from the table, and the operation cannot be rolled back. The DROP command removes the table itself from the database, and that operation cannot be rolled back either.
ReactJS Questions
34. What are the differences between functional and class components?
Before the introduction of Hooks in React, functional components were called stateless components and lagged behind class components in terms of features. After the introduction of Hooks, functional components are equivalent to class components.
Although functional components are the new trend, the React team insists on keeping class components in React. Therefore, it is important to know how these two kinds of components differ.
35. What is the virtual DOM? How does react use the virtual DOM to render the UI?
As stated by the React team, the virtual DOM is a concept where a virtual representation of the real DOM is kept in memory and synced with the real DOM by a library such as ReactDOM.