D306 Study Guide

The document serves as a study guide covering various Azure solutions, including virtual machines, Azure App Service, Azure Functions, and storage options like Cosmos DB and Blob Storage. It outlines key concepts, deployment methods, and best practices for implementing secure and scalable cloud solutions, along with details on creating ARM templates and container images using Docker. Additionally, it emphasizes the importance of diagnostics logging and monitoring for troubleshooting applications in Azure.

Uploaded by

Eric Isberg
Copyright © All Rights Reserved

Study Guide

Section 1.1: Implement Solutions That Use Virtual Machines


Section 1.2: Create Azure App Service Web Apps
Section 1.3: Implement Azure Functions
Section 2.1: Develop Solutions that Use Cosmos DB Storage
Section 2.2: Develop Solutions That Use Blob Storage
Section 3.1: Implement User Authentication and Authorization
Section 3.2: Implement Secure Cloud Solutions
Section 4.1: Integrate Caching and Content Delivery within Solutions
Section 4.2: Instrument Solutions to Support Monitoring and Logging
Section 5.1: Develop an App Service Logic App
Section 5.2: Implement API Management
Section 5.3: Develop Event-Based Solutions
Section 5.4: Develop Message-Based Solutions
IaaS Model:
 Provides more granular control over the infrastructure supporting your
application.
 Requires ongoing maintenance post-deployment, necessitating budget
allocation for support and trained staff.
PaaS Model:
 Reduces infrastructure planning and deployment requirements by utilizing
managed services.
 Allows focus on code and interactions with other Azure services.
 Examples include Azure App Service and Azure Functions, which handle high
availability and fault tolerance.

1.1

Summary: Provision VMs in Azure

Key Concepts:
Provision VMs: Deploying a VM in Azure involves choosing an operating system and
configuring essential aspects like name, location, size, limits, and extensions. Supported
operating systems include Windows, Windows Server, and major Linux distributions.

Definitions:

 Name: Enter the name of the VM (up to 15 characters long).


 Location: Select the geographical region for your VM. The region affects latency for your
users, pricing, and data-residency requirements, so choosing the wrong region may have
adverse effects.
 Size: Designate resources (memory, processing power, NICs, storage capacity).
 Limits: Default quota limits per subscription (20 VMs per region, can be increased via
Azure support).
 Extensions: Automate tasks or configuration post-deployment (custom scripts,
deploy/manage configurations, collect diagnostic data).
 Related resources: Consider storage type, internet connectivity, and traffic allowed
to/from the VM.

Related Resources:

 Resource Group: Every VM must be in a resource group (create new or reuse existing).
 Storage Account: VM disks are .vhd files stored as page blobs (standard or premium).
 Virtual Network: VMs need to connect to a virtual network.
 Network Interface: VMs require a network interface for communication.

Supported OS for VMs:

 Windows: Windows and Windows Server


 Linux: Principal distributions

Deployment Methods:

1. Using the Azure portal


2. Using PowerShell
3. Using Azure CLI
4. Programmatically using REST API or C#

High Availability:

 Use a load balancer and an availability set to ensure high availability and fault tolerance.
VMs in an availability set are distributed across separate fault and update domains, so a
single hardware failure or maintenance event cannot take all of them down at once.

Exam Tip:
 Creating a VM is straightforward, but planning the deployment is crucial. Consider high
availability and scaling needs, and remember to create the availability set before the
VMs.

Summary: Configure VMs for Remote Access

To configure remote access to your Azure VMs, ensure a public IP is set up, and use Network
Security Groups (NSGs) to manage traffic. By default, Azure enables remote protocols (RDP for
Windows and SSH for Linux).

Key Concepts:

 Public IP: Needed for remote access; can be static (incurs cost) or dynamic.
 Network Security Group (NSG): Manages security rules to allow or deny traffic to the
VM.
 Security Rule: A rule within the NSG allowing traffic (e.g., TCP/3389 for RDP).

Supported OS for VMs:

 Windows
 Linux

Steps to Configure Remote Access:

1. Add Public IP, NSG, and Security Rule:


o Public IP: Dynamic to save costs.
o NSG: Manage security rules.
o Security Rule: Allow RDP traffic.

Verifying Remote Access:

1. Open the Azure portal.


2. Search for the VM (e.g., az204VMTesting).
3. Check Networking settings to ensure the Allow-RDP rule is present.
4. Use the Connect option to download the RDP file.
5. Open the RDP file and provide the configured password.

Exam Tip:

Avoid configuring remote access over public IPs for production VMs. Instead, deploy a virtual
private network and use the private IP for remote access. This enhances security by limiting
exposure to the internet.

Summary: Create ARM Templates


Azure Resource Manager (ARM): A service used for deployment and management of Azure
resources (JSON).

Key Concepts:

 Resource: Items you can manage in Azure.


 Resource Group: A container for holding resources, allowing for management and
access control.
 Resource Provider: Services managing resource lifecycles (e.g., Microsoft.Compute
for VMs).
 Resource Manager Template: A JSON file defining resources to deploy to a resource
group or subscription.

ARM Template Structure

 $schema: Specifies the JSON schema version (Resource group or subscription


deployment).
 contentVersion: Internal version number for the template.
 parameters: Optional values provided during deployment.
 variables: Optional values reused across the template.
 functions: Optional custom functions used in the template.
 resources: Required section listing all resources to deploy or update.
 outputs: Optional section defining values to return after deployment.

Basic ARM Template Example:

{
"$schema":
"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",

"contentVersion": "1.0.0.0",

"parameters": { },

"variables": { },

"functions": [ ],

"resources": [ ],

"outputs": { }
}

#!/bin/bash
#Azure CLI template deployment
az group create --name AZ204-ResourceGroup --location "West US"
az group deployment create \
--name AZ204DemoDeployment \
--resource-group AZ204-ResourceGroup \
--template-file az204-template.json \
--parameters @az204-parameters.json
The @ symbol in the az group deployment create command tells the CLI to read parameter
values from the specified file rather than treating the argument as an inline string. Note
that newer Azure CLI versions replace az group deployment create with az deployment
group create.

ARM Template Parameters:

 parameterName: Valid JavaScript identifier for the parameter name.


 defaultValue: Used if no value is provided during deployment; optional.
 type: Specifies the kind of value (string, securestring, int, bool, object, secureObject,
array).
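As a sketch of the elements above, a parameters section might look like the following (the parameter names and values here are hypothetical, not from the study guide):

```json
{
  "parameters": {
    "vmName": {
      "type": "string",
      "defaultValue": "az204-vm",
      "metadata": { "description": "Name of the virtual machine." }
    },
    "adminPassword": {
      "type": "securestring"
    }
  }
}
```

A securestring parameter is never echoed back in deployment logs, which is why secrets use it instead of plain string.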

Custom Functions Limitations:

 Cannot access template variables (can be passed as parameters).


 Cannot access template’s parameters directly.
 Custom function parameters cannot have default values.
 Cannot call other custom functions; only predefined functions.
 Cannot use the reference() predefined function.

dependsOn Element:

 Lists resource names required before deploying the resource.


 Use resourceId() for referencing to avoid ambiguity.

Parent-Child vs. Dependencies:

 Parent-child relationships do not ensure deployment order; use dependsOn for correct
order.
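For illustration, a resource entry that uses dependsOn with resourceId() might look like this (the resource names and API version are hypothetical):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2019-07-01",
  "name": "az204-vm",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces', 'az204-nic')]"
  ]
}
```

Using resourceId() rather than a bare name removes ambiguity when resources of different types share a name.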

Storing Templates:

 Store JSON files in a publicly accessible location.


 Use inline parameters during deployment or a Storage Account and SAS token to protect
sensitive information.

Summary: Create Container Images for Solutions by Using Docker

Containerization is key for reliable and quick software deployment, reducing resource
requirements compared to virtual machines. Containers package code and dependencies, using
shared OS libraries, making them lightweight. Docker is the most widely used container
technology.

Key Concepts:

 Container: A unit of software that packages code and dependencies and runs as an
isolated process directly in the host environment.
 Container image: A package containing everything needed to run an application.
 Dockerfile: File defining the image, including application requirements.
 Volumes: External mount points used to persist data across container reboots.

Steps to Create a Container Image:

1. Directory: Create a directory for the new image, containing the Dockerfile, code, and
dependencies.
2. Dockerfile: Define the image requirements in a Dockerfile.
3. Command Line: Open a command line to run Docker commands.
4. Build Image: Use docker build --tag=<tag_name>[:<version>] <dockerfile_dir>
to create the image. If you omit the version, Docker applies the default tag latest.
5. List Images: Verify the image was created with docker image ls.
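A minimal Dockerfile for the steps above might look like the following; the base image and file names are assumptions for illustration, not from the study guide:

```dockerfile
# Base image providing the ASP.NET Core runtime (hypothetical choice)
FROM mcr.microsoft.com/dotnet/aspnet:6.0
# Working directory inside the image
WORKDIR /app
# Copy the published application files into the image
COPY ./publish .
# Command the container runs on start
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

From the directory containing this file, docker build --tag=myapp:1.0 . would produce the image.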

Enterprise Deployment and Scaling: Scaling dynamically and automatically in an enterprise


environment is challenging. Orchestration solutions such as Docker Swarm, DC/OS, or
Kubernetes help manage this by automatically scaling and deploying containers.

Azure Services for Container Deployment:

 Azure Kubernetes Services (AKS)


 Service Fabric
 Azure Web Apps for Containers
 Azure Container Registry
 Azure Container Instances

Docker Compose for Complex Applications: For complex applications requiring multiple
containers, Docker Compose is used to define and run multiple containers. Each service in
Docker Compose has a one-to-one relationship with an image but can have multiple instances
(containers).

Docker Compose File:


 docker-compose.yaml: Contains definitions of relationships and requirements for
running your application.
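A minimal docker-compose.yaml illustrating a two-service application might look like this (service names, images, and limits are hypothetical):

```yaml
version: "3.8"
services:
  web:
    image: myregistry.azurecr.io/team/webapp:1.0
    ports:
      - "80:80"
    depends_on:
      - cache
  cache:
    image: redis:6
    deploy:
      resources:
        limits:
          memory: 256M
```

Each service maps to one image, but running docker-compose up --scale web=3 would start multiple containers from the web service definition.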

Important Points:

 Services have a one-to-one relationship with an image.


 Use Docker Compose to manage multiple containers and set resource limits.
 Modifications made to a running container do not persist after a reboot; use volumes to
persist data.

Exam Tips:

 Modifications to a running container do not persist after a reboot.


 Use volumes to save data across reboots.
 For multiple containers, use Docker Compose to define relationships and resource limits.

Publish an Image to the Azure Container Registry

The purpose of creating an image is to ensure your code is portable and independent from the
server executing it. For this, the image must be accessible to all servers. Storing your image in a
centralized service like Azure Container Registry (ACR) achieves this.

Azure Container Registry (ACR):

 Microsoft's Docker registry service based on Docker Registry 2.0.


 Allows private storage and distribution of images.
 Supports building images on the fly and automating builds based on code commits.

Tagging and Pushing an Image: Before uploading an image to ACR, you need to tag it using
the format <acr_name>.azurecr.io/[repository_name/]<image_name>[:version].

 acr_name: Name of your registry.
 repository_name: Optional name for a repository in your registry.
 image_name: Name of the image.
 version: Version of the image; defaults to latest if omitted.
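The tagging convention can be sketched with a small helper; this is a hypothetical illustration of the naming format, not part of any Azure SDK:

```python
def acr_image_ref(acr_name, image_name, repository_name=None, version="latest"):
    """Build a fully qualified ACR image reference from its parts."""
    repo = f"{repository_name}/{image_name}" if repository_name else image_name
    return f"{acr_name}.azurecr.io/{repo}:{version}"

print(acr_image_ref("myregistry", "webapp", repository_name="team", version="1.0"))
# myregistry.azurecr.io/team/webapp:1.0
```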

Procedure for Pushing an Image to ACR:

1. Log in to Azure:

az login

2. Log in to ACR:

az acr login --name <acr_name>

3. Tag the Image:

docker tag foobar <acr_name>.azurecr.io/<repository_name>/<image_name>

4. Push the Image:

docker push <acr_name>.azurecr.io/<repository_name>/<image_name>

After pushing, verify the image in the ACR repositories.

Exam Tip: A container registry is essential not only for storing images but also for automating
container deployment into Azure Container services. Continuous delivery services, like Azure
Pipelines, rely on container registries for deploying container images.

Run Containers Using Azure Container Instance

Once you have created your image and pushed it to your Azure Container Registry (ACR),
follow these steps to run the container using Azure Container Instance (ACI):

1. Create Images: Ensure all necessary images for your application are created.
2. Push to Registry: Upload the images to a container registry.
3. Deploy Application: Deploy the application from the registry to ACI.

To create and run a container in ACI from ACR using Admin account authentication:

1. Sign in to Azure Cloud Shell: Access via Azure Cloud Shell.


2. Select Bash: Choose Bash in the Shell Selector.
3. Open Online Editor: Click the curly brace icon next to the Shell Selector.

4. Save the Script: Click the ellipsis icon below user information, then click Save and
provide a name for the script.
5. Execute the Script: Run the script in Azure Cloud Shell:

sh <your_script_name>

Accessing the Container:

 Find your container in the Azure portal by its name.


 Access the application via the URL: <APP_DNS_NAME>.<region>.azurecontainer.io.

Exam Tip:

 Authentication Mechanisms: Options include individual login with Azure AD, admin
account, or service principal.
o Azure AD: Suitable for development and testing.
o Admin Account: Disabled by default, discouraged for production due to security
risks.
o Service Principal: Recommended for production to pull images securely.

Azure App Service Overview

Azure App Service is a Platform as a Service (PaaS) solution that allows you to develop web
applications, mobile app back-ends, or REST APIs without managing the underlying
infrastructure. It supports various programming languages (.NET, .NET Core, Java, Ruby,
Node.js, PHP, Python) and platforms (Linux, Windows). Key features include load balancing,
security, autoscaling, automated management, and integration with CI/CD tools like GitHub,
Docker Hub, and Azure DevOps.

Key Concepts

 App Service Plan: Manages the group of VMs that host your web application.
o Region: Location where the App Service plan is deployed.
o Number of Instances: Number of VMs in the App Service plan.
o Size of Instances: Size of the VMs.
o Operating System Platform: OS (Linux or Windows) for the VMs.
o Pricing Tier: Features and cost of the App Service plan.

Creating an Azure App Service Web App

1. Open Visual Studio 2019.


2. Create a New Project:
o Select C# and Web project types.
o Choose ASP.NET Core Web Application with .NET Core and ASP.NET Core
3.1.
o Uncheck Configure For HTTPS.
3. Publish the Project:
o Use the Publish tool in Visual Studio.
o Sign in with an Azure account that has sufficient privileges.
o Configure the App Service settings: App Name, Subscription, Resource Group,
Hosting Plan.
4. Deploy the Web Application:
o Visual Studio uploads the code to the App Service.
o Access the deployed app via
https://<your_app_service_name>.azurewebsites.net.

Pricing Tiers
 Free Tier (F1): Basic, shared resources, not available for Linux VMs.
 Standard and Premium Tiers: Better performance, custom domains, SSL, backups,
deployment slots.
 Isolated Tier: Dedicated VMs and virtual networks, maximum scale-out capabilities.

Security Integration

 Authentication and Authorization: Integrate with providers like Azure, Microsoft,


Google, Facebook, Twitter.
 VNet Integration: For Standard, Premium, or PremiumV2 tiers to access on-premises
resources.
 Hybrid Connections: Use Azure Service Bus Relay for network connections between
App Service and application endpoints.

Important Points

 Operating System: Cannot change OS for the App Service without recreating it.
 Continuous Integration/Deployment: Integrated with GitHub, Docker Hub, Azure
DevOps.
 Scaling: Manually or automatically scale the number of VMs.
 Deployment Slots: Use different slots for testing and production.

Summary

Azure App Service is a versatile PaaS solution supporting multiple languages and platforms,
providing robust infrastructure capabilities like load balancing and autoscaling. It integrates well
with CI/CD pipelines and offers various pricing tiers to match development and production
needs. Careful planning of App Service plans, resource groups, and pricing tiers ensures optimal
performance and cost-effectiveness.

Enable Diagnostics Logging

Troubleshooting and diagnosing the behavior of an application is fundamental in its lifecycle.


Azure App Service provides mechanisms for enabling diagnostics logging at different levels:

Web Server Diagnostics

 Detailed Error Logging: Logs detailed information for HTTP status codes 400 or
greater, storing HTML files in the instance's file system (up to 50 files).
 Failed Request Tracing: Logs detailed information about failed requests, including IIS
component traces and processing times.
 Web Server Logging: Logs HTTP transaction information using the W3C extended log
file format, with customizable retention policies (default space quota: 35 MB).

Application Diagnostics
 Send log messages directly from your code using the standard logging system of your
app's language. Different from Application Insights, which requires the Application
Insights SDK.

Deployment Diagnostics

 Automatically enabled to gather information related to application deployment, useful for


troubleshooting deployment failures.

Enabling Diagnostics Logging

 Levels of Error Log:
o Disabled: No errors are registered.
o Error: Registers Error and Critical categories.
o Warning: Registers Warning, Error, and Critical categories.
o Information: Registers Info, Warning, Error, and Critical log categories.
o Verbose: Registers all log categories (Trace, Debug, Info, Warning, Error, and Critical).
 Storage Location: File system (for debugging purposes, automatically disabled after 12
hours) or Blob Storage (with configurable retention period).
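The way these levels nest can be sketched in a few lines; this is an illustrative model of the category hierarchy, not App Service code:

```python
# Each setting registers its own category plus everything more severe.
LEVELS = {
    "Disabled": [],
    "Error": ["Critical", "Error"],
    "Warning": ["Critical", "Error", "Warning"],
    "Information": ["Critical", "Error", "Warning", "Info"],
    "Verbose": ["Critical", "Error", "Warning", "Info", "Debug", "Trace"],
}

def is_logged(setting, category):
    """Return True if a message of the given category is registered."""
    return category in LEVELS[setting]

print(is_logged("Warning", "Error"))  # True
print(is_logged("Warning", "Info"))   # False
```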

Folder Structure for Log Files

 /LogFiles/Application/: Application logging files.


 /LogFiles/W3SVC#########/: Failed request traces.
 /LogFiles/DetailedErrors/: Detailed error logs.
 /LogFiles/http/RawLogs/: Web server logs in W3C format.
 /LogFiles/Git: Deployment logs, also in D:\home\site\deployments.

Downloading Log Files

 Use FTP/S or Azure CLI:

az webapp log download --resource-group <Resource group name> --name
<App name>

Viewing Logs in Real-Time

 Log Streams: View log messages as they are saved, available via the Azure portal or
Azure CLI:

az webapp log tail --resource-group <Resource group name> --name <App
name>

Integration with Azure Monitor


 Diagnostics information can be sent to Azure Monitor (feature in preview).

Exam Tip

 Not all programming languages can write log information to Blob Storage. Blob Storage
is supported for .NET application logs, while Java, PHP, Node.js, or Python must use the
application log file system option.

By using these logging mechanisms, developers can ensure efficient monitoring and
troubleshooting of their applications deployed on Azure App Service.

Deploy Code to a Web App

Overview

Deploying code to Azure App Service can be done through various methods suitable for
continuous deployment or integration workflows.

Deployment Options

 ZIP or WAR Files: Package all files and use the Kudu service to deploy.
 FTP: Copy application files directly using the FTP/S endpoint.
 Cloud Synchronization: Sync code from OneDrive or Dropbox with the App Service
using Kudu.
 Continuous Deployment: Integrate with GitHub, BitBucket, or Azure Repos to deploy
updates.
 Local Git Repository: Configure App Service as a remote repository for local Git and
push code to Azure.
 ARM Template: Use Visual Studio and ARM templates for deployment.

Example: Deploying with Azure Pipelines

1. Open Azure Portal: Navigate to your App Service.


2. Deployment Center: Select Azure Repos and continue through setup.
3. Build Provider: Choose Azure Pipelines (Preview).
4. Configure: Select your Azure DevOps organization, project, repository, and branch.
5. Summary: Review and finish setup.

Exam Tip

 Continuous Deployment Authorization: Ensure your continuous deployment system is


authorized before performing any deployment.

Configure Web App Settings Including SSL, API, and Connection Strings

Configuration Categories
 Application Settings: Environment variables passed to your code, equivalent to
<appSettings> in Web.config or appsettings.json. Always encrypted at rest.
 Connection Strings: Configure database connection strings, equivalent to
<connectionString> in Web.config or appsettings.json.
 General Settings:
o Stack Settings: Configure the stack and version (e.g., .NET Core, .NET, Java,
PHP, Python).
o Platform Settings: 32- or 64-bit platform, IIS pipeline mode, FTP state, HTTP
version, web sockets, always on, ARR affinity, debugging, incoming client
certificates, default documents.
 Path Mappings:
o Windows Apps (Uncontainerized): Handler mappings, virtual applications, and
directories.
o Containerized Apps: Mount points attached to containers during execution (up to
five Azure files or blob mount points per app).

Accessing Settings from Code

 PHP Example:

$testing_var1 = getenv('APPSETTING_testing-var1');
$connection_string = getenv('SQLAZURECONNSTR_testing-connsql1');

 ASP.NET Example:

System.Configuration.ConfigurationManager.AppSettings["testing-var1"];
System.Configuration.ConfigurationManager.ConnectionStrings["testing-connsql1"];

Configuring SSL Settings

1. Azure Portal: Navigate to your Azure web app.


2. Custom Domain: Add and validate a custom domain.
3. TLS/SSL Settings: Add TLS/SSL binding with a valid certificate for your custom
domain.

Exam Tip

 Application Settings Overwrite: Remember that settings configured in the Application


Settings section overwrite values in <appSettings> or <connectionStrings> in
Web.config or appsettings.json.
Implement Autoscaling Rules, Including Scheduled Autoscaling, and Scaling by
Operational or System Metrics

Overview

Autoscaling in Azure allows you to dynamically assign more resources to your application as
needed, ensuring optimal performance without wasting resources. It addresses both vertical
(scaling up/down) and horizontal (scaling out/in) scaling needs.

Key Concepts

 Vertical Scaling: Increases computing power by adding memory, CPU resources, and
IOPS to the application, usually by moving to a larger VM. This requires stopping the
system during resizing.
 Horizontal Scaling: Increases application capacity by adding or removing instances of
the application, managed automatically by Azure. This does not require stopping the
system.

Autoscaling Rules

Autoscaling is typically configured through rules that determine when and how to scale:

 Time-based Rules: Scale based on a schedule (e.g., increasing resources during the first
week of each month).
 Metric-based Rules: Scale based on predefined metrics such as CPU usage, HTTP
queue length, or memory usage.
 Custom-based Rules: Scale based on custom metrics exposed via Application Insights.
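A metric-based rule boils down to a threshold comparison and a bounded change in instance count. The following is a simplified sketch of that decision, not the actual Azure autoscale engine:

```python
def evaluate(rule, metric_value, current_instances):
    """Apply one autoscale rule and return the new instance count."""
    if rule["direction"] == "out" and metric_value > rule["threshold"]:
        return min(current_instances + rule["change"], rule["max_instances"])
    if rule["direction"] == "in" and metric_value < rule["threshold"]:
        return max(current_instances - rule["change"], rule["min_instances"])
    return current_instances

# Mirrors the example rule in this section: CPU > 80, add 1, cap at 3 instances.
scale_out = {"direction": "out", "threshold": 80, "change": 1,
             "max_instances": 3, "min_instances": 1}
print(evaluate(scale_out, metric_value=85, current_instances=1))  # 2
print(evaluate(scale_out, metric_value=85, current_instances=3))  # 3
```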

Supported Azure Resource Types

 Azure Virtual Machines: Autoscaling through virtual machine scale sets.


 Azure Service Fabric: Autoscaling through node types supported by VM scale sets.
 Azure App Service: Built-in autoscaling capabilities for adding or removing instances.
 Azure Cloud Services: Built-in autoscaling capabilities for roles in the cloud service.

Example: Adding a Metric-based Autoscale Rule

1. Open Azure Portal: Go to the Azure portal.


2. Find Your App Service: Search for your Azure App Service.
3. Scale-Out: Go to the Scale-Out (App Service Plan) option.
4. Custom Autoscale: Click Custom Autoscale on the Configure tab.
5. Add a Rule: In the Default Auto Created Scale Condition, click Add A Rule.
6. Configure Criteria: Select CPU Percentage > 80.
7. Set Action: Increase the Instance count by 1.
8. Save Profile: Set Maximum Instance Limit to 3 and save.
The same rule can be configured from the Azure CLI. This is a sketch; verify the exact
metric names in the az monitor autoscale documentation:

# Example: metric-based scaling rule for an Azure App Service plan
az monitor autoscale create \
--resource-group <ResourceGroupName> \
--resource <AppServicePlanName> \
--resource-type Microsoft.Web/serverfarms \
--name az204-autoscale \
--min-count 1 --count 1 --max-count 3
az monitor autoscale rule create \
--resource-group <ResourceGroupName> \
--autoscale-name az204-autoscale \
--condition "CpuPercentage > 80 avg 5m" \
--scale out 1

Common Autoscale Patterns

 Scale based on CPU: Configure both scale-out and scale-in rules based on CPU usage.
 Scale differently on weekdays vs. weekends: Use different profiles for weekdays and
weekends.
 Scale during holidays: Add instances during high-demand periods like holidays.
 Scale based on custom metrics: Use custom metrics for different layers of your
application (e.g., front-end, back-end, API).

Exam Tips

 Create Opposite Rules: Always create scale-in rules when you create scale-out rules to
ensure efficient resource usage.
 Authorization: Ensure your continuous deployment system is authorized before
performing deployments.
 Custom Metrics: Utilize Application Insights to expose custom metrics for autoscaling.

1.3

Implement Azure Functions

Azure Functions allow you to run pieces of code that solve particular problems within an
application. These functions operate like classes or functions within your code, receiving input,
executing logic, and producing output. They are highly cost-efficient, especially with the
Consumption pricing tier, where you are charged only for the time your code is running. Azure
Functions can also run on an existing App Service Plan if you already have other app services
running.

This skill covers:

 Implementing input and output bindings for a function


 Implementing function triggers using data operations, timers, and webhooks
 Implementing Azure Durable Functions

Implement input and output bindings for a function


Bindings in Azure Functions connect your function to external resources without hard-coding
these connections. An Azure Function can have a mix of input and output bindings or no binding
at all. Triggers are the events that cause the function to start execution, whereas bindings pass
data to and from the function as parameters.

Example Scenario:

 Trigger: An Event Grid event for a new image in Blob Storage.


 Input Binding: Blob Storage for reading the image.
 Output Bindings: Cosmos DB for saving results and SignalR for notifications.

Key Points:

 type: Represents the binding type (e.g., blob, cosmosDB).


 direction: Indicates if the binding is for input (in) or output (out).
 name: Used to bind data in the function.
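These elements live in the function's function.json file. A partial sketch matching the scenario above, showing an Event Grid trigger and a Cosmos DB output binding (names and connection settings are hypothetical):

```json
{
  "bindings": [
    {
      "type": "eventGridTrigger",
      "direction": "in",
      "name": "eventGridEvent"
    },
    {
      "type": "cosmosDB",
      "direction": "out",
      "name": "outputDocument",
      "databaseName": "analysis",
      "collectionName": "results",
      "connectionStringSetting": "CosmosDBConnection"
    }
  ]
}
```

The name values are what the function code uses to receive the event and write the result document.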

Exam Tip: Remember to install necessary extensions using the func extensions install
command from the Azure Function CLI tools before using bindings or triggers.

Binding Expressions: Use curly braces to create dynamic paths or elements (e.g., {data.url}).

Implement Function Triggers by Using Data Operations, Timers,


and Webhooks

Summary: Azure Functions can be triggered based on various events such as data operations,
timers, and webhooks. These triggers initiate the function's execution and provide the necessary
data for the function to process.

Types of Triggers:

1. Data Operation Triggers: Activated by data changes such as creation,


update, or addition in systems like Cosmos DB, Event Grid, Event Hub, Blob
Storage, Queue Storage, and Service Bus.
2. Timer Triggers: Execute functions based on a defined schedule using CRON
expressions or TimeSpan expressions.
3. HTTP and Webhooks Triggers: Start functions in response to HTTP
requests or webhooks, allowing integration with external systems or APIs.
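For example, a timer trigger's function.json uses a six-field NCRONTAB expression ({second} {minute} {hour} {day} {month} {day-of-week}); the schedule below fires every five minutes (binding name is hypothetical):

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```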

Implement Azure Durable Functions

 Azure Functions: Stateless; runtime doesn't maintain state between executions if the host
process or VM is recycled or rebooted.
 Azure Durable Functions: Extension of Azure Functions providing stateful workflow
capabilities.
o Function Chaining: Allows calling functions in sequence while
maintaining state.
o Workflow Definition by Code: No need for JSON workflows or
external tools.
o Consistent Workflow State: Checkpoints save activity state during
waits.

 Advantages: Simplifies complex stateful coordination in serverless scenarios. Limited


language support (C#, F#, JavaScript).
 Function Types in Durable Functions:
o Activity Functions: Perform the actual work (e.g., sending emails,
saving documents).
o Orchestrator Functions: Define the order and logic of workflow
execution.
o Client Functions: Entry points that trigger workflows (via HTTP,
queue, or event triggers).

 Triggers and Bindings:


o Orchestration Trigger: Manages orchestration functions; single-
threaded.
o Activity Trigger: Used in activity functions; multithreaded.

Durable Functions Workflow Example:

 Setup Requirements: Azure subscription, Azure Storage Account, Cosmos


DB, Visual Studio Code, necessary extensions.
 Client Function: Initiates the workflow.
 Orchestrator Function: Calls activity functions in sequence.
 Activity Functions: Handle tasks like creating and saving orders.

Common Patterns:

 Chaining: Functions execute in a specific order, output of one is input to the


next.
 Fan Out/Fan In: Executes multiple functions in parallel, then aggregates
results.
 Async HTTP APIs: Manages state of long-running operations with external
clients.
 Monitor: Creates recurring tasks with flexible intervals.
 Human Interaction: Requires manual approval for steps in the workflow.
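The chaining pattern can be sketched in plain Python as a simplified model of the orchestrator/activity relationship; this is not the Durable Functions SDK, and the function names are hypothetical:

```python
def activity_create_order(item):
    """Activity function: performs the actual work of creating an order."""
    return {"order": item, "status": "created"}

def activity_save_order(order):
    """Activity function: persists the order created in the previous step."""
    order["status"] = "saved"
    return order

def orchestrator(item):
    """Orchestrator: defines the execution order; each yield is one activity
    call whose result feeds the next step, mirroring function chaining."""
    order = yield (activity_create_order, item)
    result = yield (activity_save_order, order)
    return result

def run(orchestration, data):
    """Minimal driver standing in for the Durable Functions runtime."""
    gen = orchestration(data)
    try:
        step = next(gen)
        while True:
            activity, arg = step
            step = gen.send(activity(arg))
    except StopIteration as stop:
        return stop.value

print(run(orchestrator, "book"))  # {'order': 'book', 'status': 'saved'}
```

The real runtime adds checkpointing at each yield, which is how the workflow state survives host recycles.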

Thought Experiment

Scenario: Developing an application to integrate multiple systems with reports uploaded to


Azure Storage. The application reads these reports and inserts the data into destination systems.

1. Approval Workflow:
o Service: Azure Durable Functions
o Explanation: Use Human Interaction pattern for validation before
inserting data. Start workflow with Azure Blob Storage triggers.
Durable Functions handle human confirmation.

2. Performance Issues:
o Solution: Deploy Azure Durable Functions on Azure App Service Plans.
o Method: Configure autoscale rules in the Standard pricing tier to
add/remove resources based on CPU consumption or specific days.
Study usage patterns to set autoscale rules.

Important Definitions:

 Azure Functions: Stateless serverless compute.


 Azure Durable Functions: Stateful workflows in serverless environments.
 Activity Functions: Perform tasks in a workflow.
 Orchestrator Functions: Manage workflow logic and order.
 Client Functions: Trigger workflows.
 Orchestration Trigger: Manages orchestration functions; single-threaded.
 Activity Trigger: Manages activity functions; multithreaded.

Exam Tip:

 Binding Mechanism: Use bindings to pass information between different


functions in the workflow.

Chapter Summary

 Azure Computing Services: Provides cloud-based virtual infrastructure and


hybrid architectures.
 Azure Resource Manager (ARM): Manages cloud resources using JSON-
based ARM templates.
 Container Image: A software package with code and dependencies for
running applications.
 Container Instance: A running instance of a container image.
 Registry: A centralized store for container images.
 Azure Container Registry: Managed registry based on Docker Registry 2.0.
 Container Services: Azure services for running containers, including
Managed Kubernetes, Container Instances, Batch, App Service, and Container
Service.
 Serverless Solutions: Services that focus on code without managing
infrastructure.
 Azure App Services: Base for serverless offerings, including web apps,
mobile back-end apps, REST APIs, and Azure Functions.
 App Service Plan: Provides resources and VMs for running App Services.
 Diagnostics Logging: Types include webserver logging, detailed error,
failed requests, application diagnostics, and deployment diagnostics.
 Scaling:
o Horizontal Scaling (Scale In/Out): Adding/removing application
instances.
o Vertical Scaling (Scale Up/Down): Adding/removing resources to
the same VM.
 Autoscale: Automatically adjusts resources based on predefined rules.
 Azure Functions: Evolves from WebJobs, using triggers and bindings for
function instances.
 Azure Durable Functions: Workflow-based functions with state
preservation.
 Orchestration Functions: Define the order of execution for workflow steps.
 Activity Functions: Contain the code for specific actions within a workflow.
 Client Functions: Create instances of orchestration functions.
 Azure Function Apps: Provide resources for running Azure Functions and
Durable Functions.

Thought Experiment

Scenario: Developing an application to integrate multiple systems, including a legacy system that generates reports uploaded to Azure Storage. The application reads these reports and inserts information into different systems.

1. Approval Workflow:
o Service Needed: Azure Durable Functions.
o Reason: Allows human interaction for approval before inserting data,
using Blob Storage triggers to start the workflow.
2. Performance Issues:
o Solution: Deploy Azure Durable Functions on Azure App Service Plans.
o Action: Configure Autoscale rules to add resources during peak times
based on CPU consumption or specific days.

Chapter 2: Develop for Azure Storage

Summary: This chapter discusses the essential aspects of designing and implementing storage
solutions using Microsoft Azure. It highlights the challenges in storing data persistently and
efficiently, and how Azure's various storage solutions can address these challenges. The chapter
focuses on Cosmos DB storage and Blob Storage, detailing their features, benefits, and
implementation strategies.

Key Points:

1. Cosmos DB Storage:
o Globally distributed, low-latency, highly responsive, always-online
database service.
o Scalable across the globe with a simple configuration.
o Multiple APIs for accessing data: SQL, Table, Cassandra, MongoDB, and
Gremlin.
o SDKs available for various languages such as .NET, Java, Node.js, and
Python.

2. Blob Storage:
o Detailed information not covered in this section.

SKILL 2.1: Develop Solutions that Use Cosmos DB Storage

Cosmos DB Features:

 Designed for scalability and high throughput.
 Supports various data structures (Key-Value, Column-Family, Document, Graph).
 Accessed via SQL, Cassandra, Table, Gremlin, and MongoDB APIs.

Key Considerations:

 API Selection: Choose the API based on data structure requirements:
o SQL API: For querying JSON objects with SQL syntax.
o Table API: Similar to Azure Table Storage with additional features.
o Cassandra API: For column-based data using the CQLv4 protocol.
o MongoDB API: For document-based data with MongoDB 3.2
compatibility.
o Gremlin API: For graph-based data structures with vertices and
edges.

Partitioning Schemes:

 Logical Partitions: Smaller data slices within a container, sharing the same
partition key.
 Physical Partitions: Groups of logical partitions managed by Azure,
containing replicas of data.
 Partition Key: Critical for performance; immutable once set. Should ensure
even distribution of data and avoid "hot" partitions.

Choosing Partition Keys:

 Should evenly distribute requests across partitions.
 Avoid keys with limited values to prevent "hot" partitions.
 Ensure keys align with workload requirements for both read and write
operations.
 Consider synthetic partition keys when no single property is suitable.
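A synthetic partition key is usually just a computed property: concatenate two existing properties, or add a bounded hash-derived suffix to spread a hot key across several logical partitions. A sketch of both techniques (the property names here are invented for illustration):

```python
import hashlib

def synthetic_key(tenant_id: str, order_date: str) -> str:
    """Combine two item properties into a single partition key value."""
    return f"{tenant_id}-{order_date}"

def suffixed_key(tenant_id: str, doc_id: str, buckets: int = 10) -> str:
    """Spread a 'hot' key across a fixed number of sub-partitions.
    The suffix is derived from the item id so it can be recomputed on reads."""
    suffix = int(hashlib.sha256(doc_id.encode()).hexdigest(), 16) % buckets
    return f"{tenant_id}-{suffix}"

print(synthetic_key("contoso", "2024-05-01"))   # contoso-2024-05-01
# The same item always maps to the same sub-partition:
assert suffixed_key("contoso", "order-42") == suffixed_key("contoso", "order-42")
```

The trade-off: point reads against a suffixed key must either know the suffix (recomputable here) or fan out across all buckets.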

Exam Tip:

 Remember that once a partition key is chosen, it cannot be changed. The correct selection of a partition key is crucial for achieving optimal performance and scalability.
Important Definitions:

 Cosmos DB: A globally distributed, multi-model database service designed for low latency and high availability.
 API: Application Programming Interface used for accessing Cosmos DB in
different data models.
 Partition Key: A unique identifier that determines how data is distributed
across partitions in Cosmos DB.
 Logical Partition: A subset of data in a container that shares the same
partition key value.
 Physical Partition: A group of logical partitions managed by Azure, storing
replicas of data.

Quick Summary

Azure Cosmos DB allows you to interact with data using various APIs. Once you select an API
for your Cosmos DB account, it cannot be changed. This example focuses on using the Cosmos
DB SQL API with .NET Core to create, update, and delete elements in a Cosmos DB account. It
details the setup of a .NET Core console application, adding necessary NuGet packages, and
configuring the application to interact with Cosmos DB. The SDK provides classes such as
CosmosClient, Database, and Container for managing these elements. Methods like
CreateDatabaseIfNotExistsAsync, CreateContainerIfNotExistsAsync, and
UpsertItemAsync are used for CRUD operations. Consistency levels (Strong, Bounded
Staleness, Session, Consistent Prefix, and Eventual) determine how data is replicated across
regions, impacting latency, availability, and data consistency.
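The upsert and delete semantics described above can be modeled with a local dictionary keyed by (partition key, id). This is only a simulation to make the behavior concrete; the real SDK classes are CosmosClient, Database, and Container.

```python
# In-memory model of a Cosmos DB container keyed by (partition key value, id).
# Illustrates UpsertItemAsync/DeleteItemAsync semantics only - not the SDK.

class FakeContainer:
    def __init__(self, partition_key_path: str):
        self._pk = partition_key_path.lstrip("/")
        self._items = {}

    def upsert_item(self, item: dict) -> dict:
        # Replace if (pk, id) already exists, otherwise create - like UpsertItemAsync.
        key = (item[self._pk], item["id"])
        self._items[key] = dict(item)
        return self._items[key]

    def delete_item(self, item_id: str, partition_key: str) -> None:
        # Deletes need both the id and the partition key to locate the item.
        del self._items[(partition_key, item_id)]

c = FakeContainer("/city")
c.upsert_item({"id": "1", "city": "Seville", "temp": 18})
c.upsert_item({"id": "1", "city": "Seville", "temp": 21})  # replaces, no duplicate
print(len(c._items))  # 1
```

Note that the item is addressed by id *and* partition key together, which is why CreateItemAsync and DeleteItemAsync take the partition key as a parameter.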

Key Terms and Definitions

1. Cosmos DB: A globally distributed, multi-model database service provided by Azure.
2. API: Application Programming Interface used to interact with Cosmos DB,
such as SQL, MongoDB, Cassandra, Gremlin, etc.
3. .NET Core: A cross-platform framework for building applications.
4. SQL API: An API for querying Cosmos DB using SQL syntax.
5. CosmosClient: A class in the Cosmos DB SDK for interacting with the
database account.
6. Database: A logical container for data in Cosmos DB.
7. Container: A collection within a Cosmos DB database where items are
stored.
8. PartitionKey: A key used to distribute data across multiple partitions for
scalability.
9. CreateDatabaseIfNotExistsAsync: A method to create a database if it
does not already exist.
10.CreateContainerIfNotExistsAsync: A method to create a container if it
does not already exist.
11.UpsertItemAsync: A method to update an existing item or create a new one
if it does not exist.
12.LINQ: Language-Integrated Query used for querying data in a .NET
environment.
13.CosmosException: An exception class for handling errors related to Cosmos
DB operations.
14.Consistency Level: Determines the trade-offs between consistency,
availability, and performance in a distributed database.
o Strong: Guarantees the most recently committed version is always
read.
o Bounded Staleness: Ensures consistency within a preconfigured lag.
o Session: Provides consistency within a session.
o Consistent Prefix: Ensures reads are in the same order as writes.
o Eventual: Provides no guarantee on the order of reads but ensures
eventual consistency.
15.MERN: A full-stack JavaScript stack using MongoDB, Express, React, and Node.js.
16.Write Concern: MongoDB setting determining the level of acknowledgment
requested from MongoDB for write operations.
17.Read Concern: MongoDB setting determining the level of isolation for read
operations.
18.Master Directive: MongoDB setting to specify the primary node for
read/write operations.

Other Useful Facts

 Once an API is chosen for a Cosmos DB account, it cannot be changed.
 Methods such as CreateItemAsync and DeleteItemAsync use partition keys to
identify the correct partitions.
 Cosmos DB supports querying with SQL or LINQ.
 Various consistency levels impact the performance, latency, and availability
of the data.
 Mapping between Cosmos DB consistency levels and those of Cassandra and
MongoDB ensures compatibility and expected behavior.

Exam Tip

The consistency level you choose affects latency and availability. Avoid the most extreme levels
unless necessary, as they can significantly impact your application. If unsure, the session
consistency level is generally the best-balanced option for most applications.

Quick Summary

In Cosmos DB, you create databases and containers to store and manage data. The API chosen
for a Cosmos DB account determines the storage and data access method. Databases group
containers and are similar to namespaces, while containers are the primary units of scalability for
throughput and storage. When creating containers, you can set properties like partition keys,
throughput modes, indexing policies, TTL, change feed policies, and unique keys. Some
properties can only be set during the container creation process, so planning is crucial.
Key Terms and Definitions

1. Cosmos DB: A globally distributed, multi-model database service provided by Azure.
2. API: Application Programming Interface used to interact with Cosmos DB.
Examples include SQL API, Cassandra API, MongoDB API, Gremlin API, and
Table API.
3. Database: A logical grouping of containers within Cosmos DB, akin to
namespaces.
4. Container: The unit of scalability for throughput and storage in Cosmos DB.
5. Partition Key: A key that determines how items in a container are
distributed across logical and physical partitions.
6. Dedicated Throughput: Resources are reserved for a specific container,
backed by SLAs.
7. Shared Throughput: Throughput is shared between containers in a
database, except those with dedicated throughput.
8. IndexingPolicy: Configures how items in a container are indexed.
9. TimeToLive (TTL): Automatically deletes items after a specified period.
10.ChangeFeedPolicy: Allows reading of changes made to items in a container.
11.UniqueKeyPolicy: Ensures that items have unique values for specified
properties within a partition.
12.Throughput: The measure of database or container performance in terms of
read and write operations per second.

Other Useful Facts

 Creating Containers: The process includes naming the database, configuring throughput, specifying the container ID and partition key, and optionally adding unique keys.
 Indexing: All item properties are indexed by default, but you can customize
this.
 TTL: Configurable at both the container and item levels to control data
expiration.
 Change Feed: Tracks changes to items, useful for triggering notifications or
API calls.
 Unique Keys: Ensure data uniqueness within a logical partition; defined
during container creation and cannot be changed later.
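Several of these properties live together in the container definition. A sketch of such a definition in the JSON shape used by the Cosmos DB REST API and SDKs (the container name, paths, and values are illustrative; defaultTtl is in seconds, here 30 days):

```json
{
  "id": "orders",
  "partitionKey": { "paths": [ "/customerId" ], "kind": "Hash" },
  "defaultTtl": 2592000,
  "indexingPolicy": {
    "indexingMode": "consistent",
    "includedPaths": [ { "path": "/*" } ],
    "excludedPaths": [ { "path": "/largePayload/*" } ]
  },
  "uniqueKeyPolicy": {
    "uniqueKeys": [ { "paths": [ "/email" ] } ]
  }
}
```

Of these, partitionKey and uniqueKeyPolicy are fixed at creation time; defaultTtl and indexingPolicy can be updated later.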

Exam Tip

Carefully plan the creation of new containers in Azure Cosmos DB. Some properties, such as
unique keys and partition keys, can only be set during the container creation process. Modifying
these properties later requires creating a new container and migrating data.
Moving Items in Blob Storage between Storage Accounts or
Containers

Summary: When managing Azure Blob Storage, you may need to move blobs between storage
accounts or containers. Tools available for these tasks include Azure Storage Explorer, AzCopy,
Python (using azure-storage-blob), and SSIS. These tools typically perform copy operations
followed by deleting the source blob or container.

Key Points:

 Tools for Moving Blobs:
o Azure Storage Explorer: Graphical tool for managing storage
operations.
o AzCopy: Command-line tool for bulk copy operations.
o Python: Use the azure-storage-blob package for programmatic
management.
o SSIS: SQL Server Integration Service for transferring data between on-
premises and Azure Storage.

 Process for Moving Blobs:

1. Copy: Transfer the blob to the destination.
2. Delete: Remove the source blob once the copy is successful.

 Example Using Azure Storage Explorer:

1. Open Azure Storage Explorer and log into your Azure subscription.
2. Navigate to the source storage account and container.
3. Select the blob to move, copy it, then navigate to the destination
container and paste it.
4. Confirm the copy completion, then delete the source blob.

 AzCopy Command Example:

o azcopy copy "<URL_Source_Item>?<Source_SASToken>" "<URL_Target_Container>?<Target_SASToken>"
(each SAS token is appended to its URL as the query string)

 Python and SSIS:
o Python: Utilize the azure-storage-blob package for blob management.
o SSIS: Use SSIS connectors for moving data to or from Azure Blob
Storage.

Definitions:

 Azure Storage Explorer: A graphical tool to manage Azure storage services.
 AzCopy: Command-line utility for transferring data to/from Azure Storage.
 Azure Storage Account: A service for storing data objects like blobs, files,
queues, and tables.
 Container: A grouping of blobs within a storage account.
 Blob: Binary large object, a storage format for storing unstructured data.
 SAS Token: Shared Access Signature, a URI that grants restricted access
rights to Azure Storage resources.

Exam Tip:

 You can move blobs and containers across different storage accounts,
regions, and subscriptions, provided you have sufficient access privileges for
both the source and destination accounts.

Set and Retrieve Properties and Metadata in Azure Storage

Summary: Azure Storage allows you to work with additional information assigned to your blobs
through system properties and user-defined metadata. System properties are automatically added
by the storage service and can be either modifiable or read-only. User-defined metadata consists
of key-value pairs added for your purposes and needs to be managed manually.

Key Points:

 System Properties: Auto-assigned by Azure Storage; some are modifiable, others are read-only. Correspond to certain HTTP headers.
 User-Defined Metadata: Custom key-value pairs added to storage
resources for your specific needs. Must be manually updated as needed.

Tools and Methods:

 SDKs: Use the appropriate SDK (e.g., .NET SDK) to set and retrieve properties
and metadata.
 Azure CLI: Use commands such as az storage blob metadata for metadata
operations.
 Azure Portal: View and edit properties and metadata through the Properties
and Metadata sections.

Definitions:

 System Properties: Information auto-added by Azure Storage services, includes HTTP headers.
 User-Defined Metadata: Custom key-value pairs for Azure Storage
resources, managed manually.
 Azure Storage Explorer: A graphical tool for managing Azure Storage.
 AzCopy: A command-line tool for transferring data between different sources
and Azure Storage.
Interact with Data Using the Appropriate SDK

Summary: Microsoft provides several SDKs for working with Azure Storage, supporting
various programming languages like .NET, Java, Python, JavaScript, Go, PHP, and Ruby. These
SDKs offer more control over the data operations compared to other tools. You can perform
operations such as moving blobs between containers and storage accounts programmatically
using these SDKs.

Key Points:

 SDKs Available: .NET, Java, Python, JavaScript (Node.js or browser), Go, PHP, Ruby.
 Blob Operations: Moving blobs involves copying the blob to the destination
and then deleting the original blob.
 NuGet Packages: Essential for .NET projects include Azure.Storage.Blobs,
Azure.Storage.Common, Microsoft.Extensions.Configuration,
Microsoft.Extensions.Configuration.Binder,
Microsoft.Extensions.Configuration.Json.
 BlobServiceClient: Used to create clients for storage accounts to manage
blobs.

Definitions:

 BlobServiceClient: A client for interacting with Blob storage.
 CopyFromUriAsync(): Method to copy a blob from a source to a destination.
 DeleteAsync(): Method to delete the original blob after copying.

Example Operations:

1. Copy Blob within the Same Storage Account:
o Create BlobServiceClient for the source account.
o Get references for source and destination containers.
o Copy the blob from the source to the destination container.
2. Move Blob between Different Storage Accounts:
o Create BlobServiceClient for both source and destination accounts.
o Get references for source and destination containers.
o Copy the blob from the source to the destination container and delete
the original blob.

Exam Tip: Remember, when moving blobs, you must perform a copy operation first and then
delete the source blob. There is no direct move method in the SDKs.
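The "copy, then delete" rule can be made concrete with a minimal model: two storage accounts as dictionaries and a move that only deletes the source after the copy is confirmed. This is a sketch of the pattern only; with the real azure-storage-blob SDK you would start a server-side copy and poll its status before deleting.

```python
# Minimal model of "move = copy, verify, delete": there is no move API in the
# Blob Storage SDKs, so a move is always these steps in this order.

def move_blob(source: dict, dest: dict, name: str) -> None:
    dest[name] = source[name]            # 1. copy the blob to the destination
    assert dest[name] == source[name]    # 2. confirm the copy completed
    del source[name]                     # 3. only then delete the original

account_a = {"report.csv": b"2024 sales data"}
account_b = {}
move_blob(account_a, account_b, "report.csv")
print(account_b, account_a)  # blob now exists only in the destination
```

Deleting before the copy is verified is the failure mode this ordering prevents: a failed copy would otherwise lose the data.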
Move Items in Blob Storage Between Storage Accounts or
Containers

Summary: Azure provides tools like Azure Storage Explorer, AzCopy, Python SDK, and SSIS
for moving blobs between storage accounts or containers. Moving a blob involves copying it to
the destination and then deleting the original.

Key Points:

 Tools: Azure Storage Explorer, AzCopy, Python SDK, SSIS.
 Move Process: Copy blob to destination, then delete original.

Definitions:

 Azure Storage Explorer: Graphical tool for managing Azure Storage operations.
 AzCopy: Command-line tool for bulk copy operations.
 Blob Storage: Service for storing large amounts of unstructured data.

Exam Tip: Remember that moving blobs involves copying and then deleting the original. Tools
like AzCopy are ideal for bulk operations and cross-account blob copying.

Quick Summary:

Azure Blob Storage provides different access tiers (Hot, Cool, and Archive) to optimize storage
costs and performance based on data access frequency. The Hot tier is for frequently accessed
data, Cool for less frequently accessed data, and Archive for rarely accessed data. You can
implement data archiving and retention policies using lifecycle management policies, which
automate moving data between tiers. SDKs allow for programmatically managing blobs and
changing access tiers.

Key Points:

 Access Tiers: Hot, Cool, Archive
 Lifecycle Management: Automates moving data between tiers based on
defined rules
 Blob Rehydration: Process of moving data from Archive to Hot/Cool
 Storage Tiering: Only available on General Purpose v2 (GPv2) Storage
Accounts
 SDK Usage: Allows for fine-grained control over blob operations
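A lifecycle management policy is a JSON document of rules. A sketch following the documented rule schema, tiering blobs to Cool after 30 days and Archive after 180 (the rule name and prefix filter are invented for illustration):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-reports",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "reports/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
```

The day thresholds here line up with the minimum retention expectations of the Cool (30 days) and Archive (180 days) tiers.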

Key Terms and Definitions:

 Hot Tier: For frequently accessed data.
 Cool Tier: For data that is less frequently accessed and stored for at least 30
days.
 Archive Tier: For rarely accessed data stored for at least 180 days.
 Blob Rehydration: The process of moving data from Archive to an online
tier, which can take up to 15 hours.
 Lifecycle Management Policy: A JSON document defining rules for moving
blobs between access tiers based on criteria.
 BlobServiceClient: Client for interacting with Blob storage.
 BlobLeaseClient: Object that allows acquiring, renewing, and releasing
leases on blobs.
 SetAccessTier(): Method to change the access tier of a blob.

Exam Tips:

 Access Tiers: Remember that only GPv2 Storage Accounts support access
tiers.
 Blob Movement: Moving blobs involves a copy operation followed by
deleting the original blob.
 Leases: Use leases to prevent simultaneous access conflicts on blobs. Infinite
leases must be released manually.
 Rehydration: Be aware of the rehydration process when moving data from
Archive to other tiers.
 Lifecycle Management: Policies are evaluated every 24 hours, and changes
may not be immediate.
 SDK Version: Be mindful of which SDK version you are using as they offer
different features.

Chapter Summary

 Cosmos DB: A premium storage service offering low-latency, globally distributed data access.
 PartitionKey: Defines the storage partition for an entity; choosing the right
PartitionKey is crucial for performance.
 APIs for Cosmos DB: Access using SQL, Table, Gremlin (Graph), MongoDB,
and Cassandra.
 Custom Indexes: Create custom indexes for efficient querying.
 Partition Key Selection: Avoid keys that create too many or too few logical
partitions; logical partitions have a 20 GB storage limit.
 Consistency Levels: Define data replication across regions; five levels are
strong, bounded staleness, session, consistent prefix, and eventual.
o Strong Consistency: Higher consistency, higher latency.
o Eventual Consistency: Lower latency, lower consistency.
 Azure Blob Storage: Move items between containers or accounts; three
access tiers (Hot, Cool, Archive).
 Access Tiers:
o Hot: Frequently accessed data.
o Cool: Less frequently accessed data, stored for at least 30 days.
o Archive: Rarely accessed data, stored for at least 180 days.
 Lifecycle Management Policies: Automate data movement between tiers.
Key Terms and Definitions

 Cosmos DB: Globally distributed, low-latency database service.
 PartitionKey: Defines the logical partition for storing data in Cosmos DB.
 Consistency Levels: Determines the replication and consistency of data
across regions.
 Azure Blob Storage: Cloud storage service for unstructured data.
 Access Tiers: Different levels (Hot, Cool, Archive) for managing storage
costs and performance.
 Lifecycle Management Policy: Rules for automating data movement based
on criteria.
 Blob Rehydration: Process of moving data from Archive to Hot/Cool, which
can take up to 15 hours.
 BlobLeaseClient: Manages leases to prevent simultaneous access issues.

Thought Experiment

1. Problem: Partition key causing "hot spots".
o Solution: Create a new container with a different partition key to
distribute items evenly across partitions. Use AzCopy to migrate data
to the new container.

2. Problem: Storing information for several years securely and cost-effectively.
o Solution: Upgrade to a Gen2 Storage Account. Use lifecycle
management policies to move data to the archive tier when not
accessed for a set period.

Thought Experiment Answers

1. Hot Spots in Partition Key:
o Issue: Hot spots due to an uneven distribution of items across
partitions.
o Solution: Change the partition key to distribute items evenly. Since
the partition key cannot be modified, create a new container with the
new key and migrate data using AzCopy.

2. Long-term Data Storage:
o Issue: Need to store data for several years securely and cost-effectively.
o Solution: Upgrade from Gen1 to Gen2 Storage Account. Implement
lifecycle management policies to automatically move data to the
archive tier after a period of inactivity, ensuring cost-effective storage.

Section Summary: Implement OAuth2 Authentication

Overview: Implementing OAuth2 authentication is essential for securing applications by managing user identities and access tokens. OAuth2 is a robust and widely-adopted framework that supports various flows for different types of applications, including web, mobile, and desktop applications.

Key Concepts:

1. OAuth2 Authentication:
o OAuth2 is a protocol for authorization that allows applications to
securely access resources on behalf of a user without sharing
credentials.
o Key components include the Resource Owner (user), Resource Server
(API or service), Client (application), and Authorization Server
(responsible for issuing tokens).

2. Authentication Flows:
o Authorization Code Flow: Suitable for server-side applications. It
involves redirecting the user to an authorization server to get an
authorization code, which is then exchanged for an access token.
o Implicit Flow: Designed for client-side applications (e.g., SPA). It
directly issues an access token without an intermediate authorization
code.
o Resource Owner Password Credentials Flow: Used when the
application has a high degree of trust, where the user provides their
credentials directly to the client application.
o Client Credentials Flow: Used for application-to-application
communication, where the client uses its own credentials to access
resources.

3. Azure AD Integration:
o Register applications in Azure AD to use OAuth2 for authentication.
o Define supported account types and configure redirect URIs.
o Manage client secrets and certificates for securing API access.
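In the authorization code flow, step one is redirecting the browser to the authorization endpoint. Building that request with the standard library is straightforward (the tenant, client ID, and redirect URI are placeholders; the endpoint shape follows the Microsoft identity platform v2.0 convention):

```python
from urllib.parse import urlencode

def build_authorize_url(tenant: str, client_id: str,
                        redirect_uri: str, scopes: list[str]) -> str:
    params = {
        "client_id": client_id,
        "response_type": "code",            # ask for an authorization code
        "redirect_uri": redirect_uri,       # must match the registered URI
        "response_mode": "query",
        "scope": " ".join(scopes),
        "state": "opaque-anti-csrf-value",  # validate this on the redirect back
    }
    return (f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
            + urlencode(params))

url = build_authorize_url("contoso.onmicrosoft.com",
                          "11111111-2222-3333-4444-555555555555",
                          "https://localhost/callback",
                          ["openid", "User.Read"])
print(url)
```

The authorization server replies to the redirect URI with `?code=...&state=...`; the server-side application then exchanges that code for an access token at the `/token` endpoint.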

Definitions:

 OAuth2: An open standard for access delegation commonly used for token-
based authentication and authorization on the internet.
 Authorization Code: A temporary code that the client will exchange for an
access token.
 Access Token: A token that the client uses to make authenticated requests
on behalf of the user.
 Refresh Token: A token used to obtain a new access token without requiring
the user to re-authenticate.
 JWT (JSON Web Token): A compact, URL-safe means of representing claims
to be transferred between two parties.

Key Points:
 OAuth2 decouples authentication from authorization, allowing applications to
access resources without exposing user credentials.
 Tokens have different lifespans, and refresh tokens help maintain user
sessions without re-authentication.
 Secure applications by integrating Azure Active Directory (Azure AD) for
OAuth2 authentication.
 Use Azure AD to register apps, configure redirect URIs, manage client secrets,
and define API permissions.

Important Code Snippet:

// Configuring OAuth2 in Startup.Auth.cs
app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    TokenEndpointPath = new PathString("/token"),
    Provider = new ApplicationOAuthProvider(PublicClientId),
    AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
    AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
    AllowInsecureHttp = true // Set to false in production
});
app.UseOAuthBearerTokens(OAuthOptions);

Exam Tips:

 Token Management: Understand the differences between access tokens and refresh tokens, their lifespans, and how they are used in OAuth2 flows.
 Security Best Practices: Always use HTTPS to secure token exchanges. Do
not expose client secrets in client-side applications.
 Azure AD: Be familiar with registering applications, configuring permissions,
and managing client secrets in Azure AD.
 Flow Selection: Choose the appropriate OAuth2 flow based on the type of
application (e.g., Authorization Code Flow for server-side apps, Implicit Flow
for SPAs).

Exam Tip Box: When you are working with OAuth2 authentication, remember that you don't need to store username and password information in your system. You can delegate that task to specialized authentication servers. Once the user has been authenticated successfully, the authentication server sends an access token that you can use to confirm the identity of the client. This access token needs to be refreshed once it expires; OAuth2 can use a refresh token to request a new access token without asking the user for their credentials again.
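The refresh just described is itself a POST to the token endpoint. A sketch of the request body (the client ID and token values are placeholders; the field names follow the OAuth2 specification):

```python
from urllib.parse import urlencode

def refresh_request_body(client_id: str, refresh_token: str,
                         scopes: list[str]) -> str:
    # grant_type=refresh_token asks the authorization server for a new
    # access token without prompting the user for credentials again.
    return urlencode({
        "client_id": client_id,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "scope": " ".join(scopes),
    })

body = refresh_request_body("my-client-id", "placeholder-refresh-token",
                            ["User.Read"])
print(body)
```

The response typically contains a new access token and, often, a rolled-over refresh token that should replace the stored one.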

Key Takeaway: Implementing OAuth2 authentication using Azure AD provides a secure, scalable, and efficient way to manage user identities and secure access to resources across various types of applications. Understanding the different OAuth2 flows and how to integrate them with Azure AD is crucial for developing secure cloud solutions.
Section Summary: Create and Implement Shared Access
Signatures (SAS)

Overview: Shared Access Signatures (SAS) are a secure way to grant limited access to resources
in Azure Storage without exposing your account key. SAS tokens are useful for sharing data with
clients or services while controlling the permissions, duration, and access restrictions.

Key Concepts:

1. Types of SAS:
o Service SAS: Provides access to a specific service (Blob, Queue, Table,
or File) within a storage account.
o Account SAS: Grants access to resources within a storage account,
with more extensive permissions than a service SAS.
o User Delegation SAS: Uses Azure AD credentials to delegate access
to Blob Storage or Data Lake Storage Gen2.

2. Components of SAS:
o Permissions: Define the allowed operations, such as read, write,
delete, and list.
o Expiry Time: Sets the validity period for the SAS token.
o Resource: Specifies the resource type (e.g., blob, container, queue).
o IP Address or IP Range: Restricts access to specific IP addresses or
ranges.
o Protocol: Limits the allowed protocol (HTTPS or HTTP).

3. Creating SAS Tokens:
o Service SAS: Generated at the service level with specific permissions
and resource constraints.
o Account SAS: Generated at the account level with broader
permissions and resource constraints.
o User Delegation SAS: Created using Azure AD credentials and is
more secure as it doesn't require storage account keys.

Definitions:

 SAS (Shared Access Signature): A token that provides delegated access to resources in Azure Storage.
 Service SAS: A SAS token limited to accessing specific services within a
storage account.
 Account SAS: A SAS token that provides access to multiple services within a
storage account.
 User Delegation SAS: A SAS token that uses Azure AD credentials to grant
access to Blob Storage or Data Lake Storage Gen2.
 Stored Access Policy: A container-level policy that defines the SAS
parameters, making it easier to manage and revoke SAS tokens.

Key Points:
 SAS tokens allow you to grant limited and controlled access to your Azure
Storage resources.
 You can specify permissions, expiry time, resource type, IP address
restrictions, and protocol limitations in a SAS token.
 Using SAS tokens is a secure way to share data without exposing your
storage account keys.
 User Delegation SAS provides enhanced security by leveraging Azure AD
credentials.
 Stored Access Policies simplify SAS management by centralizing the SAS
settings at the container level.
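Conceptually, every SAS variant is an HMAC-SHA256 signature over a "string-to-sign" assembled from the token's fields, computed with the account key, and carried in the `sig` query parameter. A simplified standard-library sketch (the real string-to-sign contains many more newline-separated fields in an order fixed by the storage service version; this is illustrative only):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def sign_sas(account_key_b64: str, permissions: str,
             expiry: str, resource: str) -> str:
    # Simplified: a real service SAS signs many more fields (start time,
    # canonicalized resource, IP range, protocol, service version, ...).
    string_to_sign = "\n".join([permissions, expiry, resource])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    # sp/se/sr/sig mirror the real SAS query parameter names.
    return urlencode({"sp": permissions, "se": expiry, "sr": resource, "sig": sig})

demo_key = base64.b64encode(b"not-a-real-account-key").decode()
token = sign_sas(demo_key, "rw", "2025-01-01T00:00:00Z", "b")
print(token)
```

Because the signature covers the permissions and expiry, tampering with either field invalidates the token; the service recomputes the HMAC on every request.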

Important Code Snippet:

// Generate a service SAS for a blob
BlobSasBuilder sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "mycontainer",
    BlobName = "myblob.txt",
    Resource = "b",
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read | BlobSasPermissions.Write);
string sasToken = sasBuilder.ToSasQueryParameters(
    new StorageSharedKeyCredential(accountName, accountKey)).ToString();
Console.WriteLine($"SAS Token: {sasToken}");

Exam Tips:

 User Delegation SAS: Available only for Azure Blob Storage and Azure Data
Lake Storage Gen2. It cannot use Stored Access Policies.
 Stored Access Policies: Simplify the management of SAS tokens but are
not applicable with User Delegation SAS.
 Permissions and Restrictions: Understand how to specify permissions,
expiry times, IP restrictions, and protocols when creating SAS tokens.
 Security Best Practices: Always prefer User Delegation SAS for enhanced
security and avoid using account keys directly.

Exam Tip Box: If you plan to work with user delegation SAS, consider that this type of SAS is available only for Azure Blob Storage and Azure Data Lake Storage Gen2. You also cannot use Stored Access Policies when working with user delegation SAS.

Key Takeaway: Implementing Shared Access Signatures (SAS) is essential for securely
granting limited access to Azure Storage resources. Understanding the different types of SAS,
their components, and how to create and manage them is crucial for developing secure and
efficient cloud solutions.
Section Summary: Register Apps and Use Azure Active Directory
to Authenticate Users

Overview: This section covers how to register applications in Azure Active Directory (Azure
AD) and use it to authenticate users. Registering apps in Azure AD allows you to manage
identities and access to resources, ensuring secure and streamlined authentication processes.

Key Concepts:

1. App Registration: The process of integrating an application with Azure AD to enable authentication and authorization.
2. Authentication: The process of verifying the identity of a user or
application.
3. Authorization: The process of determining what an authenticated user or
application is allowed to do.

Definitions:

 App Registration: Registering an application in Azure AD to integrate it for identity and access management.
 Azure AD: A cloud-based identity and access management service.
 Multitenant App: An application that can be accessed by users from any
Azure AD organization.

Key Points:

 Steps to Register an App in Azure AD:

1. Sign in to the Azure portal.
2. Navigate to Azure Active Directory.
3. Select "App registrations" and click "New registration".
4. Fill in the required fields such as the name of the application and
supported account types.
5. Specify the redirect URI if needed.
6. Click "Register" to complete the registration.

 Supported Account Types:

o Single tenant: Accessible only within the Azure AD tenant where it was
registered.
o Multitenant: Accessible by users from any Azure AD tenant.
o Personal Microsoft accounts: Accessible by personal Microsoft account
users.

 Authentication Methods:

o OAuth2: Standard protocol for authorization.


o OpenID Connect: Extends OAuth2 for user authentication.
 Redirect URI: The endpoint where Azure AD sends the authentication response.
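To see how the registration values fit together at sign-in time, the sketch below (Python, for a self-contained example) builds the OAuth2/OpenID Connect authorization URL that a registered app redirects users to. The client ID and redirect URI are placeholder values; the endpoint shape is the Microsoft identity platform v2.0 authorize endpoint, where the tenant segment selects the audience.

```python
from urllib.parse import quote

# Sketch of the authorization request a registered app redirects users to.
# The tenant segment selects the audience: a tenant ID (single tenant),
# "organizations", or "common" (multitenant plus personal Microsoft accounts).
# The client ID and redirect URI below are placeholders.
def build_authorize_url(tenant: str, client_id: str, redirect_uri: str) -> str:
    return ("https://login.microsoftonline.com/" + tenant + "/oauth2/v2.0/authorize"
            + "?client_id=" + quote(client_id, safe="")
            + "&response_type=code"                          # authorization-code flow
            + "&redirect_uri=" + quote(redirect_uri, safe="")  # must match a registered URI
            + "&scope=" + quote("openid profile", safe="")
            + "&response_mode=query")

print(build_authorize_url("common",
                          "00000000-0000-0000-0000-000000000000",  # application (client) ID
                          "https://localhost:5001/signin-oidc"))
```

Note how the redirect URI must be percent-encoded and must exactly match one of the URIs configured during app registration, which is why misconfigured redirect URIs are a common cause of sign-in failures.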

Exam Tips:

 Target User Consideration: Determine whether your application will be
single tenant or multitenant based on your user base.
 App Registration Location: Register and manage multitenant apps within
your own Azure AD tenant.
 Redirect URI Importance: Ensure correct configuration of the redirect URI
to handle authentication responses.

Exam Tip Box: When you are registering a new application in your Azure Active Directory
tenant, you need to consider who your target users will be. If you need any user from any
Azure Active Directory organization to be able to sign in to your application, you need to
configure a multitenant app. In those multitenant scenarios, app registration and management
are always performed in your tenant, not in any external tenant.

Key Takeaway: Registering applications in Azure AD is crucial for enabling secure


authentication and authorization. Understanding the differences between single tenant and
multitenant applications, and how to configure them appropriately, is essential for integrating
identity and access management in your solutions.

Section Summary: Control Access to Resources by Using Role-Based Access Control (RBAC)

Overview: This section explains how to manage access to Azure resources using Role-Based
Access Control (RBAC). RBAC allows precise access management by assigning roles to users,
groups, or service principals, ensuring they have the necessary permissions to perform their
tasks.

Key Concepts:

1. RBAC (Role-Based Access Control): Regulates access to resources based
on user roles.
2. Security Principal: The entity requesting permission to perform actions.
Types include:
o User: An individual with a profile in Azure Active Directory.
o Group: A set of users.
o Service Principal: An application represented inside the tenant.
o Managed Identity: Cloud applications managed by Azure needing
resource access.
3. Permission: The action allowed on a resource, like listing container contents
or requesting a delegation key.
4. Role Definition: A collection of permissions assigned to a security principal.
o Owner: Full access to all resources.
o Contributor: Modify access to all resources, but cannot grant roles.
o Reader: Read access to all resources.
o User Access Administrator: Manages user access to resources.
5. Scope: The level at which a role is assigned, organized hierarchically from
management group to resource.
6. Role Assignment: The connection between a security principal, a role, and a
scope.

Definitions:

 Security Principal: Entity requesting permissions.


o User: Individual with an Azure AD profile.
o Group: Collection of users.
o Service Principal: Application representation in a tenant.
o Managed Identity: Managed identity for cloud applications.
 Permission: Allowed action on a resource.
 Role Definition: Collection of permissions for a role.
o Owner: Full access.
o Contributor: Modify access.
o Reader: Read access.
o User Access Administrator: Manages user access.
 Scope: The level where a role is assigned (management group, subscription,
resource group, resource).
 Role Assignment: Junction connecting security principal, role, and scope.

Key Points:

 Assigning Roles:
o Navigate to the desired resource in the Azure portal.
o Select "Access control (IAM)".
o Click "Add" and then "Add role assignment".
o Choose the role and select the user, group, or service principal to
assign the role to.
 Creating Custom Roles:
o Identify specific actions required for the role.
o Define the role using JSON format.
o Assign the role to users, groups, or service principals.
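A custom role is defined in JSON, as the steps above describe. The example below is illustrative (the role name, description, and subscription GUID are placeholders) in the format accepted by `az role definition create --role-definition @role.json`. Note the split between Actions (managing the resource) and DataActions (reading the data in it), which mirrors the resource-versus-data distinction called out in this section's exam tip.

```json
{
  "Name": "Blob Reader Plus",
  "IsCustom": true,
  "Description": "Read storage account properties and blob data.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/read"
  ],
  "NotActions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```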

Important Code Snippet: Assigning a role using Azure CLI:

bash
az role assignment create --assignee <userPrincipalName> --role <roleName> --scope <resourceScope>

Exam Tips:

 Review Permissions: Always review the permissions granted by a role
before assigning it to ensure it aligns with the required access level.
 Resource vs. Data Access: Understand that granting access to manage a
resource does not necessarily grant access to the data within that resource.
For example, the "Storage Account Contributor" role allows management of
the storage account but does not provide access to the data stored in the
account.

Exam Tip Box: When you are assigning specific service roles, carefully review the permissions
granted by the role. In general, granting access to a resource doesn’t grant access to the data
managed by that resource. For example, the Storage Account Contributor grants access for
managing Storage Accounts but doesn’t grant access to the data itself.

Section Summary: Manage Keys, Secrets, and Certificates by Using the Key Vault API

Overview: This section discusses how to manage keys, secrets, and certificates using the Azure
Key Vault API. Azure Key Vault is a cloud service for securely storing and accessing sensitive
information like passwords, connection strings, and cryptographic keys. The section covers how
to perform various operations on these items using the Key Vault API.

Definitions:

 Key Vault: A service for securely storing and accessing secrets, keys, and
certificates.
 Secret: Sensitive data such as passwords and API keys stored securely.
 Key: Cryptographic keys used for encryption and decryption.
 Certificate: Digital certificates used for secure communication.
 Access Policies: Policies that define permissions for accessing items in the
Key Vault.
 Security Principal: An entity that can request access to resources. This
includes users, groups, and applications.
 Least Privilege Principle: Security principle that users should be granted
the minimum levels of access – or permissions – needed to perform their job
functions.

Key Points:

 Storing and Retrieving Secrets:


o Use the SecretClient class to manage secrets in the Key Vault.
o Methods include SetSecret, GetSecret, UpdateSecret, and DeleteSecret.

 Managing Keys:
o Use the KeyClient class to manage keys.
o Methods include CreateKey, GetKey, UpdateKey, and DeleteKey.

 Handling Certificates:
o Use the CertificateClient class to manage certificates.
o Methods include CreateCertificate, GetCertificate,
UpdateCertificate, and DeleteCertificate.
 Access Policies:
o Define who can access the Key Vault and what operations they can
perform.
o Follow the principle of least privilege to grant minimal necessary
permissions.

Important Code Snippets: Creating and retrieving a secret using the .NET SDK:

csharp
var client = new SecretClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
client.SetSecret("SecretName", "SecretValue");
KeyVaultSecret secret = client.GetSecret("SecretName");
Console.WriteLine(secret.Value);

Creating and retrieving a key using the .NET SDK:

csharp
var client = new KeyClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
client.CreateKey("KeyName", KeyType.Rsa);
KeyVaultKey key = client.GetKey("KeyName");
Console.WriteLine(key.Key.ToString());

Creating and retrieving a certificate using the .NET SDK:

csharp
var client = new CertificateClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
// StartCreateCertificate requires a policy; CertificatePolicy.Default uses the
// self-signed defaults.
client.StartCreateCertificate("CertificateName", CertificatePolicy.Default);
KeyVaultCertificate certificate = client.GetCertificate("CertificateName");
Console.WriteLine(certificate.Properties.Version);

Exam Tips:

 Sensitive Information: Store essential information like passwords,
connection strings, and private keys in Azure Key Vault.
 Access Policies: Carefully configure access policies to ensure the least
privilege is granted to security principals.
 Key Management: Understand how to create, retrieve, update, and delete
keys, secrets, and certificates using the Key Vault API.

Exam Tip Box: The kind of information that you usually store in an Azure Key Vault is
essential information that needs to be kept secret, like passwords, connection strings, private
keys, and things like that. When configuring access to your Key Vault, carefully review the
access level you grant to the security principal. As a best practice, you should always apply the
principle of least privilege. You grant access to the different levels in a Key Vault by creating
Access Policies.
Section Summary: Implement Managed Identities for Azure
Resources

Overview: This section discusses the implementation of Managed Identities for Azure resources.
Managed Identities provide Azure services with an automatically managed identity in Azure
Active Directory (Azure AD) that can be used to authenticate to any service that supports Azure
AD authentication, without requiring credentials in your code.

Definitions:

 Managed Identity: An identity in Azure AD automatically managed by
Azure. It can be used to authenticate to any service that supports Azure AD
authentication.
 System-assigned Managed Identity: Tied to the lifecycle of a specific
service instance. It is automatically deleted when the service instance is
deleted.
 User-assigned Managed Identity: Independent of the service instance
lifecycle and can be assigned to multiple service instances.
 Azure AD (Azure Active Directory): A cloud-based identity and access
management service.

Key Points:

 Types of Managed Identities:


o System-assigned Managed Identity: Created by Azure when a
service instance is created and is deleted when the service instance is
deleted.
o User-assigned Managed Identity: Created as a standalone Azure
resource and can be assigned to one or more service instances.

 Using Managed Identities:


o Simplifies access to other Azure services by eliminating the need to
manage credentials.
o Can be used with services such as Azure Key Vault, Azure SQL
Database, and Azure Storage.

 Configuring Managed Identities:


o Enable managed identities for your Azure resources using the Azure
portal, Azure CLI, or ARM templates.
o Assign roles to the managed identity to grant access to resources.
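Under the hood, code running on an Azure VM obtains managed-identity tokens from the local Instance Metadata Service (IMDS) endpoint, so no credentials appear in code. The sketch below (Python, for a self-contained example) only constructs the request URI; actually issuing the request (with the header `Metadata: true`) works only from inside Azure. The Key Vault resource URI is one example audience.

```python
from urllib.parse import quote

# Sketch: on an Azure VM, a managed identity token is requested from the
# local Instance Metadata Service (IMDS). Only the request URI is built here;
# calling it requires running on Azure and sending the header "Metadata: true".
def build_imds_token_uri(resource: str) -> str:
    return ("http://169.254.169.254/metadata/identity/oauth2/token"
            "?api-version=2018-02-01"
            "&resource=" + quote(resource, safe=""))

# Example audience: Azure Key Vault.
print(build_imds_token_uri("https://vault.azure.net"))
```

In practice the Azure SDKs (for example, `DefaultAzureCredential` in the Key Vault snippets earlier in this guide) perform this exchange for you, which is why the same code works locally with developer credentials and in Azure with a managed identity.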

Important Code Snippets: Assigning a system-assigned managed identity to an Azure Virtual
Machine using Azure CLI:

bash
az vm create --resource-group myResourceGroup --name myVM --image UbuntuLTS --assign-identity
Assigning a user-assigned managed identity to an Azure Virtual Machine using Azure CLI:

bash
az identity create --resource-group myResourceGroup --name myIdentity
az vm identity assign --resource-group myResourceGroup --name myVM --identities myIdentity

Exam Tips:

 System-assigned vs. User-assigned Managed Identities: Understand
the differences between system-assigned and user-assigned managed
identities, particularly their lifecycle and assignment scope.
 Lifecycle Management: Remember that system-assigned identities are tied
to the service instance lifecycle, while user-assigned identities are
independent and reusable across multiple services.
 Role Assignments: Ensure you know how to assign appropriate roles to
managed identities to allow access to required Azure resources.

Exam Tip Box: You can configure two different types of managed identities: system- and user-
assigned. System-assigned managed identities are tied to the service instance. If you delete the
service instance, the system-assigned managed identity is automatically deleted as well. You can
assign the same user-assigned managed identities to several service instances.

Condensed Chapter Summary

 Authentication: Verifying a user's identity.


 Form-based authentication: Stores user passwords and requires HTTPS for
security.
 Token-based authentication: Delegates authorization to third-party
providers; enables social logins and multifactor authentication.
 OAuth actors: Client, resource server, resource owner, authentication
server.
 OAuth tokens: Access token (grants resource access), authorization code
(grants right to request access token), refresh token (requests new access
token without user credentials), JSON web token (JWT).
 Shared Access Signatures (SAS): Authenticates access to Azure Storage
without sharing account keys; types include user delegation, account, and
service SAS.
 SAS tokens: Signed tokens that provide fine-grained access control.
 Azure Active Directory (AAD) app registration: Required for
authenticating users; supports organizational and Microsoft accounts; needs
return URL and either a secret or certificate for API access.
 Role-Based Access Control (RBAC): Fine-grained resource access control.
o Security principal: Users, groups, service principals, managed
identities.
o Permission: Actions allowed on resources.
o Role definition: Collection of permissions.
o Scope: Levels at which roles are assigned (management group,
subscription, resource group, resource).
o Role assignment: Links a security principal, role, and scope.
 Azure App Configuration: Centralizes app configuration using key-value
pairs; values are encrypted.
 Azure Key Vault: Stores keys, secrets, and certificates securely; supports
managed identities for authentication.
 Managed identities: System-assigned (tied to service instance) and user-
assigned (independent, reusable).
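The JSON web token (JWT) listed among the OAuth tokens above is three base64url-encoded segments, header.payload.signature. The sketch below (Python, for a self-contained example) decodes a payload without validating anything; the token it builds is fabricated for illustration and is not a real Azure AD token.

```python
import base64
import json

# Sketch: a JWT is three base64url segments, header.payload.signature.
# Reading the payload (claims) needs no key; verifying the signature does.
def decode_jwt_payload(jwt: str) -> dict:
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token for illustration only; not a real Azure AD token.
def b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

token = b64url(b'{"alg":"none"}') + "." + b64url(b'{"sub":"demo"}') + "."
print(decode_jwt_payload(token))  # prints {'sub': 'demo'}
```

Because the claims are only encoded, not encrypted, never put secrets in a JWT; trust the claims only after validating the signature segment against the issuer's keys.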

Thought Experiment Questions and Answers

1. Access Application with Office 365 Credentials:


o Solution: Use OAuth authentication with Azure Active Directory (AAD).
o Explanation: Register the app in your AAD tenant and create a client
secret to access Microsoft Graph API. AAD syncs with the company’s
AD domain, allowing employees to use the same Office 365
credentials.

2. Secure Access to Azure Services without Credentials:


o Solution: Use Managed Service Identity (MSI) authentication.
o Explanation: Enable a system-assigned or user-assigned managed
identity on Azure App Service. MSI allows services to authenticate
without passwords, supported by Azure Key Vault and Azure SQL
Databases.

3. Store Application Configuration Securely:


o Solution: Create an Azure App Configuration store.
o Explanation: Provides centralized, encrypted storage for app settings.
Use Key Vault references for sensitive information to ensure higher
security levels.

Exam Tips

 OAuth2 Authentication: Don’t store usernames/passwords; use access
tokens and refresh tokens for secure authentication.
 SAS Tokens: User delegation SAS is limited to Azure Blob Storage and Data
Lake Storage Gen2; can't use Stored Access Policies with it.
 AAD App Registration: Multitenant apps allow users from any Azure AD
organization to log in, managed in your tenant.
 RBAC: Understand roles and permissions; resource access doesn't imply data
access.
 App Configuration Keys: Max length is 10,000 characters; keys are case-
sensitive.
 Key Vault Access: Apply least privilege principle when configuring access;
essential for storing secrets like passwords and private keys.
 Managed Identities: Know the differences between system-assigned and
user-assigned identities, their lifecycle, and usage scenarios.
Section Summary: Develop Code to Implement CDNs in Solutions
Overview:

This section discusses how to implement Content Delivery Networks (CDNs) in your solutions
to improve the performance of web applications by caching static content and serving it from
locations closer to the user.

Definitions:

 Content Delivery Network (CDN): A system of distributed servers that
deliver web content based on the geographic location of the user, the origin
of the content, and the content delivery server.
 Dynamic Site Acceleration (DSA): A feature of some CDNs that optimizes
the delivery of dynamic content by reducing latency but is not equivalent to a
caching system.
 Caching: The process of storing copies of files in a cache, or temporary
storage location, so they can be accessed more quickly.
 Static Content: Content that doesn't change frequently and can be cached,
such as images, CSS, and JavaScript files.
 Endpoint: A specific URL where content is cached and delivered to users via
the CDN.
 Origin Server: The original location where the content is stored before being
distributed to the CDN.
 Custom DNS Domain: A custom domain name that can be assigned to a
CDN endpoint for better URL management and branding.
 Compression: The process of reducing the size of files delivered from the
CDN cache to improve performance.
 Caching Rules: Settings that control how content is stored in the cache,
allowing customization of cache expiration times based on specific
conditions.
 Geo-filtering: The ability to block or allow content to specific countries.
 Optimization: Configurations in the CDN to enhance delivery for specific
types of content like web pages, media streaming, or large file downloads.

Key Points:

1. CDN Setup:
o To set up a CDN, you need to create a CDN profile and endpoint in the
Azure portal.
o The origin server URL is specified when setting up the CDN endpoint.
o Propagation of the CDN typically completes within 10 minutes for the
Standard Microsoft CDN.

2. CDN Benefits:
o Reduces latency by serving content from a location closer to the user.
o Offloads traffic from the origin server, improving its performance.

3. Static vs. Dynamic Content:


o Static content is suitable for CDN caching.
o Dynamic content can be optimized using DSA but should not be
confused with caching.

4. Azure CDN Options:


o Azure CDN offers services from Akamai, Verizon, and Microsoft.
o Choose based on specific needs, such as geographic distribution and
advanced features like DSA.

5. Cache Control:
o Set cache expiration policies to control how long content is cached.
o Use HTTP headers like Cache-Control to manage caching behavior.

6. Advanced CDN Options:


o Custom DNS Domain: Allows users to access your web application
using a business-related URL.
o Compression: Compresses specific MIME types to deliver smaller files
and improve performance.
o Caching Rules: Modify cache expiration times for different paths or
content types.
o Geo-filtering: Block or allow content to specific countries.
o Optimization: Configure for various content types such as general
web delivery, media streaming, or large file downloads.

Important Code Snippets:

Here's a brief code snippet to demonstrate setting cache control headers in a web application:

csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.Use(async (context, next) =>
    {
        context.Response.Headers["Cache-Control"] = "public,max-age=600";
        await next();
    });

    // Other middleware registrations
    app.UseStaticFiles();
}
Exam Tips:

 CDN for Static Content: CDNs are appropriate for caching static content
that changes infrequently. Dynamic Site Acceleration (DSA), included with
Azure CDN from Akamai and Verizon, optimizes dynamic delivery but is not a
cache system.
Exam Tip Box:

Exam Tip: Content Delivery Networks (CDN) are appropriate for caching static content that
changes infrequently. Although Azure CDN from Akamai and Azure CDN from Verizon include
Dynamic Site Acceleration (DSA), this feature is not the same as a cache system. You should not
confuse Azure CDN DSA optimization with Azure CDN cache.

Section Summary: Configure Cache and Expiration Policies for Front Door, CDNs, and Redis Caches
Overview:

This section discusses the configuration of cache and expiration policies for Azure Front Door,
Content Delivery Networks (CDNs), and Azure Cache for Redis. It covers how to optimize
performance by caching static content and frequently accessed dynamic data.

Definitions:

Azure Front Door: A scalable and secure entry point for fast delivery of your global
applications.
Content Delivery Network (CDN): A distributed network of servers that delivers web content
to a user based on the geographic location of the user, the origin of the webpage, and a content
delivery server.
Azure Cache for Redis: A secure, dedicated cache service that provides a high-performance
caching solution.
Cache Expiration Policies: Rules that define how long data should be stored in the cache before
it is considered stale.

Key Points:

 Azure Front Door provides global load balancing and near-instant failover
for high availability.
 CDNs reduce latency and improve load times by caching content at
strategically placed edge locations.
 Azure Cache for Redis can be used for both static content and dynamic
data, providing low-latency access to frequently accessed data.
 Cache Expiration Policies ensure that cached content is refreshed
periodically to reflect the most up-to-date information.
 Dynamic Site Acceleration (DSA) improves the performance of dynamic
content delivery, although it is not a caching mechanism.
 Custom Domain configuration allows you to use a user-friendly URL for
accessing your CDN content.
 Compression can significantly reduce the size of transmitted data,
enhancing the performance of web applications.
 Caching Rules allow granular control over the caching behavior, enabling
different cache settings for different types of content.
 Geo-filtering can be used to restrict or allow access to content based on the
user’s location.
 Optimization settings in CDNs can be tailored to different types of content
to maximize performance.

Important Code Snippets:

Example of setting cache expiration policy in Azure Front Door using Azure CLI:

bash
az network front-door routing-rule update --resource-group myResourceGroup --front-door-name myFrontDoor --name myRoutingRule --caching Enabled --cache-duration PT10M

Example of configuring cache settings for Azure CDN using Azure CLI:

bash
az cdn endpoint update --resource-group myResourceGroup --profile-name myCDNProfile --name myCDNEndpoint --content-types-to-compress text/html application/json --query-string-caching-behavior IgnoreQueryString

Example of configuring Azure Cache for Redis using Azure CLI:

bash
az redis create --name myCache --resource-group myResourceGroup --location westus --sku Basic --vm-size c0
Exam Tips:

 Azure Front Door vs. CDN: Understand the differences and use cases for
Azure Front Door and Azure CDN, especially regarding global load balancing
and content caching.
 Cache Expiration Policies: Be familiar with configuring cache expiration
policies to ensure that cached content is kept up-to-date without unnecessary
latency.
 Dynamic Content Delivery: Recognize that Dynamic Site Acceleration
(DSA) optimizes the delivery of dynamic content and is not the same as
caching static content.
 Custom Domains and Compression: Know how to configure custom
domains and enable compression to improve the performance and user
experience of web applications.
 Geo-filtering and Optimization: Understand how geo-filtering and
optimization settings can be used to enhance content delivery based on user
location and content type.
Exam Tip Box:

Exam Tip: You can use Azure Cache for Redis for static content and the most-accessed dynamic
data. You can use it for in-memory databases or message queues using a publication/subscription
pattern.

Section Summary: Configure Instrumentation in an App or Service by Using Application Insights
Overview:

This section covers the process of configuring instrumentation in an application or service using
Azure Application Insights. Instrumentation is crucial for monitoring the performance and usage
of applications, detecting issues, and understanding user behavior.

Definitions:

Application Insights: A feature of Azure Monitor, Application Insights is an extensible
Application Performance Management (APM) service for web developers on multiple platforms.
It is used for monitoring live applications, automatically detecting performance anomalies, and
diagnosing issues.
Instrumentation Key: A unique identifier that connects an application with its Application
Insights resource.
Telemetry: The data collected by Application Insights, including metrics, logs, traces, and
custom events.
SDK (Software Development Kit): A collection of software development tools in one
installable package. SDKs make it easier to develop applications for a specific platform.
Dependency Tracking: A feature of Application Insights that tracks calls to external services,
databases, and other dependencies.
Custom Events: User-defined events that provide specific insights into user actions and
application behavior.
Correlation ID: A unique identifier used to trace requests across different components and
services, ensuring end-to-end tracking of a request.

Key Points:

 Installation and Configuration: To start using Application Insights, you
need to install the appropriate SDK for your application platform and
configure it using the instrumentation key.
 Telemetry Data: Application Insights collects various types of telemetry
data such as request rates, response times, failure rates, dependency rates,
and exception rates.
 Auto-Instrumentation: Some platforms support automatic instrumentation
where Application Insights can automatically monitor your application without
significant code changes.
 Custom Telemetry: You can create custom events and metrics to capture
specific application behaviors and business metrics.
 Dependency Tracking: This feature helps you understand how your
application interacts with external services and databases, and it tracks the
performance and failures of these dependencies.
 Live Metrics Stream: Allows you to see real-time telemetry data to monitor
live performance and diagnose issues quickly.
 Alerting and Dashboards: Application Insights supports setting up alerts
based on telemetry data and creating dashboards to visualize application
performance and usage data.

Important Concepts:

 SDK Installation: The first step in using Application Insights is to install the
SDK appropriate for your application platform (e.g., .NET, Java, JavaScript).
 Instrumentation Key: Configure your application with the instrumentation
key from the Application Insights resource to ensure that telemetry data is
sent to the correct resource.
 Telemetry Collection: The SDK collects and sends various telemetry data to
Application Insights. This includes automatic collection of requests,
dependencies, and exceptions.
 Custom Events and Metrics: You can enhance the insights by sending
custom events and metrics that provide business-specific information.
 Live Metrics Stream: Use this feature to monitor real-time telemetry data
for immediate insights into application performance.

Important Code Snippets:

Example of configuring Application Insights in a .NET application:

csharp
// Installing the SDK in your project (a Package Manager Console command, not C#)
Install-Package Microsoft.ApplicationInsights.AspNetCore

// Configuring Application Insights in Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry(Configuration["ApplicationInsights:InstrumentationKey"]);
}

Example of sending a custom event:

csharp
var telemetryClient = new TelemetryClient();
telemetryClient.TrackEvent("CustomEvent", new Dictionary<string, string> { { "EventDetail", "EventValue" } });
Exam Tips:

 Platform Support: Understand that Application Insights supports multiple
platforms and languages, including .NET, Java, JavaScript, and Node.js.
 Instrumentation Key: Remember the importance of the instrumentation
key in linking your application to the Application Insights resource.
 Telemetry Types: Be familiar with different types of telemetry collected by
Application Insights, such as metrics, logs, traces, and custom events.
 Customization: Know how to extend the default telemetry with custom
events and metrics to gain deeper insights into application behavior.
 Usage Outside Azure: Application Insights can be used to monitor
applications hosted on-premises or in other cloud environments, not just
those running in Azure.

Exam Tip Box:

Exam Tip: Remember that Application Insights is a solution for monitoring the behavior of an
application on different platforms, written in different languages. You can use Application
Insights with web applications and native applications or mobile applications written in .NET,
Java, JavaScript, or Node.js. There is no requirement to run your application in Azure. You only
need to use Azure for deploying the Application Insights resource that you use for analyzing the
information sent by your application.


Section Summary: Analyze Log Data and Troubleshoot Solutions by Using Azure Monitor
Overview:

This section focuses on using Azure Monitor to analyze log data and troubleshoot issues within
applications and services. Azure Monitor is a comprehensive monitoring solution that helps you
collect, analyze, and act on telemetry data from your cloud and on-premises environments.

Definitions:

Azure Monitor: A platform service that provides a single source for monitoring Azure
resources. It collects metrics and logs, sets up alerts, and enables monitoring and diagnostics.
Diagnostics Logs: Logs that provide detailed information about the operations and activities
within an Azure resource. These logs are essential for troubleshooting and monitoring the health
of your services.
Log Analytics: A service within Azure Monitor that helps collect and analyze log data from
various sources. It uses a powerful query language called Kusto Query Language (KQL).
Kusto Query Language (KQL): The query language used in Log Analytics for analyzing large
datasets. KQL is designed for querying structured, semi-structured, and unstructured data.
Workspace: A container in Log Analytics where data from various sources is stored and queried.
Log Queries: Queries written in KQL to retrieve and analyze data from the Log Analytics
workspace.
Metrics: Quantitative measurements that provide insights into the health and performance of
resources, such as CPU usage, memory usage, and request rates.
Alerts: Notifications or automated actions triggered when specific conditions are met based on
metrics or log data.
Azure Resource Manager (ARM): The deployment and management service for Azure. ARM
provides a consistent management layer that allows you to create, update, and delete resources
in your Azure account.

Key Points:

 Azure Monitor Capabilities: Azure Monitor provides comprehensive
monitoring capabilities for Azure resources, including collection of metrics,
logs, and traces, and setting up alerts and dashboards.
 Enabling Diagnostics Logs: To effectively monitor and troubleshoot your
applications, you must enable diagnostics logs for your Azure resources, such
as Azure App Services.
 Log Analytics and KQL: Log Analytics is a powerful tool within Azure
Monitor for analyzing log data. KQL is the language used to query this data.
 Using Workspaces: Log Analytics workspaces store data from various
sources. You can run KQL queries against this data to gain insights.
 Creating and Running Log Queries: You can create log queries using KQL
to analyze data and troubleshoot issues. These queries can be saved and
reused.
 Setting Up Alerts: Azure Monitor allows you to set up alerts based on
specific conditions. Alerts can notify you or trigger automated actions when
certain thresholds are met.
 Integration with Azure Services: Azure Monitor integrates seamlessly with
other Azure services, such as Azure App Services, Azure Functions, and Azure
SQL Database.
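The "Setting Up Alerts" point above can also be done from the command line. A hedged Azure CLI example that creates a metric alert rule; every angle-bracket value and the CPU condition are placeholders to adapt to your own resource:

```bash
az monitor metrics alert create \
  --name <alert_name> \
  --resource-group <resource_group_name> \
  --scopes <resource_id> \
  --condition "avg Percentage CPU > 80" \
  --action <action_group_id>
```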

Important Concepts:

 Azure Monitor: The central service for monitoring Azure resources, collecting metrics and logs, and enabling alerts.
 Diagnostics Logs: Essential for detailed insights into the operations of Azure
resources. Must be enabled for effective monitoring.
 Log Analytics Workspace: Stores log data from various sources. KQL
queries are run against this workspace to analyze data.
 Kusto Query Language (KQL): The powerful query language used in Log
Analytics for analyzing log data.
 Metrics and Alerts: Metrics provide quantitative data about resource
performance, while alerts notify you or trigger actions based on these
metrics.

Important Code Snippets:

Example of enabling diagnostics logs for an Azure App Service using Azure CLI:

bash
az monitor diagnostic-settings create --name <diagnostic_setting_name> \
  --resource <resource_id> \
  --logs '[{"category": "AppServiceHTTPLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'

Example of a KQL query to retrieve log data from Log Analytics workspace:

kql
AppRequests
| where TimeGenerated > ago(1h)
| summarize count() by bin(TimeGenerated, 5m)
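The `summarize count() by bin(..., 5m)` pattern groups records into fixed 5-minute buckets and counts the rows in each. A rough Python sketch of that bucketing (illustrative only — the function name is mine, and this is not how Log Analytics is implemented internally):

```python
from collections import Counter
from datetime import datetime, timedelta

def bin_timestamps(timestamps, bin_minutes=5):
    """Round each timestamp down to its bucket and count per bucket,
    mimicking KQL's `summarize count() by bin(timestamp, 5m)`."""
    counts = Counter()
    for ts in timestamps:
        floored = ts - timedelta(minutes=ts.minute % bin_minutes,
                                 seconds=ts.second,
                                 microseconds=ts.microsecond)
        counts[floored] += 1
    return dict(counts)

requests = [datetime(2024, 1, 1, 12, 1), datetime(2024, 1, 1, 12, 3),
            datetime(2024, 1, 1, 12, 7)]
print(bin_timestamps(requests))
```

Two requests at 12:01 and 12:03 fall into the 12:00 bucket, while 12:07 falls into 12:05.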
Exam Tips:

 Enabling Diagnostics Logs: Ensure that you have enabled diagnostics logs
for your Azure resources, as they are crucial for analyzing log data and
troubleshooting issues.
 Using Log Analytics: Be familiar with creating and running KQL queries in
Log Analytics to analyze data and identify issues.
 Setting Up Alerts: Know how to set up alerts in Azure Monitor to get
notified or trigger actions based on specific conditions.
 KQL Proficiency: Understanding KQL is important for querying and
analyzing log data effectively.

Exam Tip Box:

Exam Tip: When you try to query logs from Azure Monitor, remember that you need to enable diagnostics logs for your Azure App Services. If you get the message "We didn’t find any logs" when you query the logs for your Azure App Service, you may need to configure the diagnostic settings in your App Service.

Section Summary: Implement Application Insights Web Test and Alerts
Overview:

This section covers how to implement Application Insights web tests and alerts. Application
Insights web tests allow you to monitor the availability and performance of your web
applications by simulating user interactions. Alerts notify you when certain conditions are met,
such as a web test failure or a performance issue.

Definitions:

Application Insights: A feature of Azure Monitor that provides extensible Application Performance Management (APM) and monitoring for live web applications. Web Test: A test
that simulates user interactions with a web application to monitor its availability and
performance. Multistep Web Test: A complex web test that involves multiple steps, simulating
a sequence of user actions. Alert: A notification or automated action triggered when specific
conditions are met based on metrics, logs, or other criteria. Visual Studio Enterprise: An
integrated development environment (IDE) from Microsoft that includes advanced tools for
development and testing, including the creation of multistep web tests.

Key Points:

 Application Insights Web Tests: Used to monitor the availability and performance of web applications by simulating user interactions.
o Types of Web Tests:
 URL Ping Test: A simple test that checks if an application is
reachable and measures the response time.
 Multistep Web Test: Simulates a sequence of user actions to
test more complex scenarios.
 Creating Web Tests:
o URL Ping Test: Can be created directly in the Azure portal.
o Multistep Web Test: Requires a Visual Studio Enterprise license for
creation and definition of the steps.
 Uploading Web Tests: After defining a multistep web test in Visual Studio,
you can upload the test definition to Azure Application Insights for execution.
 Alerts: Set up to notify you when certain conditions are met, such as web
test failures or performance issues.
o Creating Alerts: Define conditions based on metrics, logs, or web test
results. Configure actions to take when the conditions are met, such as
sending an email or triggering a webhook.

Important Concepts:

 Web Tests in Application Insights: Used for monitoring and ensuring the
availability and performance of web applications.
 Types of Web Tests: Understand the difference between URL Ping Tests and
Multistep Web Tests.
 Setting Up Alerts: Know how to configure alerts to get notified or take
automated actions based on specific conditions.

Important Code Snippets:

Example of creating a URL ping test in the Azure portal:

1. Go to your Application Insights resource.
2. Select "Availability" under "Monitoring."
3. Click "+ Add test" to create a new URL ping test.
4. Configure the test details, such as URL, test frequency, and test locations.
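A URL ping test essentially issues an HTTP GET and passes when the response arrives with a success status within the configured timeout. A minimal Python sketch of that pass/fail logic (the function name and the default threshold are illustrative assumptions, not Application Insights internals):

```python
def ping_test_passed(status_code, response_ms, timeout_ms=30000):
    """A URL ping-style check succeeds when the endpoint returns a 2xx
    status code within the configured timeout (threshold is illustrative)."""
    return 200 <= status_code < 300 and response_ms <= timeout_ms

print(ping_test_passed(200, 120))   # healthy endpoint
print(ping_test_passed(503, 120))   # server error, the test fails
```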

Exam Tips:

 Visual Studio Enterprise: Remember that creating multistep web tests requires a Visual Studio Enterprise license. You use Visual Studio Enterprise to define the steps of the test before uploading it to Application Insights.
 Web Test Types: Understand the differences between URL Ping Tests and
Multistep Web Tests and when to use each type.
 Alerts Configuration: Be familiar with setting up alerts in Application
Insights based on web test results and other metrics.

Exam Tip Box:

Exam Tip: Remember that you need a Visual Studio Enterprise license to create multistep web tests. You use Visual Studio Enterprise to define the steps that are part of the test, and then you upload the test definition to Azure Application Insights.

Section Summary: Implement Code That Handles Transient Faults

Overview:

This section explains how to implement code that handles transient faults in your applications.
Transient faults are temporary errors that occur in cloud environments, often due to temporary
unavailability or connectivity issues. Properly handling these faults can improve the resilience
and reliability of your applications.

Definitions:

Transient Faults: Temporary errors that are usually self-correcting and occur sporadically in
cloud environments due to factors like network congestion, service unavailability, or throttling.
Retry Strategy: A method of handling transient faults by retrying the failed operation after a
certain period. Exponential Backoff: A retry strategy where the wait time between retries
increases exponentially. Circuit Breaker: A design pattern used to detect failures and
encapsulate the logic of preventing an application from trying to perform an operation that is
likely to fail.

Key Points:

 Transient Fault Handling: Essential for making cloud applications more resilient and reliable.
o Retry Strategies: Different strategies can be employed to handle
transient faults, including fixed interval, exponential backoff, and
custom strategies.
 Fixed Interval: Retries the operation after a fixed period.
 Exponential Backoff: Increases the wait time exponentially
between retries.
 Custom Strategy: Allows for specific customization based on
the application’s needs.
o Circuit Breaker Pattern: Prevents an application from continually
trying to perform an operation that is likely to fail, helping to avoid
exhausting system resources.

Important Concepts:

 Implementation of Retry Logic: Understand how to implement retry logic using different strategies.
 Handling Different Scenarios: Be prepared to handle various transient
fault scenarios, including network issues, service unavailability, and
throttling.
 Circuit Breaker Pattern: Learn how to implement a circuit breaker to
prevent resource exhaustion and infinite loops.

Important Code Snippets:

Example of implementing a retry strategy using exponential backoff in C#:

csharp
public async Task<T> RetryOnExceptionAsync<T>(int maxRetries, Func<Task<T>> operation)
{
    int attempt = 0;
    while (true)
    {
        try
        {
            return await operation();
        }
        catch (Exception) when (attempt < maxRetries)
        {
            attempt++;
            // Exponential backoff: wait 2, 4, 8, ... seconds before the next attempt.
            var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt));
            await Task.Delay(delay);
        }
    }
}
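The circuit breaker pattern from the definitions above can be combined with this retry logic. A minimal sketch in Python (the class name and threshold are invented for the example; in .NET you would typically use a resilience library such as Polly rather than rolling your own):

```python
class CircuitBreaker:
    """Opens after `max_failures` consecutive failures; while open,
    calls are rejected immediately instead of hitting the failing service."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.state = "closed"

    def call(self, operation):
        if self.state == "open":
            # Fail fast: do not waste resources on an operation likely to fail.
            raise RuntimeError("circuit open: operation skipped")
        try:
            result = operation()
            self.failures = 0          # any success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.state = "open"    # stop calling the failing dependency
            raise
```

A production implementation would also add a half-open state that periodically lets one trial call through to probe whether the dependency has recovered.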
Exam Tips:

 Retry Strategy Testing: Carefully test your retry strategy to ensure it doesn’t lead to resource exhaustion or infinite loops.
 Using Circuit Breakers: Implement circuit breakers to handle continuous
failures gracefully, preventing infinite retry loops and protecting system
resources.

Exam Tip Box:

Exam Tip: Remember to test your retry strategy carefully. Using a wrong retry strategy could
lead your application to exhaust the resources needed for executing your code. A wrong retry
strategy can potentially lead to infinite loops if you don’t use circuit breakers.

Chapter Summary: Handling Transient Faults and Improving Application Performance

 Transient Faults: Temporary errors occurring due to network congestion, service unavailability, or throttling. Handling them improves application resilience.
 Retry Strategy: Determine the fault type before retrying; avoid immediate
retries more than once; use random starting values for retry periods.
 SDK Mechanisms: Utilize built-in SDK mechanisms for retry logic when
available.
 Logging: Log both transient and nontransient faults for better diagnostics.
 Caching: Enhances performance by storing frequently accessed data. Azure
Cache for Redis can cache dynamic content and create in-memory databases.
 Message Queue Patterns: Implemented using Azure Cache for Redis for
efficient message handling.
 Content Delivery Networks (CDNs): Distribute static content globally,
reducing latency. Control cache invalidation using TTL settings or manual
content removal.
 Application Insights: Collects application data for monitoring. Supports
various platforms and languages. Part of Azure Monitor, generating metrics
and logs.
 Web Tests in Application Insights: Monitor application availability;
configure alerts and actions based on test results.
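The advice above to "use random starting values for retry periods" is usually implemented as jitter on top of exponential backoff, so that many clients recovering from the same outage don't retry in lockstep. A hedged Python sketch (the function name and default values are illustrative):

```python
import random

def backoff_delays(max_retries, base_seconds=1.0, cap_seconds=30.0,
                   rng=random.random):
    """Full-jitter exponential backoff: each delay is a random value in
    [0, min(cap, base * 2**attempt)], so simultaneous clients spread out."""
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap_seconds, base_seconds * (2 ** attempt))
        delays.append(rng() * ceiling)
    return delays
```

Passing `rng=lambda: 1.0` makes the schedule deterministic (1, 2, 4, 8, ... seconds, capped at 30), which is handy when testing the strategy as the exam tips recommend.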

Thought Experiment: Troubleshooting and Performance Enhancement
Scenario:

Your company's eCommerce LOB application faces stability and performance issues, especially
during high usage periods like holidays. The application interacts with external systems. You
need to address complaints regarding its stability and performance.

Questions and Answers:

1. Improving Internal Workflow Monitoring:
o Solution: Integrate Application Insights instrumentation with your
code to track custom events and complex operations. This provides
detailed information about internal workflows, which agent-based
monitoring alone cannot capture.

2. Minimizing Stability Issues Due to External Systems:
o Solution: Implement a retry strategy with a small number of retries
and short intervals. Start with an immediate retry, and if it fails, switch
to a different retry strategy. Test the strategy to ensure optimal user
experience.

3. Ensuring Purchase Process Functionality:
o Solution: Configure a multistep web test using Visual Studio
Enterprise. Record the steps for a purchase, generate the test file, and
create a web test in Application Insights to monitor the process
comprehensively.
Key Terms and Definitions:

 Transient Faults: Temporary errors that are typically self-correcting.
 Retry Strategy: Method to retry failed operations after certain intervals.
 Exponential Backoff: Retry strategy where wait time increases
exponentially between retries.
 Circuit Breaker: Pattern to prevent retrying operations likely to fail,
protecting resources.
 Caching: Storing data temporarily for faster access.
 Azure Cache for Redis: Service for caching data, supporting in-memory
databases and message queue patterns.
 Content Delivery Network (CDN): Distributed servers providing cached
content to users based on geographic location.
 TTL (Time-To-Live): Cache expiration setting.
 Application Insights: Azure service for monitoring application behavior,
collecting metrics and logs.
 Web Tests: Tests configured in Application Insights to monitor application
availability and performance.

Exam Tips:

 Retry Strategy: Test your retry strategy to avoid resource exhaustion and
infinite loops.
 Caching and Redis: Use Redis for caching static content and dynamic data,
as well as for in-memory databases or message queues.
 CDNs: Use CDNs for caching static content with infrequent changes; do not
confuse DSA with cache systems.
 Application Insights: Suitable for monitoring applications across various
platforms and languages, not limited to Azure-hosted applications.

Exam Tip Box:

Exam Tip:

 Transient Faults: Test your retry strategy carefully. An incorrect strategy can
exhaust resources or cause infinite loops if not combined with circuit
breakers.
 Azure Cache for Redis: Suitable for caching static content and dynamic
data, supporting in-memory databases or message queues.
 CDNs: Ideal for caching static content with infrequent changes. Do not
confuse DSA with cache systems in Azure CDN.
 Application Insights: Monitors applications across different platforms and
languages. Does not require the application to run in Azure, only the
deployment of Application Insights in Azure.
Section Summary: Create a Logic App

Overview: This section discusses how to create a Logic App in Azure. Logic Apps provide a
platform for building automated workflows that integrate with various services and systems. This
enables you to design complex workflows with minimal code by leveraging connectors and pre-
built templates.

Definitions and Key Terms:

 Logic App: A cloud service in Azure that automates and orchestrates tasks,
business processes, and workflows by integrating various services and
systems.
 Triggers: Events that start a workflow in a Logic App. Triggers can be time-
based or event-based, such as when a new email arrives.
 Actions: Steps that follow a trigger in a Logic App workflow. Actions can
include operations like sending an email, creating a file, or making an HTTP
request.
 Connectors: Pre-built integrations in Logic Apps that allow you to connect to
various services such as Office 365, Salesforce, SQL Server, etc.
 Integration Service Environment (ISE): A dedicated environment for
running Logic Apps that need high isolation and performance. ISE provides
private, isolated network environments for securely running integration
workloads.
 Designer: The interface in the Azure portal used to design Logic Apps. It
provides a visual way to create and manage workflows.
 Managed Identity: An identity in Azure AD automatically managed by Azure
for authenticating to other services.

Schedules:

 Recurrence: Configures a regular time interval for workflow execution. Options include setting start date, time, and frequency (seconds to months). Missed recurrences are not processed.
 Sliding Window: Similar to Recurrence but processes missed recurrences.
Does not support advanced scheduling settings.
 Polling: Periodically queries a system or service for new data/events,
triggering a workflow instance when detected.
 Push: Listens for new events/data in a system or service, triggering a
workflow instance immediately when detected.
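The practical difference between Recurrence and Sliding Window is what happens to occurrences missed while the Logic App was disabled: Recurrence skips them, Sliding Window processes them. A Python sketch of that scheduling difference (illustrative only — this is not how Logic Apps is implemented; the function and parameter names are mine):

```python
def occurrences_to_run(last_run, now, interval, catch_up):
    """Return how many workflow runs are due between last_run and now.
    catch_up=False models a Recurrence trigger (missed occurrences are
    skipped, only the next one fires); catch_up=True models Sliding Window
    (every missed occurrence is processed)."""
    missed = (now - last_run) // interval
    if missed <= 0:
        return 0
    return missed if catch_up else 1

# App was disabled for 50 minutes with a 10-minute interval:
print(occurrences_to_run(0, 50, 10, catch_up=False))  # Recurrence: 1
print(occurrences_to_run(0, 50, 10, catch_up=True))   # Sliding Window: 5
```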

Connector Categories:

 Built-in: Fundamental triggers and actions like scheduling workflows, making HTTP calls, and processing messages in batches.
 Managed: Developed and maintained by Microsoft for accessing services like
Office 365, Azure Blob Storage, and SharePoint.
 On-Premises: For accessing on-premises systems using a data gateway.
Includes connectors for File Systems, Oracle, MySQL, etc.
 Integration Account: For B2B solutions, enabling message transformations
(e.g., AS2, EDIFACT, X12).
 ISE: Specialized connectors for Integration Service Environment, marked with
CORE or ISE labels.

Pricing Levels:

 Basic: Includes built-in connectors, triggers, and control workflow actions.
 Standard: Includes managed connectors and custom connectors.
 Enterprise: Specialized connectors for B2B applications (e.g., SAP, IBM
3270).

Key Points:

 Creating a Logic App: Can be done using Azure portal, Visual Studio, or
Azure CLI.
 Triggers and Actions: Define workflow logic starting with triggers and
followed by actions.
 Connectors: Integrate various services into Logic Apps, categorized by their
functionality and environment.
 ISE: Provides a secure, dedicated environment for Logic Apps with specialized
connectors.

Example Logic App Creation:

1. Sign in to Azure Portal: Go to https://portal.azure.com and log in with your Azure account.
2. Create a Resource: Navigate to the Azure portal menu, select "Create a
resource," and search for "Logic App."
3. Configure Basic Settings: Provide a name for the Logic App, select the
subscription, resource group, and location.
4. Define Trigger: Select a trigger from the available options, such as a
recurring timer or when an HTTP request is received.
5. Add Actions: Add actions to the workflow by selecting from the available
connectors. Configure each action as required.

Exam Tips:

 ISE Connectors: Understand the distinction and usage of CORE and ISE
labeled connectors in Integration Service Environments.
 Global Connectors: Recognize that public, multitenant connectors like
Office 365 can still be used within an ISE.
 Pricing Levels: Know the differences between Basic, Standard, and
Enterprise pricing levels and what each includes.

Exam Tip Box:


Exam Tip: When working with Integration Service Environments (ISE), note that specific
connectors run inside the ISE and are marked with a label. Despite ISE environments using
dedicated resources, you can still utilize global, public, multitenant connectors, such as Office
365 or Dropbox connectors.

Conclusion:

Creating a Logic App involves defining triggers and actions, using connectors, and
understanding the environment in which the Logic App runs. Utilizing ISE can enhance the
security and performance of Logic Apps, especially when dealing with sensitive or high-
performance workloads.

Section Summary: Create a Custom Connector for Logic Apps

Overview: This section covers the creation of custom connectors for Azure Logic Apps. Custom
connectors allow you to integrate your Logic Apps with APIs and services that are not available
through built-in or managed connectors.

Definitions and Key Terms:

 Custom Connector: A user-defined connector that allows integration with APIs and services not supported by built-in or managed connectors in Logic Apps.
 OpenAPI (Swagger) Definition: A standard, language-agnostic interface to
RESTful APIs which allows humans and computers to discover and understand
the capabilities of the service without access to source code.
 Postman Collection: A set of pre-defined API requests that can be grouped
and organized to share with others. Used to create custom connectors.
 PowerApps: A suite of apps, services, connectors, and a data platform to
build custom apps for your business needs.
 Microsoft Flow: A service for automating workflows across apps and
services.
 API Management: A service in Azure that helps protect and manage APIs.
 Policy: A set of rules that define how API calls are processed, including
transformation, validation, and security policies.
 Connector: A proxy or a wrapper around an API that allows the underlying
service to talk to Microsoft Flow, PowerApps, and Logic Apps.

Key Points:

 Creating a Custom Connector:
o OpenAPI Definition: Use an OpenAPI definition to define the custom
connector. OpenAPI (formerly known as Swagger) provides a standard
way to describe REST APIs.
o Postman Collection: Use Postman to create a collection of API
requests and use this collection to define the custom connector.
o API Management: Utilize API Management to create, publish, secure,
and analyze APIs in minutes.

 Steps to Create a Custom Connector:

1. Sign in to Azure Portal: Go to the Azure portal and log in with your
credentials.
2. Navigate to Logic Apps: Select "Logic Apps" from the left-hand
menu.
3. Create Custom Connector: Select "Custom connectors" under "API
Management."
4. Define the Connector: Use OpenAPI definition or Postman collection
to define the connector.
5. Set up Security: Configure the authentication type such as Basic
Auth, API Key, OAuth 2.0, etc.
6. Test the Connector: Test the custom connector to ensure it works as
expected.
7. Use in Logic Apps: Once created, the custom connector can be used
in your Logic Apps workflows.

 Security and Policies:
o Authentication Types: Support for various authentication
mechanisms including Basic, API Key, and OAuth 2.0.
o Policies: Apply policies to enforce security, control, and performance
aspects of APIs.

 Connector Sharing:
o Microsoft Flow and PowerApps: Custom connectors can be used in
both Microsoft Flow and PowerApps. However, connectors created for
Logic Apps cannot be directly reused in Flow or PowerApps.
o OpenAPI Definition: You can use the same OpenAPI definition to
create a custom connector for Logic Apps, Microsoft Flow, and
PowerApps.

Important Code Snippets:

 Example of OpenAPI Definition:

json
{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": "My API",
    "description": "API for my custom connector"
  },
  "host": "api.example.com",
  "basePath": "/v1",
  "schemes": ["https"],
  "paths": {
    "/resource": {
      "get": {
        "summary": "Get resource",
        "operationId": "getResource",
        "produces": ["application/json"],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  }
}

Exam Tips:

 Custom Connectors in Different Services: Understand that custom connectors created for Azure Logic Apps cannot be directly reused in Microsoft Flow or PowerApps, and vice versa.
 OpenAPI Definition: Be familiar with creating and using OpenAPI definitions
to define custom connectors.
 Security Configurations: Know the different authentication types
supported by custom connectors and how to configure them.
 Testing Custom Connectors: Ensure you know how to test and validate
custom connectors before using them in Logic Apps.

Exam Tip Box:

Exam Tip: You can create custom connectors for Azure Logic Apps, Microsoft Flow, and
Microsoft PowerApps. You cannot reuse a connector created for Azure Logic Apps in Microsoft
Flow or PowerApps (or vice versa). You can use the same OpenAPI definition to create a custom
connector for these three services.

Conclusion:

Creating custom connectors in Azure Logic Apps allows you to extend the functionality of your
workflows by integrating with APIs and services not available through built-in or managed
connectors. Understanding the process of defining, securing, and testing these connectors is
crucial for leveraging their full potential in automating business processes.

Section Summary: Create a Custom Template for Logic Apps

Overview: This section discusses the creation of custom templates for Azure Logic Apps.
Custom templates allow users to create pre-defined workflows that can be reused and shared
across different Logic Apps.
Definitions and Key Terms:

 Custom Template: A pre-defined Logic App workflow that can be reused and
shared across different Logic Apps.
 ARM Template (Azure Resource Manager Template): A JSON file that
defines the infrastructure and configuration for your Azure deployment. It
allows you to define the resources you need and automate the deployment
process.
 Logic App Workflow Definition: The specific set of actions and triggers
that make up the Logic App. This definition can be exported and used to
create custom templates.
 Parameters: Variables in the ARM template that allow you to customize the
deployment of your Logic App without changing the template itself.

Key Points:

 Creating a Custom Template:
o Exporting Workflow Definition: Export the definition of your Logic
App workflow as an ARM template. This includes all the actions,
triggers, and configurations you have set up in your Logic App.
o Defining Parameters: Use parameters in your ARM template to allow
customization of your Logic App during deployment. Parameters can
include connection strings, API keys, and other configurable values.
o Template Structure: Ensure your ARM template follows the correct
structure, including sections for parameters, variables, resources, and
outputs.

 Steps to Create a Custom Template:

1. Export Logic App: Go to your Logic App in the Azure portal and
export it as an ARM template.
2. Edit Template: Customize the ARM template by adding parameters
and making any necessary modifications to the workflow definition.
3. Deploy Template: Use the Azure portal, Azure CLI, or PowerShell to
deploy the ARM template. During deployment, provide the necessary
parameter values.
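Step 3 (deploying the template) can be done with the Azure CLI. A hedged example; the resource group, file, and parameter names are placeholders to adapt:

```bash
az deployment group create \
  --resource-group <resource_group_name> \
  --template-file logicapp-template.json \
  --parameters logicAppName=MyLogicApp
```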

 Benefits of Using Custom Templates:
o Reusability: Custom templates allow you to create standardized
workflows that can be reused across different projects and teams.
o Automation: Automate the deployment of your Logic Apps, ensuring
consistency and reducing the chance of manual errors.
o Customization: Parameters in the ARM template allow you to
customize the Logic App deployment without modifying the template
itself.
Important Code Snippets:

 Example of an ARM Template for a Logic App:

json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppName": {
      "type": "string",
      "defaultValue": "MyLogicApp"
    },
    "storageAccountConnectionString": {
      "type": "securestring"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2016-06-01",
      "name": "[parameters('logicAppName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "actions": {
            "Get_blob_content": {
              "type": "Http",
              "inputs": {
                "method": "GET",
                "uri": "[concat('https://myaccount.blob.core.windows.net/mycontainer/myblob')]"
              }
            }
          },
          "outputs": {}
        },
        "parameters": {
          "$connections": {
            "value": {
              "AzureBlob": {
                "connectionId": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Web/connections/AzureBlob')]",
                "connectionName": "azureblob",
                "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Web/locations/westus/managedApis/azureblob"
              }
            }
          }
        }
      }
    }
  ]
}

Exam Tips:

 ARM Template Structure: Understand the structure of an ARM template, including the sections for parameters, variables, resources, and outputs.
 Parameters and Customization: Be familiar with how to use parameters to
customize the deployment of a Logic App without changing the template
itself.
 Exporting and Editing Templates: Know the process of exporting a Logic
App as an ARM template and making necessary modifications for reuse.

Exam Tip Box:

Exam Tip: Custom templates for Logic Apps allow for standardization and reusability across
different projects. Understand how to create, export, and modify ARM templates to fit various
deployment scenarios.

Conclusion:

Creating custom templates for Azure Logic Apps helps in standardizing workflows, automating
deployments, and ensuring consistency across different projects. Understanding the structure and
customization of ARM templates is crucial for effectively utilizing custom templates in Logic
Apps.

Section Summary: Create an APIM Instance

Overview: This section covers the steps and considerations involved in creating an Azure API
Management (APIM) instance. APIM is a platform that helps organizations publish APIs to
external, partner, and internal developers to unlock the potential of their data and services.

Definitions and Key Terms:

 API Management (APIM): A service that enables you to create, publish, maintain, monitor, and secure APIs in a scalable and efficient manner.
 Instance: A specific deployment of an APIM service, which includes all the
configurations and settings necessary to manage your APIs.
 Azure Resource Group: A container that holds related resources for an
Azure solution. The resource group includes those resources that you want to
manage as a group.
 API Gateway: The server that acts as an API front-end, receiving API
requests, enforcing throttling and security policies, passing requests to the
back-end service, and then passing the response back to the requester.
 Developer Portal: A customizable, managed website where API developers
can access API documentation, test APIs, and obtain subscription keys.
 Management Plane: The set of APIs, tools, and processes used to manage
an APIM instance.
 Policies: Rules that define how an API is processed and transformed.

Key Points:

 Creating an APIM Instance:
o Choose the Tier: APIM offers several tiers including Developer, Basic,
Standard, and Premium. Each tier offers different features and
performance levels.
o Set Up the Instance: Define the instance name, resource group,
location, and other configurations.
o Configure the API Gateway: Set up the gateway to handle API
requests and apply policies.
o Set Up the Developer Portal: Customize the developer portal for
your API consumers.
o Manage APIs: Add, configure, and manage your APIs within the APIM
instance.

 Steps to Create an APIM Instance:

1. Navigate to the Azure Portal: Go to the Azure portal and search for
"API Management".
2. Create a New Instance: Click on "Create API Management service".
3. Configure Basic Settings: Fill in the basic settings including the
name, resource group, and tier.
4. Set Up Networking: Configure the networking settings for your APIM
instance.
5. Review and Create: Review your settings and create the APIM
instance.
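The same steps can be scripted with the Azure CLI. A hedged example (names are placeholders; note that provisioning an APIM instance can take a long time, often 30 minutes or more):

```bash
az apim create \
  --name <apim_instance_name> \
  --resource-group <resource_group_name> \
  --publisher-name "Contoso" \
  --publisher-email [email protected] \
  --sku-name Developer
```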

 APIM Pricing Tiers:
o Developer Tier: Best suited for development and test environments.
It includes all APIM features but has lower performance and is not
recommended for production use.
o Basic Tier: Suitable for entry-level production use. It offers higher
performance than the Developer tier but fewer features than the
higher tiers.
o Standard Tier: Provides a good balance of features, performance, and
cost for most production environments.
o Premium Tier: Offers the highest level of performance and additional
features such as multi-region deployment, which is suitable for
mission-critical production environments.
Exam Tips:

 Understand Tiers: Be familiar with the differences between the APIM pricing
tiers and what each tier offers.
 Configuration Options: Know the steps to set up an APIM instance,
including configuring the API Gateway and Developer Portal.
 Use Cases: Be aware of the scenarios where different tiers of APIM would be
appropriate.

Exam Tip Box:

Exam Tip: When creating an APIM instance, carefully choose the pricing tier based on your
needs. The Developer tier is for development and testing, Basic for entry-level production,
Standard for typical production use, and Premium for high-performance, mission-critical
environments.

Conclusion:

Creating an APIM instance involves selecting the appropriate tier, setting up the instance
configurations, and configuring the API gateway and developer portal. Understanding the
features and limitations of each APIM tier is crucial for making the right choice for your API
management needs.

Section Summary: Implement Solutions that Use Azure Event Grid

Overview: This section covers the implementation of solutions using Azure Event Grid, a
service that enables event-based architectures by allowing different systems to publish and
consume events. Azure Event Grid facilitates decoupling of components within a system, leading
to more scalable and maintainable solutions.

Definitions and Key Terms:

 Azure Event Grid: A fully managed event routing service that allows for the
integration of various Azure services and third-party services using events.
 Event: A lightweight notification of a condition or a state change. Each event
is self-contained and includes enough information for the receiving service to
process it.
 Event Source: The service or application that publishes events to Event
Grid. Examples include Azure Blob Storage, Azure Resource Manager, and
custom topics.
 Event Handler: The service or application that consumes events. Examples
include Azure Functions, Logic Apps, and Webhooks.
 Topic: An endpoint where publishers send events. Azure Event Grid supports
both system topics (built-in topics provided by Azure services) and custom
topics.
 Subscription: A configuration that tells Event Grid which events on a topic it
should route to a specific endpoint.
 Event Schema: The structure of the event data. Azure Event Grid uses a
predefined schema for events.
 System Topics: Predefined topics provided by Azure services, which
automatically publish events.
 Custom Topics: User-defined topics that can be used to publish events from
custom sources.

Key Points:

 Event Grid Architecture:


o Publishers: Services or applications that push events to the Event
Grid.
o Topics: Endpoints in Event Grid where publishers send events.
o Event Subscriptions: Filters that route events from a topic to an
event handler based on conditions.
o Event Handlers: Services or applications that process the events
received from the Event Grid.

 Creating and Managing Event Grid Topics:


o Create a Custom Topic: Define a new topic to publish custom events.
o Publish Events: Send events to the custom topic using HTTP
requests.
o Subscribe to Events: Create event subscriptions to route events from
the topic to event handlers.
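To make the publish step concrete, here is a hedged Python sketch of the payload a publisher would POST to a custom topic over HTTP. The endpoint URL and `aeg-sas-key` value are placeholders, and no request is actually sent:

```python
import json

# Placeholder endpoint and access key for a hypothetical custom topic;
# a real endpoint has the form https://<topic-name>.<region>.eventgrid.azure.net/api/events
topic_endpoint = "https://<topic-name>.<region>.eventgrid.azure.net/api/events"
headers = {
    "aeg-sas-key": "<topic-access-key>",   # key-based auth header used by Event Grid
    "Content-Type": "application/json",
}

# Event Grid expects an ARRAY of events, each following the Event Grid schema.
events = [{
    "id": "1234",
    "subject": "myapp/vehicles/motorcycles",
    "eventType": "recordInserted",
    "eventTime": "2020-01-01T00:00:00Z",
    "data": {"make": "Contoso", "model": "AdventureWorks", "year": 2020},
    "dataVersion": "1.0",
}]
body = json.dumps(events)
print(body[:40])
```

With an HTTP library such as `requests`, this body and these headers would be sent as a POST to the topic endpoint.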

 Event Handlers and Routing:


o Azure Functions: Can be used to process events from Event Grid.
o Logic Apps: Can automate workflows based on events from Event
Grid.
o Webhooks: Custom endpoints that can receive events from Event
Grid.

 Event Filtering and Security:


o Event Filters: Allow routing of specific events based on event type or
content.
o Authentication and Authorization: Use Azure Active Directory
(Azure AD) and managed identities to secure access to Event Grid
topics and event subscriptions.

Example Code Snippet:


json
{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/topics/{topic-name}",
  "subject": "myapp/vehicles/motorcycles",
  "eventType": "recordInserted",
  "eventTime": "2020-01-01T00:00:00Z",
  "id": "1234",
  "data": {
    "make": "Contoso",
    "model": "AdventureWorks",
    "year": 2020
  },
  "dataVersion": "1.0"
}
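A handler receiving events in this schema typically filters on `eventType` before acting. The following is an illustrative, SDK-free Python sketch of that filtering step; the function name and subscribed-types set are made up for the example:

```python
import json

# Hypothetical handler configuration: only react to event types we subscribed to.
SUBSCRIBED_EVENT_TYPES = {"recordInserted"}

def handle_event_grid_payload(payload: str) -> list:
    """Parse an Event Grid payload and return the data of matching events."""
    events = json.loads(payload)
    if isinstance(events, dict):   # a delivery may contain a single event or a batch
        events = [events]
    handled = []
    for event in events:
        if event.get("eventType") in SUBSCRIBED_EVENT_TYPES:
            handled.append(event["data"])
    return handled

payload = json.dumps({
    "subject": "myapp/vehicles/motorcycles",
    "eventType": "recordInserted",
    "id": "1234",
    "data": {"make": "Contoso", "model": "AdventureWorks", "year": 2020},
    "dataVersion": "1.0",
})
print(handle_event_grid_payload(payload))
```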

Exam Tips:

 Understanding Event Grid: Know the role of Event Grid in decoupling
systems through event publishing and consumption.
 Elements of Event Grid: Familiarize yourself with publishers, topics, event
subscriptions, and event handlers.
 Event Schema: Understand the structure and schema used by Event Grid for
events.
 Security Practices: Learn about securing Event Grid with Azure AD and
managed identities.

Exam Tip Box:

Exam Tip: Event Grid is one of the services that Azure provides for exchanging information
between different systems. These systems publish and consume events from the Event Grid,
allowing you to decouple the different elements of your architecture. Ensure that you fully
understand the role each element plays in the exchange of information using Event Grid.

Conclusion:

Azure Event Grid is a powerful service for implementing event-driven architectures. By
understanding its components and their roles, you can effectively use Event Grid to create
scalable, decoupled solutions that respond to events in real-time.

Section Summary: Implement Solutions That Use Azure Notification Hubs

Overview: This section covers the implementation of Azure Notification Hubs, which provides a
scalable and cross-platform push notification infrastructure for sending notifications to various
devices.

Definitions:

 Azure Notification Hubs: A service that enables sending push notifications
to multiple platforms, such as iOS, Android, Windows, and more, from any
back-end (cloud or on-premises).
 Namespace: A container for Notification Hubs that provides a unique
scoping mechanism.
 Notification Hub: The entity within the namespace responsible for
managing the delivery of notifications to the devices.
 Push Notification Service (PNS): Platform-specific services that handle the
delivery of push notifications to devices. Examples include Apple Push
Notification Service (APNS) for iOS and Firebase Cloud Messaging (FCM) for
Android.
 Registration: The process of associating a device with a Notification Hub by
providing the device's PNS handle.
 Tag: A label assigned to a registration that allows sending targeted
notifications to a specific group of devices.
 Template: A mechanism for defining the format of notifications to be sent to
different platforms from a single hub.

Key Points:

 Creating a Notification Hub:


o Set up a namespace to contain your Notification Hubs.
o Create a Notification Hub within the namespace.
o Configure platform-specific settings for the Notification Hub (e.g.,
APNS, FCM).

 Managing Device Registrations:


o Register devices with the Notification Hub by providing the device's
PNS handle.
o Use tags to target specific groups of devices.
o Update or delete registrations as needed.

 Sending Notifications:
o Use Notification Hub SDKs or REST APIs to send notifications.
o Send notifications to all registered devices or use tags to target
specific devices.
o Use templates to send platform-specific notifications from a single API
call.
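The tag-based targeting described above can be illustrated with a small simulation. This is plain Python mimicking what Notification Hubs does server-side; the registration records and function name are made up for illustration:

```python
# Illustrative registrations: each device registration carries a set of tags.
registrations = [
    {"device": "device-a", "tags": {"news", "sports"}},
    {"device": "device-b", "tags": {"news"}},
    {"device": "device-c", "tags": {"weather"}},
]

def target_devices(tag: str) -> list:
    """Return the devices whose registration carries the given tag."""
    return [r["device"] for r in registrations if tag in r["tags"]]

print(target_devices("news"))   # → ['device-a', 'device-b']
```

Sending a notification to the tag "news" would therefore reach only the first two devices, which is exactly the targeting behavior tags provide in a real hub.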

Important Code Snippets:

 Registering a device with a Notification Hub using .NET SDK:

csharp
var hub = NotificationHubClient.CreateClientFromConnectionString("<connection_string>", "<hub_name>");
var registration = new FcmRegistrationDescription("<fcm_token>");
registration.Tags.Add("myTag");
await hub.CreateOrUpdateRegistrationAsync(registration);

 Sending a notification to a specific tag using .NET SDK:

csharp
var hub = NotificationHubClient.CreateClientFromConnectionString("<connection_string>", "<hub_name>");
var notification = new FcmNotification("{\"data\":{\"message\":\"Hello World!\"}}");
await hub.SendFcmNativeNotificationAsync(notification, "myTag");

Exam Tips:

 Namespace vs. Notification Hub: Understand the relationship between
namespaces and Notification Hubs.
 Platform Configuration: Be familiar with configuring platform-specific
settings (APNS, FCM, etc.).
 Tagging and Templates: Know how to use tags and templates to target
specific devices and format notifications.

Exam Tip Box: When working with Azure Notification Hubs, remember that the namespace
provides a unique scoping mechanism, while the Notification Hub is responsible for managing
the delivery of notifications to the devices.

Notification Hubs Pricing Tiers:

 Free Tier: Provides limited functionality and is suitable for development and
testing.
 Basic Tier: Suitable for small-scale applications and includes some additional
features not available in the Free Tier.
 Standard Tier: Provides advanced features and is suitable for large-scale
applications.


Exam Tip Box: The Azure Event Hub is a service appropriate for processing huge amounts of
events with low latency. You should consider the event hub as the starting point in an event
processing pipeline. You can use the event hub as the event source of the Event Grid service.

Chapter Summary: Section 5.3 - Implement Solutions That Use Azure Event Grid and Azure Notification Hubs
Overview:

Section 5.3 covers the implementation of Azure Event Grid and Azure Notification Hubs,
focusing on how these services facilitate the exchange of information and delivery of
notifications across various systems and platforms.

Key Points and Definitions:

Azure Event Grid:

 Event Grid Architecture:


o Publishers: Services or applications that push events to the Event
Grid.
o Topics: Endpoints in Event Grid where publishers send events.
o Event Subscriptions: Filters that route events from a topic to an
event handler based on conditions.
o Event Handlers: Services or applications that process the events
received from the Event Grid.

 Creating and Managing Event Grid Topics:


o Create a Custom Topic: Define a new topic to publish custom events.
o Publish Events: Send events to the custom topic using HTTP
requests.
o Subscribe to Events: Create event subscriptions to route events from
the topic to event handlers.

 Event Handlers and Routing:


o Azure Functions: Can be used to process events from Event Grid.
o Logic Apps: Can automate workflows based on events from Event
Grid.
o Webhooks: Custom endpoints that can receive events from Event
Grid.

 Event Filtering and Security:


o Event Filters: Allow routing of specific events based on event type or
content.
o Authentication and Authorization: Use Azure Active Directory
(Azure AD) and managed identities to secure access to Event Grid
topics and event subscriptions.

 Important Concepts:
o Elements of Event Grid: Publishers, topics, event subscriptions, and
event handlers.

 Exam Tip: Event Grid is one of the services that Azure provides for exchanging
information between different systems. These systems publish and consume events from
the Event Grid, allowing you to decouple the different elements of your architecture.

Azure Notification Hubs:

 Azure Notification Hubs: Provides a scalable and cross-platform push notification
infrastructure that allows you to send notifications to various devices.
 Namespace: A container for Notification Hubs that provides a unique scoping
mechanism.
 Notification Hub: The entity within the namespace responsible for managing the
delivery of notifications to devices.
 Push Notification Service (PNS): Platform-specific services that handle the delivery of
push notifications to devices (e.g., APNS for iOS, FCM for Android).
 Registration: The process of associating a device with a Notification Hub by providing
the device's PNS handle.
 Tag: A label assigned to a registration that allows sending targeted notifications to a
specific group of devices.
 Template: Defines the format of notifications to be sent to different platforms from a
single hub.
 Creating a Notification Hub:
o Set up a namespace to contain your Notification Hubs.
o Create a Notification Hub within the namespace.
o Configure platform-specific settings for the Notification Hub (e.g.,
APNS, FCM).

 Managing Device Registrations:


o Register devices with the Notification Hub by providing the device's
PNS handle.
o Use tags to target specific groups of devices.
o Update or delete registrations as needed.

 Sending Notifications:
o Use Notification Hub SDKs or REST APIs to send notifications.
o Send notifications to all registered devices or use tags to target
specific devices.
o Use templates to send platform-specific notifications from a single API
call.

 Exam Tip: The Azure Event Hub is a service appropriate for processing huge amounts of
events with low latency. You should consider the event hub as the starting point in an
event processing pipeline. You can use the event hub as the event source of the Event
Grid service.

Related Azure Services:

 Azure Event Grid: Used for event-based architectures, enabling applications
to react to events from various Azure services and custom sources.
 Azure Notification Hubs: Used for sending push notifications to various
platforms from a single backend.

Important Concepts:

 Publishers, Topics, Event Subscriptions, and Event Handlers: Key
elements in Azure Event Grid.
 Namespaces, Notification Hubs, PNS, Registration, Tags, and
Templates: Key elements in Azure Notification Hubs.

Chapter Summary: Section 5.4 - Implement Solutions That Use Azure Service Bus
Overview:

This section covers the implementation of solutions using Azure Service Bus, a messaging
service that enables reliable communication between distributed applications and services.

Key Points and Definitions:

Azure Service Bus:

 Azure Service Bus: A fully managed enterprise integration message broker
that provides reliable and secure asynchronous communication between
microservices and applications.
 Namespace: A scoping container for all messaging components such as
queues and topics.
 Queue: A message storage entity in Service Bus that holds messages until
they are retrieved and processed by a receiver.
 Topic: Similar to a queue but allows multiple subscribers to independently
receive copies of each message.
 Subscription: A named entity inside a topic that receives messages.
 Message: A unit of data that is sent to a queue or topic.
 Message Session: Enables grouping of related messages for ordered
processing.
 Dead-letter Queue (DLQ): A sub-queue that holds messages that cannot
be delivered or processed.
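The key behavioral difference between a queue (each message goes to exactly one receiver) and a topic (each subscription gets its own copy) can be sketched in a few lines of Python. This simulates the delivery semantics only; it is not Service Bus code:

```python
from collections import deque

# Queue semantics: a message is consumed once, by a single receiver.
queue = deque(["m1", "m2"])
received_by_worker = queue.popleft()   # "m1" is removed from the queue

# Topic semantics: every subscription receives its own copy of each message.
subscriptions = {"billing": deque(), "audit": deque()}

def publish_to_topic(message):
    for sub in subscriptions.values():
        sub.append(message)            # fan-out: one copy per subscription

publish_to_topic("order-created")
print(received_by_worker, len(queue), [len(s) for s in subscriptions.values()])  # → m1 1 [1, 1]
```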

Creating and Managing Service Bus Components:

 Creating a Namespace:
o A namespace provides a unique scoping container for addressing
Service Bus resources within your application.
 Creating Queues and Topics:
o Queue:

bash
az servicebus queue create --resource-group myResourceGroup --namespace-name myNamespace --name myQueue

o Topic:

bash
az servicebus topic create --resource-group myResourceGroup --namespace-name myNamespace --name myTopic

 Creating Subscriptions:
o A subscription receives messages sent to a topic.
o Subscription:

bash
az servicebus topic subscription create --resource-group myResourceGroup --namespace-name myNamespace --topic-name myTopic --name mySubscription
Sending and Receiving Messages:

 Sending Messages:
o Use the Service Bus SDK or REST API to send messages to a queue or
topic.
o Example (C#):

csharp
QueueClient queueClient = new QueueClient(connectionString, queueName);
await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Hello, Service Bus!")));

 Receiving Messages:
o Use the Service Bus SDK or REST API to receive messages from a
queue or subscription.
o Example (C#):

csharp
QueueClient queueClient = new QueueClient(connectionString, queueName);
// ProcessMessagesAsync and ExceptionReceivedHandler are callbacks you define elsewhere.
MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
{
    MaxConcurrentCalls = 1,   // process one message at a time
    AutoComplete = false      // complete messages explicitly after successful processing
};
queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
Message Sessions:

 Message Session:
o Enables grouping of related messages for ordered processing.
o Allows handling of sessions for message correlation.
o Example:

csharp
IMessageSession session = await sessionClient.AcceptMessageSessionAsync(sessionId);
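Conceptually, a session groups messages that share a session ID so a single receiver can process them in order. That grouping step can be simulated in plain Python (illustrative names, not SDK calls):

```python
from collections import defaultdict

# Incoming messages tagged with a session ID; related messages share an ID.
messages = [
    ("order-1", "created"), ("order-2", "created"),
    ("order-1", "paid"), ("order-1", "shipped"), ("order-2", "cancelled"),
]

def group_by_session(msgs):
    """Group messages by session ID, preserving per-session arrival order."""
    sessions = defaultdict(list)
    for session_id, body in msgs:
        sessions[session_id].append(body)
    return dict(sessions)

print(group_by_session(messages))
```

Each resulting group corresponds to what one session receiver would see: all of that order's messages, in the order they arrived.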
Dead-letter Queues (DLQs):

 Dead-letter Queue (DLQ):


o Messages that cannot be delivered or processed are moved to the
DLQ.
o Allows inspection and reprocessing of problematic messages.
o Example:

csharp
queueClient = new QueueClient(connectionString, EntityNameHelper.FormatDeadLetterPath(queueName));
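The dead-lettering rule, moving a message to the DLQ once it exceeds its maximum delivery attempts, can be sketched as follows. This is a plain-Python simulation; `MAX_DELIVERY_COUNT` mirrors the role of the Service Bus `MaxDeliveryCount` setting:

```python
MAX_DELIVERY_COUNT = 3  # after this many failed deliveries, dead-letter the message

def process_with_dlq(messages, handler):
    """Retry each message up to MAX_DELIVERY_COUNT times, then dead-letter it."""
    completed, dead_letter_queue = [], []
    for msg in messages:
        for attempt in range(1, MAX_DELIVERY_COUNT + 1):
            try:
                handler(msg)
                completed.append(msg)
                break
            except ValueError:
                if attempt == MAX_DELIVERY_COUNT:
                    dead_letter_queue.append(msg)  # undeliverable: park it in the DLQ
    return completed, dead_letter_queue

def handler(msg):
    if msg == "bad":
        raise ValueError("cannot process")  # simulated poison message

print(process_with_dlq(["good", "bad"], handler))  # → (['good'], ['bad'])
```

Messages parked this way can then be inspected and reprocessed, which is the purpose of the DLQ sub-queue.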
Security and Authentication:

 Shared Access Policies:


o Used to control access to Service Bus resources.
o Defines the permissions for managing and accessing the Service Bus.
o Example:

bash
az servicebus namespace authorization-rule create --resource-group myResourceGroup --namespace-name myNamespace --name myAuthRule --rights Send Listen

 Managed Identities:
o Use Azure AD authentication to access Service Bus without managing
credentials.

Pricing Tiers:

 Basic:
o Suitable for smaller workloads.
 Standard:
o Supports topics and subscriptions in addition to queues.
 Premium:
o Provides dedicated resources and enhanced features such as isolation,
Geo-disaster recovery, and higher throughput.

Exam Tip:

 Azure Service Bus:


o Appropriate for handling high-volume and high-throughput messaging.
o Understand the roles of namespaces, queues, topics, subscriptions,
and dead-letter queues in managing messaging workflows.

By mastering these concepts and components, you can effectively implement robust and scalable
messaging solutions using Azure Service Bus.

Chapter 5: Section 5.4 - Implement Solutions That Use Azure Queue Storage Queues
Overview:

This section covers how to implement solutions using Azure Queue Storage queues. Azure
Queue Storage is a service for storing large numbers of messages that can be accessed from
anywhere via authenticated calls using HTTP or HTTPS.

Key Terms and Definitions:

 Queue Storage: A service for storing large numbers of messages that can
be accessed from anywhere via authenticated calls using HTTP or HTTPS.
 Message: A unit of data stored in a queue. Messages can be up to 64 KB in
size.
 Queue: A storage system for holding messages until they are processed.
 Visibility Timeout: The period a message is invisible to other clients after
being retrieved.
 Peek: The action of reading a message from the queue without removing it.
 Dequeue: The action of reading and removing a message from the queue.
 Poison Message: A message that cannot be processed successfully after
multiple attempts.
Key Points:

 Creating a Queue:
o Use the Azure portal, Azure CLI, or Azure Storage SDK to create a
queue.
o Example command using Azure CLI:

bash
az storage queue create --name myqueue --account-name mystorageaccount

 Adding Messages to a Queue:


o Use the PutMessage operation to add a message to a queue.
o Example code using Azure Storage SDK:

python
queue_client.send_message("This is a message")

 Reading and Deleting Messages:


o Use the GetMessage operation to read a message.
o Use the DeleteMessage operation to remove a message after
processing.
o Example code using Azure Storage SDK:

python
message = queue_client.receive_message()
queue_client.delete_message(message.id, message.pop_receipt)

 Handling Poison Messages:


o Implement logic to move messages to a separate queue after multiple
failed processing attempts.
o Example approach:
 Check the dequeue_count property of the message.
 If it exceeds a threshold, move the message to a "poison"
queue.
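The threshold check from the steps above might look like the following sketch. `POISON_THRESHOLD` and the return values are illustrative; `dequeue_count` corresponds to the property exposed on received queue messages:

```python
POISON_THRESHOLD = 5  # give up after this many failed processing attempts

def route_message(message: dict) -> str:
    """Decide whether a message should be processed or moved to the poison queue."""
    if message["dequeue_count"] > POISON_THRESHOLD:
        return "poison-queue"   # park it for inspection instead of retrying forever
    return "process"

print(route_message({"id": "1", "dequeue_count": 2}))   # prints "process"
print(route_message({"id": "2", "dequeue_count": 6}))   # prints "poison-queue"
```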

Important Side Notes:

 Visibility Timeout:
o When a message is read, it becomes invisible for a specified period
(default 30 seconds). If not deleted, it becomes visible again.
o Adjust the visibility timeout to match the expected processing time.

 Queue Monitoring:
o Monitor the length and age of messages in the queue to ensure timely
processing and identify bottlenecks.
 Security:
o Use Shared Access Signatures (SAS) to delegate access to queue
operations.
o Ensure secure access to queue storage using Azure AD and managed
identities.
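The visibility-timeout behavior described in the side note can be simulated to show why an unprocessed message "reappears." This toy model is illustrative only; the 30-second default matches the note above:

```python
DEFAULT_VISIBILITY_TIMEOUT = 30  # seconds; the default when none is specified

class SimulatedQueue:
    """Toy model of visibility timeout: a received message is hidden, not removed."""
    def __init__(self):
        self.messages = {}   # message id -> timestamp until which it is invisible

    def add(self, msg_id):
        self.messages[msg_id] = 0

    def receive(self, now, timeout=DEFAULT_VISIBILITY_TIMEOUT):
        for msg_id, invisible_until in self.messages.items():
            if now >= invisible_until:
                self.messages[msg_id] = now + timeout  # hide from other readers
                return msg_id
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)  # only deletion removes the message

q = SimulatedQueue()
q.add("m1")
assert q.receive(now=0) == "m1"    # read: message becomes invisible
assert q.receive(now=10) is None   # still within the timeout window
assert q.receive(now=31) == "m1"   # not deleted, so it reappears after 30 s
```

This is why the visibility timeout should be tuned to the expected processing time: too short, and a slow worker's message reappears and is processed twice; the delete call after successful processing is what actually removes it.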

Example Code Snippet:

 Adding a Message to a Queue:

python
from azure.storage.queue import QueueClient

queue_client = QueueClient.from_connection_string(conn_str="my_connection_string", queue_name="myqueue")
queue_client.send_message("Hello, Azure Queue Storage!")

Exam Tip:

You can use Azure Queue Storage to decouple and scale applications, providing reliable message
delivery between application components. Remember to handle poison messages and implement
appropriate security measures using SAS and Azure AD.

CHAPTER SUMMARY

 Azure App Service Logic Apps: Connect different services without specific
code.
 Logic App Workflows: Steps to exchange info between applications.
 Connectors: Get and send info from/to different services.
 Triggers: Events fired on source systems.
 Actions: Steps in a workflow.
 Graphical Editor: Simplifies workflow creation.
 Custom Connectors: Wrappers for REST/SOAP APIs for connecting apps with
Logic Apps, Flow, PowerApps.
 API Management (APIM): Publish back-end APIs securely.
 APIM Subscriptions: Authenticate API access.
 APIM Policies: Modify APIM gateway behavior.
 Event-Driven Architecture: Publisher doesn’t expect event
processing/storage by subscriber.
 Azure Event Grid: Service for event-driven architectures.
 Event Grid Topic: Endpoint for publishers.
 Subscribers: Services that read events from a topic.
 Azure Notification Hub: Unifies push notifications for mobile platforms.
 Azure Event Hub: Entry point for Big Data event pipelines, handles millions
of events/sec.
 Message-Driven Architecture: Publisher expects message
processing/storage by subscriber.
 Azure Service Bus/Azure Queue: Message broker services.

THOUGHT EXPERIMENT

1. Implement Business Process Across Azure and On-Premises:


o Solution: Use Azure Logic Apps with on-premises data gateway and
custom connectors.

2. Share LOB Application Data with Partner:


o Solution: Use API Management (APIM) to convert SOAP to REST,
secure with Azure AD, mutual certificates, or API keys.

3. Develop Decoupled Architecture for New Web Application:


o Solution: Use Azure Event Hub for high-volume event ingestion and
forwarding to services like Azure Storage or Data Lake. Choose Event
Hub for processing events over messages.

 Azure Virtual Machines (VMs)

 Azure VMs are scalable computing resources that you can use to run various applications
and services in the cloud.

 Azure Container Registry

 Azure Container Registry is a managed Docker container registry service used for storing
and managing private Docker container images.

 Azure Container Instances

 Azure Container Instances offer a quick and easy way to run containers in the cloud
without managing virtual machines or adopting a higher-level service.
 Azure Resource Manager (ARM)

 Azure Resource Manager provides a management layer that enables you to create,
update, and delete resources in your Azure account through a unified interface.

 Azure App Service

 Azure App Service is a fully managed platform for building, deploying, and scaling web
apps, mobile back ends, and RESTful APIs.

 Azure Monitor

 Azure Monitor maximizes the availability and performance of your applications by
delivering a comprehensive solution for collecting, analyzing, and acting on telemetry
from your cloud and on-premises environments.

 Azure DevOps

 Azure DevOps provides development collaboration tools including pipelines,
repositories, boards, test plans, and artifacts for managing the end-to-end software
development lifecycle.

 Azure Storage

 Azure Storage offers scalable, durable, and secure storage options for a variety of data
objects, including blobs, files, queues, and tables.

 Azure Functions

 Azure Functions is a serverless compute service that enables you to run code on-demand
without provisioning or managing infrastructure.

 Azure Durable Functions

 Azure Durable Functions is an extension of Azure Functions that lets you write stateful
functions in a serverless compute environment.

 Azure Key Vault

 Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud
applications and services, providing secure key management capabilities.

 Azure Cosmos DB

 Azure Cosmos DB is a globally distributed, multi-model database service designed for
high availability, low latency, and scalability.
 Azure SDKs

 Azure SDKs provide programming libraries for accessing Azure services in various
programming languages, enabling developers to interact with Azure resources
programmatically.

 Azure Table Storage

 Azure Table Storage offers NoSQL key-value storage for rapid development using large
datasets and applications requiring a flexible data schema.

 Azure Storage Explorer

 Azure Storage Explorer is a standalone app that enables you to easily work with Azure
Storage data on Windows, macOS, and Linux.

 AzCopy

 AzCopy is a command-line utility that you can use to copy blobs or files to or from a
storage account.

 Azure CLI

 Azure CLI is a set of commands used to create, manage, and configure Azure resources,
offering a streamlined command-line experience for developers.

 Azure Active Directory (Azure AD)

 Azure AD is a cloud-based identity and access management service that helps employees
sign in and access resources in external resources, such as Microsoft 365, the Azure
portal, and thousands of other SaaS applications.

 OAuth2

 OAuth2 is an authorization framework that enables third-party applications to obtain
limited access to user accounts on an HTTP service.

 Role-Based Access Control (RBAC)

 RBAC helps manage who has access to Azure resources, what they can do with those
resources, and what areas they have access to.

 Shared Access Signatures (SAS)

 SAS provides secure delegated access to resources in your storage account without
exposing the account keys.
 Azure App Configuration

 Azure App Configuration is a service that provides a centralized place to manage
application settings and feature flags across all your environments.

 Azure CDN (Content Delivery Network)

 Azure CDN provides global content delivery and caching to optimize speed and
availability for your web applications and static content.

 Azure Front Door

 Azure Front Door is a scalable and secure entry point for fast delivery of your global
applications, providing load balancing, SSL offload, and application acceleration.

 Azure Cache for Redis

 Azure Cache for Redis is a fully managed, in-memory cache that enables high-performance
and scalable architectures by storing data in memory for quick access.

 Dynamic Site Acceleration (DSA)

 DSA is a feature included in Azure CDN from Akamai and Verizon that optimizes the
delivery of dynamic web content.

 Application Insights

 Application Insights is an extensible application performance management (APM)
service for developers and DevOps professionals that provides telemetry data for
monitoring the performance and usage of applications.

 Azure Log Analytics

 Azure Log Analytics is a tool in the Azure portal used to edit and run log queries from
data collected by Azure Monitor.

 Azure Logic Apps

 Azure Logic Apps help you automate workflows and integrate apps, data, services, and
systems with pre-built or custom connectors.

 Custom Connectors

 Custom Connectors allow you to define your APIs and make them available as
connectors in Logic Apps, Microsoft Flow, and PowerApps.
 API Management (APIM)

 APIM helps organizations publish APIs to external, partner, and internal developers to
unlock the potential of their data and services.

 Azure Event Grid

 Azure Event Grid enables you to easily build applications with event-based architectures,
simplifying event routing from various sources to different handlers.

 Azure Notification Hubs

 Azure Notification Hubs provide a scalable, cross-platform push notification
infrastructure for sending notifications to mobile devices.

 Azure Event Hub

 Azure Event Hub is a big data streaming platform and event ingestion service capable of
processing millions of events per second with low latency.

 Azure Service Bus

 Azure Service Bus is a fully managed enterprise message broker with message queues
and publish-subscribe topics to integrate applications and services.

 Azure Queue Storage

 Azure Queue Storage is a service for storing large numbers of messages that can be
accessed from anywhere via authenticated calls using HTTP or HTTPS.
