D306 Study Guide
1.1
Key Concepts:
Provision VMs: Deploying a VM in Azure involves choosing an operating system and
configuring essential aspects like name, location, size, limits, and extensions. Supported
operating systems include Windows, Windows Server, and major Linux distributions.
Definitions:
Related Resources:
Resource Group: Every VM must be in a resource group (create new or reuse existing).
Storage Account: VM disks are .vhd files stored as page blobs (standard or premium).
Virtual Network: VMs need to connect to a virtual network.
Network Interface: VMs require a network interface for communication.
Deployment Methods:
High Availability:
Use a load balancer and availability set to ensure high availability and fault tolerance.
VMs in an availability set are distributed across multiple physical servers (fault and update
domains) so that a single hardware failure or maintenance event cannot take them all down at once.
Exam Tip:
Creating a VM is straightforward, but planning the deployment is crucial. Consider high
availability and scaling needs, and remember to create the availability set before the
VMs.
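A minimal Azure CLI sketch of this order of operations (resource names and domain counts are
illustrative assumptions):
bash
# Create the availability set first
az vm availability-set create --resource-group myRG --name myAvailSet \
    --platform-fault-domain-count 2 --platform-update-domain-count 5
# Then create each VM inside the availability set
az vm create --resource-group myRG --name myVM1 --image Win2019Datacenter \
    --availability-set myAvailSet --admin-username azureuser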
To configure remote access to your Azure VMs, ensure a public IP is set up, and use Network
Security Groups (NSGs) to manage traffic. By default, Azure enables remote protocols (RDP for
Windows and SSH for Linux).
Key Concepts:
Public IP: Needed for remote access; can be static (incurs cost) or dynamic.
Network Security Group (NSG): Manages security rules to allow or deny traffic to the
VM.
Security Rule: A rule within the NSG allowing traffic (e.g., TCP/3389 for RDP).
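As an illustration of such a security rule, here is a hedged CLI sketch that opens RDP on an
existing NSG (the resource group and NSG names are assumptions):
bash
# Allow inbound RDP (TCP/3389) through the network security group
az network nsg rule create --resource-group myRG --nsg-name myVmNsg \
    --name AllowRDP --priority 1000 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 3389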
Exam Tip:
Avoid configuring remote access over public IPs for production VMs. Instead, deploy a virtual
private network and use the private IP for remote access. This enhances security by limiting
exposure to the internet.
Key Concepts:
An ARM template is a JSON document with the following basic structure:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "functions": [ ],
  "resources": [ ],
  "outputs": { }
}
#!/bin/bash
#Azure CLI template deployment
az group create --name AZ204-ResourceGroup --location "West US"
az group deployment create \
--name AZ204DemoDeployment \
--resource-group AZ204-ResourceGroup \
--template-file az204-template.json \
--parameters @az204-parameters.json
The @ symbol is required in the az group deployment create command to reference the
parameters file.
dependsOn Element:
Parent-child relationships do not ensure deployment order; use dependsOn for correct
order.
Storing Templates:
Containerization is key for reliable and quick software deployment, reducing resource
requirements compared to virtual machines. Containers package code and dependencies, using
shared OS libraries, making them lightweight. Docker is the most widely used container
technology.
Key Concepts:
Container: A runnable package of code and dependencies that shares the host operating
system and runs directly in the environment.
Container image: A package containing everything needed to run an application.
Dockerfile: File defining the image, including application requirements.
Volumes: External mount points used to persist data across container reboots.
1. Directory: Create a directory for the new image, containing the Dockerfile, code, and
dependencies.
2. Dockerfile: Define the image requirements in a Dockerfile.
3. Command Line: Open a command line to run Docker commands.
4. Build Image: Use docker build --tag=<tag_name>[:<version>] <dockerfile_dir>
to create the image. If no version is given, the default is latest.
5. List Image: Verify the image creation with docker image ls.
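A short sketch of this build-and-verify loop, assuming a Dockerfile in the current directory and
an illustrative tag:
bash
# Build the image; the version defaults to latest if omitted
docker build --tag=myapp:1.0 .
# Verify the image was created
docker image ls
# Run a container from the image (port mapping is illustrative)
docker run --detach --publish 8080:80 myapp:1.0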
Docker Compose for Complex Applications: For complex applications requiring multiple
containers, Docker Compose is used to define and run multiple containers. Each service in
Docker Compose has a one-to-one relationship with an image but can have multiple instances
(containers).
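A minimal Docker Compose workflow sketch, assuming a docker-compose.yml in the current
directory defines the application's services:
bash
docker compose up --detach   # create and start one container per service instance
docker compose ps            # list the application's running containers
docker compose down          # stop and remove the containers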
Important Points:
Exam Tips:
The purpose of creating an image is to ensure your code is portable and independent from the
server executing it. For this, the image must be accessible to all servers. Storing your image in a
centralized service like Azure Container Registry (ACR) achieves this.
Tagging and Pushing an Image: Before uploading an image to ACR, you need to tag it using
the format <acr_name>.azurecr.io/[repository_name][:version].
1. Log in to Azure:
bash
az login
2. Log in to ACR:
bash
az acr login --name <acr_name>
3. Tag the Image:
bash
docker tag foobar <acr_name>.azurecr.io/<repository_name>/<image_name>
4. Push the Image:
bash
docker push <acr_name>.azurecr.io/<repository_name>/<image_name>
Exam Tip: A container registry is essential not only for storing images but also for automating
container deployment into Azure Container services. Continuous delivery services, like Azure
Pipelines, rely on container registries for deploying container images.
Once you have created your image and pushed it to your Azure Container Registry (ACR),
follow these steps to run the container using Azure Container Instance (ACI):
1. Create Images: Ensure all necessary images for your application are created.
2. Push to Registry: Upload the images to a container registry.
3. Deploy Application: Deploy the application from the registry to ACI.
To create and run a container in ACI from ACR using Admin account authentication:
4. Save the Script: Click the ellipsis icon below user information, then click Save and
provide a name for the script.
5. Execute the Script: Run the script in Azure Cloud Shell:
bash
sh <your_script_name>
Authentication Mechanisms: Options include individual login with Azure AD, admin
account, or service principal.
o Azure AD: Suitable for development and testing.
o Admin Account: Disabled by default, discouraged for production due to security
risks.
o Service Principal: Recommended for production to pull images securely.
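A hedged sketch of deploying from ACR to ACI with service principal credentials (registry,
image, and credential placeholders are assumptions):
bash
# Create a container instance, pulling the image from ACR as a service principal
az container create --resource-group myRG --name myaci \
    --image myregistry.azurecr.io/myrepo/myimage:v1 \
    --registry-login-server myregistry.azurecr.io \
    --registry-username <service_principal_app_id> \
    --registry-password <service_principal_password> \
    --ports 80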
Azure App Service is a Platform as a Service (PaaS) solution that allows you to develop web
applications, mobile app back-ends, or REST APIs without managing the underlying
infrastructure. It supports various programming languages (.NET, .NET Core, Java, Ruby,
Node.js, PHP, Python) and platforms (Linux, Windows). Key features include load balancing,
security, autoscaling, automated management, and integration with CI/CD tools like GitHub,
Docker Hub, and Azure DevOps.
Key Concepts
App Service Plan: Manages the group of VMs that host your web application.
o Region: Location where the App Service plan is deployed.
o Number of Instances: Number of VMs in the App Service plan.
o Size of Instances: Size of the VMs.
o Operating System Platform: OS (Linux or Windows) for the VMs.
o Pricing Tier: Features and cost of the App Service plan.
Pricing Tiers
Free Tier (F1): Basic, shared resources, not available for Linux VMs.
Standard and Premium Tiers: Better performance, custom domains, SSL, backups,
deployment slots.
Isolated Tier: Dedicated VMs and virtual networks, maximum scale-out capabilities.
Security Integration
Important Points
Operating System: Cannot change OS for the App Service without recreating it.
Continuous Integration/Deployment: Integrated with GitHub, Docker Hub, Azure
DevOps.
Scaling: Manually or automatically scale the number of VMs.
Deployment Slots: Use different slots for testing and production.
Summary
Azure App Service is a versatile PaaS solution supporting multiple languages and platforms,
providing robust infrastructure capabilities like load balancing and autoscaling. It integrates well
with CI/CD pipelines and offers various pricing tiers to match development and production
needs. Careful planning of App Service plans, resource groups, and pricing tiers ensures optimal
performance and cost-effectiveness.
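A minimal CLI sketch of creating an App Service plan and a web app inside it (names and tier
are illustrative):
bash
# Create a Standard-tier App Service plan, then a web app hosted on it
az appservice plan create --resource-group myRG --name myPlan --sku S1
az webapp create --resource-group myRG --plan myPlan --name my-unique-webapp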
Detailed Error Logging: Logs detailed information for HTTP status codes 400 or
greater, storing HTML files in the instance's file system (up to 50 files).
Failed Request Tracing: Logs detailed information about failed requests, including IIS
component traces and processing times.
Web Server Logging: Logs HTTP transaction information using the W3C extended log
file format, with customizable retention policies (default space quota: 35 MB).
Application Diagnostics
Send log messages directly from your code using the standard logging system of your
app's language. Different from Application Insights, which requires the Application
Insights SDK.
Deployment Diagnostics
bash
az webapp log download --resource-group <Resource group name> --name <App name>
Log Streams: View log messages as they are saved, available via the Azure portal or
Azure CLI:
bash
az webapp log tail --resource-group <Resource group name> --name <App name>
Exam Tip
Not all programming languages can write log information to Blob Storage. Blob Storage
is supported for .NET application logs, while Java, PHP, Node.js, or Python must use the
application log file system option.
By using these logging mechanisms, developers can ensure efficient monitoring and
troubleshooting of their applications deployed on Azure App Service.
Overview
Deploying code to Azure App Service can be done through various methods suitable for
continuous deployment or integration workflows.
Deployment Options
ZIP or WAR Files: Package all files and use the Kudu service to deploy.
FTP: Copy application files directly using the FTP/S endpoint.
Cloud Synchronization: Sync code from OneDrive or Dropbox with the App Service
using Kudu.
Continuous Deployment: Integrate with GitHub, BitBucket, or Azure Repos to deploy
updates.
Local Git Repository: Configure App Service as a remote repository for local Git and
push code to Azure.
ARM Template: Use Visual Studio and ARM templates for deployment.
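For example, a hedged sketch of the ZIP deployment option through Kudu (resource names and
package path are assumptions):
bash
# Push a prepackaged ZIP file to the web app
az webapp deployment source config-zip --resource-group myRG \
    --name my-unique-webapp --src ./app.zip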
Exam Tip
Configure Web App Settings Including SSL, API, and Connection Strings
Configuration Categories
Application Settings: Environment variables passed to your code, equivalent to
<appSettings> in Web.config or appsettings.json. Always encrypted at rest.
Connection Strings: Configure database connection strings, equivalent to
<connectionString> in Web.config or appsettings.json.
General Settings:
o Stack Settings: Configure the stack and version (e.g., .NET Core, .NET, Java,
PHP, Python).
o Platform Settings: 32- or 64-bit platform, IIS pipeline mode, FTP state, HTTP
version, web sockets, always on, ARR affinity, debugging, incoming client
certificates, default documents.
Path Mappings:
o Windows Apps (Uncontainerized): Handler mappings, virtual applications, and
directories.
o Containerized Apps: Mount points attached to containers during execution (up to
five Azure files or blob mount points per app).
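A hedged CLI sketch of creating the application setting and connection string that the examples
below read (resource names and values are placeholders):
bash
# Define an application setting and a SQL Azure connection string
az webapp config appsettings set --resource-group myRG --name my-unique-webapp \
    --settings testing-var1=value1
az webapp config connection-string set --resource-group myRG --name my-unique-webapp \
    --connection-string-type SQLAzure --settings testing-connsql1="<connection_string>"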
PHP Example:
php
$testing_var1 = getenv('APPSETTING_testing-var1');
$connection_string = getenv('SQLAZURECONNSTR_testing-connsql1');
ASP.NET Example:
csharp
System.Configuration.ConfigurationManager.AppSettings["testing-var1"];
System.Configuration.ConfigurationManager.ConnectionStrings["testing-connsql1"];
Exam Tip
Overview
Autoscaling in Azure allows you to dynamically assign more resources to your application as
needed, ensuring optimal performance without wasting resources. It addresses both vertical
(scaling up/down) and horizontal (scaling out/in) scaling needs.
Key Concepts
Vertical Scaling: Increases computing power by adding memory, CPU resources, and
IOPS to the application, usually by moving to a larger VM. This requires stopping the
system during resizing.
Horizontal Scaling: Increases application capacity by adding or removing instances of
the application, managed automatically by Azure. This does not require stopping the
system.
Autoscaling Rules
Autoscaling is typically configured through rules that determine when and how to scale:
Time-based Rules: Scale based on a schedule (e.g., increasing resources during the first
week of each month).
Metric-based Rules: Scale based on predefined metrics such as CPU usage, HTTP
queue length, or memory usage.
Custom-based Rules: Scale based on custom metrics exposed via Application Insights.
Scale based on CPU: Configure both scale-out and scale-in rules based on CPU usage.
Scale differently on weekdays vs. weekends: Use different profiles for weekdays and
weekends.
Scale during holidays: Add instances during high-demand periods like holidays.
Scale based on custom metrics: Use custom metrics for different layers of your
application (e.g., front-end, back-end, API).
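A sketch of a metric-based rule pair on an App Service plan, including the opposite scale-in
rule (names and thresholds are illustrative):
bash
# Attach an autoscale setting to the App Service plan
az monitor autoscale create --resource-group myRG --resource myPlan \
    --resource-type Microsoft.Web/serverfarms --name myAutoscale \
    --min-count 1 --max-count 5 --count 2
# Scale out when average CPU exceeds 70 percent...
az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
    --condition "CpuPercentage > 70 avg 5m" --scale out 1
# ...and create the opposite scale-in rule
az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
    --condition "CpuPercentage < 30 avg 5m" --scale in 1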
Exam Tips
Create Opposite Rules: Always create scale-in rules when you create scale-out rules to
ensure efficient resource usage.
Authorization: Ensure your continuous deployment system is authorized before
performing deployments.
Custom Metrics: Utilize Application Insights to expose custom metrics for autoscaling.
1.3
Azure Functions allow you to run pieces of code that solve particular problems within an
application. These functions operate like classes or functions within your code, receiving input,
executing logic, and producing output. They are highly cost-efficient, especially with the
Consumption pricing tier, where you are charged only for the time your code is running. Azure
Functions can also run on an existing App Service Plan if you already have other app services
running.
Example Scenario:
Key Points:
Exam Tip: Remember to install necessary extensions using the func extensions install
command from the Azure Function CLI tools before using bindings or triggers.
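A hedged sketch of the local development loop with the Azure Functions Core Tools (project and
function names are assumptions):
bash
# Scaffold a function app, add an HTTP-triggered function, install extensions, run locally
func init MyFunctionApp --worker-runtime dotnet
cd MyFunctionApp
func new --template "HTTP trigger" --name MyHttpFunction
func extensions install
func start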
Binding Expressions: Use curly braces to create dynamic paths or elements (e.g., {data.url}).
Summary: Azure Functions can be triggered based on various events such as data operations,
timers, and webhooks. These triggers initiate the function's execution and provide the necessary
data for the function to process.
Types of Triggers:
Azure Functions: Stateless; runtime doesn't maintain state between executions if the host
process or VM is recycled or rebooted.
Azure Durable Functions: Extension of Azure Functions providing stateful workflow
capabilities.
o Function Chaining: Allows calling functions in sequence while
maintaining state.
o Workflow Definition by Code: No need for JSON workflows or
external tools.
o Consistent Workflow State: Checkpoints save activity state during
waits.
Common Patterns:
Thought Experiment
1. Approval Workflow:
o Service: Azure Durable Functions
o Explanation: Use Human Interaction pattern for validation before
inserting data. Start workflow with Azure Blob Storage triggers.
Durable Functions handle human confirmation.
2. Performance Issues:
o Solution: Deploy Azure Durable Functions on Azure App Service Plans.
o Method: Configure autoscale rules in the Standard pricing tier to
add/remove resources based on CPU consumption or specific days.
Study usage patterns to set autoscale rules.
Important Definitions:
Exam Tip:
Chapter Summary
Thought Experiment
1. Approval Workflow:
o Service Needed: Azure Durable Functions.
o Reason: Allows human interaction for approval before inserting data,
using Blob Storage triggers to start the workflow.
2. Performance Issues:
o Solution: Deploy Azure Durable Functions on Azure App Service Plans.
o Action: Configure Autoscale rules to add resources during peak times
based on CPU consumption or specific days.
Summary: This chapter discusses the essential aspects of designing and implementing storage
solutions using Microsoft Azure. It highlights the challenges in storing data persistently and
efficiently, and how Azure's various storage solutions can address these challenges. The chapter
focuses on Cosmos DB storage and Blob Storage, detailing their features, benefits, and
implementation strategies.
Key Points:
1. Cosmos DB Storage:
o Globally distributed, low-latency, highly responsive, always-online
database service.
o Scalable across the globe with a simple configuration.
o Multiple APIs for accessing data: SQL, Table, Cassandra, MongoDB, and
Gremlin.
o SDKs available for various languages such as .NET, Java, Node.js, and
Python.
2. Blob Storage:
o Detailed information not covered in this section.
Cosmos DB Features:
Key Considerations:
Partitioning Schemes:
Logical Partitions: Smaller data slices within a container, sharing the same
partition key.
Physical Partitions: Groups of logical partitions managed by Azure,
containing replicas of data.
Partition Key: Critical for performance; immutable once set. Should ensure
even distribution of data and avoid "hot" partitions.
Exam Tip:
Quick Summary
Azure Cosmos DB allows you to interact with data using various APIs. Once you select an API
for your Cosmos DB account, it cannot be changed. This example focuses on using the Cosmos
DB SQL API with .NET Core to create, update, and delete elements in a Cosmos DB account. It
details the setup of a .NET Core console application, adding necessary NuGet packages, and
configuring the application to interact with Cosmos DB. The SDK provides classes such as
CosmosClient, Database, and Container for managing these elements. Methods like
CreateDatabaseIfNotExistsAsync, CreateContainerIfNotExistsAsync, and
UpsertItemAsync are used for CRUD operations. Consistency levels (Strong, Bounded
Staleness, Session, Consistent Prefix, and Eventual) determine how data is replicated across
regions, impacting latency, availability, and data consistency.
Exam Tip
The consistency level you choose affects latency and availability. Avoid the most extreme levels
unless necessary, as they can significantly impact your application. If unsure, the session
consistency level is generally the best-balanced option for most applications.
Quick Summary
In Cosmos DB, you create databases and containers to store and manage data. The API chosen
for a Cosmos DB account determines the storage and data access method. Databases group
containers and are similar to namespaces, while containers are the primary units of scalability for
throughput and storage. When creating containers, you can set properties like partition keys,
throughput modes, indexing policies, TTL, change feed policies, and unique keys. Some
properties can only be set during the container creation process, so planning is crucial.
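A minimal CLI sketch that fixes the partition key at container creation time (account, database,
and key path are assumptions):
bash
# Create a SQL API database and a container; the partition key cannot be changed later
az cosmosdb sql database create --account-name mycosmosaccount \
    --resource-group myRG --name mydb
az cosmosdb sql container create --account-name mycosmosaccount \
    --resource-group myRG --database-name mydb --name mycontainer \
    --partition-key-path /userId --throughput 400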
Key Terms and Definitions
Exam Tip
Carefully plan the creation of new containers in Azure Cosmos DB. Some properties, such as
unique keys and partition keys, can only be set during the container creation process. Modifying
these properties later requires creating a new container and migrating data.
Moving Items in Blob Storage between Storage Accounts or
Containers
Summary: When managing Azure Blob Storage, you may need to move blobs between storage
accounts or containers. Tools available for these tasks include Azure Storage Explorer, AzCopy,
Python (using azure-storage-blob), and SSIS. These tools typically perform copy operations
followed by deleting the source blob or container.
Key Points:
1. Open Azure Storage Explorer and log into your Azure subscription.
2. Navigate to the source storage account and container.
3. Select the blob to move, copy it, then navigate to the destination
container and paste it.
4. Confirm the copy completion, then delete the source blob.
Definitions:
Exam Tip:
You can move blobs and containers across different storage accounts,
regions, and subscriptions, provided you have sufficient access privileges for
both the source and destination accounts.
Summary: Azure Storage allows you to work with additional information assigned to your blobs
through system properties and user-defined metadata. System properties are automatically added
by the storage service and can be either modifiable or read-only. User-defined metadata consists
of key-value pairs added for your purposes and needs to be managed manually.
Key Points:
SDKs: Use the appropriate SDK (e.g., .NET SDK) to set and retrieve properties
and metadata.
Azure CLI: Use commands such as az storage blob metadata for metadata
operations.
Azure Portal: View and edit properties and metadata through the Properties
and Metadata sections.
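For instance, a hedged sketch of the metadata commands (account, container, and blob names are
assumptions):
bash
# Set, then read, user-defined metadata on a blob
az storage blob metadata update --account-name mystorageaccount \
    --container-name mycontainer --name myblob.txt --metadata department=sales
az storage blob metadata show --account-name mystorageaccount \
    --container-name mycontainer --name myblob.txt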
Definitions:
Summary: Microsoft provides several SDKs for working with Azure Storage, supporting
various programming languages like .NET, Java, Python, JavaScript, Go, PHP, and Ruby. These
SDKs offer more control over the data operations compared to other tools. You can perform
operations such as moving blobs between containers and storage accounts programmatically
using these SDKs.
Key Points:
Definitions:
Example Operations:
Exam Tip: Remember, when moving blobs, you must perform a copy operation first and then
delete the source blob. There is no direct move method in the SDKs.
Move Items in Blob Storage Between Storage Accounts or
Containers
Summary: Azure provides tools like Azure Storage Explorer, AzCopy, Python SDK, and SSIS
for moving blobs between storage accounts or containers. Moving a blob involves copying it to
the destination and then deleting the original.
Key Points:
Definitions:
Exam Tip: Remember that moving blobs involves copying and then deleting the original. Tools
like AzCopy are ideal for bulk operations and cross-account blob copying.
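A hedged AzCopy sketch of the copy-then-delete pattern (account names and SAS tokens are
placeholders):
bash
# Copy the blob to the destination account, then remove the source
azcopy copy "https://sourceaccount.blob.core.windows.net/mycontainer/myblob?<SAS>" \
    "https://destaccount.blob.core.windows.net/mycontainer/myblob?<SAS>"
azcopy remove "https://sourceaccount.blob.core.windows.net/mycontainer/myblob?<SAS>"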
Quick Summary:
Azure Blob Storage provides different access tiers (Hot, Cool, and Archive) to optimize storage
costs and performance based on data access frequency. The Hot tier is for frequently accessed
data, Cool for less frequently accessed data, and Archive for rarely accessed data. You can
implement data archiving and retention policies using lifecycle management policies, which
automate moving data between tiers. SDKs allow for programmatically managing blobs and
changing access tiers.
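A one-line sketch of changing a blob's access tier from the CLI (names are assumptions;
rehydration applies when leaving Archive):
bash
az storage blob set-tier --account-name mystorageaccount \
    --container-name mycontainer --name myblob.txt --tier Cool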
Key Points:
Exam Tips:
Access Tiers: Remember that only GPv2 Storage Accounts support access
tiers.
Blob Movement: Moving blobs involves a copy operation followed by
deleting the original blob.
Leases: Use leases to prevent simultaneous access conflicts on blobs. Infinite
leases must be released manually.
Rehydration: Be aware of the rehydration process when moving data from
Archive to other tiers.
Lifecycle Management: Policies are evaluated every 24 hours, and changes
may not be immediate.
SDK Version: Be mindful of which SDK version you are using as they offer
different features.
Chapter Summary
Thought Experiment
Key Concepts:
1. OAuth2 Authentication:
o OAuth2 is a protocol for authorization that allows applications to
securely access resources on behalf of a user without sharing
credentials.
o Key components include the Resource Owner (user), Resource Server
(API or service), Client (application), and Authorization Server
(responsible for issuing tokens).
2. Authentication Flows:
o Authorization Code Flow: Suitable for server-side applications. It
involves redirecting the user to an authorization server to get an
authorization code, which is then exchanged for an access token.
o Implicit Flow: Designed for client-side applications (e.g., SPA). It
directly issues an access token without an intermediate authorization
code.
o Resource Owner Password Credentials Flow: Used when the
application has a high degree of trust, where the user provides their
credentials directly to the client application.
o Client Credentials Flow: Used for application-to-application
communication, where the client uses its own credentials to access
resources.
3. Azure AD Integration:
o Register applications in Azure AD to use OAuth2 for authentication.
o Define supported account types and configure redirect URIs.
o Manage client secrets and certificates for securing API access.
Definitions:
OAuth2: An open standard for access delegation commonly used for token-
based authentication and authorization on the internet.
Authorization Code: A temporary code that the client will exchange for an
access token.
Access Token: A token that the client uses to make authenticated requests
on behalf of the user.
Refresh Token: A token used to obtain a new access token without requiring
the user to re-authenticate.
JWT (JSON Web Token): A compact, URL-safe means of representing claims
to be transferred between two parties.
Key Points:
OAuth2 decouples authentication from authorization, allowing applications to
access resources without exposing user credentials.
Tokens have different lifespans, and refresh tokens help maintain user
sessions without re-authentication.
Secure applications by integrating Azure Active Directory (Azure AD) for
OAuth2 authentication.
Use Azure AD to register apps, configure redirect URIs, manage client secrets,
and define API permissions.
csharp
// Configuring OAuth2 in Startup.Auth.cs
var OAuthOptions = new OAuthAuthorizationServerOptions
{
    TokenEndpointPath = new PathString("/token"),
    Provider = new ApplicationOAuthProvider(PublicClientId),
    AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
    AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
    AllowInsecureHttp = true // Set to false in production
};
app.UseOAuthAuthorizationServer(OAuthOptions);
app.UseOAuthBearerTokens(OAuthOptions);
Exam Tips:
Exam Tip Box: When you are working with OAuth2 authentication, remember that you don’t
need to store the username and password information in your system. You can delegate that task
to specialized authentication servers. Once the user has been authenticated successfully, the
authentication server sends an access token that you can use for confirming the identity of the
client. This access token needs to be refreshed once the token expires. OAuth2 can use a
refresh token to request a new access token without asking the user again for their
credentials.
Overview: Shared Access Signatures (SAS) are a secure way to grant limited access to resources
in Azure Storage without exposing your account key. SAS tokens are useful for sharing data with
clients or services while controlling the permissions, duration, and access restrictions.
Key Concepts:
1. Types of SAS:
o Service SAS: Provides access to a specific service (Blob, Queue, Table,
or File) within a storage account.
o Account SAS: Grants access to resources within a storage account,
with more extensive permissions than a service SAS.
o User Delegation SAS: Uses Azure AD credentials to delegate access
to Blob Storage or Data Lake Storage Gen2.
2. Components of SAS:
o Permissions: Define the allowed operations, such as read, write,
delete, and list.
o Expiry Time: Sets the validity period for the SAS token.
o Resource: Specifies the resource type (e.g., blob, container, queue).
o IP Address or IP Range: Restricts access to specific IP addresses or
ranges.
o Protocol: Limits the allowed protocol (HTTPS or HTTP).
Definitions:
Key Points:
SAS tokens allow you to grant limited and controlled access to your Azure
Storage resources.
You can specify permissions, expiry time, resource type, IP address
restrictions, and protocol limitations in a SAS token.
Using SAS tokens is a secure way to share data without exposing your
storage account keys.
User Delegation SAS provides enhanced security by leveraging Azure AD
credentials.
Stored Access Policies simplify SAS management by centralizing the SAS
settings at the container level.
csharp
// Generate a service SAS for a blob
BlobSasBuilder sasBuilder = new BlobSasBuilder
{
BlobContainerName = "mycontainer",
BlobName = "myblob.txt",
Resource = "b",
ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read | BlobSasPermissions.Write);
string sasToken = sasBuilder.ToSasQueryParameters(new StorageSharedKeyCredential(accountName, accountKey)).ToString();
Console.WriteLine($"SAS Token: {sasToken}");
Exam Tips:
User Delegation SAS: Available only for Azure Blob Storage and Azure Data
Lake Storage Gen2. It cannot use Stored Access Policies.
Stored Access Policies: Simplify the management of SAS tokens but are
not applicable with User Delegation SAS.
Permissions and Restrictions: Understand how to specify permissions,
expiry times, IP restrictions, and protocols when creating SAS tokens.
Security Best Practices: Always prefer User Delegation SAS for enhanced
security and avoid using account keys directly.
Exam Tip Box: If you plan to work with user delegation SAS, you need to consider that this
type of SAS is available only for Azure Blob Storage and Azure Data Lake Storage Gen2. You
cannot use Stored Access Policies when working with user delegation SAS.
Key Takeaway: Implementing Shared Access Signatures (SAS) is essential for securely
granting limited access to Azure Storage resources. Understanding the different types of SAS,
their components, and how to create and manage them is crucial for developing secure and
efficient cloud solutions.
Section Summary: Register Apps and Use Azure Active Directory
to Authenticate Users
Overview: This section covers how to register applications in Azure Active Directory (Azure
AD) and use it to authenticate users. Registering apps in Azure AD allows you to manage
identities and access to resources, ensuring secure and streamlined authentication processes.
Key Concepts:
Definitions:
Key Points:
o Single tenant: Accessible only within the Azure AD tenant where it was
registered.
o Multitenant: Accessible by users from any Azure AD tenant.
o Personal Microsoft accounts: Accessible by personal Microsoft account
users.
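A hedged sketch of registering a multitenant app from the CLI (flags per recent Azure CLI
versions; the display name and redirect URI are assumptions):
bash
# Register an application that any Azure AD organization can sign in to
az ad app create --display-name "MyWebApp" \
    --sign-in-audience AzureADMultipleOrgs \
    --web-redirect-uris "https://myapp.example.com/signin-callback"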
Authentication Methods:
Exam Tips:
Exam Tip Box: When you are registering a new application in your Azure Active Directory
tenant, you need to consider who your target users will be. If you need users from any Azure
Active Directory organization to be able to log into your application, you need to configure a
multitenant app. In multitenant scenarios, app registration and management are always performed
in your tenant, not in any external tenant.
Overview: This section explains how to manage access to Azure resources using Role-Based
Access Control (RBAC). RBAC allows precise access management by assigning roles to users,
groups, or service principals, ensuring they have the necessary permissions to perform their
tasks.
Key Concepts:
Definitions:
Key Points:
Assigning Roles:
o Navigate to the desired resource in the Azure portal.
o Select "Access control (IAM)".
o Click "Add" and then "Add role assignment".
o Choose the role and select the user, group, or service principal to
assign the role to.
Creating Custom Roles:
o Identify specific actions required for the role.
o Define the role using JSON format.
o Assign the role to users, groups, or service principals.
bash
az role assignment create --assignee <userPrincipalName> --role <roleName> --scope <resourceScope>
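For custom roles, a hedged sketch assuming the JSON definition has been written to a local file:
bash
# Create a custom role from a JSON definition file
az role definition create --role-definition @customrole.json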
Exam Tips:
Exam Tip Box: When you are assigning specific service roles, carefully review the permissions
granted by the role. In general, granting access to a resource doesn’t grant access to the data
managed by that resource. For example, the Storage Account Contributor grants access for
managing Storage Accounts but doesn’t grant access to the data itself.
Overview: This section discusses how to manage keys, secrets, and certificates using the Azure
Key Vault API. Azure Key Vault is a cloud service for securely storing and accessing sensitive
information like passwords, connection strings, and cryptographic keys. The section covers how
to perform various operations on these items using the Key Vault API.
Definitions:
Key Vault: A service for securely storing and accessing secrets, keys, and
certificates.
Secret: Sensitive data such as passwords and API keys stored securely.
Key: Cryptographic keys used for encryption and decryption.
Certificate: Digital certificates used for secure communication.
Access Policies: Policies that define permissions for accessing items in the
Key Vault.
Security Principal: An entity that can request access to resources. This
includes users, groups, and applications.
Least Privilege Principle: Security principle that users should be granted
the minimum levels of access – or permissions – needed to perform their job
functions.
Key Points:
Managing Keys:
o Use the KeyClient class to manage keys.
o Methods include CreateKey, GetKey, UpdateKey, and DeleteKey.
Handling Certificates:
o Use the CertificateClient class to manage certificates.
o Methods include CreateCertificate, GetCertificate,
UpdateCertificate, and DeleteCertificate.
Access Policies:
o Define who can access the Key Vault and what operations they can
perform.
o Follow the principle of least privilege to grant minimal necessary
permissions.
Important Code Snippets: Creating and retrieving a secret using the .NET SDK:
csharp
var client = new SecretClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
client.SetSecret("SecretName", "SecretValue");
KeyVaultSecret secret = client.GetSecret("SecretName");
Console.WriteLine(secret.Value);
csharp
var client = new KeyClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
client.CreateKey("KeyName", KeyType.Rsa);
KeyVaultKey key = client.GetKey("KeyName");
Console.WriteLine(key.Key.ToString());
csharp
var client = new CertificateClient(new Uri(keyVaultUrl), new DefaultAzureCredential());
client.StartCreateCertificate("CertificateName", new CertificatePolicy());
KeyVaultCertificate certificate = client.GetCertificate("CertificateName");
Console.WriteLine(certificate.Properties.Version);
Exam Tips:
Exam Tip Box: The kind of information that you usually store in an Azure Key Vault is
essential information that needs to be kept secret, like passwords, connection strings, private
keys, and things like that. When configuring access to your Key Vault, carefully review the
access level you grant to the security principal. As a best practice, you should always apply the
principle of least privilege. You grant access to the different levels in a Key Vault by creating
Access Policies.
Section Summary: Implement Managed Identities for Azure
Resources
Overview: This section discusses the implementation of Managed Identities for Azure resources.
Managed Identities provide Azure services with an automatically managed identity in Azure
Active Directory (Azure AD) that can be used to authenticate to any service that supports Azure
AD authentication, without requiring credentials in your code.
Definitions:
Key Points:
Assigning a system-assigned managed identity when creating an Azure Virtual Machine using Azure CLI:
bash
az vm create --resource-group myResourceGroup --name myVM --image UbuntuLTS --assign-identity
Assigning a user-assigned managed identity to an Azure Virtual Machine using Azure CLI:
bash
az identity create --resource-group myResourceGroup --name myIdentity
az vm identity assign --resource-group myResourceGroup --name myVM --identities myIdentity
Exam Tips:
Exam Tip Box: You can configure two different types of managed identities: system- and user-
assigned. System-assigned managed identities are tied to the service instance. If you delete the
service instance, the system-assigned managed identity is automatically deleted as well. You can
assign the same user-assigned managed identities to several service instances.
Exam Tips
This section discusses how to implement Content Delivery Networks (CDNs) in your solutions
to improve the performance of web applications by caching static content and serving it from
locations closer to the user.
Definitions:
Key Points:
1. CDN Setup:
o To set up a CDN, you need to create a CDN profile and endpoint in the
Azure portal.
o The origin server URL is specified when setting up the CDN endpoint.
o Propagation of the CDN typically completes within 10 minutes for the
Standard Microsoft CDN.
2. CDN Benefits:
o Reduces latency by serving content from a location closer to the user.
o Offloads traffic from the origin server, improving its performance.
5. Cache Control:
o Set cache expiration policies to control how long content is cached.
o Use HTTP headers like Cache-Control to manage caching behavior.
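Illustrating the setup step above, a hedged sketch of creating a CDN profile and endpoint (names
and origin are assumptions):
bash
# Create a CDN profile and an endpoint that fronts the origin server
az cdn profile create --resource-group myRG --name myCDNProfile --sku Standard_Microsoft
az cdn endpoint create --resource-group myRG --profile-name myCDNProfile \
    --name myCDNEndpoint --origin www.example.com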
Here's a brief code snippet to demonstrate setting cache control headers in a web application:
csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.Use(async (context, next) =>
    {
        // Cache every response publicly for up to 10 minutes
        context.Response.Headers["Cache-Control"] = "public,max-age=600";
        await next();
    });
}
Exam Tip Box:
Exam Tip: Content Delivery Networks (CDN) are appropriate for caching static content that
changes infrequently. Although Azure CDN from Akamai and Azure CDN from Verizon include
Dynamic Site Acceleration (DSA), this feature is not the same as a cache system. You should not
confuse Azure CDN DSA optimization with Azure CDN cache.
This section discusses the configuration of cache and expiration policies for Azure Front Door,
Content Delivery Networks (CDNs), and Azure Cache for Redis. It covers how to optimize
performance by caching static content and frequently accessed dynamic data.
Definitions:
Azure Front Door: A scalable and secure entry point for fast delivery of your global
applications.
Content Delivery Network (CDN): A distributed network of servers that delivers web content
to a user based on the geographic location of the user, the origin of the webpage, and a content
delivery server.
Azure Cache for Redis: A secure, dedicated cache service that provides a high-performance
caching solution.
Cache Expiration Policies: Rules that define how long data should be stored in the cache before
it is considered stale.
Key Points:
Azure Front Door provides global load balancing and near-instant failover
for high availability.
CDNs reduce latency and improve load times by caching content at
strategically placed edge locations.
Azure Cache for Redis can be used for both static content and dynamic
data, providing low-latency access to frequently accessed data.
Cache Expiration Policies ensure that cached content is refreshed
periodically to reflect the most up-to-date information.
Dynamic Site Acceleration (DSA) improves the performance of dynamic
content delivery, although it is not a caching mechanism.
Custom Domain configuration allows you to use a user-friendly URL for
accessing your CDN content.
Compression can significantly reduce the size of transmitted data,
enhancing the performance of web applications.
Caching Rules allow granular control over the caching behavior, enabling
different cache settings for different types of content.
Geo-filtering can be used to restrict or allow access to content based on the
user’s location.
Optimization settings in CDNs can be tailored to different types of content
to maximize performance.
Example of setting cache expiration policy in Azure Front Door using Azure CLI:
bash
az network front-door routing-rule update --resource-group myResourceGroup --front-door-name myFrontDoor --name myRoutingRule --caching duration=10m
Example of configuring cache settings for Azure CDN using Azure CLI:
bash
az cdn endpoint update --resource-group myResourceGroup --profile-name myCDNProfile --name myCDNEndpoint --content-types-to-compress text/html application/json --query-string-caching-behavior IgnoreQueryString
Example of creating an Azure Cache for Redis instance using Azure CLI:
bash
az redis create --name myCache --resource-group myResourceGroup --location westus --sku Basic --vm-size c0
Exam Tips:
Azure Front Door vs. CDN: Understand the differences and use cases for
Azure Front Door and Azure CDN, especially regarding global load balancing
and content caching.
Cache Expiration Policies: Be familiar with configuring cache expiration
policies to ensure that cached content is kept up-to-date without unnecessary
latency.
Dynamic Content Delivery: Recognize that Dynamic Site Acceleration
(DSA) optimizes the delivery of dynamic content and is not the same as
caching static content.
Custom Domains and Compression: Know how to configure custom
domains and enable compression to improve the performance and user
experience of web applications.
Geo-filtering and Optimization: Understand how geo-filtering and
optimization settings can be used to enhance content delivery based on user
location and content type.
Exam Tip Box:
Exam Tip: You can use Azure Cache for Redis for static content and the most-accessed dynamic
data. You can use it for in-memory databases or message queues using a publication/subscription
pattern.
This section covers the process of configuring instrumentation in an application or service using
Azure Application Insights. Instrumentation is crucial for monitoring the performance and usage
of applications, detecting issues, and understanding user behavior.
Definitions:
Key Points:
Important Concepts:
SDK Installation: The first step in using Application Insights is to install the
SDK appropriate for your application platform (e.g., .NET, Java, JavaScript).
Instrumentation Key: Configure your application with the instrumentation
key from the Application Insights resource to ensure that telemetry data is
sent to the correct resource.
Telemetry Collection: The SDK collects and sends various telemetry data to
Application Insights. This includes automatic collection of requests,
dependencies, and exceptions.
Custom Events and Metrics: You can enhance the insights by sending
custom events and metrics that provide business-specific information.
Live Metrics Stream: Use this feature to monitor real-time telemetry data
for immediate insights into application performance.
csharp
// Install the SDK from the Package Manager Console:
// Install-Package Microsoft.ApplicationInsights.AspNetCore

// Register Application Insights telemetry in Startup.ConfigureServices:
services.AddApplicationInsightsTelemetry(Configuration["ApplicationInsights:InstrumentationKey"]);
csharp
var telemetryClient = new TelemetryClient();
telemetryClient.TrackEvent("CustomEvent", new Dictionary<string, string> { { "EventDetail", "EventValue" } });
Exam Tips:
Exam Tip: Remember that Application Insights is a solution for monitoring the behavior of an
application on different platforms, written in different languages. You can use Application
Insights with web applications and native applications or mobile applications written in .NET,
Java, JavaScript, or Node.js. There is no requirement to run your application in Azure. You only
need to use Azure for deploying the Application Insights resource that you use for analyzing the
information sent by your application.
This section focuses on using Azure Monitor to analyze log data and troubleshoot issues within
applications and services. Azure Monitor is a comprehensive monitoring solution that helps you
collect, analyze, and act on telemetry data from your cloud and on-premises environments.
Definitions:
Azure Monitor: A platform service that provides a single source for monitoring Azure
resources. It collects metrics and logs, sets up alerts, and enables monitoring and diagnostics.
Diagnostics Logs: Logs that provide detailed information about the operations and activities
within an Azure resource. These logs are essential for troubleshooting and monitoring the health
of your services.
Log Analytics: A service within Azure Monitor that helps collect and analyze log data from
various sources. It uses a powerful query language called Kusto Query Language (KQL).
Kusto Query Language (KQL): The query language used in Log Analytics for analyzing large
datasets. KQL is designed for querying structured, semi-structured, and unstructured data.
Workspace: A container in Log Analytics where data from various sources is stored and queried.
Log Queries: Queries written in KQL to retrieve and analyze data from the Log Analytics
workspace.
Metrics: Quantitative measurements that provide insights into the health and performance of
resources, such as CPU usage, memory usage, and request rates.
Alerts: Notifications or automated actions triggered when specific conditions are met based on
metrics or log data.
Azure Resource Manager (ARM): The deployment and management service for Azure. ARM
provides a consistent management layer that allows you to create, update, and delete resources
in your Azure account.
Key Points:
Important Concepts:
Example of enabling diagnostics logs for an Azure App Service using Azure CLI:
bash
az monitor diagnostic-settings create --name <diagnostic_setting_name> --resource <resource_id> --logs '[{"category": "AppServiceHTTPLogs", "enabled": true}]' --metrics '[{"category": "AllMetrics", "enabled": true}]'
Example of a KQL query to retrieve log data from Log Analytics workspace:
kql
AppRequests
| where TimeGenerated > ago(1h)
| summarize count() by bin(TimeGenerated, 5m)
Exam Tips:
Enabling Diagnostics Logs: Ensure that you have enabled diagnostics logs
for your Azure resources, as they are crucial for analyzing log data and
troubleshooting issues.
Using Log Analytics: Be familiar with creating and running KQL queries in
Log Analytics to analyze data and identify issues.
Setting Up Alerts: Know how to set up alerts in Azure Monitor to get
notified or trigger actions based on specific conditions.
KQL Proficiency: Understanding KQL is important for querying and
analyzing log data effectively.
Exam Tip: When you try to query logs from the Azure Monitor, remember that you need to
enable the diagnostics logs for the Azure App Services. If you get the message, "We didn’t find
any logs" when you try to query the logs for your Azure App Service, that could mean that you
need to configure the diagnostic settings in your App Service.
This section covers how to implement Application Insights web tests and alerts. Application
Insights web tests allow you to monitor the availability and performance of your web
applications by simulating user interactions. Alerts notify you when certain conditions are met,
such as a web test failure or a performance issue.
Definitions:
Key Points:
Important Concepts:
Web Tests in Application Insights: Used for monitoring and ensuring the
availability and performance of web applications.
Types of Web Tests: Understand the difference between URL Ping Tests and
Multistep Web Tests.
Setting Up Alerts: Know how to configure alerts to get notified or take
automated actions based on specific conditions.
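A hedged sketch of an availability-based metric alert (resource IDs, threshold, and action group
are assumptions):
bash
# Alert when availability drops below 90 percent over a 5-minute window
az monitor metrics alert create --resource-group myRG --name availabilityAlert \
    --scopes <application_insights_resource_id> \
    --condition "avg availabilityResults/availabilityPercentage < 90" \
    --window-size 5m --evaluation-frequency 1m \
    --action <action_group_resource_id>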
Exam Tips:
Exam Tip: Remember that you need a Visual Studio Enterprise license for creating multistep
web tests. You use the Visual Studio Enterprise for the definition of the steps that are part of the
test, and then you upload the test definition to Azure Application Insights.
This section explains how to implement code that handles transient faults in your applications.
Transient faults are temporary errors that occur in cloud environments, often due to temporary
unavailability or connectivity issues. Properly handling these faults can improve the resilience
and reliability of your applications.
Definitions:
Transient Faults: Temporary errors that are usually self-correcting and occur sporadically in
cloud environments due to factors like network congestion, service unavailability, or throttling.
Retry Strategy: A method of handling transient faults by retrying the failed operation after a
certain period.
Exponential Backoff: A retry strategy where the wait time between retries increases
exponentially.
Circuit Breaker: A design pattern used to detect failures and encapsulate the logic of
preventing an application from trying to perform an operation that is likely to fail.
Key Points:
Important Concepts:
csharp
public async Task<T> RetryOnExceptionAsync<T>(int maxRetries, Func<Task<T>> operation)
{
int attempt = 0;
while (true)
{
try
{
return await operation();
}
catch (Exception ex) when (attempt < maxRetries)
{
attempt++;
var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt));
await Task.Delay(delay);
}
}
}
Exam Tips:
Exam Tip: Remember to test your retry strategy carefully. Using a wrong retry strategy could
lead your application to exhaust the resources needed for executing your code. A wrong retry
strategy can potentially lead to infinite loops if you don’t use circuit breakers.
Your company's eCommerce LOB application faces stability and performance issues, especially
during high usage periods like holidays. The application interacts with external systems. You
need to address complaints regarding its stability and performance.
Exam Tips:
Retry Strategy: Test your retry strategy to avoid resource exhaustion and
infinite loops.
Caching and Redis: Use Redis for caching static content and dynamic data,
as well as for in-memory databases or message queues.
CDNs: Use CDNs for caching static content with infrequent changes; do not
confuse DSA with cache systems.
Application Insights: Suitable for monitoring applications across various
platforms and languages, not limited to Azure-hosted applications.
Exam Tip:
Transient Faults: Test your retry strategy carefully. An incorrect strategy can
exhaust resources or cause infinite loops if not combined with circuit
breakers.
Azure Cache for Redis: Suitable for caching static content and dynamic
data, supporting in-memory databases or message queues.
CDNs: Ideal for caching static content with infrequent changes. Do not
confuse DSA with cache systems in Azure CDN.
Application Insights: Monitors applications across different platforms and
languages. Does not require the application to run in Azure, only the
deployment of Application Insights in Azure.
Section Summary: Create a Logic App
Overview: This section discusses how to create a Logic App in Azure. Logic Apps provide a
platform for building automated workflows that integrate with various services and systems. This
enables you to design complex workflows with minimal code by leveraging connectors and pre-
built templates.
Logic App: A cloud service in Azure that automates and orchestrates tasks,
business processes, and workflows by integrating various services and
systems.
Triggers: Events that start a workflow in a Logic App. Triggers can be time-
based or event-based, such as when a new email arrives.
Actions: Steps that follow a trigger in a Logic App workflow. Actions can
include operations like sending an email, creating a file, or making an HTTP
request.
Connectors: Pre-built integrations in Logic Apps that allow you to connect to
various services such as Office 365, Salesforce, SQL Server, etc.
Integration Service Environment (ISE): A dedicated environment for
running Logic Apps that need high isolation and performance. ISE provides
private, isolated network environments for securely running integration
workloads.
Designer: The interface in the Azure portal used to design Logic Apps. It
provides a visual way to create and manage workflows.
Managed Identity: An identity in Azure AD automatically managed by Azure
for authenticating to other services.
Schedules:
Connector Categories:
Pricing Levels:
Key Points:
Creating a Logic App: Can be done using Azure portal, Visual Studio, or
Azure CLI.
Triggers and Actions: Define workflow logic starting with triggers and
followed by actions.
Connectors: Integrate various services into Logic Apps, categorized by their
functionality and environment.
ISE: Provides a secure, dedicated environment for Logic Apps with specialized
connectors.
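A tentative CLI sketch, assuming the logic extension is installed and the workflow definition has
been saved to a local JSON file:
bash
az extension add --name logic
az logic workflow create --resource-group myRG --location westus \
    --name myLogicApp --definition workflow.json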
Exam Tips:
ISE Connectors: Understand the distinction and usage of CORE and ISE
labeled connectors in Integration Service Environments.
Global Connectors: Recognize that public, multitenant connectors like
Office 365 can still be used within an ISE.
Pricing Levels: Know the differences between Basic, Standard, and
Enterprise pricing levels and what each includes.
Conclusion:
Creating a Logic App involves defining triggers and actions, using connectors, and
understanding the environment in which the Logic App runs. Utilizing ISE can enhance the
security and performance of Logic Apps, especially when dealing with sensitive or high-
performance workloads.
Overview: This section covers the creation of custom connectors for Azure Logic Apps. Custom
connectors allow you to integrate your Logic Apps with APIs and services that are not available
through built-in or managed connectors.
Key Points:
1. Sign in to Azure Portal: Go to the Azure portal and log in with your
credentials.
2. Navigate to Logic Apps: Select "Logic Apps" from the left-hand
menu.
3. Create Custom Connector: Select "Custom connectors" under "API
Management."
4. Define the Connector: Use OpenAPI definition or Postman collection
to define the connector.
5. Set up Security: Configure the authentication type such as Basic
Auth, API Key, OAuth 2.0, etc.
6. Test the Connector: Test the custom connector to ensure it works as
expected.
7. Use in Logic Apps: Once created, the custom connector can be used
in your Logic Apps workflows.
Connector Sharing:
o Microsoft Flow and PowerApps: Custom connectors can be used in
both Microsoft Flow and PowerApps. However, connectors created for
Logic Apps cannot be directly reused in Flow or PowerApps.
o OpenAPI Definition: You can use the same OpenAPI definition to
create a custom connector for Logic Apps, Microsoft Flow, and
PowerApps.
json
{
"swagger": "2.0",
"info": {
"version": "1.0.0",
"title": "My API",
"description": "API for my custom connector"
},
"host": "api.example.com",
"basePath": "/v1",
"schemes": [
"https"
],
"paths": {
"/resource": {
"get": {
"summary": "Get resource",
"operationId": "getResource",
"produces": ["application/json"],
"responses": {
"200": {
"description": "OK"
}
}
}
}
}
}
Exam Tips:
Exam Tip: You can create custom connectors for Azure Logic Apps, Microsoft Flow, and
Microsoft PowerApps. You cannot reuse a connector created for Azure Logic Apps in Microsoft
Flow or PowerApps (or vice versa). You can use the same OpenAPI definition to create a custom
connector for these three services.
Conclusion:
Creating custom connectors in Azure Logic Apps allows you to extend the functionality of your
workflows by integrating with APIs and services not available through built-in or managed
connectors. Understanding the process of defining, securing, and testing these connectors is
crucial for leveraging their full potential in automating business processes.
Overview: This section discusses the creation of custom templates for Azure Logic Apps.
Custom templates allow users to create pre-defined workflows that can be reused and shared
across different Logic Apps.
Definitions and Key Terms:
Custom Template: A pre-defined Logic App workflow that can be reused and
shared across different Logic Apps.
ARM Template (Azure Resource Manager Template): A JSON file that
defines the infrastructure and configuration for your Azure deployment. It
allows you to define the resources you need and automate the deployment
process.
Logic App Workflow Definition: The specific set of actions and triggers
that make up the Logic App. This definition can be exported and used to
create custom templates.
Parameters: Variables in the ARM template that allow you to customize the
deployment of your Logic App without changing the template itself.
Key Points:
1. Export Logic App: Go to your Logic App in the Azure portal and
export it as an ARM template.
2. Edit Template: Customize the ARM template by adding parameters
and making any necessary modifications to the workflow definition.
3. Deploy Template: Use the Azure portal, Azure CLI, or PowerShell to
deploy the ARM template. During deployment, provide the necessary
parameter values.
json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppName": {
      "type": "string",
      "defaultValue": "MyLogicApp"
    },
    "storageAccountConnectionString": {
      "type": "securestring"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2016-06-01",
      "name": "[parameters('logicAppName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "actions": {
            "Get_blob_content": {
              "type": "Http",
              "inputs": {
                "method": "GET",
                "uri": "[concat('https://myaccount.blob.core.windows.net/mycontainer/myblob')]"
              }
            }
          },
          "outputs": {}
        },
        "parameters": {
          "$connections": {
            "value": {
              "AzureBlob": {
                "connectionId": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Web/connections/AzureBlob')]",
                "connectionName": "azureblob",
                "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Web/locations/westus/managedApis/azureblob"
              }
            }
          }
        }
      }
    }
  ]
}
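A hedged sketch of deploying the exported template with the Azure CLI, supplying a parameter
value at deployment time (file and names are assumptions):
bash
az deployment group create --resource-group myRG \
    --template-file template.json \
    --parameters logicAppName=MyLogicApp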
Exam Tips:
Exam Tip: Custom templates for Logic Apps allow for standardization and reusability across
different projects. Understand how to create, export, and modify ARM templates to fit various
deployment scenarios.
Conclusion:
Creating custom templates for Azure Logic Apps helps in standardizing workflows, automating
deployments, and ensuring consistency across different projects. Understanding the structure and
customization of ARM templates is crucial for effectively utilizing custom templates in Logic
Apps.
Overview: This section covers the steps and considerations involved in creating an Azure API
Management (APIM) instance. APIM is a platform that helps organizations publish APIs to
external, partner, and internal developers to unlock the potential of their data and services.
Key Points:
1. Navigate to the Azure Portal: Go to the Azure portal and search for
"API Management".
2. Create a New Instance: Click on "Create API Management service".
3. Configure Basic Settings: Fill in the basic settings including the
name, resource group, and tier.
4. Set Up Networking: Configure the networking settings for your APIM
instance.
5. Review and Create: Review your settings and create the APIM
instance.
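These portal steps can also be scripted with the Azure CLI. A minimal sketch with hypothetical names (note that provisioning an APIM instance can take a considerable amount of time to complete):
az apim create \
  --name my-apim-instance \
  --resource-group MyResourceGroup \
  --publisher-name "Contoso" \
  --publisher-email admin@contoso.com \
  --sku-name Developer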
Understand Tiers: Be familiar with the differences between the APIM pricing
tiers and what each tier offers.
Configuration Options: Know the steps to set up an APIM instance,
including configuring the API Gateway and Developer Portal.
Use Cases: Be aware of the scenarios where different tiers of APIM would be
appropriate.
Exam Tip: When creating an APIM instance, carefully choose the pricing tier based on your
needs. The Developer tier is for development and testing, Basic for entry-level production,
Standard for typical production use, and Premium for high-performance, mission-critical
environments.
Conclusion:
Creating an APIM instance involves selecting the appropriate tier, setting up the instance
configurations, and configuring the API gateway and developer portal. Understanding the
features and limitations of each APIM tier is crucial for making the right choice for your API
management needs.
Overview: This section covers the implementation of solutions using Azure Event Grid, a
service that enables event-based architectures by allowing different systems to publish and
consume events. Azure Event Grid facilitates decoupling of components within a system, leading
to more scalable and maintainable solutions.
Azure Event Grid: A fully managed event routing service that allows for the
integration of various Azure services and third-party services using events.
Event: A lightweight notification of a condition or a state change. Each event
is self-contained and includes enough information for the receiving service to
process it.
Event Source: The service or application that publishes events to Event
Grid. Examples include Azure Blob Storage, Azure Resource Manager, and
custom topics.
Event Handler: The service or application that consumes events. Examples
include Azure Functions, Logic Apps, and Webhooks.
Topic: An endpoint where publishers send events. Azure Event Grid supports
both system topics (built-in topics provided by Azure services) and custom
topics.
Subscription: A configuration that tells Event Grid which events on a topic it
should route to a specific endpoint.
Event Schema: The structure of the event data. Azure Event Grid uses a
predefined schema for events.
System Topics: Predefined topics provided by Azure services, which
automatically publish events.
Custom Topics: User-defined topics that can be used to publish events from
custom sources.
Key Points:
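A minimal Azure CLI sketch tying these elements together: create a custom topic and subscribe a webhook event handler to it (all names and the endpoint URL are hypothetical):
# Create a custom topic for publishers to send events to
az eventgrid topic create \
  --name my-topic \
  --resource-group MyResourceGroup \
  --location westus
# Route the topic's events to a webhook event handler
topic_id=$(az eventgrid topic show --name my-topic \
  --resource-group MyResourceGroup --query id --output tsv)
az eventgrid event-subscription create \
  --name my-subscription \
  --source-resource-id $topic_id \
  --endpoint https://example.com/api/events
Before delivering any events, Event Grid sends a validation event to a new webhook endpoint, and the endpoint must echo the validation code back to prove ownership.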
Exam Tips:
Exam Tip: Event Grid is one of the services that Azure provides for exchanging information
between different systems. These systems publish and consume events from the Event Grid,
allowing you to decouple the different elements of your architecture. Ensure that you fully
understand the role each element plays in the exchange of information using Event Grid.
Conclusion:
Azure Event Grid enables scalable, loosely coupled, event-driven architectures by routing events from publishers and topics to the subscriptions and handlers that process them. Understanding the role each element plays in this exchange is essential both for the exam and for designing real-world solutions.
Overview: This section covers the implementation of Azure Notification Hubs, which provides a
scalable and cross-platform push notification infrastructure for sending notifications to various
devices.
Key Points:
Creating a Notification Hub:
o Set up a namespace to contain your Notification Hubs.
o Create a Notification Hub within the namespace.
o Configure platform-specific settings for the Notification Hub (e.g., APNS, FCM).
Sending Notifications:
o Use Notification Hub SDKs or REST APIs to send notifications.
o Send notifications to all registered devices or use tags to target specific devices.
o Use templates to send platform-specific notifications from a single API call.
Tiering:
Free Tier: Provides limited functionality and is suitable for development and testing.
Basic Tier: Suitable for small-scale applications and includes some additional features not available in the Free Tier.
Standard Tier: Provides advanced features and is suitable for large-scale applications.
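The creation steps above can also be scripted. A hedged sketch, assuming the notification-hub Azure CLI extension is installed (az extension add --name notification-hub); all names are hypothetical and the exact flags should be verified against the extension's help:
# Create a namespace, then a hub inside it (flags are assumptions)
az notification-hub namespace create \
  --resource-group MyResourceGroup \
  --name my-nh-namespace \
  --location westus \
  --sku Free
az notification-hub create \
  --resource-group MyResourceGroup \
  --namespace-name my-nh-namespace \
  --name my-hub \
  --location westus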
// Register a device token with the hub and tag it (Microsoft.Azure.NotificationHubs SDK)
var hub = NotificationHubClient.CreateClientFromConnectionString("<connection_string>", "<hub_name>");
var registration = new FcmRegistrationDescription("<fcm_token>");
registration.Tags.Add("myTag");
await hub.CreateOrUpdateRegistrationAsync(registration);
// Send an FCM payload to all registrations carrying the "myTag" tag;
// SendFcmNativeNotificationAsync expects the raw JSON payload as a string
var hub = NotificationHubClient.CreateClientFromConnectionString("<connection_string>", "<hub_name>");
var payload = "{\"data\":{\"message\":\"Hello World!\"}}";
await hub.SendFcmNativeNotificationAsync(payload, "myTag");
Exam Tips:
Exam Tip Box: When working with Azure Notification Hubs, remember that the namespace
provides a unique scoping mechanism, while the Notification Hub is responsible for managing
the delivery of notifications to the devices.
Exam Tips:
Exam Tip Box: The Azure Event Hub is a service appropriate for processing huge amounts of
events with low latency. You should consider the event hub as the starting point in an event
processing pipeline. You can use the event hub as the event source of the Event Grid service.
Section 5.3 covers the implementation of Azure Event Grid and Azure Notification Hubs,
focusing on how these services facilitate the exchange of information and delivery of
notifications across various systems and platforms.
Important Concepts:
o Elements of Event Grid: Publishers, topics, event subscriptions, and
event handlers.
Exam Tip: Event Grid is one of the services that Azure provides for exchanging
information between different systems. These systems publish and consume events from
the Event Grid, allowing you to decouple the different elements of your architecture.
Exam Tip: The Azure Event Hub is a service appropriate for processing huge amounts of
events with low latency. You should consider the event hub as the starting point in an
event processing pipeline. You can use the event hub as the event source of the Event
Grid service.
This section covers the implementation of solutions using Azure Service Bus, a messaging
service that enables reliable communication between distributed applications and services.
Creating a Namespace:
o A namespace provides a unique scoping container for addressing
Service Bus resources within your application.
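A namespace must exist before any queues or topics can be created in it. A minimal sketch with illustrative names (the Standard SKU is required for topics, as noted under Pricing Tiers below):
az servicebus namespace create \
  --resource-group myResourceGroup \
  --name myNamespace \
  --location westus \
  --sku Standard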
Creating Queues and Topics:
o Queue:
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name myQueue
o Topic:
bash
Copy code
az servicebus topic create --resource-group myResourceGroup --
namespace-name myNamespace --name myTopic
Creating Subscriptions:
o A subscription receives messages sent to a topic.
o Subscription:
az servicebus topic subscription create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --topic-name myTopic \
  --name mySubscription
Sending and Receiving Messages:
Sending Messages:
o Use the Service Bus SDK or REST API to send messages to a queue or
topic.
o Example (C#):
// Send a message to a queue (Microsoft.Azure.ServiceBus SDK)
QueueClient queueClient = new QueueClient(connectionString, queueName);
await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Hello, Service Bus!")));
Receiving Messages:
o Use the Service Bus SDK or REST API to receive messages from a
queue or subscription.
o Example (C#):
// Register a message pump; AutoComplete = false means each message must be
// completed explicitly after successful processing
QueueClient queueClient = new QueueClient(connectionString, queueName);
MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
{
    MaxConcurrentCalls = 1,
    AutoComplete = false
};
queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
Message Sessions:
Message Session:
o Enables grouping of related messages for ordered processing.
o Allows handling of sessions for message correlation.
o Example:
// Accept a specific session so its related messages are processed in order
var sessionClient = new SessionClient(connectionString, queueName);
IMessageSession session = await sessionClient.AcceptMessageSessionAsync(sessionId);
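Sessions only work against session-enabled entities, so the queue (or subscription) must be created with sessions turned on. A minimal sketch (the queue name is illustrative):
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name mySessionQueue \
  --enable-session true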
Dead-letter Queues (DLQs):
o Hold messages that cannot be delivered or processed; the DLQ is read like a normal queue by addressing its dead-letter path.
o Example (C#):
// Create a client against the dead-letter sub-queue
queueClient = new QueueClient(connectionString, EntityNameHelper.FormatDeadLetterPath(queueName));
Security and Authentication:
Shared Access Signatures (SAS):
o Authorization rules grant Send, Listen, and/or Manage rights on a namespace, queue, or topic:
az servicebus namespace authorization-rule create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name myAuthRule \
  --rights Send Listen
Managed Identities:
o Use Azure AD authentication to access Service Bus without managing
credentials.
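With a managed identity, access is granted through Azure RBAC role assignments rather than connection strings. A sketch using the built-in "Azure Service Bus Data Sender" role; the principal ID and scope below are placeholders:
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Azure Service Bus Data Sender" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ServiceBus/namespaces/myNamespace"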
Pricing Tiers:
Basic:
o Suitable for smaller workloads.
Standard:
o Supports topics and subscriptions in addition to queues.
Premium:
o Provides dedicated resources and enhanced features such as isolation,
Geo-disaster recovery, and higher throughput.
Conclusion:
By mastering these concepts and components, you can effectively implement robust and scalable
messaging solutions using Azure Service Bus.
This section covers how to implement solutions using Azure Queue Storage queues. Azure
Queue Storage is a service for storing large numbers of messages that can be accessed from
anywhere via authenticated calls using HTTP or HTTPS.
Queue Storage: A service for storing large numbers of messages that can
be accessed from anywhere via authenticated calls using HTTP or HTTPS.
Message: A unit of data stored in a queue. Messages can be up to 64 KB in
size.
Queue: A storage system for holding messages until they are processed.
Visibility Timeout: The period a message is invisible to other clients after
being retrieved.
Peek: The action of reading a message from the queue without removing it.
Dequeue: The action of reading and removing a message from the queue.
Poison Message: A message that cannot be processed successfully after
multiple attempts.
Key Points:
Creating a Queue:
o Use the Azure portal, Azure CLI, or Azure Storage SDK to create a
queue.
o Example command using Azure CLI:
az storage queue create \
  --name myqueue \
  --account-name mystorageaccount
Sending and Receiving Messages:
o Send a message (Python):
queue_client.send_message("This is a message")
o Receive and then delete a message (Python):
message = queue_client.receive_message()
queue_client.delete_message(message.id, message.pop_receipt)
Visibility Timeout:
o When a message is read, it becomes invisible for a specified period
(default 30 seconds). If not deleted, it becomes visible again.
o Adjust the visibility timeout to match the expected processing time.
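The visibility timeout can be overridden when a message is retrieved. A hedged CLI sketch (the flag names are assumptions; verify with az storage message get --help):
# Retrieve a message and keep it invisible to other clients for 120 seconds
az storage message get \
  --queue-name myqueue \
  --account-name mystorageaccount \
  --visibility-timeout 120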
Queue Monitoring:
o Monitor the length and age of messages in the queue to ensure timely
processing and identify bottlenecks.
Security:
o Use Shared Access Signatures (SAS) to delegate access to queue
operations.
o Ensure secure access to queue storage using Azure AD and managed
identities.
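A hedged sketch of generating a queue-scoped SAS token with the CLI; queue SAS permissions are r (read), a (add), u (update), and p (process), and the expiry value is illustrative:
az storage queue generate-sas \
  --name myqueue \
  --account-name mystorageaccount \
  --permissions rp \
  --expiry 2030-01-01T00:00Z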
Example (Python): create a client from a connection string and send a message.
from azure.storage.queue import QueueClient

queue_client = QueueClient.from_connection_string(conn_str="my_connection_string", queue_name="myqueue")
queue_client.send_message("Hello, Azure Queue Storage!")
Exam Tip:
You can use Azure Queue Storage to decouple and scale applications, providing reliable message
delivery between application components. Remember to handle poison messages and implement
appropriate security measures using SAS and Azure AD.
CHAPTER SUMMARY
Azure App Service Logic Apps: Connect different services without specific
code.
Logic App Workflows: Steps to exchange info between applications.
Connectors: Get and send info from/to different services.
Triggers: Events fired on source systems.
Actions: Steps in a workflow.
Graphical Editor: Simplifies workflow creation.
Custom Connectors: Wrappers for REST/SOAP APIs for connecting apps with
Logic Apps, Flow, PowerApps.
API Management (APIM): Publish back-end APIs securely.
APIM Subscriptions: Authenticate API access.
APIM Policies: Modify APIM gateway behavior.
Event-Driven Architecture: Publisher doesn’t expect event
processing/storage by subscriber.
Azure Event Grid: Service for event-driven architectures.
Event Grid Topic: Endpoint for publishers.
Subscribers: Services that read events from a topic.
Azure Notification Hub: Unifies push notifications for mobile platforms.
Azure Event Hub: Entry point for Big Data event pipelines, handles millions
of events/sec.
Message-Driven Architecture: Publisher expects message
processing/storage by subscriber.
Azure Service Bus/Azure Queue: Message broker services.
THOUGHT EXPERIMENT
Azure VMs are scalable computing resources that you can use to run various applications
and services in the cloud.
Azure Container Registry is a managed Docker container registry service used for storing
and managing private Docker container images.
Azure Container Instances offer a quick and easy way to run containers in the cloud
without managing virtual machines or adopting a higher-level service.
Azure Resource Manager (ARM)
Azure Resource Manager provides a management layer that enables you to create,
update, and delete resources in your Azure account through a unified interface.
Azure App Service is a fully managed platform for building, deploying, and scaling web
apps, mobile back ends, and RESTful APIs.
Azure Monitor
Azure Monitor collects, analyzes, and acts on telemetry from your Azure and on-premises
environments to help you understand and maintain the performance and availability of
your applications.
Azure DevOps
Azure DevOps provides developer services for planning work, collaborating on code
development, and building and deploying applications.
Azure Storage
Azure Storage offers scalable, durable, and secure storage options for a variety of data
objects, including blobs, files, queues, and tables.
Azure Functions
Azure Functions is a serverless compute service that enables you to run code on-demand
without provisioning or managing infrastructure.
Azure Durable Functions is an extension of Azure Functions that lets you write stateful
functions in a serverless compute environment.
Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud
applications and services, providing secure key management capabilities.
Azure Cosmos DB
Azure Cosmos DB is a globally distributed, multi-model database service that offers
low-latency access to data with multiple consistency levels.
Azure SDKs provide programming libraries for accessing Azure services in various
programming languages, enabling developers to interact with Azure resources
programmatically.
Azure Table Storage offers NoSQL key-value storage for rapid development using large
datasets and applications requiring a flexible data schema.
Azure Storage Explorer is a standalone app that enables you to easily work with Azure
Storage data on Windows, macOS, and Linux.
AzCopy
AzCopy is a command-line utility that you can use to copy blobs or files to or from a
storage account.
Azure CLI
Azure CLI is a set of commands used to create, manage, and configure Azure resources,
offering a streamlined command-line experience for developers.
Azure AD is a cloud-based identity and access management service that helps employees
sign in and access external resources such as Microsoft 365, the Azure portal, and
thousands of other SaaS applications.
OAuth2
OAuth2 is an open authorization framework that lets applications obtain limited,
delegated access to resources on behalf of a user without exposing the user's credentials.
RBAC helps manage who has access to Azure resources, what they can do with those
resources, and what areas they have access to.
SAS provides secure delegated access to resources in your storage account without
exposing the account keys.
Azure App Configuration
Azure App Configuration provides a centralized service for managing application settings
and feature flags.
Azure CDN provides global content delivery and caching to optimize speed and
availability for your web applications and static content.
Azure Front Door is a scalable and secure entry point for fast delivery of your global
applications, providing load balancing, SSL offload, and application acceleration.
Azure Cache for Redis is a fully managed, in-memory cache that enables high-performance
and scalable architectures by storing data in memory for quick access.
DSA (Dynamic Site Acceleration) is a feature included in Azure CDN from Akamai and
Verizon that optimizes the delivery of dynamic web content.
Application Insights
Application Insights, a feature of Azure Monitor, provides application performance
monitoring for live web applications.
Azure Log Analytics is a tool in the Azure portal used to edit and run log queries from
data collected by Azure Monitor.
Azure Logic Apps help you automate workflows and integrate apps, data, services, and
systems with pre-built or custom connectors.
Custom Connectors
Custom Connectors allow you to define your APIs and make them available as
connectors in Logic Apps, Microsoft Flow, and PowerApps.
API Management (APIM)
APIM helps organizations publish APIs to external, partner, and internal developers to
unlock the potential of their data and services.
Azure Event Grid enables you to easily build applications with event-based architectures,
simplifying event routing from various sources to different handlers.
Azure Event Hub is a big data streaming platform and event ingestion service capable of
processing millions of events per second with low latency.
Azure Service Bus is a fully managed enterprise message broker with message queues
and publish-subscribe topics to integrate applications and services.
Azure Queue Storage is a service for storing large numbers of messages that can be
accessed from anywhere via authenticated calls using HTTP or HTTPS.