AI ASSESSMENT

Uploaded by Durga Devi

Generative AI models power ChatGPT's ability to produce new content, such as text, code, and images, based on natural language prompts. Generative AI models are a subset of deep learning algorithms. These algorithms support various workloads across vision, speech, language, decision, search, and more.

Azure OpenAI Service brings these generative AI models to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration of other services provided by the Azure cloud platform. These models are available for building applications through a REST API, various SDKs, and a Studio interface. This module guides you through the Azure OpenAI Studio experience, giving you the foundation to further develop solutions with generative AI.

Create an Azure OpenAI Service resource in Azure CLI

To create an Azure OpenAI Service resource from the CLI, refer to this example and replace the following variables with your own:

 MyOpenAIResource: replace with a unique name for your resource
 OAIResourceGroup: replace with your resource group name
 eastus: replace with the region to deploy your resource
 subscriptionID: replace with your subscription ID
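The example itself did not survive in these notes; a representative command, based on the standard `az cognitiveservices account create` syntax, is sketched below (the `s0` SKU is an assumption; check which SKUs are available to your subscription). It requires an Azure subscription and a signed-in Azure CLI, so it is not runnable as-is:

```shell
# Create an Azure OpenAI resource; replace the placeholder values listed above.
az cognitiveservices account create \
  -n MyOpenAIResource \
  -g OAIResourceGroup \
  -l eastus \
  --kind OpenAI \
  --sku s0 \
  --subscription subscriptionID
```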

Use Azure OpenAI Studio

Azure OpenAI Studio provides access to model management, deployment, experimentation, customization, and learning resources.

You can access the Azure OpenAI Studio through the Azure portal after
creating a resource, or at https://oai.azure.com by logging in with your Azure
OpenAI resource instance. During the sign-in workflow, select the
appropriate directory, Azure subscription, and Azure OpenAI resource.
Explore types of generative AI models

To begin building with Azure OpenAI, you need to choose a base model and
deploy it. Microsoft provides base models and the option to create
customized base models. This module covers the currently available base
models.

Azure OpenAI includes several types of model:

 GPT-4 models are the latest generation of generative pretrained transformer (GPT) models that can generate natural language and code completions based on natural language prompts.
 GPT-3.5 models can generate natural language and code completions based on natural language prompts. In particular, GPT-35-turbo models are optimized for chat-based interactions and work well in most generative AI scenarios.
 Embeddings models convert text into numeric vectors, and are
useful in language analytics scenarios such as comparing text
sources for similarities.
 DALL-E models are used to generate images based on natural
language prompts. Currently, DALL-E models are in preview.
DALL-E models aren't listed in the Azure OpenAI Studio interface
and don't need to be explicitly deployed.
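To illustrate why embedding vectors are useful for comparing text sources, similarity is typically measured with cosine similarity. The sketch below uses invented three-dimensional toy vectors rather than real model output (actual embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of three texts.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
invoice = [0.0, 0.1, 0.95]

# Related texts score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```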

Completions Playground parameters

There are many parameters that you can adjust to change the performance
of your model:

 Temperature: Controls randomness. Lowering the temperature means that the model produces more repetitive and deterministic responses. Increasing the temperature results in more unexpected or creative responses. Try adjusting temperature or Top P but not both.
 Max length (tokens): Set a limit on the number of tokens per
model response. The API supports a maximum of 4000 tokens
shared between the prompt (including system message,
examples, message history, and user query) and the model's
response. One token is roughly four characters for typical English
text.
 Stop sequences: Make responses stop at a desired point, such
as the end of a sentence or list. Specify up to four sequences
where the model will stop generating further tokens in a response.
The returned text won't contain the stop sequence.
 Top probabilities (Top P): Similar to temperature, this controls
randomness but uses a different method. Lowering Top P narrows
the model’s token selection to likelier tokens. Increasing Top P lets
the model choose from tokens with both high and low likelihood.
Try adjusting temperature or Top P but not both.
 Frequency penalty: Reduce the chance of repeating a token
proportionally based on how often it has appeared in the text so
far. This decreases the likelihood of repeating the exact same text
in a response.
 Presence penalty: Reduce the chance of repeating any token
that has appeared in the text at all so far. This increases the
likelihood of introducing new topics in a response.
 Pre-response text: Insert text after the user’s input and before
the model’s response. This can help prepare the model for a
response.
 Post-response text: Insert text after the model’s generated
response to encourage further user input, as when modeling a
conversation.

Chat playground
The Chat playground is based on a conversation-in, message-out interface.
You can initialize the session with a system message to set up the chat
context.

In the Chat playground, you're able to add few-shot examples. The term few-shot refers to providing a few examples to help the model learn what it needs to do. You can think of it in contrast to zero-shot, which refers to providing no examples.

In the Assistant setup, you can provide few-shot examples of what the user input may be, and what the assistant response should be. The assistant tries to mimic the responses you include here, matching the tone, rules, and format you've defined in your system message.
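The system message and few-shot examples in the Assistant setup map onto an ordered message list like the one below. The role names follow the common chat-message convention, and all of the content is invented for illustration:

```python
# System message sets the context; the user/assistant pairs are few-shot
# examples the model will try to mimic in tone and format.
messages = [
    {"role": "system", "content": "You are a terse support bot. Answer in one sentence."},
    # Few-shot example 1
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Use the 'Forgot password' link on the sign-in page."},
    # Few-shot example 2
    {"role": "user", "content": "Where is my invoice?"},
    {"role": "assistant", "content": "Invoices are under Billing > History."},
    # The real user query comes last.
    {"role": "user", "content": "How do I change my email address?"},
]

print(len(messages))  # 6: one system message, two example pairs, one query
```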
Chat playground parameters

The Chat playground, like the Completions playground, also includes the
Temperature parameter. The Chat playground also supports other
parameters not available in the Completions playground. These include:

 Max response: Set a limit on the number of tokens per model response. The API supports a maximum of 4000 tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly four characters for typical English text.
 Top P: Similar to temperature, this controls randomness but uses
a different method. Lowering Top P narrows the model’s token
selection to likelier tokens. Increasing Top P lets the model choose
from tokens with both high and low likelihood. Try adjusting
temperature or Top P but not both.
 Past messages included: Select the number of past messages
to include in each new API request. Including past messages helps
give the model context for new user queries. Setting this number
to 10 will include five user queries and five system responses.

The Current token count is viewable from the Chat playground. Since API calls are priced by token and it's possible to set a max response token limit, keep an eye on the current token count to make sure the conversation input doesn't crowd out the token budget reserved for the response.
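The "Past messages included" setting behaves like a sliding window over the conversation history. The sketch below shows the idea (it is not the service's actual implementation):

```python
def build_request_messages(system_message, history, new_user_query, past_messages=10):
    """Keep only the most recent `past_messages` turns, plus the system
    message and the new query. With past_messages=10, that is up to five
    user queries and five assistant responses."""
    window = history[-past_messages:] if past_messages else []
    return [system_message] + window + [new_user_query]

system = {"role": "system", "content": "You are a helpful assistant."}
history = [
    {"role": "user" if i % 2 == 0 else "assistant", "content": f"turn {i}"}
    for i in range(30)
]
query = {"role": "user", "content": "latest question"}

msgs = build_request_messages(system, history, query, past_messages=10)
print(len(msgs))  # 12: system message + 10 past messages + new query
```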

Azure AI Vision applies computer vision, a branch of artificial intelligence (AI) in which software interprets visual input, often from images or video feeds.

In this module, you'll learn how to use the Azure AI Vision service to
extract information from images.

After completing this module, you’ll be able to:

 Provision an Azure AI Vision resource.
 Analyze an image.
 Remove an image background.
 Generate a smart cropped thumbnail.

Provision an Azure AI Vision resource
The Azure AI Vision service is designed to help you extract information
from images. It provides functionality that you can use for:

 Description and tag generation - determining an appropriate caption for an image, and identifying relevant "tags" that can be used as keywords to indicate its subject.
 Object detection - detecting the presence and location of specific
objects within the image.
 People detection - detecting the presence, location, and features
of people in the image.
 Image metadata, color, and type analysis - determining the format
and size of an image, its dominant color palette, and whether it
contains clip art.
 Category identification - identifying an appropriate categorization
for the image, and if it contains any known landmarks.
 Background removal - detecting the background in an image and outputting the image with the background transparent, or a greyscale alpha matte image.
 Moderation rating - determining whether the image includes any adult or violent content.
 Optical character recognition - reading text in the image.
 Smart thumbnail generation - identifying the main region of
interest in the image to create a smaller "thumbnail" version.
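These capabilities are exposed through the Image Analysis REST API. The sketch below only assembles the request without sending it; the endpoint shape and `api-version` are assumptions based on the Image Analysis 4.0 API, and the resource name, key, and image URL are placeholders:

```python
def build_analyze_request(endpoint, key, features):
    """Assemble URL, query parameters, headers, and body for an
    Image Analysis call. `endpoint` is your resource endpoint and
    `key` one of its access keys (both placeholders here)."""
    url = f"{endpoint}/computervision/imageanalysis:analyze"
    params = {"api-version": "2023-10-01", "features": ",".join(features)}
    headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
    body = {"url": "https://example.com/image.jpg"}  # placeholder image URL
    return url, params, headers, body

url, params, headers, body = build_analyze_request(
    "https://my-resource.cognitiveservices.azure.com", "<key>",
    ["caption", "tags", "read"],
)
print(params["features"])  # caption,tags,read
```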

Remove image background

Azure AI Vision supports background removal by creating an alpha matte of the foreground subject, which is then used to return either the foreground or the background.

When creating an alpha matte of an image, the result shows the foreground in white on a black background.
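Conceptually, applying the alpha matte is per-pixel blending: matte values near 255 (white) keep the foreground, while values near 0 (black) suppress the pixel. A toy grayscale sketch, without any imaging library:

```python
def apply_matte(image, matte):
    """Scale each pixel by its matte value (0 = background, 255 = foreground).
    `image` and `matte` are flat lists of 0-255 grayscale values."""
    return [px * m // 255 for px, m in zip(image, matte)]

image = [200, 200, 200, 200]   # uniform gray image
matte = [255, 255, 0, 0]       # foreground on the left, background on the right

print(apply_matte(image, matte))  # [200, 200, 0, 0]
```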

Responsible use of AI
 Fairness. All AI systems should treat people fairly, regardless of
race, belief, gender, sexuality, or other factors.
 Reliability and safety. All AI systems should give reliable
answers with quantifiable confidence levels.
 Privacy and security. All AI systems should secure and protect
sensitive data and operate within applicable data protection laws.
 Inclusiveness. All AI systems should be available to all users,
regardless of their abilities.
 Transparency. All AI systems should operate understandably and
openly.
 Accountability. All AI systems should be run by people who are
accountable for the actions of those systems.

In Azure AI Document Intelligence, three of the prebuilt models are for general document analysis:

 Read
 General document
 Layout

The other prebuilt models expect a common type of form or document:

 Invoice
 Receipt
 W-2 US tax declaration
 ID Document
 Business card
 Health insurance card
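When calling the service through an SDK or the REST API, each prebuilt model is selected by a model ID string. The mapping below reflects commonly documented IDs for some of the models listed above, but treat it as an assumption and verify against the current service version:

```python
# Model names from the lists above mapped to service model IDs.
# The IDs are assumptions based on documented conventions; verify before use.
PREBUILT_MODELS = {
    "Read": "prebuilt-read",
    "General document": "prebuilt-document",
    "Layout": "prebuilt-layout",
    "Invoice": "prebuilt-invoice",
    "Receipt": "prebuilt-receipt",
    "ID Document": "prebuilt-idDocument",
}

print(PREBUILT_MODELS["Layout"])  # prebuilt-layout
```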
