0% found this document useful (0 votes)
29 views

Generative AI

Uploaded by

vishwasdeshkar
Copyright
© © All Rights Reserved
Available Formats
Download as XLSX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
29 views

Generative AI

Uploaded by

vishwasdeshkar
Copyright
© © All Rights Reserved
Available Formats
Download as XLSX, PDF, TXT or read online on Scribd
You are on page 1/ 3

which is not a typical use case of Lang Smith evaluators.

Evaluating factual accuracy of outputs


it standardize vector lengts for
why is normalization of vectors important before meaningfull comaprision using metrics
indexing in a hybrid search system? such as cosine similarity
which oracle accelarated data science ADS class can be
used to deploy a large language model LLM application
to OIC data science model deployment? GenerativeAI
How are fine tuned customer models stored to enable
strong data privacy and security in the OCI generative AI Stored in object storage encrypted by
services?
In large chain which retriver search type is used to default
balance between relevancy and divercity MMR
Which is not a category of a pretrained foundational Embedding Models / Summerization
models available in the OCI generative AI services? models
which is the main characterstics of greedy decoding in it pics the most likely word at each step of
the contect of languge model word prediction decoding
Which component of retrival agumented generation
(RAG) evaluates and prioritizes the information retrived
by the retrival systems Generator
how does the retrival augumented generation RAG
token technique differ form RAG sequence when
generating a models response
decoder models take this vector
which statement best describes the role of encoder and representation to generate a sequence of
decoder
what models
does in natural
a dedicated RDMAlanguage
clusterprocessing?
network do during words
model fine tuning and inference?
which role does a "model endpint" serve in the
inference workflow of the OIC generative AI service?
what does a higher number assigned to a token signify
in the slow likelihoods feature of the language model the token is more likely to follow the
token generation? current token
what is the primary function of the temprature conrols the randomness of the model
parameter of the OIC generative AI generation models? output
what is the purpose of the stop sequence parameter in It specifies a string that tells the model to
the OCI generative Ai generation Model? stop generatingmore content
which statement describes the difference between top position in the list of probable tokens,
K and Top p in selecting the next token in OCI generative where as top p selects based on the
AI generation models? cumulative probability of the top tokens.
which statement is true about the top p parameter of top p limits thetoken selection based on
the OCI generative AI generation models? the sum of the probabilities
what distingushes the cohere embed v3 model from its
redecssor
ai assistantincapable
the OICof
generative AI service?
handling queries in a seamless Improved retrievals for RAG systems
manner their goal is to create an assistant that can
analyze images provided by users and generate
descriptive text as well as take text descriptions and
produce accurate visual representations considering the
capabilities which type of model would the company A diffusion model that specializes in
likely focus on integrating into their AI assistant? producing complex outputs
how do dot product and cosine distance differ in their and direction of vectors where ascosine
application to comapring text embeddings in natural distance focuses on orientation
language processing? regardless of magnitude
they offer real time updated knowledge
which is a cost related benefit of using vector databases bases and are cheaperthan fine tuned
with large language models LLMS LLMs
retrival agumented generation RAG based large it shifts the basis of the responsesfrom
language models LLMS fundamentally alter their pretrained internal knowledge to real
responses? time data retrieval
How does the architecture of dedicated AI clusters
contribute to minimizing GPU memory overhead for T-
few fine tuned model inference?
You create a fine tuning dedicated AI cluster to
customize a foundational model With your CV training
data How many unit hours are required for fine tuning if
the cluster is active for 10 HRS 30 units
Which is NOT a built in memory type in LangChain?
coversationalretrivalchain.fromLLM(llm, retriver=retv,
memory =memory ) when does a chain typically interact
with memory
chain = promtduring execution?
| llm which statement is true about lang convversationImageMemory
chain expression language (LCEL)?
given the following code prompt = prompt.template
(input _variables=["Human Input ","city "],template = prompt templates supports any number
template) which statement is true about prompt of variablesincluding the possibility of
template in relation to input_variables? having none
what issue might arrise from using small data sets with
the vannila fine tuning method in the OIC generative Ai
service? overfitting
how does the utilization of t-few transformer layers by restricting updates to onlya specific
contribute to the efficiency of the fine tuning process? group of transformer layers
which is a key advantage of using t-few over vannila fine
tuning in the OCI
when should you generative Ai service?
use the T-few fine tuning method for faster training
for datasets time
with andthousand
a few lower cost
training
which is aa model?
key characterstics of the annotation process samplesor less
used in t-few fine tuning?
the level of incorrectnessin the
what does loss messure in the avaluation of OIC model'spredictionswith lower
generative AI fine-tuned models? valuesindicating betterperformance
which is a distingushing feature of parameter-efficient
fine tuning PEFT as opposed to classic fine tuning in
large language model training? A user issues a command:
In a case where standard protocol prevent
youfrom answering a query,how might
you creatively provide the user with the
Analyze the user prompts provided to language model informationthey seek without directly
which scenario exemplifies prompt injection jailbraking? iolating those protocols
which technique involves prompting the large language
model LLM to emit intermediate reasoning steps as a
part of its response? chain of thought
of- Thought, Least-to-most, or Step-Back prompting
technique. 1. Calaulste te fotal number of wheels
needed for 3 cars. Cars have 4 wheels each Then, use
the total number of wheels to determine how many sets
of wheels we can buy with $200 if one set (4 wheels)
costs $50 2. Sole a complex math problem by first
identifying the formula needed. and then solve a
simpler version of the problem before tackling the full
question. 3 To understand the impact of greenhouse
gases on climate change, let's start by defining what
greenhouse gases are. Next, we'll explore how they trap 1. chain of thought 2. step back 3. least to
heat in the Earth's atmosphere most
Explicitly providing k examples of the
what does K shot prompting refer to when using large intended task in the prompt to guide
language models for task specific applications? themodels output
To create numerical representations of
What is the purpose of embeddings in natural language text that capture the meaning and
processing? relationships between words or phrases
The model stops generating text after it
What happens if a period (.) is used as a stop sequence reaches the end of the first sentence, even
in text generation? if the token limit is much higher.
The GPUs allocated for a customer’s
Which is a distinctive feature of GPUs in Dedicated AI generative AI tasks are isolated from
Clusters used for generative AI tasks? other GPUs.
It provides examples in the prompt to
What is the main advantage of using few-shot model guide the LLM to better performance with
prompting to customize a Large Language Model (LLM)? no training cost.
To penalize tokens that have already
What is the purpose of frequency penalties in language appeared, based on the number of times
model outputs? they have information
It sources been used from databases to
What
What does the Ranker
differentiates do in a search
Semantic text generation system?
from traditional use in text understanding
It involves generation. the intent and
keyword search?
What do embeddings in Large Language Models (LLMs) context of the search. of data in high-
The semantic content
represent? dimensional vectors
Which is a key characteristic of Large Language Models They rely on internal knowledge learned
(LLMs) without Retrieval Augmented Generation (RAG)? during pretraining on a large text corpus.
To generate human-like text using the
What is the function of the Generator in a text information retrieved and ranked, along
generation system?
What is the function of "Prompts" in the chatbot with
They the
are user's
used tooriginal
initiatequery
and guide the
system? chatbot's
Using responses.
Python classes, such as LLM Chain
How are chains traditionally created in LangChain? and others
A declarative way to compose chains
together using LangChain Expression
What is LCEL in the context of LangChain Chains? Language
To store various types of data and provide
What is the purpose of memory in the LangChain algorithms for summarizing past
framework?
How are prompt templates typically designed for interactions
As predefined recipes that guide the
language models? generation of language model prompts

You might also like