
Transformers are powerful AI models that can process large amounts of data, learn from it, and make accurate predictions or generate content. They use self-attention to understand context and produce human-like output. Large Language Models (LLMs) are transformer-based models trained on vast amounts of text data with many parameters, which makes them very powerful at solving complex problems. Pre-trained models save time and resources because they have already learned from a large body of data. However, be cautious of bias in responses: societal biases present in the training data can surface in these models' output.

Requesting an API Key


Before we get to building an AI-powered application, we will need to obtain an
OpenAI API key.
Head over to the OpenAI website and register for a new account. Follow the prompts
to provide your full name, email, password, and date of birth. The organization field
can be left blank. You'll then be asked to provide a mobile phone number to verify
your identity. And finally, OpenAI will send you a verification email.
After verifying your email and logging in to your account, click on your profile at the
top right and click "View API Keys". Before creating a new secret key, click on
"Billing" on the panel to the left. You can click on "Pricing" to see OpenAI's pricing
models. For this course, we will be using "Chat". It is also advisable to set usage
limits and apply a monthly spending limit to avoid unexpected charges. Finally, click
on "Set up a paid account" to set up a payment method when you are ready to
proceed.
With a payment method set up, click on "API Keys" on the panel to the left and click
"Create new secret key" to generate an API key for developmental use. Give this key
a name so that it'll be easier to keep track of your keys if you decide to create
additional keys later. You can name this key anything you want!
After clicking "Create secret key", you will be able to view and copy your key. This is
the key we will use in our applications to make API calls to OpenAI. Because you
won't be able to view this key again in the future, make sure you copy it now and
store it somewhere safe.
If you run into any issues with OpenAI account creation or API key generation,
please refer to the OpenAI help forums.

Project Setup
With the OpenAI API key in hand, it's time to set up our project!
By the end of this course, we will create an AI-powered JS coding tutor CLI
application. This application will allow the user to ask any question related to coding
and will respond with an explanation as well as an example code snippet!
Before getting started, you will need to download Node.js. Be sure to download
the LTS version of Node.js, as it is required for LangChain to function properly.
Start by creating a new directory, anywhere you wish, that will serve as the root of
our project. Change directories into the newly created folder and use the
command npm init to initialize the node package manager.
HINT
You can use npm init -y to skip all the prompts.
Next, we'll install all the dependencies we will need for our CLI application:

npm i inquirer@8.2.4 dotenv langchain

With the above command, we are installing "inquirer" (version 8.2.4) to help capture
user input, the "dotenv" package to keep sensitive information such as our
OpenAI API key out of our code, and "langchain", which you will learn more about in a later section.
Next, create a new file called script.js at the root directory. You can leave this
one blank for now, but this is the file that will hold all of our code!
Then, create a .env file at the root directory.
Finally, create a .gitignore file at the root directory. Then add the following inside
the .gitignore file:

# dependencies
/node_modules
package-lock.json
.env

# misc
.DS_Store
The text added to the .gitignore file represents folders and files that will not be
tracked by Git when we push our code to a repository.
Our project scaffolding is now complete and should look like the following:

├── node_modules
├── .env
├── .gitignore
├── package.json
├── package-lock.json
└── script.js

If your folder structure looks like the above, it's time to move on and learn about
setting up the "dotenv" package to obscure our API keys!

Environment Variables with dotenv


The "dotenv" package helps us to obscure private information such as API keys from
online repositories like GitHub by loading environment variables from a .env file
into process.env. As such, the first thing we'll need to do is edit our .env file by
adding our OpenAI API key to it:

OPENAI_API_KEY="<PASTE-YOUR-KEY-HERE>"
The above text stores our API key in a variable called OPENAI_API_KEY so that we
can use the variable instead of entering our API key directly within our code. When
you paste your OpenAI API key, be sure to remove the angle brackets.
Because we will be using our OpenAI API key within our script.js file, we'll need
to require the "dotenv" package there:

require('dotenv').config();

Finally, to test that we can access our API key from the .env file, we simply log the
following:

console.log(process.env.OPENAI_API_KEY);

From the root of your application, run the command node script.js and you
should see your API key printed in your console! Verify that your API key does
indeed print, then delete the console.log() and proceed to the next section on
"LangChain".

Introduction to LangChain
To create our AI-powered application we will leverage LangChain. LangChain is a
framework that helps developers build applications powered by language models
such as those from OpenAI or Hugging Face by addressing two key concerns: agency and
integration.

 Agency: Allow your language models to interact with their environment via
decision-making. Use language models to help decide which action to take
next.

 Integration: Bring external data, such as your files, other applications, and
API data, to your language models, thereby making responses more tailored
to you.
To learn more about LangChain, take a look at the LangChain conceptual
documentation. Note that this is a link to their conceptual documentation, so it will
not go into the technical implementation of the framework. That will come later!
Now that we've learned a little about LangChain, it's time to continue building our
application.
OpenAI with LangChain
We'll start by requiring the OpenAI dependency from langchain. Inside
your script.js, add the following:

const { OpenAI } = require('langchain/llms/openai');

IMPORTANT
OpenAI has its own library, so it's important to point out that this version of OpenAI
comes directly from langchain. There's no need to install any additional packages
or libraries.
Next, we'll instantiate a new object using the OpenAI class:

const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0,
  model: 'gpt-3.5-turbo'
});

console.log({ model });

Before we continue, let's take a moment to break down the properties being used
when we instantiate a new object based on the OpenAI class.
The openAIApiKey property is used to pass in our OpenAI API key, which OpenAI
uses to check whether a project is authorized to use the API and to collect usage
information.
The temperature property controls variability in the words selected for a
response. Temperature ranges from 0 to 1, with 0 meaning higher precision but less
creativity, and 1 meaning lower precision but more variation and creativity.
Finally, the model property specifies which language model will be used. For this
project, we will use the gpt-3.5-turbo model because it is optimized for chat and
offers a good balance of capability and cost.
Now when we run the script by using node script.js, we should see the model
object's configuration logged to the console.
HINT
If you do not get a similar result or if the script takes more than a few minutes to
execute, ensure that you have added proper billing information in your OpenAI
account. Additionally, you may need to check that the OpenAI API key was copied
correctly (without any additional spaces) or that dotenv was configured properly.
By now, your script.js should look like the following:

// dependencies
const { OpenAI } = require('langchain/llms/openai');
require('dotenv').config();

// Creates and stores a wrapper for the OpenAI package along with basic configuration
const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0,
  model: 'gpt-3.5-turbo'
});

console.log({ model });

Great job so far! With all the setup out of the way, we can continue building the AI-
powered coding tutor to accept input and produce corresponding output.

Calling the Model


Now that we've instantiated a new model object from the OpenAI class and verified
that it is functional, it's time to pass in prompts!
We’ll start by creating an asynchronous function named promptFunc inside
our script.js file and call the function:

const promptFunc = async () => {
};

promptFunc();

Inside the promptFunc function, we need to add a try/catch statement to help us catch any potential errors that might arise:

try {

} catch (err) {
  console.error(err);
}

Within the try portion of the try/catch statement, we'll create a new variable, res,
that holds the value returned from the OpenAI .call() method, to which we've passed
the question, "How do you capitalize all characters of a string in JavaScript?", as an
argument:

try {
  const res = await model.call("How do you capitalize all characters of a string in JavaScript?");
  console.log(res);
} catch (err) {
  console.error(err);
}

When we run the script using node script.js, it may take a moment, but the result
should be the answer to the question as well as an example! But what if a user of our
application wanted to ask a different coding question? Instead of having the user go
into the script.js and alter the question themselves, we'll need a way to capture
their input and make a call based on that input. To do this, we'll need
the inquirer package to help us out!

User Input with Inquirer


Although there are many packages to capture user input (including a Node.js native
implementation), we'll use the inquirer package that we installed earlier. Let's begin
by setting up inquirer in our project starting by requiring it as a dependency:

const inquirer = require('inquirer');

Then we'll make a slight alteration to our promptFunc() function such that it will
accept a parameter and use that parameter in place of the question within
the .call() method:

const promptFunc = async (input) => {
  try {
    const res = await model.call(input);
    console.log(res);
  } catch (err) {
    console.error(err);
  }
};
Next, we'll create an init function that, when called, will execute an inquirer prompt
that will ask the user for a question about a coding concept:

const init = () => {
  inquirer.prompt([
    {
      type: 'input',
      name: 'name',
      message: 'Ask a coding question:',
    },
  ]).then((inquirerResponse) => {
    promptFunc(inquirerResponse.name);
  });
};

init();

Notice that the inquirer.prompt() method returns a promise! So we'll need to call
the promptFunc() inside a subsequent .then() method.
NOTE
Because we're now calling promptFunc() inside the init() function, we can remove
the promptFunc() call from the global context.
Up to this point, your code should look like the following:

// dependencies
const { OpenAI } = require('langchain/llms/openai');
const inquirer = require('inquirer');
require('dotenv').config();

// Creates and stores a wrapper for the OpenAI package along with basic configuration
const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0,
  model: 'gpt-3.5-turbo'
});

console.log({ model });

// Uses the instantiated OpenAI wrapper, model, and makes a call based on input from inquirer
const promptFunc = async (input) => {
  try {
    const res = await model.call(input);
    console.log(res);
  } catch (err) {
    console.error(err);
  }
};

// Initialization function that uses inquirer to prompt the user and returns a promise. It takes the user input and passes it through the call method
const init = () => {
  inquirer.prompt([
    {
      type: 'input',
      name: 'name',
      message: 'Ask a coding question:',
    },
  ]).then((inquirerResponse) => {
    promptFunc(inquirerResponse.name);
  });
};

// Calls the initialization function and starts the script
init();

Now when we use node script.js to run our application, we are presented with the
prompt "Ask a coding question:" in the terminal.
However, we have an issue! Nothing actually restricts the conversation to coding:
the user could still ask anything at all, and we have no control over how the model
responds.
In the next lesson, we'll learn more about LangChain, including how to use prompt
templates and output parsers to make our AI-powered coding tutor application
more extensible and user-friendly!
Feel free to use the LangChain JavaScript documentation to learn more about using
LangChain specifically with JavaScript!

What Is a Prompt?
Users interact with LLMs using natural language input. Human users type the
instructions for the model into a textual interface. The term prompt refers to the text
that the user enters into the text field. Put differently, a prompt is the text that the
user gives to the AI algorithm to tell it what to do.
The model takes the instructions from the prompt, completes the task, and then
provides a response to the user. The response from the model is frequently referred
to as the output.
In this course, our focus will be on text outputs, but different types of generative AI
models provide different outputs. In the case of image generators like Stable
Diffusion and Midjourney, the output is an image file.
A prompt can be a simple, straightforward question, such as, Who was the first
president of the United States? Or it could be a vague request for the
model to generate a type of text, such as, Write a haiku about the beauty
of AI.
Prompts can also be complex, with key pieces of data and instructions to guide the
model toward an output that will be most useful to the user. Remember, LLMs
generate responses based on an extremely high volume of data. The exact data the
model draws on to form the output is significantly affected by the specific words the
user includes in the prompt.

Prompt Engineering
Prompt engineering is the newly emerging field of methodically crafting the input for
a generative AI model to give users an output that best meets their needs.
In order to generate the best output from a generative AI model, it helps to
understand the opportunities and limitations of these models. This knowledge will
help you phrase your prompt in the way that best allows the model to meet your
needs.
Here are a few key pieces of information to keep in mind as you develop your
prompts:

 Even very minor changes in the way a user phrases a prompt can lead to
significant changes in the types of output an AI model generates. That’s why
it’s important to take a methodical approach when developing prompts.

 When approaching prompt engineering, remember that AI models are only
trained to predict the next word, or “token.” Essentially, these models are text
completers. The more precise the input you provide, the more accurate and
helpful you can expect the output to be.

 The AI model’s response is stochastic, which means randomly determined.
Because the model samples from a probability distribution over possible next
tokens, you may get a different output even when you enter the exact same
prompt.

 Be on the lookout for AI hallucinations, the phenomenon of an AI model
generating content that “feels” like a legitimate output but is based on
unreliable data. Because the model is pulling from an extremely high
volume of data, not all of that data may be factually correct. A well-engineered
prompt can decrease the risk of generating an AI hallucination.

 Sometimes a high level of domain expertise may be required to develop a
well-engineered prompt. Take, for example, the case of a medical doctor using
an AI algorithm to suggest treatment options based on a patient’s medical
history. The person engineering the prompt would not only need to know the
best vocabulary to use to generate the desired output, they would also need
an understanding of the treatment options in order to evaluate and validate
the output.

Components of Prompt Engineering


When developing a prompt, you must include at least one of the following:

 Instructions
 Question
In addition to instructions or a question, you will likely want to include some aspect of
these two optional components in your prompt to guide the algorithm toward an
output that will be the most useful to you:

 Input data
 Examples
Users can provide input data to give the model additional information about the type
of output they desire. The possibilities of the type of data that can be provided are
endless, and will depend on the type of output desired.
Users can add anything from simple audience demographic information (age, level of
education, physical location) to .csv files with many data points, and anything in
between that will help to guide the model toward the desired output.
It can also be helpful to specify the tone that you would like the algorithm to use in its
response.

For example, instead of simply asking the model to tell us a joke about penguins, we
could specify the tone by asking for a dad joke.
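Such a prompt might read:

Tell me a dad joke about penguins.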
Examples help to focus the data that the model uses in its output, and are
particularly useful when prompting the model to provide recommendations.
Let’s say you’re looking for something to watch tonight and you know you’re in the
mood for a movie about a killer shark. Instead of just asking the algorithm to suggest
a shark movie, you can improve the chances of the model suggesting a movie that
you will enjoy by giving the algorithm some insight into your preferences. In your
prompt you could tell the algorithm that you like Jaws, The Shallows, and Open
Water, but don’t like the Sharknado movies or The Meg. This information makes it
more likely that you’ll get an output that matches your specific taste in shark movies.
Providing such examples after instructions appears to be more effective than
before them. Changing the order of the examples will also change the output, so you
may want to experiment with phrasing until you find the optimal result.
When you add these optional components to a prompt, you give the algorithm
additional data that personalizes the response, rather than relying on the entire
breadth of the training data.

Techniques to Improve Outputs


Now that you have a better understanding of what prompt engineering is, we’ll
explore some specific techniques that you can use to enhance your prompts.

Zero-shot, One-shot, and Few-shot Prompting


As mentioned, the number of examples and detail of input data that the user
provides to the algorithm can make a significant difference in the output. Prompt
engineers use specific terms to describe the number of data points provided.
Zero-shot prompting refers to the technique of providing the model with no
additional pieces of data to make its prediction. In our shark movie example, that
would look something like this:
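Suggest a shark movie for me to watch tonight.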
Zero-shot prompting is useful when seeking a broad output. Through the use of
questions and answers with the algorithm, you can then guide it toward a more
specific output if you so choose. Zero-shot prompting can also be a great tool when
seeking a creative, out-of-the-box output from the model.
One-shot prompting, as you probably guessed, refers to the practice of providing
the algorithm with a single example or data-point. In our shark movie example, it
would look something like this:
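I like Jaws. Suggest a shark movie for me to watch tonight.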

One-shot prompting can be useful when you’re seeking to narrow down the output,
but still want to give the algorithm room to come up with potentially unpredictable
outputs. It’s also clearly best utilized in cases where the user may only have a limited
amount of data.
It will come as no surprise that few-shot prompting is when a user provides more
than one example or datapoint—usually between two and five. This technique allows
users who have multiple data-points to focus the model and guide it to find more
specific outputs. Our original shark movie prompt is an example of few-shot
prompting:
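I like Jaws, The Shallows, and Open Water, but I don't like the Sharknado movies or The Meg. Suggest a shark movie for me to watch tonight.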
Chain of Thought Prompting
Chain of Thought Prompting (also known as CoT) is a method used to guide the
model toward factually correct responses by having it work through a series of
intermediate steps while developing an output.
The technique requires the user to explicitly ask the model to take a “step-by-step”
approach in the instructions. In addition, it is generally best practice to ask the model
to explain its reasoning in the output and to follow a specific format.
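An illustrative CoT instruction might read:

Solve the following problem step by step. Explain your reasoning at each step, and give your final answer on the last line.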
Here is a comparison of results from standard versus CoT prompting:

[Image: standard prompting versus chain-of-thought prompting. Credit: Wei et al., 2022]


CoT is an extremely helpful technique when seeking factually accurate information
because it gives the user insight into how the algorithm reached the desired output,
enabling the user to validate the algorithm’s response.
CoT also significantly reduces the risk of hallucination in the model. Remember that
“hallucinating” is the phenomenon of an AI model generating content that “feels” like
a legitimate output, but isn’t based on legitimate data. It’s one of the biggest risks of
using transformer models right now.

Prompting Citations
Forcing the model to provide citations in its response is an effective way of reducing
the risk of hallucination. In the prompt, the user can either ask the model to use
specific sources, or take a more general approach, such as asking it to “use only
credible sources.”
As with CoT, asking the model to provide sources in its response will have two
benefits: it will guide the model toward a more accurate response, and it will also
provide you with a method of verifying the response from the model.
Here is an example of prompting an algorithm to use reliable sources:
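Explain the health benefits of drinking green tea. Use only credible sources, and cite each source at the end of your response.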
Asking the Model to Play a Role
Asking the model to play a specific role is also an effective way to improve your
output and reduce the risk of hallucination. Consider the example below:
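You are a helpful JavaScript tutor. Explain what a closure is and when I would use one.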

The word “helpful” guides the model toward a factual answer, reducing the risk of
hallucination. It also gives some context for the tone that the user would like the
model to use in the output.

Question and Answer


Leading the model toward an output through a series of follow-up questions is
another effective way of steering the model toward the desired output. This
technique allows the user to course-correct if the model begins to go off track.
Believe it or not, some transformer models respond well to forceful language,
including all capitalization and exclamation points, as in the example below:
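Suggest a shark movie for me to watch tonight. DO NOT suggest the Sharknado movies! ONLY suggest movies released after 1990!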

Templates
Developing templates is one of the most effective strategies available for prompting
thorough, accurate responses from a transformer model. Templates can use any
combination of the engineering techniques we’ve covered so far, but the benefit is
that the same well-engineered prompt can be re-used with different data points,
increasing the model’s efficiency.
For the remainder of the course, we’ll focus on developing prompt templates.

PromptTemplate Object
With a stronger understanding of prompts and prompt engineering, let's continue
with the application build!
The issue with our current application build is that, although we're asking the user to
enter a coding question, they could ask whatever they want and our application
would still give them a reasonable response. It's essentially a general question-and-
answer application. What we need is a way to establish a context for our coding
tutor application. To do that, we'll make use of the
LangChain PromptTemplate object!
The PromptTemplate object takes in a combination of user input and a fixed
template string, allowing developers to set hard-coded parameters while still
accepting dynamic user input. Additionally, the PromptTemplate object
contains out-of-the-box methods provided by LangChain, including
the .format() method that we will use to add variables to our templates!

Implementing PromptTemplate
To start, we will need to require in the PromptTemplate module
from langchain/prompts:

const { PromptTemplate } = require("langchain/prompts");

Next, within the try section of the try/catch block, we will need to instantiate a new
object based off of the PromptTemplate class:

// Instantiation of a new object called "prompt" using the "PromptTemplate" class
const prompt = new PromptTemplate({
});

During instantiation of the prompt object, we'll need to pass in a couple of properties,
including the template and inputVariables.
The template will allow us to set the stage of our application by giving the AI context
about how it should behave and respond to user questions. In other words,
the template allows the developer to pass in instructional prompts.
The inputVariables property allows us to inject user input directly into
our template so that whatever question the user asks will always be within the
confines of the overall context that the developer sets.
Let's take a look at the code:

// Instantiation of a new object called "prompt" using the "PromptTemplate" class
const prompt = new PromptTemplate({
  template: "You are a javascript expert and will answer the user's coding questions as thoroughly as possible.\n{question}",
  inputVariables: ["question"],
});

You'll see in the above code snippet that we have statically told the AI that it is a
"javascript expert" and that it will "answer the user's coding questions". We can further
refine this by adding a sort of validation in natural language, telling the AI to inform the
user that their question is not code related whenever they ask something unrelated to
code. But for now, we'll keep the template as it is.
Additionally, the template property is where we inject the user input, using \n
immediately followed by curly braces surrounding a variable name. The variable
name is defined in the next property, inputVariables. The inputVariables property
is an array, so, if we wanted, we could put multiple variables in that array and
use all of them in the template, as in the sketch below.
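For instance, a template could accept both a preferred language and a question. This is a hypothetical sketch; the language variable and the multiPrompt name are ours, purely for illustration:

const multiPrompt = new PromptTemplate({
  // Both variables are injected into the template string by .format()
  template: "You are a {language} expert and will answer the user's coding questions as thoroughly as possible.\n{question}",
  inputVariables: ["language", "question"],
});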
Next, we'll use the .format() method on our instantiated prompt object to pass
user input into our template. Take note that the key of the object being passed into
the .format() method matches the variable name, question, and the value is the user
input captured from inquirer:

const promptInput = await prompt.format({
  question: input
});
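We then pass the formatted prompt, rather than the raw user input, into the model's .call() method; this is a one-line change to the line we wrote earlier:

const res = await model.call(promptInput);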

With the new code in place, we'll start the script using node script.js, enter a coding
question, and receive an output that follows the context we set in the template.
And, with that, the application is no longer a general question-and-answer
application, as we're able to bake in how we want OpenAI to respond to user
inquiries.
There is just one last step. So far, we've been receiving responses from OpenAI
in the terminal as plain strings. But in many cases, we'll need a more structured
output. For instance, if we wanted the language model to return data so that
we could use it in a front-end application, we'd need the data returned as JSON.
LangChain provides this functionality through an output parser object. But
LangChain's output parsers can do much more!

StructuredOutputParser Object
One of LangChain's output parsers is the StructuredOutputParser class, which is
the one we'll use in this application. There are two main components to LangChain
output parsers: formatting instructions and the parsing method.
The formatting instructions play a pivotal role, allowing us to construct the exact
JSON that will be returned as the response. Additionally, we can use prompts to
define the values of the data that is being returned dynamically using OpenAI. For
instance, not only can we pass back the response, we can also ask OpenAI to
provide additional information such as a source, the date the source was last
updated, or even the response in another language! It's worth mentioning that this
additional information is not static. You can think of it as asking follow-up questions
based on the response, with the answers passed back to the user as a more
complete dataset.
The .parse() method takes in the normal response from OpenAI as an argument
and structures it based on the formatting instructions.
Now that we have a high-level, conceptual understanding of an output parser, let's
implement it in our application!

Implementing StructuredOutputParser
To start, we'll need to require in the StructuredOutputParser class:

const { StructuredOutputParser } = require("langchain/output_parsers");


Next, we’ll instantiate a new object from the StructuredOutputParser class with
some additional properties. We can define the exact structure and properties we
want returned to the user with the .fromNamesAndDescriptions() method. We're
going to keep things simple and provide a single object. The object we return will
have two properties. Although we can make these properties anything we want, we'll
pass in code and explanation. The values for each property are static prompts. This
means that we can direct OpenAI to fill in the data for us:

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  code: "Javascript code that answers the user's question",
  explanation: "detailed explanation of the example code provided",
});

Just below our new parser object, we then need to create a new variable that will
hold the value of the getFormatInstructions() method:

const formatInstructions = parser.getFormatInstructions();

The .getFormatInstructions() method is how we'll pass instructions to our template for how we want the final response to be structured.
And, speaking of which, we'll need to make a small change to our template. We'll
first add a new property where we instantiate our prompt object
called partialVariables, which is an object itself that contains the
key format_instructions. format_instructions will hold our formatInstructions as
its value. Lastly, we add format_instructions as a variable within
the template itself:
const prompt = new PromptTemplate({
  template: "You are a javascript expert and will answer the user's coding questions as thoroughly as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions }
});

And, finally, instead of just console logging res directly, we call the .parse() method
and pass res into the method:

console.log(await parser.parse(res));
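Because the parsed result is a plain JavaScript object, the logged output might look something like the sketch below. The exact wording is illustrative only; your actual response will vary:

{
  code: "const upper = str.toUpperCase();",
  explanation: "The toUpperCase() method returns the calling string converted to uppercase."
}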

With the output parser implemented, we'll start the script using node script.js,
enter a coding question, and receive a structured response like the one sketched above.

Now that our demo application is complete, it's your turn!


Read through the challenge.md file and try to create your own AI-powered CLI
application that leverages LangChain's models, prompts, templates, and output
parsers!

Your Task
Throughout this course, we worked towards building a JavaScript Consultant
application where users could ask any question related to JavaScript concepts. Your
task is to create an AI-powered CLI application of your choice using the tools and
techniques taught in this course. Here are just a few examples of AI-powered CLI
applications you could create:

 Translator

 Summary Generator

 News App

 Weather Forecast
The list of applications that can be built using AI is endless, and the only limitation is
your imagination! But don't fall for the trap of thinking that an AI application needs to
be novel. Adding AI to an existing application adds a level of interactivity and depth
that traditional applications simply cannot achieve.
Your AI-powered command-line application must accept user input using the Inquirer
package, hide sensitive information such as API keys using the dotenv package, and
last but not least, use the gpt-3.5-turbo model through LangChain. Additionally,
you must use LangChain templates and output parsers to ensure that data is
returned as a JSON object.
The application will be invoked by using the following command:

node script.js

User Story

AS A developer
I WANT an AI-powered CLI application
SO THAT I can leverage modern artificial intelligence in everyday work or casual tasks

Acceptance Criteria

GIVEN a command-line application that accepts user input
WHEN I am prompted for information or a question
THEN an AI-generated response is produced and sent back in JSON format

Hints and Getting Started


Here are some guidelines to help you get started:

 If you get stuck, consult the previous lessons in this course or utilize the LangChain
JS Docs.
