# OpenAI Node API Library
[](https://npmjs.org/package/openai)  [](https://jsr.io/@openai/openai)
This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.
It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).
## Installation
```sh
npm install openai
```
You can also import from jsr:
<!-- x-release-please-start-version -->
```ts
import OpenAI from 'jsr:@openai/openai';
```
<!-- x-release-please-end -->
## Usage
The full API of this library can be found in the [api.md](api.md) file, along with many [code examples](https://github.com/openai/openai-node/tree/master/examples). The code below shows how to get started using the chat completions API.
<!-- prettier-ignore -->
```js
import OpenAI from 'openai';
const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const chatCompletion = await client.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
}

main();
```
## Streaming responses
We provide support for streaming responses using Server-Sent Events (SSE).
```ts
import OpenAI from 'openai';
const client = new OpenAI();

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();
```
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
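For example, here is a minimal sketch that stops reading once the output contains a period (the stopping condition is purely illustrative):

```ts
const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Say this is a test' }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(delta);
  if (delta.includes('.')) {
    stream.controller.abort(); // cancels the underlying HTTP request
    break;
  }
}
```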
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
<!-- prettier-ignore -->
```ts
import OpenAI from 'openai';
const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const params: OpenAI.Chat.ChatCompletionCreateParams = {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  };
  const chatCompletion: OpenAI.Chat.ChatCompletion = await client.chat.completions.create(params);
}

main();
```
Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
> [!IMPORTANT]
> Previous versions of this SDK used a `Configuration` class. See the [v3 to v4 migration guide](https://github.com/openai/openai-node/discussions/217).
### Polling Helpers
When interacting with the API, some actions, such as starting a Run or adding files to a vector store, are asynchronous and take time to complete. The SDK includes
helper functions that poll the status until it reaches a terminal state and then return the resulting object.
If an API method results in an action that could benefit from polling, there will be a corresponding version of the
method ending in `AndPoll`.
For instance to create a Run and poll until it reaches a terminal state you can run:
```ts
const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistantId,
});
```
More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/deep-dive/run-lifecycle).
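Because the helper only returns once the Run has reached a terminal state, you can branch on `run.status` directly afterwards. A minimal sketch, reusing `thread` from the example above:

```ts
if (run.status === 'completed') {
  const messages = await openai.beta.threads.messages.list(thread.id);
  console.log(messages.data[0]);
} else {
  console.error(`Run ended with status: ${run.status}`);
}
```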
### Bulk Upload Helpers
When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
For convenience, we also provide a bulk upload helper that allows you to upload several files at once.
```ts
import { createReadStream } from 'fs';

const fileList = [
  createReadStream('/home/data/example.pdf'),
  ...
];

const batch = await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, { files: fileList });
```
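Because `uploadAndPoll` waits for the batch to reach a terminal state, the returned object can be inspected immediately; for example:

```ts
console.log(batch.status); // e.g. 'completed'
console.log(batch.file_counts); // counts of completed/failed files
```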
### Streaming Helpers
The SDK also includes helpers to process streams and handle the incoming events.
```ts
const run = openai.beta.threads.runs
  .stream(thread.id, {
    assistant_id: assistant.id,
  })
  .on('textCreated', (text) => process.stdout.write('\nassistant > '))
  .on('textDelta', (textDelta, snapshot) => process.stdout.write(textDelta.value))
  .on('toolCallCreated', (toolCall) => process.stdout.write(`\nassistant > ${toolCall.type}\n\n`))
  .on('toolCallDelta', (toolCallDelta, snapshot) => {
    if (toolCallDelta.type === 'code_interpreter') {
      if (toolCallDelta.code_interpreter.input) {
        process.stdout.write(toolCallDelta.code_interpreter.input);
      }
      if (toolCallDelta.code_interpreter.outputs) {
        process.stdout.write('\noutput >\n');
        toolCallDelta.code_interpreter.outputs.forEach((output) => {
          if (output.type === 'logs') {
            process.stdout.write(`\n${output.logs}\n`);
          }
        });
      }
    }
  });
```
More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md).
### Streaming responses
This library provides several conveniences for streaming chat completions, for example:
```ts
import OpenAI from 'openai';
const openai = new OpenAI();

async function main() {
  const stream = await openai.beta.chat.completions.stream({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });

  stream.on('content', (delta, snapshot) => {
    process.stdout.write(delta);
  });

  // or, equivalently:
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

  const chatCompletion = await stream.finalChatCompletion();
  console.log(chatCompletion); // {id: "…", choices: […], …}
}

main();
```
Streaming with `openai.beta.chat.completions.stream({…})` exposes
[various helpers for your convenience](helpers.md#events) including event handlers and promises.
Alternatively, you can use `openai.chat.completions.create({ stream: true, … })`
which only returns an async iterable of the chunks in the stream and thus uses less memory
(it does not build up a final chat completion object for you).
If you need to cancel a stream, you can `break` from a `for await` loop or call `stream.abort()`.
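For instance, here is a minimal sketch that aborts the helper stream if it runs longer than a timeout (the duration is illustrative):

```ts
const stream = await openai.beta.chat.completions.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Say this is a test' }],
});

// Abort the request if it takes too long; an aborted stream
// makes the `for await` loop below throw.
const timer = setTimeout(() => stream.abort(), 10_000);

try {
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
} finally {
  clearTimeout(timer);
}
```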
### Automated function calls
We provide the `openai.beta.chat.completions.runTools({…})`
convenience helper for using function tool calls with the `/chat/completions` endpoint,
which automatically calls the JavaScript functions you provide
and sends their results back to the `/chat/completions` endpoint,
looping as long as the model requests tool calls.

If you pass a `parse` function, it will automatically parse the `arguments` for you
and return any parsing errors to the model to attempt auto-recovery.
Otherwise, the args will be passed to the function you provide as a string.
If you pass `tool_choice: {function: {name: …}}` instead of `auto`,
it returns immediately after calling that function (and only loops to auto-recover parsing errors).
```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const runner = client.beta.chat.completions
    .runTools({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'How is the weather this week?' }],
      tools: [
        {
          type: 'function',
          function: {
            function: getCurrentLocation,
            parameters: { type: 'object', properties: {} },
          },
        },
        {
          type: 'function',
          function: {
            function: getWeather,
            parse: JSON.parse, // or use a validation library