Build multi-turn conversations (chat) using the Gemini API

Using the Gemini API, you can build freeform conversations across multiple turns. The Vertex AI in Firebase SDK simplifies the process by managing the state of the conversation, so unlike with generateContent() (or generateContentStream()), you don't have to store the conversation history yourself.

Before you begin

If you haven't already, work through the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the Vertex AI service, and create a GenerativeModel instance.

Send a chat prompt request

To build a multi-turn conversation (like chat), start by initializing the chat by calling startChat(). Then use sendMessage() to send a new user message, which also appends the message and the response to the chat history.

There are two possible options for role associated with the content in a conversation:

  • user: the role that provides the prompts. This value is the default for calls to sendMessage(), and the function throws an exception if a different role is passed.

  • model: the role that provides the responses. This role can be used when calling startChat() with existing history.

Swift

You can call startChat() and sendMessage() to send a new user message:

import FirebaseVertexAI

// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()

// Create a `GenerativeModel` instance with a model that supports your use case
let model = vertex.generativeModel(modelName: "gemini-2.0-flash")

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = model.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
print(response.text ?? "No text in response.")

Kotlin

You can call startChat() and sendMessage() to send a new user message:

For Kotlin, the methods in this SDK are suspend functions and need to be called from a coroutine scope (see the sketch after the example below).
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")

// Initialize the chat
val chat = generativeModel.startChat(
  history = listOf(
    content(role = "user") { text("Hello, I have 2 dogs in my house.") },
    content(role = "model") { text("Great to meet you. What would you like to know?") }
  )
)

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
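
Because sendMessage() is a suspend function, it has to be launched from a coroutine scope. Here is a minimal sketch of one way to do that, assuming an Android Activity or Fragment where lifecycleScope is available (any other CoroutineScope works the same way):

import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.launch

// Launch a coroutine and call the suspend function from inside it
// (sketch only; assumes the `chat` instance created above)
lifecycleScope.launch {
    val response = chat.sendMessage("How many paws are in my house?")
    println(response.text)
}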

Java

You can call startChat() and sendMessage() to send a new user message:

For Java, the methods in this SDK return a ListenableFuture.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
GenerativeModel gm = FirebaseVertexAI.getInstance()
        .generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");

Content message = messageBuilder.build();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web

You can call startChat() and sendMessage() to send a new user message:

import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Vertex AI service
const vertexAI = getVertexAI(firebaseApp);

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });

async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";

  const result = await chat.sendMessage(msg);

  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

Dart

You can call startChat() and sendMessage() to send a new user message:

import 'package:firebase_vertexai/firebase_vertexai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
final model =
      FirebaseVertexAI.instance.generativeModel(model: 'gemini-2.0-flash');

final chat = model.startChat();
// Provide a prompt that contains text
final prompt = Content.text('Write a story about a magic backpack.');

final response = await chat.sendMessage(prompt);
print(response.text);

Learn how to choose a model and optionally a location appropriate for your use case and app.

Stream the response

Before trying this sample, make sure that you've completed the "Before you begin" section of this guide.

You can achieve faster interactions by not waiting for the entire result from the model generation, and instead use streaming to handle partial results. To stream the response, call sendMessageStream().
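
Here is a minimal Kotlin sketch of what that might look like, assuming the `chat` instance and coroutine setup from the Kotlin example above:

import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.launch

// Collect partial responses as they arrive instead of waiting for the full reply
// (sketch only; assumes the `chat` instance created in the Kotlin example above)
lifecycleScope.launch {
    chat.sendMessageStream("How many paws are in my house?").collect { chunk ->
        print(chunk.text)
    }
}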



What else can you do?

Try out other capabilities

Learn how to control content generation

You can also experiment with prompts and model configurations using Vertex AI Studio.

Learn more about the supported models

Learn about the models available for various use cases and their quotas and pricing.

Give feedback about your experience with Vertex AI in Firebase