With the Gemini API, you can build freeform conversations across multiple turns. The Vertex AI in Firebase SDK simplifies the process by managing the state of the conversation for you, so unlike with generateContent() (or generateContentStream()), you don't have to store the conversation history yourself.
Before you begin
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the Vertex AI service, and create a GenerativeModel instance.
Send a chat prompt request
To build a multi-turn conversation (like chat), start by initializing the chat by calling startChat(). Then use sendMessage() to send a new user message, which also appends the message and the response to the chat history.
There are two possible options for the role associated with the content in a conversation:
- user: the role which provides the prompts. This value is the default for calls to sendMessage(), and the function throws an exception if a different role is passed.
- model: the role which provides the responses. This role can be used when calling startChat() with existing history.
Swift
You can call startChat() and sendMessage() to send a new user message:
import FirebaseVertexAI
// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()
// Create a `GenerativeModel` instance with a model that supports your use case
let model = vertex.generativeModel(modelName: "gemini-2.0-flash")
// Optionally specify existing chat history
let history = [
ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]
// Initialize the chat with optional chat history
let chat = model.startChat(history: history)
// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
print(response.text ?? "No text in response.")
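Each call to sendMessage() appends both your message and the model's reply to the chat state, so you don't re-send earlier turns yourself. As a minimal sketch of what that accumulation looks like, assuming the Chat type exposes its stored turns through a history property of ModelContent values (an assumption based on the Swift SDK; verify against the SDK version you use):
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-2.0-flash")
let chat = model.startChat()

// Send two turns; the SDK stores each user message and each model reply.
_ = try await chat.sendMessage("Hello, I have 2 dogs in my house.")
_ = try await chat.sendMessage("How many paws are in my house?")

// Inspect the accumulated conversation: roles should alternate between user and model.
// Assumes `chat.history` is available and `ModelContent.role` is an optional String.
for content in chat.history {
  print("role: \(content.role ?? "unknown"), parts: \(content.parts.count)")
}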
Kotlin
You can call startChat() and sendMessage() to send a new user message:
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
// Initialize the chat
val chat = generativeModel.startChat(
    history = listOf(
        content(role = "user") { text("Hello, I have 2 dogs in my house.") },
        content(role = "model") { text("Great to meet you. What would you like to know?") }
    )
)
val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
Java
You can call startChat() and sendMessage() to send a new user message:
For Java, the methods in this SDK return a ListenableFuture.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
GenerativeModel gm = FirebaseVertexAI.getInstance()
        .generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(gm);
// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();
Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();
List<Content> history = Arrays.asList(userContent, modelContent);
// Initialize the chat
ChatFutures chat = model.startChat(history);
// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");
Content message = messageBuilder.build();
// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
Web
You can call startChat() and sendMessage() to send a new user message:
import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";
// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};
// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
// Initialize the Vertex AI service
const vertexAI = getVertexAI(firebaseApp);
// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });
async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";
  const result = await chat.sendMessage(msg);

  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();
Dart
You can call startChat() and sendMessage() to send a new user message:
import 'package:firebase_vertexai/firebase_vertexai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
final model =
    FirebaseVertexAI.instance.generativeModel(model: 'gemini-2.0-flash');
final chat = model.startChat();
// Provide a prompt that contains text
final prompt = Content.text('Write a story about a magic backpack.');
final response = await chat.sendMessage(prompt);
print(response.text);
Stream the response
Before trying this sample, make sure you've completed the Before you begin section of this guide.
You can achieve faster interactions by not waiting for the entire result from the model generation, and instead use streaming to handle partial results. To stream the response, call sendMessageStream().
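As a minimal Swift sketch of streaming a chat reply (the same pattern applies on the other platforms; the chunk shape below assumes the GenerateContentResponse type used elsewhere in this guide):
import FirebaseVertexAI

let model = VertexAI.vertexAI().generativeModel(modelName: "gemini-2.0-flash")
let chat = model.startChat()

// To stream generated text output, call sendMessageStream and iterate the partial responses
let contentStream = try chat.sendMessageStream("How many paws are in my house?")
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text, terminator: "")
  }
}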
What else can you do?
- Learn how to count tokens before sending long prompts to the model.
- Set up Cloud Storage for Firebase so you can include large files in your multimodal requests and have a more managed solution for providing files in prompts. Files can include images, PDFs, video, and audio.
- Start thinking about preparing for production, including setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients. Also, make sure to review the production checklist.
Try out other capabilities
- Generate text from text-only prompts.
- Generate text by prompting with various file types, like images, PDFs, video, and audio.
- Generate structured output (like JSON) from both text and multimodal prompts.
- Generate images from text prompts.
- Use function calling to connect generative models to external systems and information.
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters like temperature and maximum output tokens (for Gemini) or aspect ratio and person generation (for Imagen).
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
Learn more about the supported models
Learn about the models available for various use cases and their quotas and pricing.
Give feedback about your experience with Vertex AI in Firebase.