 Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens
-in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
+in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
 For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling.
 Specify a lower value for less random responses and a higher value for more random responses.
 Default 40. Possible values [1, 40].
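For readers skimming the diff, the selection order the docstring describes (top-k filter, then top-p filter, then temperature sampling) can be sketched in plain NumPy. This is an illustrative sketch only, not the model's actual decoder; the function name and every parameter here are assumed for the example.

```python
import numpy as np

def sample_next_token(logits, top_k=40, top_p=0.95, temperature=1.0, rng=None):
    """Illustrative sketch: top-k filter, then top-p filter, then temperature sampling."""
    rng = rng or np.random.default_rng()

    # 1. Top-k: keep only the `top_k` highest-scoring tokens
    #    (top_k=1 reduces to greedy decoding, as noted above).
    top_k_ids = np.argsort(logits)[-top_k:]

    # 2. Top-p: among those, keep the smallest prefix (by descending
    #    probability) whose cumulative probability reaches top_p.
    probs = np.exp(logits[top_k_ids] - logits[top_k_ids].max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus_ids = top_k_ids[order[:cutoff]]

    # 3. Temperature: rescale the surviving logits and sample the final token.
    scaled = logits[nucleus_ids] / max(temperature, 1e-6)
    final_probs = np.exp(scaled - scaled.max())
    final_probs /= final_probs.sum()
    return int(rng.choice(nucleus_ids, p=final_probs))

# Toy 8-token vocabulary, using the documented default top_k range.
logits = np.array([2.0, 1.5, 0.3, -1.0, 0.9, 0.1, -0.5, 1.1])
print(sample_next_token(logits, top_k=3, top_p=0.9, temperature=0.8))
```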
@@ -175,6 +186,10 @@ class PaLM2TextEmbeddingGenerator(base.Predictor):
     """PaLM2 text embedding generator LLM model.

     Args:
+        model_name (str, Default to "textembedding-gecko"):
+            The model for text embedding. "textembedding-gecko" returns model embeddings for text inputs.
+            "textembedding-gecko-multilingual" returns model embeddings for text inputs and supports over 100 languages.
+            Defaults to "textembedding-gecko".
         session (bigframes.Session or None):
             BQ session to create the model. If None, use the global default session.
         connection_name (str or None):
@@ -184,9 +199,13 @@ class PaLM2TextEmbeddingGenerator(base.Predictor):
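For context on the parameters this hunk documents, here is a hypothetical usage sketch of `PaLM2TextEmbeddingGenerator`. The project id, connection name, and the assumption that the input column is named "content" are placeholders for this example, not taken from the PR.

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import PaLM2TextEmbeddingGenerator

# Assumed input layout: one text column named "content" (placeholder).
df = bpd.DataFrame({"content": ["hello world", "bonjour le monde"]})

model = PaLM2TextEmbeddingGenerator(
    model_name="textembedding-gecko-multilingual",  # 100+ languages, per the docstring
    connection_name="my-project.us.my-connection",  # placeholder BQ connection
)
embeddings = model.predict(df)  # a DataFrame containing the embedding vectors
```

With `session=None`, the model is created in the global default session, matching the documented default behavior.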