Commit c4af6fe

docs(bigquery): link types on package docs (#11036)
Towards internal b/372529756 and b/372535179
1 parent 2c83297 commit c4af6fe

2 files changed: +47 -44 lines changed

bigquery/bigquery.go (+5 -5)
@@ -65,19 +65,19 @@ type Client struct {
 	enableQueryPreview bool
 }

-// DetectProjectID is a sentinel value that instructs NewClient to detect the
-// project ID. It is given in place of the projectID argument. NewClient will
+// DetectProjectID is a sentinel value that instructs [NewClient] to detect the
+// project ID. It is given in place of the projectID argument. [NewClient] will
 // use the project ID from the given credentials or the default credentials
 // (https://developers.google.com/accounts/docs/application-default-credentials)
 // if no credentials were provided. When providing credentials, not all
-// options will allow NewClient to extract the project ID. Specifically a JWT
+// options will allow [NewClient] to extract the project ID. Specifically a JWT
 // does not have the project ID encoded.
 const DetectProjectID = "*detect-project-id*"

-// NewClient constructs a new Client which can perform BigQuery operations.
+// NewClient constructs a new [Client] which can perform BigQuery operations.
 // Operations performed via the client are billed to the specified GCP project.
 //
-// If the project ID is set to DetectProjectID, NewClient will attempt to detect
+// If the project ID is set to [DetectProjectID], NewClient will attempt to detect
 // the project ID from credentials.
 //
 // This client supports enabling query-related preview features via environmental
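
For reference, a minimal sketch of the behavior these linked docs describe, assuming Application Default Credentials are available in the environment:

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/bigquery"
	)

	func main() {
		ctx := context.Background()
		// Pass the DetectProjectID sentinel in place of a project ID; the client
		// resolves the project from the provided or default credentials.
		client, err := bigquery.NewClient(ctx, bigquery.DetectProjectID)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	}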

bigquery/doc.go (+42 -39)
@@ -23,7 +23,7 @@ connection pooling and similar aspects of this package.

 # Creating a Client

-To start working with this package, create a client:
+To start working with this package, create a client with [NewClient]:

 	ctx := context.Background()
 	client, err := bigquery.NewClient(ctx, projectID)
@@ -33,7 +33,7 @@ To start working with this package, create a client:

 # Querying

-To query existing tables, create a Query and call its Read method, which starts the
+To query existing tables, create a [Client.Query] and call its [Query.Read] method, which starts the
 query and waits for it to complete:

 	q := client.Query(`
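
The snippet is truncated at the query literal; completed, it runs along these lines (the table mydataset.items and its columns are illustrative, not taken from the file):

	q := client.Query(`
		SELECT name, num
		FROM mydataset.items
		WHERE num > 10
	`)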
@@ -52,7 +52,7 @@ query and waits for it to complete:
 	}

 Then iterate through the resulting rows. You can store a row using
-anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value.
+anything that implements the [ValueLoader] interface, or with a slice or map of [Value].
 A slice is simplest:

 	for {
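
Completed, the loop this hunk truncates iterates the result set roughly as follows (assuming it came from q.Read(ctx) and that google.golang.org/api/iterator is imported):

	for {
		var values []bigquery.Value
		err := it.Next(&values)
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		fmt.Println(values)
	}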
@@ -86,7 +86,7 @@ You can also use a struct whose exported fields match the query:
 	}

 You can also start the query running and get the results later.
-Create the query as above, but call Run instead of Read. This returns a Job,
+Create the query as above, but call [Query.Run] instead of [Query.Read]. This returns a [Job],
 which represents an asynchronous operation.

 	job, err := q.Run(ctx)
@@ -100,17 +100,17 @@ the results at a later time, even in another process.
 	jobID := job.ID()
 	fmt.Printf("The job ID is %s\n", jobID)

-To retrieve the job's results from the ID, first look up the Job:
+To retrieve the job's results from the ID, first look up the [Job] with the [Client.JobFromID] method:

 	job, err = client.JobFromID(ctx, jobID)
 	if err != nil {
 		// TODO: Handle error.
 	}

-Use the Job.Read method to obtain an iterator, and loop over the rows.
-Calling Query.Read is preferred for queries with a relatively small result set,
+Use the [Job.Read] method to obtain an iterator, and loop over the rows.
+Calling [Query.Read] is preferred for queries with a relatively small result set,
 as it will call the BigQuery jobs.query API for an optimized query path. If the query
-doesn't meet that criteria, the method will just combine Query.Run and Job.Read.
+doesn't meet that criteria, the method will just combine [Query.Run] and [Job.Read].

 	it, err = job.Read(ctx)
 	if err != nil {
@@ -120,26 +120,26 @@ doesn't meet that criteria, the method will just combine Query.Run and Job.Read.

 # Datasets and Tables

-You can refer to datasets in the client's project with the Dataset method, and
-in other projects with the DatasetInProject method:
+You can refer to datasets in the client's project with the [Client.Dataset] method, and
+in other projects with the [Client.DatasetInProject] method:

 	myDataset := client.Dataset("my_dataset")
 	yourDataset := client.DatasetInProject("your-project-id", "your_dataset")

 These methods create references to datasets, not the datasets themselves. You can have
-a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to
+a dataset reference even if the dataset doesn't exist yet. Use [Dataset.Create] to
 create a dataset from a reference:

 	if err := myDataset.Create(ctx, nil); err != nil {
 		// TODO: Handle error.
 	}

-You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference
+You can refer to tables with [Dataset.Table]. Like [Dataset], [Table] is a reference
 to an object in BigQuery that may or may not exist.

 	table := myDataset.Table("my_table")

-You can create, delete and update the metadata of tables with methods on Table.
+You can create, delete and update the metadata of tables with methods on [Table].
 For instance, you could create a temporary table with:

 	err = myDataset.Table("temp").Create(ctx, &bigquery.TableMetadata{
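
The Create call is truncated here; a sketch of a completed temporary-table creation (the one-hour expiration is illustrative):

	err = myDataset.Table("temp").Create(ctx, &bigquery.TableMetadata{
		// A table with an expiration time is deleted automatically when it passes.
		ExpirationTime: time.Now().Add(1 * time.Hour),
	})
	if err != nil {
		// TODO: Handle error.
	}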
@@ -153,15 +153,15 @@ We'll see how to create a table with a schema in the next section.
 # Schemas

 There are two ways to construct schemas with this package.
-You can build a schema by hand, like so:
+You can build a schema by hand with the [Schema] struct, like so:

 	schema1 := bigquery.Schema{
 		{Name: "Name", Required: true, Type: bigquery.StringFieldType},
 		{Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
 		{Name: "Optional", Required: false, Type: bigquery.IntegerFieldType},
 	}

-Or you can infer the schema from a struct:
+Or you can infer the schema from a struct with the [InferSchema] method:

 	type student struct {
 		Name string
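
The struct is truncated here; a completed inference sketch (the fields beyond Name are illustrative):

	type student struct {
		Name     string
		Grades   []int
		Optional bigquery.NullInt64
	}
	schema2, err := bigquery.InferSchema(student{})
	if err != nil {
		// TODO: Handle error.
	}
	// schema2 now describes Name (required STRING), Grades (repeated INTEGER),
	// and Optional (nullable INTEGER).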
@@ -174,10 +174,10 @@ Or you can infer the schema from a struct:
 	}
 	// schema1 and schema2 are identical.

-Struct inference supports tags like those of the encoding/json package, so you can
+Struct inference supports tags like those of the [encoding/json] package, so you can
 change names, ignore fields, or mark a field as nullable (non-required). Fields
-declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool,
-NullTimestamp, NullDate, NullTime, NullDateTime, and NullGeography) are
+declared as one of the Null types ([NullInt64], [NullFloat64], [NullString], [NullBool],
+[NullTimestamp], [NullDate], [NullTime], [NullDateTime], [NullGeography], and [NullJSON]) are
 automatically inferred as nullable, so the "nullable" tag is only needed for []byte,
 *big.Rat and pointer-to-struct fields.

@@ -193,16 +193,17 @@ automatically inferred as nullable, so the "nullable" tag is only needed for []b
 	}
 	// schema3 has required fields "full_name" and "Grade", and nullable BYTES field "Optional".

-Having constructed a schema, you can create a table with it like so:
+Having constructed a schema, you can create a table with it using the [Table.Create] method like so:

 	if err := table.Create(ctx, &bigquery.TableMetadata{Schema: schema1}); err != nil {
 		// TODO: Handle error.
 	}

 # Copying

-You can copy one or more tables to another table. Begin by constructing a Copier
-describing the copy. Then set any desired copy options, and finally call Run to get a Job:
+You can copy one or more tables to another table. Begin by constructing a [Copier]
+describing the copy using the [Table.CopierFrom] method. Then set any desired copy options,
+and finally call [Copier.Run] to get a [Job]:

 	copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
 	copier.WriteDisposition = bigquery.WriteTruncate
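
Besides WriteDisposition, other copy options can be set on the returned [Copier] before running it; a sketch (the chosen values are illustrative):

	copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
	// Options are plain fields on the Copier's embedded CopyConfig.
	copier.WriteDisposition = bigquery.WriteTruncate
	copier.CreateDisposition = bigquery.CreateIfNeeded
	job, err := copier.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}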
@@ -211,21 +212,21 @@ describing the copy. Then set any desired copy options, and finally call Run to
 		// TODO: Handle error.
 	}

-You can chain the call to Run if you don't want to set options:
+You can chain the call to [Copier.Run] if you don't want to set options:

 	job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
 	if err != nil {
 		// TODO: Handle error.
 	}

-You can wait for your job to complete:
+You can wait for your job to complete with the [Job.Wait] method:

 	status, err := job.Wait(ctx)
 	if err != nil {
 		// TODO: Handle error.
 	}

-Job.Wait polls with exponential backoff. You can also poll yourself, if you
+[Job.Wait] polls with exponential backoff. You can also poll yourself, if you
 wish:

 	for {
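
The manual polling loop is truncated at for {; completed, it runs roughly as follows (pollInterval is an assumed variable, e.g. a few seconds):

	for {
		status, err := job.Status(ctx)
		if err != nil {
			// TODO: Handle error.
		}
		if status.Done() {
			if status.Err() != nil {
				// TODO: Handle the job's failure.
			}
			break
		}
		time.Sleep(pollInterval)
	}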
@@ -247,8 +248,9 @@ wish:
 There are two ways to populate a table with this package: load the data from a Google Cloud Storage
 object, or upload rows directly from your program.

-For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure
-it as well, and call its Run method.
+For loading, first create a [GCSReference] with the [NewGCSReference] method, configuring it if desired.
+Then make a [Loader] from a table with the [Table.LoaderFrom] method, passing it the reference;
+optionally configure it as well, and call its [Loader.Run] method.

 	gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
 	gcsRef.AllowJaggedRows = true
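
A sketch of the Loader step this paragraph describes, continuing from gcsRef (the table name and disposition are illustrative):

	loader := myDataset.Table("dest").LoaderFrom(gcsRef)
	// CreateNever requires the destination table to already exist.
	loader.CreateDisposition = bigquery.CreateNever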
@@ -257,8 +259,9 @@ it as well, and call its Run method.
 	job, err = loader.Run(ctx)
 	// Poll the job for completion if desired, as above.

-To upload, first define a type that implements the ValueSaver interface, which has a single method named Save.
-Then create an Inserter, and call its Put method with a slice of values.
+To upload, first define a type that implements the [ValueSaver] interface, which has
+a single method named Save. Then create an [Inserter], and call its [Inserter.Put]
+method with a slice of values.

 	type Item struct {
 		Name string
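
Completed, the upload flow sketched in this hunk looks roughly like the following (the Item fields and the insert-ID choice are illustrative):

	type Item struct {
		Name  string
		Size  float64
		Count int
	}

	// Save implements the ValueSaver interface.
	func (i *Item) Save() (map[string]bigquery.Value, string, error) {
		return map[string]bigquery.Value{
			"Name":  i.Name,
			"Size":  i.Size,
			"Count": i.Count,
		}, "", nil // an empty insert ID causes one to be generated
	}

	// Then, with a table reference in hand:
	inserter := table.Inserter()
	items := []*Item{
		{Name: "n1", Size: 32.6, Count: 7},
		{Name: "n2", Size: 4, Count: 2},
	}
	if err := inserter.Put(ctx, items); err != nil {
		// TODO: Handle error.
	}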
@@ -286,7 +289,7 @@ Then create an Inserter, and call its Put method with a slice of values.
 		// TODO: Handle error.
 	}

-You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type
+You can also upload a struct that doesn't implement [ValueSaver]. Use the [StructSaver] type
 to specify the schema and insert ID by hand:

 	type item struct {
@@ -324,13 +327,13 @@ Lastly, but not least, you can just supply the struct or struct pointer directly
 	}

 BigQuery allows for higher throughput when omitting insertion IDs. To enable this,
-specify the sentinel `NoDedupeID` value for the insertion ID when implementing a ValueSaver.
+specify the sentinel [NoDedupeID] value for the insertion ID when implementing a [ValueSaver].

 # Extracting

 If you've been following so far, extracting data from a BigQuery table
 into a Google Cloud Storage object will feel familiar. First create an
-Extractor, then optionally configure it, and lastly call its Run method.
+[Extractor], then optionally configure it, and lastly call its [Extractor.Run] method.

 	extractor := table.ExtractorTo(gcsRef)
 	extractor.DisableHeader = true
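
As a variant of the earlier illustrative Save method, a sketch of the NoDedupeID usage this hunk links:

	// Save implements ValueSaver. Returning bigquery.NoDedupeID as the insert ID
	// opts the row out of best-effort deduplication, allowing higher throughput.
	func (i *Item) Save() (map[string]bigquery.Value, string, error) {
		return map[string]bigquery.Value{"Name": i.Name}, bigquery.NoDedupeID, nil
	}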
@@ -339,16 +342,16 @@ Extractor, then optionally configure it, and lastly call its Run method.

 # Errors

-Errors returned by this client are often of the type googleapi.Error: https://godoc.org/google.golang.org/api/googleapi#Error
+Errors returned by this client are often of the type [googleapi.Error].
+These errors can be introspected for more information by using [errors.As]
+with the richer [googleapi.Error] type. For example:

-These errors can be introspected for more information by using `xerrors.As` with the richer *googleapi.Error type. For example:
-
-	var e *googleapi.Error
-	if ok := xerrors.As(err, &e); ok {
-		if e.Code == 409 { ... }
-	}
+	var e *googleapi.Error
+	if ok := errors.As(err, &e); ok {
+		if e.Code == 409 { ... }
+	}

-In some cases, your client may received unstructured googleapi.Error error responses. In such cases, it is likely that
+In some cases, your client may receive unstructured [googleapi.Error] error responses. In such cases, it is likely that
 you have exceeded BigQuery request limits, documented at: https://cloud.google.com/bigquery/quotas
 */
 package bigquery // import "cloud.google.com/go/bigquery"
