Commit 844e5be

docs: use autogenerated readme functionality and regenerate (#568)
* docs: use autogenerated readme functionality and regenerate
* update version
1 parent b3e71d6 commit 844e5be

File tree

3 files changed: +437 -43 lines changed
.readme-partials.yml

+325
@@ -0,0 +1,325 @@
custom_content: |
## About Cloud Bigtable

[Cloud Bigtable][cloud-bigtable] is Google's NoSQL Big Data database service. It's the same database that powers many core Google services, including Search, Analytics, Maps, and Gmail.

Be sure to activate the Cloud Bigtable API and the Cloud Bigtable Admin API under APIs & Services in the GCP Console to use Cloud Bigtable from your project.

See the Bigtable client library documentation ([Admin API](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/admin/v2/package-summary.html) and [Data API](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/data/v2/package-summary.html)) to learn how to interact with Cloud Bigtable using this Client Library.

## Concepts

Cloud Bigtable is composed of instances, clusters, nodes and tables.

### Instances
Instances are containers for clusters.

### Clusters
Clusters represent the actual Cloud Bigtable service. Each cluster belongs to a single Cloud Bigtable instance, and an instance can have up to 4 clusters. When your application sends requests to a Cloud Bigtable instance, those requests are actually handled by one of the clusters in the instance.

### Nodes
Each cluster in a production instance has 3 or more nodes, which are compute resources that Cloud Bigtable uses to manage your data.

### Tables
Tables contain the actual data and are replicated across all of the clusters in an instance.

## Clients
The Cloud Bigtable API consists of:

### Data API
Allows callers to persist and query data in a table. It's exposed by [BigtableDataClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/data/v2/BigtableDataClient.html).

### Admin API
Allows callers to create and manage instances, clusters, tables, and access permissions. This API is exposed by [BigtableInstanceAdminClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/admin/v2/BigtableInstanceAdminClient.html) for instance and cluster level resources, and by [BigtableTableAdminClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/admin/v2/BigtableTableAdminClient.html) for table management.

For quick reference:

* [BigtableDataClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/data/v2/BigtableDataClient.html) is the data client.
* [BigtableInstanceAdminClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/admin/v2/BigtableInstanceAdminClient.html) is the instance admin client.
* [BigtableTableAdminClient](https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigtable/admin/v2/BigtableTableAdminClient.html) is the table admin client.

#### Calling Cloud Bigtable

The Cloud Bigtable API is split into 3 parts: the Data API, the Instance Admin API and the Table Admin API.

Here is a code snippet showing simple usage of the Data API. Add the following imports at the top of your file:

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Query;
import com.google.cloud.bigtable.data.v2.models.Row;
```

Then, to make a query to Bigtable, use the following code:
```java
// IDs of the project, instance and table to read from
String projectId = "my-project";
String instanceId = "my-instance";
String tableId = "my-table";

// Create the client.
// Please note that creating the client is a very expensive operation
// and should only be done once and shared in an application.
BigtableDataClient dataClient = BigtableDataClient.create(projectId, instanceId);

try {
  // Query a table
  Query query = Query.create(tableId)
      .range("a", "z")
      .limit(26);

  for (Row row : dataClient.readRows(query)) {
    System.out.println(row.getKey());
  }
} finally {
  dataClient.close();
}
```

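The Data API also persists data. As a minimal sketch using the same `dataClient` and `tableId` as above (the `"cf"` family and `"greeting"` qualifier are illustrative names that must already exist in your table):

```java
import com.google.cloud.bigtable.data.v2.models.RowMutation;

// Write a single cell to the row "row-key-1".
// "cf" and "greeting" are placeholder family/qualifier names.
RowMutation mutation = RowMutation.create(tableId, "row-key-1")
    .setCell("cf", "greeting", "Hello, Bigtable!");
dataClient.mutateRow(mutation);
```
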
The Admin APIs are similar. Here is a code snippet showing how to create a table. Add the following imports at the top of your file:

```java
import static com.google.cloud.bigtable.admin.v2.models.GCRules.GCRULES;
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminClient;
import com.google.cloud.bigtable.admin.v2.models.CreateTableRequest;
import com.google.cloud.bigtable.admin.v2.models.Table;
```

Then, to create a table, use the following code:
```java
String projectId = "my-project";
String instanceId = "my-instance";

BigtableTableAdminClient tableAdminClient = BigtableTableAdminClient
    .create(projectId, instanceId);

try {
  tableAdminClient.createTable(
      CreateTableRequest.of("my-table")
          .addFamily("my-family")
  );
} finally {
  tableAdminClient.close();
}
```
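The `GCRULES` import above is there for garbage-collection policies on column families. As a minimal sketch using the same `tableAdminClient` (the table name is illustrative), a family can be created with a rule that keeps only the two most recent cell versions:

```java
// "my-gc-table" is an illustrative table name.
tableAdminClient.createTable(
    CreateTableRequest.of("my-gc-table")
        .addFamily("my-family", GCRULES.maxVersions(2))
);
```
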
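Instance and cluster management follows the same pattern through the instance admin client. A minimal sketch of creating a production instance with one cluster; the instance ID, cluster ID and zone below are illustrative:

```java
import com.google.cloud.bigtable.admin.v2.BigtableInstanceAdminClient;
import com.google.cloud.bigtable.admin.v2.models.CreateInstanceRequest;
import com.google.cloud.bigtable.admin.v2.models.StorageType;

BigtableInstanceAdminClient instanceAdminClient =
    BigtableInstanceAdminClient.create("my-project");

try {
  // "my-instance", "my-cluster" and the zone are placeholder values.
  instanceAdminClient.createInstance(
      CreateInstanceRequest.of("my-instance")
          .addCluster("my-cluster", "us-central1-f", 3, StorageType.SSD));
} finally {
  instanceAdminClient.close();
}
```
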
TIP: If you are experiencing version conflicts with gRPC, see [Version Conflicts](#version-conflicts).

## OpenCensus Tracing

The Cloud Bigtable client supports [OpenCensus Tracing](https://opencensus.io/tracing/), which gives insight into the client internals and aids in debugging production issues.
By default, this functionality is disabled. For example, to enable tracing using [Google Stackdriver](https://cloud.google.com/trace/docs/):

[//]: # (TODO: figure out how to keep opencensus version in sync with pom.xml)

If you are using Maven, add this to your pom.xml file:
```xml
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-impl</artifactId>
  <version>0.24.0</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-exporter-trace-stackdriver</artifactId>
  <version>0.24.0</version>
  <exclusions>
    <exclusion>
      <groupId>io.grpc</groupId>
      <artifactId>*</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.google.auth</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```
If you are using Gradle, add this to your dependencies:
```Groovy
compile 'io.opencensus:opencensus-impl:0.24.0'
compile 'io.opencensus:opencensus-exporter-trace-stackdriver:0.24.0'
```
If you are using SBT, add this to your dependencies:
```Scala
libraryDependencies += "io.opencensus" % "opencensus-impl" % "0.24.0"
libraryDependencies += "io.opencensus" % "opencensus-exporter-trace-stackdriver" % "0.24.0"
```

At the start of your application, configure the exporter:

```java
import io.opencensus.exporter.trace.stackdriver.StackdriverTraceConfiguration;
import io.opencensus.exporter.trace.stackdriver.StackdriverTraceExporter;

StackdriverTraceExporter.createAndRegister(
    StackdriverTraceConfiguration.builder()
        .setProjectId("YOUR_PROJECT_ID")
        .build());
```

By default, traces are [sampled](https://opencensus.io/tracing/sampling) at a rate of about 1/10,000.
You can configure a higher rate by updating the active tracing params:

```java
import io.opencensus.trace.Tracing;
import io.opencensus.trace.samplers.Samplers;

Tracing.getTraceConfig().updateActiveTraceParams(
    Tracing.getTraceConfig().getActiveTraceParams().toBuilder()
        .setSampler(Samplers.probabilitySampler(0.01))
        .build()
);
```

## OpenCensus Stats

The Cloud Bigtable client supports [OpenCensus Metrics](https://opencensus.io/stats/), which gives insight into the client internals and aids in debugging production issues.
All Cloud Bigtable metrics are prefixed with `cloud.google.com/java/bigtable/`. The metrics will be tagged with:
* `bigtable_project_id`: the project that contains the target Bigtable instance.
  Note that this ID can be different from the project the client is running in and
  from the project the metrics are exported to.
* `bigtable_instance_id`: the instance ID of the target Bigtable instance.
* `bigtable_app_profile_id`: the app profile ID that is used to access the target
  Bigtable instance (see the sketch after this list for where it is configured).

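For context on where `bigtable_app_profile_id` comes from, a minimal sketch of configuring an app profile on the data client via `BigtableDataSettings`; the profile ID below is illustrative:

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.BigtableDataSettings;

// "my-app-profile" is a placeholder; omit setAppProfileId() to use the default profile.
BigtableDataSettings settings = BigtableDataSettings.newBuilder()
    .setProjectId("my-project")
    .setInstanceId("my-instance")
    .setAppProfileId("my-app-profile")
    .build();
BigtableDataClient dataClient = BigtableDataClient.create(settings);
```
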
### Available operation level metric views

* `cloud.google.com/java/bigtable/op_latency`: A distribution of the latency of
  each client method call, across all of its RPC attempts. Tagged by
  operation name and final response status.

* `cloud.google.com/java/bigtable/completed_ops`: The total count of
  method invocations. Tagged by operation name and final response status.

* `cloud.google.com/java/bigtable/read_rows_first_row_latency`: A
  distribution of the latency of receiving the first row in a ReadRows
  operation.

* `cloud.google.com/java/bigtable/attempt_latency`: A distribution of the latency of
  each client RPC, tagged by operation name and the attempt status. Under normal
  circumstances, this will be identical to op_latency. However, when the client
  receives transient errors, op_latency will be the sum of all attempt_latencies
  and the exponential retry delays.

* `cloud.google.com/java/bigtable/attempts_per_op`: A distribution of the number of attempts
  each operation required, tagged by operation name and final operation status.
  Under normal circumstances, this will be 1.

By default, this functionality is disabled. For example, to enable metrics using
[Google Stackdriver](https://cloud.google.com/monitoring/docs/):

[//]: # (TODO: figure out how to keep opencensus version in sync with pom.xml)

If you are using Maven, add this to your pom.xml file:
```xml
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-impl</artifactId>
  <version>0.24.0</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-exporter-stats-stackdriver</artifactId>
  <version>0.24.0</version>
  <exclusions>
    <exclusion>
      <groupId>io.grpc</groupId>
      <artifactId>*</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.google.auth</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```
If you are using Gradle, add this to your dependencies:
```Groovy
compile 'io.opencensus:opencensus-impl:0.24.0'
compile 'io.opencensus:opencensus-exporter-stats-stackdriver:0.24.0'
```
If you are using SBT, add this to your dependencies:
```Scala
libraryDependencies += "io.opencensus" % "opencensus-impl" % "0.24.0"
libraryDependencies += "io.opencensus" % "opencensus-exporter-stats-stackdriver" % "0.24.0"
```

At the start of your application, configure the exporter and enable the Bigtable stats views:

```java
import com.google.cloud.bigtable.data.v2.BigtableDataSettings;
import io.opencensus.exporter.stats.stackdriver.StackdriverStatsConfiguration;
import io.opencensus.exporter.stats.stackdriver.StackdriverStatsExporter;

StackdriverStatsExporter.createAndRegister(
    StackdriverStatsConfiguration.builder()
        .setProjectId("YOUR_PROJECT_ID")
        .build()
);

BigtableDataSettings.enableOpenCensusStats();
```

## Version Conflicts

google-cloud-bigtable depends on gRPC directly, which may conflict with the versions brought
in by other libraries, for example Apache Beam. This happens because internal dependencies
between gRPC libraries are pinned to an exact version of grpc-core
(see [here](https://github.com/grpc/grpc-java/commit/90db93b990305aa5a8428cf391b55498c7993b6e)).
If both google-cloud-bigtable and the other library bring in two gRPC libraries that depend
on different versions of grpc-core, then dependency resolution will fail.
The easiest way to fix this is to depend on the gRPC BOM, which forces all the gRPC
transitive libraries to use the same version.

Add the following to your project's pom.xml.

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-bom</artifactId>
      <version>1.28.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

## Container Deployment

When deploying this client on [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) with [Container-Optimized OS (CoS)](https://cloud.google.com/container-optimized-os/docs/), make sure to provide a CPU configuration in your deployment file. With the default configuration, the JVM detects only 1 CPU, which limits the number of channels the client opens and degrades performance.

Also, the number of `grpc-nio-worker-ELG-1-#` threads is the same as the number of CPUs. These are managed by a single `grpc-default-executor-#` thread, which is shared among multiple client instances.

For example:
```yaml
apiVersion: apps/v1
...
spec:
  ...
  containers:
    - resources:
        requests:
          cpu: "1" # Requests one full CPU core; values above 1 request that many cores from the node.
```
See [Assign CPU Resources to Containers](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit) for more information.
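
To check what the JVM actually detects inside the container, a quick sketch (the class name is illustrative):

```java
public class CpuCheck {
  public static void main(String[] args) {
    // Prints the number of processors visible to the JVM; inside a container
    // this reflects the CPU limits the JVM detected from the container settings.
    System.out.println("Available processors: " + Runtime.getRuntime().availableProcessors());
  }
}
```

If this prints 1 despite a larger CPU request, revisit the resource configuration above.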
