Cluster#

Warning

gRPC is required for using the Cloud Bigtable API. As of May 2016, grpcio is only supported in Python 2.7, so importing gcloud.bigtable in other versions of Python will fail.

User-friendly container for Google Cloud Bigtable Cluster.

class gcloud.bigtable.cluster.Cluster(cluster_id, instance, serve_nodes=3)[source]#

Bases: object

Representation of a Google Cloud Bigtable Cluster.

We can use a Cluster to:

  • create() itself
  • reload() itself
  • update() itself
  • delete() itself

Note

For now, we leave out the default_storage_type (an enum) which if not sent will end up as data_v2_pb2.STORAGE_SSD.

Parameters:
  • cluster_id (str) – The ID of the cluster.
  • instance (instance.Instance) – The instance where the cluster resides.
  • serve_nodes (int) – (Optional) The number of nodes in the cluster. Defaults to DEFAULT_SERVE_NODES.
copy()[source]#

Make a copy of this cluster.

Copies the local data stored as simple types and copies the client attached to this instance.

Return type:Cluster
Returns:A copy of the current cluster.
create()[source]#

Create this cluster.

Note

Uses the project, instance and cluster_id on the current Cluster in addition to the serve_nodes. To change them before creating, reset the values via

cluster.serve_nodes = 8
cluster.cluster_id = 'i-changed-my-mind'

before calling create().

Return type:Operation
Returns:The long-running operation corresponding to the create operation.
delete()[source]#

Delete this cluster.

Marks a cluster and all of its tables for permanent deletion in 7 days.

Immediately upon completion of the request:

  • Billing will cease for all of the cluster’s reserved resources.
  • The cluster’s delete_time field will be set 7 days in the future.

Soon afterward:

  • All tables within the cluster will become unavailable.

Prior to the cluster’s delete_time:

  • The cluster can be recovered with a call to UndeleteCluster.
  • All other attempts to modify or delete the cluster will be rejected.

At the cluster’s delete_time:

  • The cluster and all of its tables will immediately and irrevocably disappear from the API, and their data will be permanently deleted.
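
The 7-day window described above amounts to simple timestamp arithmetic; the request time below is hypothetical:

```python
from datetime import datetime, timedelta

# On a successful delete(), the cluster's delete_time field is set
# 7 days after the request (hypothetical request time shown).
request_time = datetime(2016, 5, 1, 12, 0, 0)
delete_time = request_time + timedelta(days=7)
print(delete_time)  # 2016-05-08 12:00:00
```
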
classmethod from_pb(cluster_pb, instance)[source]#

Creates a cluster instance from a protobuf.

Parameters:
  • cluster_pb (instance_pb2.Cluster) – A cluster protobuf object.
  • instance (instance.Instance) – The instance that owns the cluster.
Return type:Cluster
Returns:The cluster parsed from the protobuf response.
Raises:ValueError if the cluster name does not match projects/{project}/instances/{instance}/clusters/{cluster_id} or if the parsed project ID does not match the project ID on the client.
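
The ValueError check can be sketched with a plain regular expression. The function and names below are hypothetical stand-ins; the real classmethod parses the protobuf's name field:

```python
import re

# Pattern the cluster name must match (sketch of the from_pb() check).
_CLUSTER_NAME_RE = re.compile(
    r'^projects/(?P<project>[^/]+)/'
    r'instances/(?P<instance>[^/]+)/'
    r'clusters/(?P<cluster_id>[^/]+)$')

def parse_cluster_name(name, expected_project):
    """Return the cluster ID, raising ValueError as from_pb() does."""
    match = _CLUSTER_NAME_RE.match(name)
    if match is None:
        raise ValueError('Cluster name not of expected format.')
    if match.group('project') != expected_project:
        raise ValueError('Project ID does not match the client.')
    return match.group('cluster_id')

print(parse_cluster_name(
    'projects/my-project/instances/my-instance/clusters/my-cluster',
    'my-project'))
# my-cluster
```
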

name#

Cluster name used in requests.

Note

This property will not change if _instance and cluster_id do not, but the return value is not cached.

The cluster name is of the form

"projects/{project}/instances/{instance}/clusters/{cluster_id}"
Return type:str
Returns:The cluster name.
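
The format above can be demonstrated with plain string formatting; the identifiers are hypothetical (the real property reads them from the cluster's instance and client):

```python
# Hypothetical identifiers standing in for the real project/instance IDs.
project = 'my-project'
instance_id = 'my-instance'
cluster_id = 'my-cluster'

# The name property assembles these pieces into the request path:
name = 'projects/{}/instances/{}/clusters/{}'.format(
    project, instance_id, cluster_id)
print(name)
# projects/my-project/instances/my-instance/clusters/my-cluster
```
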
reload()[source]#

Reload the metadata for this cluster.

update()[source]#

Update this cluster.

Note

Updates the serve_nodes. To change it before updating, reset the value via

cluster.serve_nodes = 8

before calling update().

Return type:Operation
Returns:The long-running operation corresponding to the update operation.
gcloud.bigtable.cluster.DEFAULT_SERVE_NODES = 3#

Default number of nodes to use when creating a cluster.

class gcloud.bigtable.cluster.Operation(op_type, op_id, cluster=None)[source]#

Bases: object

Representation of a Google API Long-Running Operation.

In particular, these will be the result of operations on clusters using the Cloud Bigtable API.

Parameters:
  • op_type (str) – The type of operation being performed. Expect create, update or undelete.
  • op_id (int) – The ID of the operation.
  • cluster (Cluster) – The cluster that created the operation.
finished()[source]#

Check if the operation has finished.

Return type:bool
Returns:A boolean indicating if the current operation has completed.
Raises:ValueError if the operation has already completed.