Batches#
Create / interact with a batch of updates / deletes.
Batches provide the ability to execute multiple operations in a single request to the Cloud Datastore API.
See https://cloud.google.com/datastore/docs/concepts/entities#Datastore_Batch_operations
class gcloud.datastore.batch.Batch(client)[source]#
Bases: object

An abstraction representing a collected group of updates / deletes.

Used to build up a bulk mutation.

For example, the following snippet of code will put the two save operations and the delete operation into the same mutation, and send them to the server in a single API request:

>>> from gcloud import datastore
>>> client = datastore.Client()
>>> batch = client.batch()
>>> batch.put(entity1)
>>> batch.put(entity2)
>>> batch.delete(key3)
>>> batch.commit()
You can also use a batch as a context manager, in which case commit() will be called automatically if its block exits without raising an exception:

>>> with batch:
...     batch.put(entity1)
...     batch.put(entity2)
...     batch.delete(key3)
By default, no updates will be sent if the block exits with an error:
>>> with batch:
...     do_some_work(batch)
...     raise Exception()  # rolls back
Parameters: client (gcloud.datastore.client.Client) – The client used to connect to datastore.
begin()[source]#
Begins a batch.

This method is called automatically when entering a with statement; however, it can be called explicitly if you don't want to use a context manager.

Overridden by gcloud.datastore.transaction.Transaction.

Raises: ValueError if the batch has already begun.
commit()[source]#
Commits the batch.

This is called automatically upon exiting a with statement; however, it can be called explicitly if you don't want to use a context manager.
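As a sketch of the non-context-manager flow (assuming client is a datastore.Client, and entity1 and key3 are an entity and a complete key as in the earlier example):

>>> batch = client.batch()
>>> batch.begin()
>>> batch.put(entity1)
>>> batch.delete(key3)
>>> batch.commit()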
connection#
Getter for the connection over which the batch will run.

Return type: gcloud.datastore.connection.Connection
Returns: The connection over which the batch will run.
delete(key)[source]#
Remember a key to be deleted during commit().

Parameters: key (gcloud.datastore.key.Key) – the key to be deleted.
Raises: ValueError if the key is not complete, or if the key's project does not match ours.
mutations#
Getter for the changes accumulated by this batch.

Every batch is committed with a single commit request containing all the work to be done as mutations. Inside a batch, calling put() with an entity, or delete() with a key, builds up the request by adding a new mutation. This getter returns the protobuf that has been built up so far.

Return type: iterable
Returns: The list of _generated.datastore_pb2.Mutation protobufs to be sent in the commit request.
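To illustrate, each put() or delete() inside a batch appends one mutation to this collection (a sketch; entity1 is assumed to be an entity with a complete key):

>>> batch = client.batch()
>>> batch.begin()
>>> batch.put(entity1)
>>> len(batch.mutations)  # one mutation queued so far
1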
namespace#
Getter for the namespace in which the batch will run.

Return type: str
Returns: The namespace in which the batch will run.
project#
Getter for the project in which the batch will run.

Return type: str
Returns: The project in which the batch will run.
put(entity)[source]#
Remember an entity's state to be saved during commit().

Note

Any existing properties for the entity will be replaced by those currently set on this instance. Already-stored properties which do not correspond to keys set on this instance will be removed from the datastore.

Note

Property values which are "text" (unicode in Python 2, str in Python 3) map to string_value in the datastore; values which are "bytes" (str in Python 2, bytes in Python 3) map to blob_value.

When an entity has a partial key, calling commit() sends it as an insert mutation and the key is completed. On return, the key for the entity passed in is updated to match the key ID assigned by the server.

Parameters: entity (gcloud.datastore.entity.Entity) – the entity to be saved.
Raises: ValueError if the entity has no key assigned, or if the key's project does not match ours.
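The partial-key behavior above can be sketched as follows (assuming client is a datastore.Client; the 'Task' kind is illustrative, and the assigned ID is whatever the server returns):

>>> from gcloud import datastore
>>> task = datastore.Entity(key=client.key('Task'))  # partial key: no ID yet
>>> task.key.is_partial
True
>>> with client.batch() as batch:
...     batch.put(task)
>>> task.key.is_partial  # the server assigned an ID during commit
False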
rollback()[source]#
Rolls back the current batch.

Marks the batch as aborted (can't be used again).

Overridden by gcloud.datastore.transaction.Transaction.
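For example, queued work can be discarded explicitly when not using a context manager (a sketch; client and entity1 are as in the earlier examples):

>>> batch = client.batch()
>>> batch.begin()
>>> batch.put(entity1)
>>> batch.rollback()  # nothing is sent; the batch cannot be reused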