cassandra.cluster - Clusters and Sessions

class cassandra.cluster.Cluster([contact_points=('127.0.0.1',)][, port=9042][, executor_threads=2], **attr_kwargs)[source]

The main class to use when interacting with a Cassandra cluster. Typically, one instance of this class will be created for each separate Cassandra cluster that your application interacts with.

Example usage:

>>> from cassandra.cluster import Cluster
>>> cluster = Cluster(['192.168.1.1', '192.168.1.2'])
>>> session = cluster.connect()
>>> session.execute("CREATE KEYSPACE ...")
>>> ...
>>> cluster.shutdown()

Any of the mutable Cluster attributes may be set as keyword arguments to the constructor.

cql_version = None

If a specific version of CQL should be used, this may be set to that string version. Otherwise, the highest CQL version supported by the server will be automatically used.

protocol_version = 4

The maximum version of the native protocol to use.

The driver will automatically downgrade version based on a negotiation with the server, but it is most efficient to set this to the maximum supported by your version of Cassandra. Setting this will also prevent conflicting versions negotiated if your cluster is upgraded.

Version 2 of the native protocol adds support for lightweight transactions, batch operations, and automatic query paging. The v2 protocol is supported by Cassandra 2.0+.

Version 3 of the native protocol adds support for protocol-level client-side timestamps (see Session.use_client_timestamp), serial consistency levels for BatchStatement, and an improved connection pool.

Version 4 of the native protocol adds a number of new types, server warnings, new failure messages, and custom payloads. Details can be found in the project docs.

The following table describes the native protocol versions that are supported by each version of Cassandra:

Cassandra Version    Protocol Versions
1.2                  1
2.0                  1, 2
2.1                  1, 2, 3
2.2                  1, 2, 3, 4

port = 9042

The server-side port to open connections to. Defaults to 9042.

compression = True

Controls compression for communications between the driver and Cassandra. If left as the default of True, either lz4 or snappy compression may be used, depending on what is supported by both the driver and Cassandra. If both are fully supported, lz4 will be preferred.

You may also set this to ‘snappy’ or ‘lz4’ to request that specific compression type.

Setting this to False disables compression.
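
For example, a minimal sketch requesting lz4 explicitly (this assumes the lz4 Python package is installed on the client):

from cassandra.cluster import Cluster

# Request lz4 compression explicitly (requires the 'lz4' Python package)
cluster = Cluster(['127.0.0.1'], compression='lz4')

# Or disable compression entirely
cluster = Cluster(['127.0.0.1'], compression=False)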

auth_provider

When protocol_version is 2 or higher, this should be an instance of a subclass of AuthProvider, such as PlainTextAuthProvider.

When protocol_version is 1, this should be a function that accepts one argument, the IP address of a node, and returns a dict of credentials for that node.

When not using authentication, this should be left as None.
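
A minimal sketch using PlainTextAuthProvider against a cluster configured with PasswordAuthenticator (the credentials shown are placeholders):

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# For protocol_version 2+; pairs with Cassandra's PasswordAuthenticator
auth_provider = PlainTextAuthProvider(username='cassandra', password='cassandra')
cluster = Cluster(['127.0.0.1'], auth_provider=auth_provider)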

load_balancing_policy = None

An instance of policies.LoadBalancingPolicy or one of its subclasses.

Changed in version 2.6.0.

Defaults to TokenAwarePolicy(DCAwareRoundRobinPolicy()) when using CPython (where the murmur3 extension is available); DCAwareRoundRobinPolicy otherwise. The default local DC will be chosen from the contact points.

Please see DCAwareRoundRobinPolicy for a discussion on default behavior with respect to DC locality and remote nodes.
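
A sketch pinning the policy to a specific local datacenter ('DC1' is a placeholder name):

from cassandra.cluster import Cluster
from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy

# Route each request to a replica in the local DC when possible
policy = TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc='DC1'))
cluster = Cluster(['127.0.0.1'], load_balancing_policy=policy)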

reconnection_policy = <cassandra.policies.ExponentialReconnectionPolicy object>

An instance of policies.ReconnectionPolicy. Defaults to an instance of ExponentialReconnectionPolicy with a base delay of one second and a max delay of ten minutes.
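
To override the defaults, a sketch configuring a faster-capped backoff:

from cassandra.cluster import Cluster
from cassandra.policies import ExponentialReconnectionPolicy

# Back off exponentially from 1 second up to a 60 second ceiling
cluster = Cluster(
    ['127.0.0.1'],
    reconnection_policy=ExponentialReconnectionPolicy(base_delay=1.0, max_delay=60.0))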

default_retry_policy = <cassandra.policies.RetryPolicy object>

A default policies.RetryPolicy instance to use for all Statement objects which do not have a retry_policy explicitly set.

conviction_policy_factory = <class 'cassandra.policies.SimpleConvictionPolicy'>

A factory function which creates instances of policies.ConvictionPolicy. Defaults to policies.SimpleConvictionPolicy.

address_translator = <cassandra.policies.IdentityTranslator object>

policies.AddressTranslator instance to be used in translating server node addresses to driver connection addresses.

connection_class = <class 'cassandra.io.asyncorereactor.AsyncoreConnection'>

This determines what event loop system will be used for managing I/O with Cassandra. These are the current options:

By default, AsyncoreConnection will be used, which uses the asyncore module in the Python standard library.

If libev is installed, LibevConnection will be used instead.

If gevent or eventlet monkey-patching is detected, the corresponding connection class will be used automatically.

metrics_enabled = False

Whether or not metric collection is enabled. If enabled, metrics will be an instance of Metrics.

metrics = None

An instance of cassandra.metrics.Metrics if metrics_enabled is True, else None.

metadata = None

An instance of cassandra.metadata.Metadata.

ssl_options = None

An optional dict which will be used as kwargs for ssl.wrap_socket() when new sockets are created. This should be used when client encryption is enabled in Cassandra.

By default, a ca_certs value should be supplied (the value should be a string pointing to the location of the CA certs file), and you probably want to specify ssl_version as ssl.PROTOCOL_TLSv1 to match Cassandra’s default protocol.

Changed in version 3.3.0.

In addition to wrap_socket kwargs, clients may also specify 'check_hostname': True to verify the cert hostname as outlined in RFC 2818 and RFC 6125. Note that this requires the certificate to be transferred, so it should almost always be combined with 'cert_reqs': ssl.CERT_REQUIRED. Note also that this functionality was not built into the Python standard library until Python 2.7.9 / 3.2. To enable this mechanism in earlier versions, patch ssl.match_hostname with a custom or back-ported function.
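
Putting the above together, a sketch of a typical client-encryption configuration (the ca_certs path is a placeholder):

import ssl
from cassandra.cluster import Cluster

ssl_opts = {
    'ca_certs': '/path/to/rootca.crt',   # placeholder path to the CA certs file
    'ssl_version': ssl.PROTOCOL_TLSv1,
    'cert_reqs': ssl.CERT_REQUIRED,      # required for hostname verification
    'check_hostname': True,              # driver 3.3.0+, as described above
}
cluster = Cluster(['127.0.0.1'], ssl_options=ssl_opts)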

sockopts = None

An optional list of tuples which will be used as arguments to socket.setsockopt() for all created sockets.

Note: some drivers find setting TCP_NODELAY beneficial in the context of their execution model. It was not found generally beneficial for this driver. To try it with your own workload, set sockopts = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]

max_schema_agreement_wait = 10

The maximum duration (in seconds) that the driver will wait for schema agreement across the cluster. Defaults to ten seconds. If set <= 0, the driver will bypass schema agreement waits altogether.

control_connection_timeout = 2.0

A timeout, in seconds, for queries made by the control connection, such as querying the current schema and information about nodes in the cluster. If set to None, there will be no timeout for these queries.

idle_heartbeat_interval = 30

Interval, in seconds, on which to heartbeat idle connections. This helps keep connections open through network devices that expire idle connections. It also helps discover bad connections early in low-traffic scenarios. Setting to zero disables heartbeats.

schema_event_refresh_window = 2

Window, in seconds, within which a schema component will be refreshed after receiving a schema_change event.

The driver delays a random amount of time in the range [0.0, window) before executing the refresh. This serves two purposes:

1.) Spread the refresh for deployments with large fanout from C* to client tier, preventing a ‘thundering herd’ problem with many clients refreshing simultaneously.

2.) Remove redundant refreshes. Redundant events arriving within the delay period are discarded, and only one refresh is executed.

Setting this to zero will execute refreshes immediately.

Setting this negative will disable schema refreshes in response to push events (refreshes will still occur in response to schema change responses to DDL statements executed by Sessions of this Cluster).

topology_event_refresh_window = 10

Window, in seconds, within which the node and token list will be refreshed after receiving a topology_change event.

Setting this to zero will execute refreshes immediately.

Setting this negative will disable node refreshes in response to push events.

See schema_event_refresh_window for a discussion of the rationale.

connect_timeout = 5

Timeout, in seconds, for creating new connections.

This timeout covers the entire connection negotiation, including TCP establishment, options passing, and authentication.

schema_metadata_enabled = True

Flag indicating whether internal schema metadata is updated.

When disabled, the driver does not populate Cluster.metadata.keyspaces on connect, or on schema change events. This can be used to speed initial connection, and reduce load on client and server during operation. Turning this off gives away token aware request routing, and programmatic inspection of the metadata model.

token_metadata_enabled = True

Flag indicating whether internal token metadata is updated.

When disabled, the driver does not query node token information on connect, or on topology change events. This can be used to speed initial connection, and reduce load on client and server during operation. It is most useful in large clusters using vnodes, where the token map can be expensive to compute. Turning this off gives away token aware request routing, and programmatic inspection of the token ring.
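
A sketch disabling both metadata subsystems for a connection-time-sensitive client, at the cost of token-aware routing and metadata inspection:

from cassandra.cluster import Cluster

cluster = Cluster(
    ['127.0.0.1'],
    schema_metadata_enabled=False,
    token_metadata_enabled=False)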

connect(keyspace=None)[source]

Creates and returns a new Session object. If keyspace is specified, that keyspace will be the default keyspace for operations on the Session.

shutdown()[source]

Closes all sessions and connections associated with this Cluster. To ensure all connections are properly closed, you should always call shutdown() on a Cluster instance when you are done with it.

Once shut down, a Cluster should not be used for any purpose.

register_user_type(keyspace, user_type, klass)[source]

Registers a class to use to represent a particular user-defined type. Query parameters for this user-defined type will be assumed to be instances of klass. Result sets for this user-defined type will be instances of klass. If no class is registered for a user-defined type, a namedtuple will be used for result sets, and non-prepared statements may not encode parameters for this type correctly.

keyspace is the name of the keyspace that the UDT is defined in.

user_type is the string name of the UDT to register the mapping for.

klass should be a class with attributes whose names match the fields of the user-defined type. The constructor must accept kwargs for each of the fields in the UDT.

This method should only be called after the type has been created within Cassandra.

Example:

cluster = Cluster(protocol_version=3)
session = cluster.connect()
session.set_keyspace('mykeyspace')
session.execute("CREATE TYPE address (street text, zipcode int)")
session.execute("CREATE TABLE users (id int PRIMARY KEY, location address)")

# create a class to map to the "address" UDT
class Address(object):

    def __init__(self, street, zipcode):
        self.street = street
        self.zipcode = zipcode

cluster.register_user_type('mykeyspace', 'address', Address)

# insert a row using an instance of Address
session.execute("INSERT INTO users (id, location) VALUES (%s, %s)",
                (0, Address("123 Main St.", 78723)))

# results will include Address instances
results = session.execute("SELECT * FROM users")
row = results[0]
print(row.id, row.location.street, row.location.zipcode)

register_listener(listener)[source]

Adds a cassandra.policies.HostStateListener subclass instance to the list of listeners to be notified when a host is added, removed, marked up, or marked down.
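
A sketch of a listener implementation ('log' is an assumed application logger):

from cassandra.policies import HostStateListener

class LoggingListener(HostStateListener):

    def on_up(self, host):
        log.info("Host is up: %s", host)

    def on_down(self, host):
        log.warning("Host is down: %s", host)

    def on_add(self, host):
        log.info("Host was added: %s", host)

    def on_remove(self, host):
        log.info("Host was removed: %s", host)

cluster.register_listener(LoggingListener())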

unregister_listener(listener)[source]

Removes a registered listener.

set_max_requests_per_connection(host_distance, max_requests)[source]

Sets a threshold for concurrent requests per connection, above which new connections will be created to a host (up to max connections; see set_max_connections_per_host()).

Pertains to connection pool management in protocol versions {1,2}.

get_max_requests_per_connection(host_distance)[source]
set_min_requests_per_connection(host_distance, min_requests)[source]

Sets a threshold for concurrent requests per connection, below which connections will be considered for disposal (down to core connections; see set_core_connections_per_host()).

Pertains to connection pool management in protocol versions {1,2}.

get_min_requests_per_connection(host_distance)[source]
get_core_connections_per_host(host_distance)[source]

Gets the minimum number of connections per Session that will be opened for each host with HostDistance equal to host_distance. The default is 2 for LOCAL and 1 for REMOTE.

This property is ignored if protocol_version is 3 or higher.

set_core_connections_per_host(host_distance, core_connections)[source]

Sets the minimum number of connections per Session that will be opened for each host with HostDistance equal to host_distance. The default is 2 for LOCAL and 1 for REMOTE.

Protocol version 1 and 2 are limited in the number of concurrent requests they can send per connection. The driver implements connection pooling to support higher levels of concurrency.

If protocol_version is set to 3 or higher, this is not supported (there is always one connection per host, unless the host is remote and connect_to_remote_hosts is False) and using this will result in an UnsupportedOperation.
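
A sketch tuning pool sizes under protocol v2, where pooling applies:

from cassandra.cluster import Cluster
from cassandra.policies import HostDistance

cluster = Cluster(['127.0.0.1'], protocol_version=2)
cluster.set_core_connections_per_host(HostDistance.LOCAL, 4)
cluster.set_max_connections_per_host(HostDistance.LOCAL, 16)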

get_max_connections_per_host(host_distance)[source]

Gets the maximum number of connections per Session that will be opened for each host with HostDistance equal to host_distance. The default is 8 for LOCAL and 2 for REMOTE.

This property is ignored if protocol_version is 3 or higher.

set_max_connections_per_host(host_distance, max_connections)[source]

Sets the maximum number of connections per Session that will be opened for each host with HostDistance equal to host_distance. The default is 8 for LOCAL and 2 for REMOTE.

If protocol_version is set to 3 or higher, this is not supported (there is always one connection per host, unless the host is remote and connect_to_remote_hosts is False) and using this will result in an UnsupportedOperation.

refresh_schema_metadata(max_schema_agreement_wait=None)[source]

Synchronously refresh all schema metadata.

By default, the timeout for this operation is governed by max_schema_agreement_wait and control_connection_timeout.

Passing max_schema_agreement_wait here overrides the cluster-wide max_schema_agreement_wait.

Setting max_schema_agreement_wait <= 0 will bypass schema agreement and refresh schema immediately.

An Exception is raised if schema refresh fails for any reason.

refresh_keyspace_metadata(keyspace, max_schema_agreement_wait=None)[source]

Synchronously refresh keyspace metadata. This applies to keyspace-level information such as replication and durability settings. It does not refresh tables, types, etc. contained in the keyspace.

See refresh_schema_metadata() for a description of max_schema_agreement_wait behavior.

refresh_table_metadata(keyspace, table, max_schema_agreement_wait=None)[source]

Synchronously refresh table metadata. This applies to a table, and any triggers or indexes attached to the table.

See refresh_schema_metadata() for a description of max_schema_agreement_wait behavior.

refresh_user_type_metadata(keyspace, user_type, max_schema_agreement_wait=None)[source]

Synchronously refresh user defined type metadata.

See refresh_schema_metadata() for a description of max_schema_agreement_wait behavior.

refresh_user_function_metadata(keyspace, function, max_schema_agreement_wait=None)[source]

Synchronously refresh user defined function metadata.

function is a cassandra.UserFunctionDescriptor.

See refresh_schema_metadata() for a description of max_schema_agreement_wait behavior.

refresh_user_aggregate_metadata(keyspace, aggregate, max_schema_agreement_wait=None)[source]

Synchronously refresh user defined aggregate metadata.

aggregate is a cassandra.UserAggregateDescriptor.

See refresh_schema_metadata() for a description of max_schema_agreement_wait behavior.

refresh_nodes()[source]

Synchronously refresh the node list and token metadata.

An Exception is raised if node refresh fails for any reason.

set_meta_refresh_enabled(enabled)[source]

Deprecated: use schema_metadata_enabled and token_metadata_enabled instead.

Sets a flag to enable (True) or disable (False) all metadata refresh queries. This applies to both schema and node topology.

Disabling this is useful to minimize refreshes during multiple changes.

Meta refresh must be enabled for the driver to become aware of any cluster topology changes or schema updates.

class cassandra.cluster.Session[source]

A collection of connection pools for each host in the cluster. Instances of this class should not be created directly, only through Cluster.connect().

Queries and statements can be executed through Session instances using the execute() and execute_async() methods.

Example usage:

>>> session = cluster.connect()
>>> session.set_keyspace("mykeyspace")
>>> session.execute("SELECT * FROM mycf")

default_timeout = 10.0

A default timeout, measured in seconds, for queries executed through execute() or execute_async(). This default may be overridden with the timeout parameter for either of those methods.

Setting this to None will cause no timeouts to be set by default.

Please see ResponseFuture.result() for details on the scope and effect of this timeout.

New in version 2.0.0.

default_consistency_level = LOCAL_ONE

The default ConsistencyLevel for operations executed through this session. This default may be overridden by setting the consistency_level on individual statements.

New in version 1.2.0.

Changed in version 3.0.0: default changed from ONE to LOCAL_ONE

default_serial_consistency_level = None

The default ConsistencyLevel for serial phase of conditional updates executed through this session. This default may be overridden by setting the serial_consistency_level on individual statements.

Only valid for protocol_version >= 2.

row_factory = <function named_tuple_factory>

The format to return row results in. By default, each returned row will be a named tuple. You can alternatively use any of the following:

cassandra.query.tuple_factory - return a row as a tuple

cassandra.query.named_tuple_factory - return a row as a named tuple (the default)

cassandra.query.dict_factory - return a row as a dict

cassandra.query.ordered_dict_factory - return a row as an OrderedDict
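
For example, returning rows as dicts (a sketch using the keyspace from the examples above):

from cassandra.cluster import Cluster
from cassandra.query import dict_factory

cluster = Cluster()
session = cluster.connect('mykeyspace')
session.row_factory = dict_factory

# each row is now a dict keyed by column name, e.g. row['street']
rows = session.execute("SELECT * FROM users")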

default_fetch_size = 5000

By default, this many rows will be fetched at a time. Setting this to None will disable automatic paging for large query results. The fetch size can also be specified per-query through Statement.fetch_size.

This only takes effect when protocol version 2 or higher is used. See Cluster.protocol_version for details.

New in version 2.0.0.

use_client_timestamp = True

When using protocol version 3 or higher, write timestamps may be supplied client-side at the protocol level. (Normally they are generated server-side by the coordinator node.) Note that timestamps specified within a CQL query will override this timestamp.

New in version 2.1.0.

encoder = None

An Encoder instance that will be used when formatting query parameters for non-prepared statements. This is not used for prepared statements (because prepared statements give the driver more information about what CQL types are expected, allowing it to accept a wider range of python types).

The encoder uses a mapping from python types to encoder methods (for specific CQL types). This mapping can be modified by users as they see fit. Methods of Encoder should be used for mapping values if possible, because they take precautions to avoid injections and properly sanitize data.

Example:

cluster = Cluster()
session = cluster.connect("mykeyspace")
session.encoder.mapping[tuple] = session.encoder.cql_encode_tuple

session.execute("CREATE TABLE mytable (k int PRIMARY KEY, col tuple<int, ascii>)")
session.execute("INSERT INTO mytable (k, col) VALUES (%s, %s)", [0, (123, 'abc')])

New in version 2.1.0.

client_protocol_handler = <class 'cassandra.protocol._ProtocolHandler'>

Specifies a protocol handler that will be used for client-initiated requests (i.e., not internal driver requests). This can be used to override or extend features such as message or type ser/des.

The default pure python implementation is cassandra.protocol.ProtocolHandler.

When compiled with Cython, there are also built-in faster alternatives. See Faster Deserialization.

execute(statement[, parameters][, timeout][, trace][, custom_payload])[source]

Execute the given query and synchronously wait for the response.

If an error is encountered while executing the query, an Exception will be raised.

statement may be a query string or an instance of cassandra.query.Statement.

parameters may be a sequence or dict of parameters to bind. If a sequence is used, %s should be used as the placeholder for each argument. If a dict is used, %(name)s style placeholders must be used.

timeout should specify a floating-point timeout (in seconds) after which an OperationTimedOut exception will be raised if the query has not completed. If not set, the timeout defaults to default_timeout. If set to None, there is no timeout. Please see ResponseFuture.result() for details on the scope and effect of this timeout.

If trace is set to True, the query will be sent with tracing enabled. The trace details can be obtained using the returned ResultSet object.

custom_payload is a Custom Payloads dict to be passed to the server. If statement is a Statement with its own custom_payload, the message payload will be a union of the two, with the values specified here taking precedence.
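
A sketch of both placeholder styles against a hypothetical users table with id and name columns:

# sequence parameters with %s placeholders
session.execute("INSERT INTO users (id, name) VALUES (%s, %s)", (1, 'alice'))

# dict parameters with %(name)s placeholders
session.execute(
    "INSERT INTO users (id, name) VALUES (%(id)s, %(name)s)",
    {'id': 2, 'name': 'bob'})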

execute_async(statement[, parameters][, trace][, custom_payload])[source]

Execute the given query and return a ResponseFuture object which callbacks may be attached to for asynchronous response delivery. You may also call result() on the ResponseFuture to synchronously block for results at any time.

If trace is set to True, you may get the query trace descriptors using ResponseFuture.get_query_trace() or ResponseFuture.get_all_query_traces() on the future result.

custom_payload is a Custom Payloads dict to be passed to the server. If statement is a Statement with its own custom_payload, the message payload will be a union of the two, with the values specified here taking precedence.

If the server sends a custom payload in the response message, the dict can be obtained following ResponseFuture.result() via ResponseFuture.custom_payload.

Example usage:

>>> session = cluster.connect()
>>> future = session.execute_async("SELECT * FROM mycf")

>>> def log_results(results):
...     for row in results:
...         log.info("Results: %s", row)

>>> def log_error(exc):
...     log.error("Operation failed: %s", exc)

>>> future.add_callbacks(log_results, log_error)

Async execution with blocking wait for results:

>>> future = session.execute_async("SELECT * FROM mycf")
>>> # do other stuff...

>>> try:
...     results = future.result()
... except Exception:
...     log.exception("Operation failed:")

prepare(statement[, custom_payload])[source]

Prepares a query string, returning a PreparedStatement instance which can be used as follows:

>>> session = cluster.connect("mykeyspace")
>>> query = "INSERT INTO users (id, name, age) VALUES (?, ?, ?)"
>>> prepared = session.prepare(query)
>>> session.execute(prepared, (user.id, user.name, user.age))

Or you may bind values to the prepared statement ahead of time:

>>> prepared = session.prepare(query)
>>> bound_stmt = prepared.bind((user.id, user.name, user.age))
>>> session.execute(bound_stmt)

Of course, prepared statements may (and should) be reused:

>>> prepared = session.prepare(query)
>>> for user in users:
...     bound = prepared.bind((user.id, user.name, user.age))
...     session.execute(bound)

Important: PreparedStatements should be prepared only once. Preparing the same query more than once will likely affect performance.

custom_payload is a key value map to be passed along with the prepare message. See Custom Payloads.

shutdown()[source]

Close all connections. Session instances should not be used for any purpose after being shut down.

set_keyspace(keyspace)[source]

Set the default keyspace for all queries made through this Session. This operation blocks until complete.

class cassandra.cluster.ResponseFuture[source]

An asynchronous response delivery mechanism that is returned from calls to Session.execute_async().

There are two ways for results to be delivered:

Synchronously, by calling result().

Asynchronously, by attaching callback and errback functions via add_callback(), add_errback(), and add_callbacks().

query = None

The Statement instance that is being executed through this ResponseFuture.

result()[source]

Return the final result or raise an Exception if errors were encountered. If the final result or error has not been set yet, this method will block until it is set, or the timeout set for the request expires.

Timeout is specified in the Session request execution functions. If the timeout is exceeded, a cassandra.OperationTimedOut will be raised. This is a client-side timeout. For more information about server-side coordinator timeouts, see policies.RetryPolicy.

Example usage:

>>> future = session.execute_async("SELECT * FROM mycf")
>>> # do other stuff...

>>> try:
...     rows = future.result()
...     for row in rows:
...         ... # process results
... except Exception:
...     log.exception("Operation failed:")

get_query_trace(max_wait=None, query_cl=ConsistencyLevel.LOCAL_ONE)[source]

Fetches and returns the query trace of the last response, or None if tracing was not enabled.

Note that this may raise an exception if there are problems retrieving the trace details from Cassandra. If the trace is not available after max_wait, cassandra.query.TraceUnavailable will be raised.

query_cl is the consistency level used to poll the trace tables.

get_all_query_traces()[source]

Fetches and returns the query traces for all query pages, if tracing was enabled.

See note in get_query_trace() regarding possible exceptions.

custom_payload

The custom payload returned from the server, if any. This will only be set by Cassandra servers implementing a custom QueryHandler, and only for protocol_version 4+.

Ensure the future is complete before trying to access this property (call result(), or after callback is invoked). Otherwise it may throw if the response has not been received.

Returns: Custom Payloads.

is_schema_agreed = True

For DDL requests, this may be set to False if the schema agreement poll after the response fails.

Always True for non-DDL requests.

has_more_pages

Returns True if there are more pages left in the query results, False otherwise. This should only be checked after the first page has been returned.

New in version 2.0.0.

warnings

Warnings returned from the server, if any. This will only be set for protocol_version 4+.

Warnings may be returned for such things as oversized batches, or too many tombstones in slice queries.

Ensure the future is complete before trying to access this property (call result(), or after callback is invoked). Otherwise it may throw if the response has not been received.

start_fetching_next_page()[source]

If there are more pages left in the query result, this asynchronously starts fetching the next page. If there are no pages left, QueryExhausted is raised. Also see has_more_pages.

This should only be called after the first page has been returned.

New in version 2.0.0.
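
The usual way to combine these pieces is a small handler object that re-registers its callback for each page. A sketch, where process_row is an assumed application function and mycf follows the table name used in the examples above:

from threading import Event

class PagedResultHandler(object):

    def __init__(self, future):
        self.error = None
        self.finished_event = Event()
        self.future = future
        self.future.add_callbacks(
            callback=self.handle_page,
            errback=self.handle_error)

    def handle_page(self, rows):
        for row in rows:
            process_row(row)  # assumed application function

        if self.future.has_more_pages:
            self.future.start_fetching_next_page()
        else:
            self.finished_event.set()

    def handle_error(self, exc):
        self.error = exc
        self.finished_event.set()

handler = PagedResultHandler(session.execute_async("SELECT * FROM mycf"))
handler.finished_event.wait()
if handler.error:
    raise handler.error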

add_callback(fn, *args, **kwargs)[source]

Attaches a callback function to be called when the final results arrive.

By default, fn will be called with the results as the first and only argument. If *args or **kwargs are supplied, they will be passed through as additional positional or keyword arguments to fn.

If an error is hit while executing the operation, a callback attached here will not be called. Use add_errback() or add_callbacks() if you wish to handle that case.

If the final result has already been seen when this method is called, the callback will be called immediately (before this method returns).

Note: in the case that the result is not available when the callback is added, the callback is executed by IO event thread. This means that the callback should not block or attempt further synchronous requests, because no further IO will be processed until the callback returns.

Important: if the callback you attach results in an exception being raised, the exception will be ignored, so please ensure your callback handles all error cases that you care about.

Usage example:

>>> session = cluster.connect("mykeyspace")

>>> def handle_results(rows, start_time, should_log=False):
...     if should_log:
...         log.info("Total time: %f", time.time() - start_time)
...     ...

>>> future = session.execute_async("SELECT * FROM users")
>>> future.add_callback(handle_results, time.time(), should_log=True)

add_errback(fn, *args, **kwargs)[source]

Like add_callback(), but handles error cases. An Exception instance will be passed as the first positional argument to fn.

add_callbacks(callback, errback, callback_args=(), callback_kwargs=None, errback_args=(), errback_kwargs=None)[source]

A convenient combination of add_callback() and add_errback().

Example usage:

>>> session = cluster.connect()
>>> query = "SELECT * FROM mycf"
>>> future = session.execute_async(query)

>>> import logging
>>> def log_results(results, level=logging.DEBUG):
...     for row in results:
...         log.log(level, "Result: %s", row)

>>> def log_error(exc, query):
...     log.error("Query '%s' failed: %s", query, exc)

>>> future.add_callbacks(
...     callback=log_results, callback_kwargs={'level': logging.INFO},
...     errback=log_error, errback_args=(query,))

class cassandra.cluster.ResultSet[source]

An iterator over the rows from a query result. Also supplies basic equality and indexing methods for backward-compatibility. These methods materialize the entire result set (loading all pages), and should only be used if the total result size is understood. Warnings are emitted when paged results are materialized in this fashion.

You can treat this as a normal iterator over rows:

>>> from cassandra.query import SimpleStatement
>>> statement = SimpleStatement("SELECT * FROM users", fetch_size=10)
>>> for user_row in session.execute(statement):
...     process_user(user_row)

Whenever there are no more rows in the current page, the next page will be fetched transparently. However, note that it is possible for an Exception to be raised while fetching the next page, just like you might see on a normal call to session.execute().

has_more_pages

True if the last response indicated more pages; False otherwise.

current_rows

The list of current page rows. May be empty if the result was empty, or this is the last page.

fetch_next_page()[source]

Manually, synchronously fetch the next page. Supplied for manually retrieving pages and inspecting current_rows. It is not necessary to call this when iterating through results; paging happens implicitly in iteration.

get_query_trace(max_wait_sec=None)[source]

Gets the last query trace from the associated future. See ResponseFuture.get_query_trace() for details.

get_all_query_traces(max_wait_sec_per=None)[source]

Gets all query traces from the associated future. See ResponseFuture.get_all_query_traces() for details.

was_applied

For LWT results, returns whether the transaction was applied.

Result is indeterminate if called on a result that was not an LWT request.

Only valid when one of the internal row factories is in use.
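
A sketch checking the result of a conditional insert (against a hypothetical users table with id and name columns):

result = session.execute(
    "INSERT INTO users (id, name) VALUES (%s, %s) IF NOT EXISTS",
    (1, 'alice'))
if result.was_applied:
    print("row inserted")
else:
    print("a row with this id already existed")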

exception cassandra.cluster.QueryExhausted[source]

Raised when ResponseFuture.start_fetching_next_page() is called and there are no more pages. You can check ResponseFuture.has_more_pages before calling to avoid this.

New in version 2.0.0.

exception cassandra.cluster.NoHostAvailable[source]

Raised when an operation is attempted but all connections are busy, defunct, closed, or resulted in errors when used.

errors = None

A map of the form {ip: exception} which details the particular Exception that was caught for each host the operation was attempted against.
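
A sketch inspecting the per-host errors after a failed connection attempt:

from cassandra.cluster import Cluster, NoHostAvailable

try:
    cluster = Cluster(['192.168.1.1'])
    session = cluster.connect()
except NoHostAvailable as exc:
    # errors maps each attempted host address to the exception it raised
    for host, error in exc.errors.items():
        print("%s: %s" % (host, error))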

exception cassandra.cluster.UserTypeDoesNotExist[source]

An attempt was made to use a user-defined type that does not exist.

New in version 2.1.0.