This document is no longer maintained.
DataStax Enterprise 3.2 (EOSL)
  • About DataStax Enterprise
  • Upgrading
  • Installing
    • Installing on RHEL-based systems
    • Installing on Debian-based systems
    • Installing the binary tarball
    • Installing on SUSE
    • On cloud providers
      • Initializing a DSE cluster on EC2
    • Installing prior releases
  • Security
    • Security management
    • Authenticating with Kerberos
      • Creating users
      • Enabling Kerberos security
      • Using cqlsh with Kerberos security
    • Client-to-node encryption
    • Node-to-node encryption
    • Server certificates
    • Installing cqlsh security
      • Running cqlsh
    • Transparent data encryption
      • Encrypting data
      • Table encryption options
      • Migrating encrypted tables
    • Data auditing
      • Log formats
      • Configuring auditing
    • Internal authentication
      • Configuring internal authentication and authorization
      • Changing the default superuser
      • Enabling internal security without downtime
      • cqlsh login
    • Managing object permissions
    • Configuring keyspace replication
    • Configuring firewall ports
  • DSE Analytics with Hadoop
    • Getting started
      • Hadoop getting started tutorial
      • Analytics node configuration
    • Using the job tracker node
      • Setting the job tracker node
      • Using common Hadoop commands
      • Managing the job tracker using dsetool commands
      • Changing the job tracker client port
    • About the Cassandra File System
    • Using the cfs-archive to store huge files
    • Using Hive
      • Running Hive
      • Browsing through Cassandra tables in Hive
      • Creating or altering CQL data from Hive
      • Using a managed table to load local data
      • Using an external file system
      • Unsupported data type example
      • Example: Use a CQL composite partition key
      • Using CQL collections
      • Creating a Hive CQL output query
      • Using a custom UDF
      • Using pushdown predicates
      • Using count
      • Handling schema changes
      • MapReduce tuning
      • Starting the server
      • Setting the Job Tracker node for Hive
      • Recreate metadata after decommission
    • Using the DataStax ODBC driver for Hive on Windows
      • Configuring the driver
      • Using the DataStax ODBC driver for Hive
    • Using Mahout
      • Using Mahout commands
    • Using Pig
      • CQL 3 pushdown filter
      • Running the Pig demo
      • Example: Save relations
      • Example: Primary key
      • Example: Library data
      • Data access
      • Using the CqlStorage handler
      • Saving a Pig relation to Cassandra
      • Creating a URL-encoded prepared statement
      • Formatting Pig data
    • Using Sqoop
      • Running the Sqoop demo
      • Checking imported data
      • Cassandra options to the import command
  • DSE Search with Solr
    • Getting Started with Solr
    • Solr support for CQL 3
    • Defining key Solr terms
    • Installing Solr nodes
    • Solr tutorial
      • Create Cassandra table
      • Import data
      • Create a search index
      • Exploring the Solr Admin
      • Simple search
      • Faceted search
      • Solr HTTP API
    • Configuring Solr
      • Mapping of Solr types
      • Legacy mapping of Solr types
      • Configuring the Solr type mapping version
      • Changing Solr Types
      • Configuring search components
      • Configuring multithreaded queries
      • Configuring the schema
      • Configuring the Solr library path
      • Configuring the Data Import Handler
    • Creating an index for searching
      • Uploading the schema and configuration
      • Creating a Solr core
      • Reloading a Solr core
      • Rebuilding an index using the UI
      • Checking indexing status
      • Adding and viewing index resources
    • Using DSE Search/Solr
      • Inserting, indexing, and searching data
      • Example: Using a CQL collection set
      • Inserting/updating data
      • Using dynamic fields
      • Deleting Solr data
      • Using copy fields
      • Viewing Solr core status
    • Querying search results
      • Using SolrJ and other Solr clients
      • Shard selection
      • Using the ShardRouter MBean
      • Using the Solr HTTP API
      • Delete by id
      • Joining cores
      • Limiting columns indexed and returned by a query
      • Querying multiple tables
      • Querying using autocomplete/spellcheck
      • Using CQL
      • Using eDisMax
    • Capacity planning
    • Mixing workloads
    • Common operations
      • Handling inconsistencies in query results
      • Adding, decommissioning, repairing a node
      • Shuffling shards to balance the load
      • Managing the location of Solr data
      • Solr log messages
      • Changing the Solr connector port
      • Securing a Solr cluster
      • Fast repair
      • Excluding hosts from Solr-distributed queries
      • Expiring a DSE Search column
      • Changing the HTTP interface to Apache JServe Protocol
    • Tuning DSE Search performance
      • Using table compression
      • Configuring the update handler and autoSoftCommit
      • Changing the stack size and memtable space
      • Managing the consistency level
      • Configuring the available indexing threads
      • Managing caching
      • Tuning index size and range query speed
      • Increasing performance
      • Changing replication factor
      • Configuring re-index
    • Transforming data
      • Reference implementation
    • DSE vs. Open source
  • Deploying
    • Production deployment planning
    • Configuring replication
    • Single data center deployment
    • Multiple data center deployment
    • Expanding an AMI cluster
  • Moving data to/from other databases
  • Reference
    • Analytics tools: dse commands and dsetool
    • Installing glibc on Oracle Linux
    • Tarball file locations
    • Package file locations
    • Configuration (dse.yaml)
    • Starting and stopping DSE
      • Starting as a service
      • Starting as a stand-alone process
      • Stopping a node
      • Verifying that DSE is running
    • Pre-flight check tool
    • Troubleshooting
    • Cassandra Log4j appender
      • Log4j search demo
  • Release notes
© DataStax, Inc. All rights reserved. Updated: 27 July 2017 Build time: 27 July 2017 11:02:49.066

DataStax, Titan, and TitanDB are registered trademarks of DataStax, Inc. and its subsidiaries in the United States and/or other countries.

Apache, Apache Cassandra, Tomcat, Lucene, Solr, Hadoop, Spark, and TinkerPop are trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States and/or other countries.