Using the Northwind demo graph with Spark OLAP jobs

Run OLAP queries against the Northwind demo graph data.

About this task

The Northwind demo, included with the DSE demos, provides a script that creates a graph of data for a fictional trading company.

In this task, you’ll use the Gremlin console to create the Northwind graph, snapshot part of the graph, and run a count operation on the subgraph using the SparkGraphComputer.

Prerequisites

Procedure

  1. Load the Northwind graph and supplemental data using the graphloader tool:

    graphloader -graph northwind -address localhost graph-examples/northwind/northwind-mapping.groovy -inputpath graph-examples/northwind/data &&
    graphloader -graph northwind -address localhost graph-examples/northwind/supplemental-data-mapping.groovy -inputpath graph-examples/northwind/data/
  2. Start the Gremlin console using the dse gremlin-console command:

    dse gremlin-console
  3. Alias the traversal to the Northwind graph using the default OLTP traversal source:

    :remote config alias g northwind.g
  4. Set the schema mode to Development.

    To allow modifying the schema for the connected graph database, you must set the mode to Development each session. The default schema mode for DataStax Graph is Production, which doesn’t allow you to modify the graph’s schema.

    schema.config().option('graph.schema_mode').set('Development')
  5. Enable the use of scans and lambdas.

    schema.config().option('graph.allow_scan').set('true')
    graph.schema().config().option('graph.traversal_sources.g.restrict_lambda').set(false)
  6. Look at the schema of the Northwind graph:

    schema.describe()
  7. Alias g to the Northwind analytics OLAP traversal source for one-off analytic queries:

    :remote config alias g northwind.a
    ==>g=northwind.a
  8. Count the number of vertices using the OLAP traversal source:

    g.V().count()
    ==>3294

    When you alias g to the OLAP traversal source database_name.a, DSE Analytics is the back end for the workload.

  9. Store subgraphs into snapshots using graph.snapshot().

    When you need to run multiple OLAP queries on a graph in one session, use snapshots of the graph as the traversal source.

    employees = graph.snapshot().vertices('employee').create()
    ==>graphtraversalsource[hadoopgraph[persistedinputrdd->persistedoutputrdd], sparkgraphcomputer]
    categories = graph.snapshot().vertices('category').create()
    ==>graphtraversalsource[hadoopgraph[persistedinputrdd->persistedoutputrdd], sparkgraphcomputer]

    The snapshot() method returns an OLAP traversal source using the SparkGraphComputer.

  10. Run operations on the snapshot graphs.

    Count the number of employee vertices in the snapshot graph:

    employees.V().count()
    ==>9

    Count the number of category vertices in the snapshot graph:

    categories.V().count()
    ==>8
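
    The snapshot traversal sources created above support the full Gremlin traversal API, so you can run further analytic queries without rescanning the base graph. The following sketch is illustrative only: the lastName property key is an assumption about the Northwind schema, which schema.describe() in step 6 can confirm.

    // Tally employee vertices by a property key; 'lastName' is an
    // assumed property key from the Northwind schema.
    employees.V().groupCount().by('lastName')

    // The OLAP traversal source g can also count edges as a Spark job:
    g.E().count()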

© 2024 DataStax