Integrate Mistral AI as an embedding provider

Integrate Mistral AI as an external embedding provider for Astra DB vectorize to leverage Mistral AI’s embeddings API within Astra DB Serverless.

Prerequisites

To integrate Mistral AI as an external embedding provider, you need the following:

  • An active Astra account with a Serverless (Vector) database.

  • A Mistral AI account that can create API keys.

Create the Mistral AI API key

Log in to your Mistral AI account and create a new API key with unrestricted access to the API. Make sure to copy the API key to a secure location.

Don’t modify or delete the API key in your Mistral AI account after you’ve added it to Astra. This breaks the integration.
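
You can optionally smoke-test the new key before adding it to Astra DB by calling Mistral AI’s embeddings endpoint directly. The following is a minimal Python sketch, assuming Mistral’s /v1/embeddings endpoint and an OpenAI-style response body; MISTRAL_API_KEY is a hypothetical environment variable holding your new key.

import os
import requests

# Hedged example: verify the key by requesting one embedding from Mistral AI.
response = requests.post(
    "https://api.mistral.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={"model": "mistral-embed", "input": ["hello world"]},
)
response.raise_for_status()

# The response is assumed to follow the OpenAI-style shape: data[0].embedding.
embedding = response.json()["data"][0]["embedding"]
print(f"Key works; mistral-embed returned a {len(embedding)}-dimensional vector.")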

Add the Mistral AI integration to your organization

Use the Astra Portal to add the Mistral AI embedding provider integration to your Astra organization:

  1. In the Astra Portal navigation menu, click Integrations.

  2. In the All Integrations section, select Mistral AI Embedding provider.

  3. Click Add integration.

  4. In the Add Integration dialog, do the following:

    1. Enter a unique API key name.

      You can’t change API key names. Make sure the name is meaningful and that it helps you identify your Mistral AI API key in Astra DB.

    2. Enter your Mistral AI API key.

    3. In the Add databases to scope section, select the Serverless (Vector) databases that you want to be able to use the Mistral AI API key.

      When you create a collection in a scoped database, you can choose any of the API keys that are available to the database. Astra DB uses the API key to request embeddings from your embedding provider when you load data into the collection.

      You can add up to 10 databases at once, and you can add more databases later.

      For greater access control, you can add multiple API keys, and each API key can have different scoped databases. Additionally, you can add the same database to multiple API key scopes.

      For example, you can have a few broadly-scoped API keys or many narrowly-scoped API keys.

  5. Click Add Integration.

    The Mistral AI integration switches to ACTIVE, and your API key and its scoped databases appear in the API keys section. If you want to add more API keys for this integration, click Add API key.

You can now use the Mistral AI integration to generate embeddings when you create collections in the scoped databases.
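
To confirm that the integration is visible to a scoped database, you can ask the Data API which embedding providers are available. The following is a minimal Python sketch, assuming your astrapy version exposes the findEmbeddingProviders command through the database admin object; attribute names may differ between versions.

import os
from astrapy import DataAPIClient

client = DataAPIClient(os.environ["ASTRA_DB_APPLICATION_TOKEN"])
database = client.get_database(os.environ["ASTRA_DB_API_ENDPOINT"])

# List the embedding providers that this database can use.
# "mistral" should appear once the integration is active and the database is in scope.
providers = database.get_database_admin().find_embedding_providers()
print(providers.embedding_providers.keys())  # attribute name assumed; check your astrapy version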

Add the Mistral AI integration to a new collection

Before you can use the Mistral AI integration to generate embeddings, you must add the integration to a new collection.

  • Astra Portal

  • Python

  • TypeScript

  • Java

Use the Astra Portal to add the Mistral AI integration to a new collection:

  1. In the Astra Portal, go to Databases, and then select your Serverless (Vector) database.

  2. Click Data Explorer.

  3. Optional: Use the Namespace dropdown to select the namespace where you want to create the collection. Otherwise, leave default_keyspace selected to create the collection in the default namespace.

  4. Click Create Collection.

  5. In the Create collection dialog, enter a name for the new collection in the Collection name field.

  6. Under Embedding generation method, select the Mistral AI embedding provider integration.

    If the integration isn’t listed, go to Integrations > Mistral AI, make sure the integration is ACTIVE, and make sure that the database is scoped to the API key that you want to use for your collection.

  7. Complete the following fields:

    • API key: The API key that you want the collection to use to access your embedding provider and generate embeddings. This field is only active if the database is scoped to multiple Mistral AI API keys.

    • Embedding model: The model that you want to use to generate embeddings. For Mistral AI, the only available model is mistral-embed.

    • Dimensions: The number of dimensions that you want the generated vectors to have. Most models automatically populate the Dimensions. You can edit this field if the model supports a range of dimensions or the embedding provider integration uses an endpoint-defined model. Your chosen embedding model must support the specified number of dimensions.

    • Similarity metric: The method you want to use to calculate vector similarities.

      The available metrics are Cosine, Dot Product, and Euclidean.

  8. Click Create collection.

    If you get a Collection Limit Reached message, you’ll need to delete a collection before you can create a new one.

An empty collection appears in the list of collections. You can now load data into this collection.

Use the Python client to create a collection that uses the Mistral AI integration.

Initialize the client

If you haven’t done so already, initialize the client before creating a collection:

import os
from astrapy import DataAPIClient
from astrapy.constants import VectorMetric
from astrapy.ids import UUID
from astrapy.info import CollectionVectorServiceOptions

# Initialize the client and get a "Database" object
client = DataAPIClient(os.environ["ASTRA_DB_APPLICATION_TOKEN"])
database = client.get_database(os.environ["ASTRA_DB_API_ENDPOINT"])
print(f"* Database: {database.info().name}\n")

Create a collection integrated with Mistral AI:

collection = database.create_collection(
    "COLLECTION_NAME",
    metric=VectorMetric.COSINE,
    service=CollectionVectorServiceOptions(
        provider="mistral",
        model_name="MODEL_NAME",
        authentication={
            "providerKey": "API_KEY_NAME",
        },
    ),
)
print(f"* Collection: {collection.full_name}\n")

Replace the following:

  • COLLECTION_NAME: The name for your collection.

  • API_KEY_NAME: The name of the Mistral AI API key that you want to use for your collection. Must be the name of an existing Mistral AI API key in the Astra Portal.

  • MODEL_NAME: The model to use to generate embeddings. For Mistral AI, the only available model is mistral-embed. Unless you specify otherwise, the vector dimensions are set automatically based on the chosen model.
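
Most models, including mistral-embed, set the vector dimension automatically. If you prefer to pin it explicitly, create_collection also accepts a dimension argument; the value must match the model. The following is a minimal sketch, assuming mistral-embed’s 1024-dimensional output.

collection = database.create_collection(
    "COLLECTION_NAME",
    dimension=1024,  # must match the embedding model's output size
    metric=VectorMetric.COSINE,
    service=CollectionVectorServiceOptions(
        provider="mistral",
        model_name="mistral-embed",
        authentication={"providerKey": "API_KEY_NAME"},
    ),
)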

Use the TypeScript client to create a collection that uses the Mistral AI integration.

Initialize the client

If you haven’t done so already, initialize the client before creating a collection:

import { DataAPIClient, VectorDoc, UUID } from '@datastax/astra-db-ts';

const { ASTRA_DB_APPLICATION_TOKEN, ASTRA_DB_API_ENDPOINT } = process.env;

// Initialize the client and get a 'Db' object
const client = new DataAPIClient(ASTRA_DB_APPLICATION_TOKEN);
const db = client.db(ASTRA_DB_API_ENDPOINT);

console.log(`* Connected to DB ${db.id}`);

Create a collection integrated with Mistral AI:

(async function () {
  const collection = await db.createCollection('COLLECTION_NAME', {
    vector: {
      metric: 'cosine',
      service: {
        provider: 'mistral',
        modelName: 'MODEL_NAME',
        authentication: {
          providerKey: 'API_KEY_NAME',
        },
      },
    },
  });
  console.log(`* Created collection ${collection.namespace}.${collection.collectionName}`);

Replace the following:

  • COLLECTION_NAME: The name for your collection.

  • API_KEY_NAME: The name of the Mistral AI API key that you want to use for your collection. Must be the name of an existing Mistral AI API key in the Astra Portal.

  • MODEL_NAME: The model to use to generate embeddings. For Mistral AI, the only available model is mistral-embed. Unless you specify otherwise, the vector dimensions are set automatically based on the chosen model.

Use the Java client to create a collection that uses the Mistral AI integration.

Initialize the client

If you haven’t done so already, initialize the client before creating a collection:

import com.datastax.astra.client.Collection;
import com.datastax.astra.client.DataAPIClient;
import com.datastax.astra.client.Database;
import com.datastax.astra.client.model.CollectionOptions;
import com.datastax.astra.client.model.Document;
import com.datastax.astra.client.model.FindIterable;
import com.datastax.astra.client.model.FindOptions;
import com.datastax.astra.client.model.InsertManyResult;
import com.datastax.astra.client.model.SimilarityMetric;

import java.util.UUID;

import static com.datastax.astra.client.model.SimilarityMetric.COSINE;

public class Quickstart {

  public static void main(String[] args) {
    // Loading Arguments
    String astraToken = System.getenv("ASTRA_DB_APPLICATION_TOKEN");
    String astraApiEndpoint = System.getenv("ASTRA_DB_API_ENDPOINT");

    // Initialize the client
    DataAPIClient client = new DataAPIClient(astraToken);
    System.out.println("Connected to AstraDB");

    Database db = client.getDatabase(astraApiEndpoint);
    System.out.println("Connected to Database.");

Create a collection integrated with Mistral AI:

    Collection<Document> collection = db.createCollection("COLLECTION_NAME",
            CollectionOptions.builder()
                    .vectorSimilarity(SimilarityMetric.COSINE)
                    .vectorize("mistral", "MODEL_NAME", "API_KEY_NAME")
                    .build());
    System.out.println("Created a collection");

Replace the following:

  • COLLECTION_NAME: The name for your collection.

  • API_KEY_NAME: The name of the Mistral AI API key that you want to use for your collection. Must be the name of an existing Mistral AI API key in the Astra Portal.

  • MODEL_NAME: The model to use to generate embeddings. For Mistral AI, the only available model is mistral-embed. Unless you specify otherwise, the vector dimensions are set automatically based on the chosen model.

You can’t change a collection’s embedding provider after you create it. To use a different embedding provider, you must create a new collection with a different embedding provider integration.

Load data using vectorize to auto-generate embeddings

Use the following methods to load vector data into a collection and use $vectorize to auto-generate embeddings.

  • Astra Portal

  • Python

  • TypeScript

  • Java

Use the Astra Portal to load a dataset from a JSON or a CSV file.

  1. In the Astra Portal, go to Databases, and then select a database that contains a collection that uses the Mistral AI integration.

  2. Click Data Explorer.

  3. Select the collection that uses the Mistral AI integration.

  4. Click Load Data.

  5. In the Load Data dialog, click Select File.

  6. Select the file on your computer that contains your dataset.

    Once the file upload is complete, the first ten rows of your data appear in the Data Preview section.

    If you get a Selected embedding does not match collection dimensions error, you need to create a new collection with vector dimensions that match your dataset.

  7. Use the Vector Field dropdown to select the field that you want to auto-generate embeddings for.

    The Load Data dialog with Vector Field dropdown expanded.

    The data importer maps the field you select to the reserved top-level $vectorize field and automatically generates an embedding vector from its contents. The resulting documents in the collection store the original text in the special $vectorize field and the generated embedding in the $vector field. The original field name (such as reviewtext) isn’t preserved in the documents in the database.

  8. Optional: Configure field data types.

    In the Data Preview section, use the drop-down controls to change the data type for each field or column.

    The options are:

    • String

    • Number

    • Array

    • Object

    • Vector

    Data type selections you make in the Data Preview section only apply to the initial data that you load (with the exception of Vector, which permanently maps the field to the reserved key $vector). These selections aren’t fixed in the schema, and don’t apply to documents inserted later on. The same field can be a string in one document, and a number in another. You can also have different sets of fields in different documents in the same collection.

  9. Click Load Data.

Once your dataset has loaded, you can interact with it and do a vector search using the Data Explorer and the client APIs.

Use the Python client to insert documents and auto-generate embeddings with vectorize:

# Insert documents into the collection.
# (UUIDs here are version 7.)
documents = [
    {
        "_id": UUID("018e65c9-df45-7913-89f8-175f28bd7f74"),
        "$vectorize": "Chat bot integrated sneakers that talk to you",
    },
    {
        "_id": UUID("018e65c9-e1b7-7048-a593-db452be1e4c2"),
        "$vectorize": "An AI quilt to help you sleep forever",
    },
    {
        "_id": UUID("018e65c9-e33d-749b-9386-e848739582f0"),
        "$vectorize": "A deep learning display that controls your mood",
    },
]
insertion_result = collection.insert_many(documents)
print(f"* Inserted {len(insertion_result.inserted_ids)} items.\n")

Use the TypeScript client to insert documents and auto-generate embeddings with vectorize:

  // Insert documents into the collection (using UUIDv7s)
  const documents = [
    {
      _id: new UUID('018e65c9-df45-7913-89f8-175f28bd7f74'),
      $vectorize: 'Chat bot integrated sneakers that talk to you',
    },
    {
      _id: new UUID('018e65c9-e1b7-7048-a593-db452be1e4c2'),
      $vectorize: 'An AI quilt to help you sleep forever',
    },
    {
      _id: new UUID('018e65c9-e33d-749b-9386-e848739582f0'),
      $vectorize: 'A deep learning display that controls your mood',
    },
  ];

  try {
    const inserted = await collection.insertMany(documents);
    console.log(`* Inserted ${inserted.insertedCount} items.`);
  } catch (e) {
    console.log('* Documents found on DB already. Let\'s move on!');
  }

Use the Java client to insert documents and auto-generate embeddings with vectorize:

// Insert documents into the collection
InsertManyResult insertResult = collection.insertMany(
  new Document()
   .id(UUID.fromString("018e65c9-df45-7913-89f8-175f28bd7f74"))
   .vectorize("Chat bot integrated sneakers that talk to you"),
  new Document()
   .id(UUID.fromString("018e65c9-e1b7-7048-a593-db452be1e4c2"))
   .vectorize("An AI quilt to help you sleep forever"),
  new Document()
   .id(UUID.fromString("018e65c9-e33d-749b-9386-e848739582f0"))
   .vectorize("A deep learning display that controls your mood")
);
System.out.println("Insert " + insertResult.getInsertedIds().size() + " items.");

Search your data with vectorize

Perform a similarity search using text, rather than a vector.

  • Astra Portal

  • Python

  • TypeScript

  • Java

Use the Astra Portal to perform a search with vectorize:

  1. In the Astra Portal, go to Databases, and then select your Serverless (Vector) database.

  2. Click Data Explorer.

  3. Select the Namespace and Collection that contain the data you want to view.

    Your data is displayed in the Collection Data section. The field you configured to auto-generate embeddings is notated with ($vectorize) in the column title. The $vector field contains the generated embeddings.

  4. Enter a text query into the Hybrid Search field, and then click Apply.

    Astra DB auto-generates a vector from the text query and performs a similarity search. The search uses the similarity metric that you chose when you created the collection.

  5. Optional: Use Add Filter to filter your search results by the other fields in the collection. For more information about using filters, see Add a metadata filter.

The Collection Data section updates to show the rows that match your search criteria.

Use the Python client to perform a search with vectorize:

# Perform a similarity search
query = "I'd like some talking shoes"
results = collection.find(
    sort={"$vectorize": query},
    limit=2,
    projection={"$vectorize": True},
    include_similarity=True,
)
print(f"Vector search results for '{query}':")
for document in results:
    print("    ", document)

Use the TypeScript client to perform a search with vectorize:

  // Perform a similarity search
  const cursor = await collection.find({}, {
    sort: { $vectorize: 'shoes' },
    limit: 2,
    projection: { $vectorize: true },
    includeSimilarity: true,
  });

  console.log('* Search results:')
  for await (const doc of cursor) {
    console.log('  ', doc.$vectorize, doc.$similarity);
  }

Use the Java client to perform a search with vectorize:

// Perform a similarity search
FindOptions findOptions = new FindOptions()
       .limit(2)
       .includeSimilarity()
       .sort("I'd like some talking shoes");
FindIterable<Document> results = collection.find(findOptions);
for (Document document : results) {
   System.out.println("Document: " + document);
}

Manage scoped databases

For each API key, you select the databases that can use that API key. These are referred to as scoped databases.

To change the scoped databases for an existing Mistral AI API key, do the following:

  1. In the Astra Portal navigation menu, click Integrations, and then select Mistral AI Embedding provider.

  2. In the API keys section, expand the API key to show the list of scoped databases.

  3. To remove a database from the API key’s scope, click Delete. In the confirmation dialog, enter the database name, and then click Remove scope.

  4. To add a database to the API key’s scope, click More, and then select Add database. Select the Serverless (Vector) database that you want to add to the scope, and then click Add database.

Remove an existing API key from the Mistral AI integration

Removing an API key immediately disables $vectorize embedding generation for any collections that used the removed key. Make sure the API key is not used by any active collections before you remove it.

Removing an API key from Astra DB Serverless does not delete it from your Mistral AI account.

For more information, see Change providers or credentials.

To remove an existing Mistral AI API key:

  1. In the Astra Portal navigation menu, click Integrations, and then select Mistral AI Embedding provider.

  2. In the API keys section, locate the API key that you want to remove, click More, and then select Remove API key.

  3. In the confirmation dialog, enter the API key name, and then click Remove key.

  4. Log in to your Mistral AI account and delete the API key if you don’t plan to reuse it.

Remove the Mistral AI integration from your organization

To remove the Mistral AI embedding provider integration from your Astra organization, remove all existing Mistral AI API keys.
