Intro to vector databases
Hyper-Converged Database (HCD) 1.0 introduces a new type of database that enables you to store and search high-dimensional vectors. Vector databases enable use cases that require efficient similarity search.
Data stored in a database is useful, but the context of that data is critical to applications. Vector search compares stored data by similarity to discover connections that may not be explicitly defined.
Data representation
Vector search relies on representing data points as high-dimensional vectors. The choice of vector representation depends on the nature of the data.
For data that consists of text documents, you can convert text to vectors with techniques ranging from simple word-embedding models to Large Language Models (LLMs).
Word2Vec is a relatively simple model that uses a shallow neural network to learn embeddings for words based on their context. The key concept is that Word2Vec generates a single fixed vector for each word, regardless of the context in which the word is used. More complex models, such as LLMs like OpenAI GPT-4 or Meta LLaMA 2, use deep neural networks, specifically transformer architectures, to learn embeddings for words based on their context. Unlike Word2Vec, these models generate contextual embeddings, meaning the same word can have different embeddings depending on the context in which it is used.
Images can be represented using deep learning techniques like convolutional neural networks (CNNs) or pre-trained models such as Contrastive Language Image Pre-training (CLIP). Select a vector representation that captures the essential features of the data.
Embeddings
Embeddings are vectors, often generated by machine learning models, that capture semantic relationships between concepts or objects. Related objects are positioned close to each other in the embedding space.
Preprocess embeddings
You may need to normalize or standardize your vectors before writing them to the database.
Method | Definition |
---|---|
Normalizing | Scale data to a length of one by dividing each element in a vector by the vector’s length, which is also known as its Euclidean norm or L2 norm. |
Standardizing | Shift and scale data for a mean of zero and a standard deviation of one. |

If embeddings are not normalized, the dot product silently returns meaningless query results. When you use OpenAI, PaLM, or SimCSE to generate your embeddings, they are normalized by default. If you use a different library, you may need to normalize your vectors to use the dot product.
An example of normalizing a vector is shown below:
Original:
[0.1, 0.15, 0.3, 0.12, 0.05]
[0.45, 0.09, 0.01, 0.2, 0.11]
[0.1, 0.05, 0.08, 0.3, 0.6]
Normalized:
[0.27, 0.40, 0.80, 0.32, 0.13]
[0.88, 0.18, 0.02, 0.39, 0.21]
[0.15, 0.07, 0.12, 0.44, 0.88]
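The normalization above can be sketched in plain Python (the function names here are illustrative, not part of any HCD API):

```python
import math

vectors = [
    [0.1, 0.15, 0.3, 0.12, 0.05],
    [0.45, 0.09, 0.01, 0.2, 0.11],
    [0.1, 0.05, 0.08, 0.3, 0.6],
]

def normalize(v):
    """Divide each element by the vector's L2 (Euclidean) norm."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def standardize(v):
    """Shift and scale to mean 0 and standard deviation 1."""
    mean = sum(v) / len(v)
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / len(v))
    return [(x - mean) / std for x in v]

normalized = [[round(x, 2) for x in normalize(v)] for v in vectors]
print(normalized)
# -> [[0.27, 0.4, 0.8, 0.32, 0.13], [0.88, 0.18, 0.02, 0.39, 0.21], [0.15, 0.07, 0.12, 0.44, 0.88]]
```

Each normalized vector has length one, which is what the dot product metric requires.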
Define a vector field
It’s important to define the right type and embedding model for your vector fields.
Setting | Guideline |
---|---|
Type | Vector fields use the vector type. |
Embedding model | Select an embedding model for your dataset that creates good structure by ensuring related objects are near each other in the embedding space. You may need to test different embedding models. You must embed the query with the same embedding model you used for the data. |
Popular embedding models
There are many embedding models. Here are some of the most popular models to get you started:
Model | Dimensions |
---|---|
bge-large-en-v1.5 | 1024 |
bge-base-en-v1.5 | 768 |
bge-small-en-v1.5 | 384 |
distiluse-base-multilingual-cased-v2 | 512 |
e5-small-v2 | 384 |
ember-v1 | 1024 |
glove.6B.300d | 300 |
gte-large | 1024 |
gte-base | 768 |
gte-small | 384 |
instructor-xl | 768 |
jina-embeddings-v2-base-en | 768 |
komninos | 300 |
text-embedding-ada-002 | 1536 |
Similarity metrics
Similarity metrics are used to compute the similarity of two vectors. When you create a collection, you can choose one of three metric types:

- `cosine` (default)
- `dot_product`
- `euclidean`

Cosine and dot product are equivalent for normalized vectors. However, if your embeddings are not normalized, don’t use dot product because it silently returns meaningless query results.
Cosine metric
When the metric is set to `cosine`, the database uses cosine similarity to determine how similar two vectors are. Cosine does not require vectors to be normalized.
Given two vectors A and B, the cosine similarity is computed as the dot product of the vectors divided by the product of their magnitudes (lengths). The formula for cosine similarity is:

cosine_similarity(A, B) = (A ⋅ B) / (∥A∥ × ∥B∥)

Where:

- A ⋅ B is the dot product of vectors A and B.
- ∥A∥ is the magnitude of vector A.
- ∥B∥ is the magnitude of vector B.
When returned by HCD, the result is a similarity score, which is a number between 0 and 1:

- A value of 0 indicates that the vectors are diametrically opposed.
- A value of 0.5 indicates that the vectors are orthogonal (perpendicular) and have no match.
- A value of 1 indicates that the vectors are identical in direction.
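A minimal sketch of cosine similarity in plain Python, including the 0-to-1 score mapping described above (the function names are illustrative, not an HCD API):

```python
import math

def cosine_similarity(a, b):
    """Dot product divided by the product of the vectors' magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

def similarity_score(a, b):
    """Map cosine similarity from [-1, 1] onto the [0, 1] score range."""
    return (1 + cosine_similarity(a, b)) / 2

print(similarity_score([1, 0], [1, 0]))   # identical direction -> 1.0
print(similarity_score([1, 0], [0, 1]))   # orthogonal -> 0.5
print(similarity_score([1, 0], [-1, 0]))  # diametrically opposed -> 0.0
```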
Dot product metric
When the metric is set to `dot_product`, the database uses the dot product to determine how similar two vectors are.
The dot product algorithm is about 50% faster than cosine, but it requires vectors to be normalized.
Given two vectors in an n-dimensional space:

A = (a1, a2, …, an) and B = (b1, b2, …, bn)

their dot product is calculated as:

A ⋅ B = a1b1 + a2b2 + … + anbn

The dot product gives a scalar (single-number) result. It has important geometric implications: if the dot product is zero, the two vectors are orthogonal (perpendicular) to each other. When the vectors are normalized, the dot product equals the cosine of the angle between the two vectors.
In the context of an HCD database, the dot product can be used for similarity searches for the following reasons:
-
In high-dimensional vector spaces, such as those produced by embedding algorithms or neural networks, similar items are represented by vectors that are close to each other.
-
The cosine similarity between two vectors is a measure of their directional similarity, regardless of their magnitude. If you compute the dot product of two normalized vectors, you get the cosine similarity.
By computing the dot product between a query vector and the vectors in a database, you can efficiently find items in the database that are similar to the query.
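A short sketch showing that the dot product of normalized vectors equals their cosine similarity (plain Python; the function names are illustrative):

```python
import math

def dot(a, b):
    """Sum of element-wise products: a1*b1 + a2*b2 + ... + an*bn."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to length one by dividing by its L2 norm."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

a = [0.1, 0.15, 0.3, 0.12, 0.05]
b = [0.45, 0.09, 0.01, 0.2, 0.11]

# Cosine similarity computed directly from the unnormalized vectors.
cos_sim = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
# Dot product of the normalized vectors gives the same value.
dot_normalized = dot(normalize(a), normalize(b))
print(abs(cos_sim - dot_normalized) < 1e-12)  # True
```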
Euclidean metric
When the metric is set to `euclidean`, the database uses the Euclidean distance to determine how similar two vectors are.
The Euclidean distance is the most common way of measuring the "ordinary" straight-line distance between two points in Euclidean space.
Given two points P = (p1, p2, …, pn) and Q = (q1, q2, …, qn) in an n-dimensional space, the Euclidean distance between these two points is defined by the following formula:

d(P, Q) = √((p1 − q1)² + (p2 − q2)² + … + (pn − qn)²)

The Euclidean similarity value is derived from the Euclidean distance with the following formula:

euclidean_similarity = 1 / (1 + d(P, Q)²)

As the Euclidean distance increases from zero to infinity, the Euclidean similarity decreases from one to zero.
In the context of an HCD database, the following apply:

Concept | Description |
---|---|
Vectors as points | Each vector in the database can be thought of as a point in some high-dimensional space. |
Distance between vectors | When you want to find how "close" two vectors are, the Euclidean distance is one of the most intuitive and commonly used metrics. If two vectors have a small Euclidean distance between them, they are close in the vector space; if they have a large Euclidean distance, they are far apart. |
Querying and operations | When you set the metric to `euclidean`, the database ranks query results by Euclidean distance, returning the closest vectors first. |
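A minimal sketch of the Euclidean distance and one common distance-to-similarity mapping, 1 / (1 + d²), which maps distance [0, ∞) onto similarity (0, 1] (plain Python; names are illustrative, and the exact mapping an engine uses may differ):

```python
import math

def euclidean_distance(p, q):
    """Straight-line distance between two points in n-dimensional space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def euclidean_similarity(p, q):
    """Map distance [0, inf) onto similarity (0, 1]: closer points score higher."""
    d = euclidean_distance(p, q)
    return 1 / (1 + d ** 2)

print(euclidean_distance([0, 0], [3, 4]))            # 5.0
print(euclidean_similarity([1.0, 2.0], [1.0, 2.0]))  # identical points -> 1.0
```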
Vector search
At its core, a vector database is about efficient vector search, which allows you to find similar content. Here’s how vector search works:
1. Create a collection of embeddings for some content.
2. Pick a new piece of content.
3. Generate an embedding for that piece of content.
4. Run a similarity search on the collection.
You’ll get a list of the content in your collection with embeddings that are most similar to this new content.
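The steps above can be sketched as a brute-force similarity search (a real database uses an ANN index such as JVector rather than scanning every vector; the collection contents here are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

# 1. A collection of embeddings for some content.
collection = {
    "doc-a": [0.9, 0.1, 0.0],
    "doc-b": [0.0, 1.0, 0.1],
    "doc-c": [0.8, 0.2, 0.1],
}

# 2-3. A new piece of content, embedded with the same model as the collection.
query_embedding = [1.0, 0.0, 0.0]

# 4. Similarity search: rank the collection by similarity to the query.
ranked = sorted(collection.items(),
                key=lambda item: cosine(query_embedding, item[1]),
                reverse=True)
print([doc_id for doc_id, _ in ranked])  # ['doc-a', 'doc-c', 'doc-b']
```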
Best practices for vector search
To use vector search effectively, you need to pair it with metadata and the right embedding model.
- Store relevant metadata about a vector in other fields in your table. For example, if your vector is an image, store a reference to the original image in the same table.
- Select an embedding model based on your data and the queries you will make. Embedding models exist for text, images, audio, video, and more.
Limitations of vector search
While vector embeddings can replace or augment some functions of a traditional database, vector embeddings are not a replacement for other data types. Vector search is best used as a supplement to existing search techniques because of its limitations:
- Vector embeddings are not human-readable.
- Embeddings are not best for directly retrieving data from a table. However, you can pair a vector search with a traditional search. For example, you can find the most similar blog posts by a particular author.
- The embedding model might not be able to capture all relevant information from the data, leading to incorrect or incomplete results.
Indexing
HCD uses multiple indexing techniques to speed up searches:
Index | Description |
---|---|
JVector | The HCD database uses the JVector vector search engine to construct a graph index. JVector adds new documents to the graph immediately, so you can efficiently search right away. To save space and improve performance, JVector can compress vectors with quantization. |
Storage-Attached Index (SAI) | SAI is an indexing technique to efficiently find rows that satisfy query predicates. HCD provides numeric-, text-, and vector-based indexes to support different kinds of searches. You can customize indexes based on your requirements (for example, a specific similarity function or text transformation). When you run a search, SAI loads a superset of all possible results from storage based on the predicates you provide, evaluates the search criteria, sorts the results by vector similarity, and returns the top-ranked results. |
Common use cases
Vector search is important for LLM use cases, including Retrieval-Augmented Generation (RAG) and AI agents.
Retrieval-Augmented Generation (RAG)
RAG is a technique for improving the accuracy of an LLM. RAG accomplishes this by adding relevant content directly to the LLM’s context window. Here’s how it works:

1. Pick an embedding model.
2. Generate embeddings from your data.
3. Store these embeddings in a vector database.
4. When the user submits a query, generate an embedding from the query using the same model.
5. Run a vector search to find data that’s similar to the user’s query.
6. Pass this data to the LLM so it’s available in the context window.
Now, when the LLM generates a response, it is less likely to make things up (hallucinate).
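The query-time half of this flow can be sketched as follows. The `embed`, `vector_search`, and `llm_generate` functions are hypothetical placeholders for your embedding model, database client, and LLM client, not a real API:

```python
def rag_answer(user_query, embed, vector_search, llm_generate, top_k=3):
    """Answer a query by retrieving similar content and passing it to an LLM.

    embed, vector_search, and llm_generate are caller-supplied placeholders
    for an embedding model, a vector database client, and an LLM client.
    """
    # Generate an embedding from the query with the same model used for the data.
    query_embedding = embed(user_query)
    # Run a vector search to find data similar to the user's query.
    context_docs = vector_search(query_embedding, limit=top_k)
    # Pass this data to the LLM so it's available in the context window.
    prompt = "Context:\n" + "\n".join(context_docs) + "\n\nQuestion: " + user_query
    return llm_generate(prompt)
```

With real clients plugged in, the retrieved context grounds the LLM's response in your data.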
The RAGStack example using HCD demonstrates how to use vector search to improve the accuracy of an LLM.
AI agents
An AI agent provides an LLM with the ability to take different actions depending on the goal. In the preceding RAG example, a user might submit a query unrelated to your content. You can build an agent to take the necessary actions to fetch relevant content.
For example, you might design an agent to run a Google search with the user’s query. It can pass the results of that search to the LLM’s context window. It can also generate embeddings and store both the content and the embeddings in a vector database. In this way, your agent can build a persistent memory of the world and its actions.