Embeddings endpoint

The embeddings API endpoint lets you convert text into embedding vectors. This makes it possible to manage your own vector database while keeping the advantages of Paradigm.

Prerequisites

To use the embeddings endpoint, you need the following:

  • A Paradigm API key: if you do not have one, go to your Paradigm profile and generate a new API key.
  • The desired embedding model available in Paradigm: by default, the embedding model used for the chat with docs should be available. If you want to use another embedding model, you must add it to Paradigm from the admin interface.

Usage methods

There are several ways to call the endpoint:

  1. With the OpenAI python client (recommended)
  2. With the python requests package: useful if you want to avoid the OpenAI layer
  3. Through a cURL request: handy for a quick check or a first test

OpenAI python client

Setup

Install the OpenAI python package with the following bash command:

pip install --upgrade openai

You can then setup the OpenAI python client in your script:

from openai import OpenAI as OpenAICompatibleClient
import os

# Get API key from environment
api_key = os.getenv("PARADIGM_API_KEY")
# Our base url
base_url = "https://paradigm.lighton.ai/api/v2"

# Configure the OpenAI client
client = OpenAICompatibleClient(api_key=api_key, base_url=base_url)

Here the Paradigm API key is read from an environment variable, but you could also pass it as a command-line argument, for example with argparse.
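
As an illustration, here is a minimal sketch of that alternative, assuming the script takes a --api-key command-line argument (the argument name is only an example):

import argparse

from openai import OpenAI as OpenAICompatibleClient

# Read the API key from the command line instead of the environment
parser = argparse.ArgumentParser()
parser.add_argument("--api-key", required=True, help="Paradigm API key")
args = parser.parse_args()

# Configure the OpenAI client against the Paradigm base url
client = OpenAICompatibleClient(
    api_key=args.api_key,
    base_url="https://paradigm.lighton.ai/api/v2"
)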

Usage

To convert an input string into an embedding vector, use the client.embeddings.create() method with a model available in Paradigm.
For example:

embedding_response = client.embeddings.create(
    model="multilingual-e5-large",
    input="This a test string"
)

The response then follows the OpenAI format:

CreateEmbeddingResponse(
    data=[
        Embedding(
            embedding=[
                0.0071225623,
                -0.008990367,
                ...,
                -0.023343408,
                0.016777039
            ],
            index=0,
            object='embedding'
        )
    ],
    model='multilingual-e5-large',
    object='list',
    usage=Usage(
        prompt_tokens=6,
        total_tokens=6
    ),
    id='fe922faf-50bd-4f5f-90a0-4abd0de10a78'
)
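
The embedding vector itself is in the data attribute of the response. The sketch below retrieves it and also embeds several strings in a single call by passing a list as input, assuming the Paradigm endpoint mirrors the OpenAI batch behaviour:

# Retrieve the embedding vector from the response above
vector = embedding_response.data[0].embedding
print(len(vector))  # dimensionality of the embedding

# Embed several strings at once by passing a list as input
# (assumes Paradigm accepts list inputs like the OpenAI API)
batch_response = client.embeddings.create(
    model="multilingual-e5-large",
    input=["first document", "second document"]
)
vectors = [item.embedding for item in batch_response.data]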

Python requests package

You can also avoid the OpenAI python client and send requests directly to the API endpoint with the requests package.

import requests
import os

# Get API key from environment
api_key = os.getenv("PARADIGM_API_KEY")

response = requests.request(
    method="POST",
    url="https://paradigm.lighton.ai/api/v2/embeddings",
    headers={
        'accept': "application/json",
        'Authorization': f"Bearer {api_key}"
    },
    json={
        "model": "multilingual-e5-large",
        "input": "This a test string"
    }
)

print(response.json())

You then get the JSON response as a dictionary:

{
    'object': 'list',
    'data': [
        {
            'object': 'embedding',
            'embedding': [
                0.0071225623,
                -0.008990367,
                ...,
                -0.023343408,
                0.016777039
            ],
            'index': 0
        }
    ],
    'model': 'multilingual-e5-large',
    'usage': {
        'prompt_tokens': 6,
        'total_tokens': 6
    },
    'id': 'fe922faf-50bd-4f5f-90a0-4abd0de10a78'
}
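
Since you handle the HTTP layer yourself here, it is worth checking the status code before reading the payload. A minimal sketch, continuing from the response object above:

# Raise an exception if the request failed (4xx or 5xx status code)
response.raise_for_status()

# Extract the embedding vector from the JSON payload
payload = response.json()
vector = payload["data"][0]["embedding"]
print(len(vector))  # dimensionality of the embedding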

cURL request

If you prefer to send a request to Paradigm with a simple cURL command, here is an example:

curl --request POST \
  --url https://paradigm.lighton.ai/api/v2/embeddings \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "multilingual-e5-large",
    "input": "This a test string"
  }'