POST /api/v2/completions
Generate a text completion
curl --request POST \
  --url https://paradigm.lighton.ai/api/v2/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "prompt": "<string>",
  "max_tokens": 123,
  "temperature": 123,
  "top_p": 123,
  "n": 123,
  "stream": true,
  "logprobs": 123,
  "echo": true,
  "stop": [
    "<string>"
  ],
  "presence_penalty": 123,
  "frequency_penalty": 123,
  "best_of": 123,
  "logit_bias": {},
  "user": "<string>",
  "suffix": "<string>"
}
'
Example request body:
{
  "model": "alfred-4.2",
  "prompt": "Hello, "
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

Request serializer for completions endpoint (OpenAI-compatible).

model
string
required

Model to use for generating completions; it must exist and be configured in the admin interface

prompt
string

The prompt to generate completions for

max_tokens
integer

Maximum number of tokens to generate

temperature
number<double>

Sampling temperature between 0 and 2

top_p
number<double>

Nucleus sampling parameter

n
integer

Number of completions to generate

stream
boolean

Whether to stream back partial progress
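For example, a streaming request can be sent with curl's -N (--no-buffer) flag so partial output is printed as it arrives. The values below are illustrative, and the assumption that chunks arrive as Server-Sent Events (data: ... lines, terminated by data: [DONE]) follows the usual OpenAI-compatible convention rather than anything stated on this page.

curl -N --request POST \
  --url https://paradigm.lighton.ai/api/v2/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "alfred-4.2",
  "prompt": "Hello, ",
  "max_tokens": 64,
  "stream": true
}
'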

logprobs
integer | null

Include the log probabilities on the logprobs most likely tokens

echo
boolean

Echo back the prompt in addition to the completion

stop
string[]

Up to 4 sequences where the API will stop generating further tokens
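For instance, a question-answering prompt might stop generation at a newline or at the next question marker; these values are purely illustrative.

{
  "model": "alfred-4.2",
  "prompt": "Q: What does the completions endpoint return?\nA:",
  "max_tokens": 64,
  "stop": ["\n", "Q:"]
}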

presence_penalty
number<double>

Penalty for new tokens based on whether they appear in the text so far

frequency_penalty
number<double>

Penalty for new tokens based on their existing frequency in the text

best_of
integer

Generates multiple completions server-side and returns the best

logit_bias
object

Modify the likelihood of specified tokens appearing in the completion
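In OpenAI-compatible APIs this is typically a map from token IDs (as strings) to a bias value, commonly between -100 and 100, where a large negative bias effectively bans a token. The exact key format and range accepted here, and the token ID used below, are assumptions for illustration only.

{
  "model": "alfred-4.2",
  "prompt": "Hello, ",
  "logit_bias": { "50256": -100 }
}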

user
string

A unique identifier representing your end-user

suffix
string

The suffix that comes after a completion of inserted text
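For example, suffix can be combined with prompt for insertion-style (fill-in-the-middle) generation, with the completion produced between the two; the snippet below is a sketch with illustrative values.

{
  "model": "alfred-4.2",
  "prompt": "def add(a, b):\n    result = ",
  "suffix": "\n    return result",
  "max_tokens": 16
}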

Response

200 - application/json

Response serializer for completions endpoint results.

id
string
required

Unique identifier for the completion

object
string
required

Object type, always 'text_completion'

created
integer
required

Unix timestamp of when the completion was created

model
string
required

The model used for generating the completion

choices
object[]
required

List of completion choices generated by the model

usage
object

Usage statistics for the completion request
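
As an illustration, a successful response might look like the following. The fields inside choices (text, index, logprobs, finish_reason) and usage (prompt_tokens, completion_tokens, total_tokens) follow the usual OpenAI text-completion shape and are assumptions beyond the field list above; all values are made up.

{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1719223200,
  "model": "alfred-4.2",
  "choices": [
    {
      "text": "world! How can I help you today?",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 3,
    "completion_tokens": 9,
    "total_tokens": 12
  }
}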