The Models section in Paradigm allows you to integrate any model into the platform, enabling users to interact with them.
You can configure open-source models hosted by yourself or by a provider, or use paid models accessible via API.
Viewing the List of Models
The list of models displays the following information:
- Name: The public name displayed in Paradigm for users.
- Model Type: You can integrate Large Language Models (LLMs) or embedding models. Embedding models convert text into numerical representations (vectors) used to measure semantic similarity; a short sketch follows this list.
- Deployment Type: Indicates where the model is deployed (e.g., Sagemaker, OpenAI, or Self-hosted).
- Enabled: Shows whether the model is available for users through the interface or API (green = accessible, red = not accessible).
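To make the embedding-model type concrete, here is a minimal, illustrative sketch of how embeddings are compared for semantic similarity. The vectors below are made up for demonstration; a real embedding model returns much higher-dimensional vectors.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings (real models return hundreds of dimensions).
query = np.array([0.1, 0.9, 0.2, 0.0])
doc_a = np.array([0.2, 0.8, 0.1, 0.1])  # close in meaning to the query
doc_b = np.array([0.9, 0.0, 0.1, 0.7])  # unrelated content

print(cosine_similarity(query, doc_a))  # high score: similar meaning
print(cosine_similarity(query, doc_b))  # lower score: different meaning
```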
Viewing Model Details
By clicking on the model's name, you can view or modify its details:
- Name: The public name of the model displayed in the Paradigm interface for end users (this field can be edited).
- Technical Name: A technical identifier used by various APIs (this field should not be modified).
- Model Type: You can integrate Vision-Language Models (VLMs), LLMs, or embedding models.
- Deployment Type: The model's deployment type (e.g. OpenAI, Self-hosted).
- LoRA Model Name: Specify this name if you are using a LoRA (Low-Rank Adaptation) model, a parameter-efficient fine-tuning technique. We recommend using vLLM (https://docs.vllm.ai/en/latest/) to serve your models. If you have fine-tuned a model with LoRA and want to use it in Paradigm, deploy it with vLLM and set the LoRA Model Name to the name you passed to the --lora-modules flag when starting the vLLM server (see the example command after this list).
- Model URL: Used when your model is self-hosted; this is the address where your model is served. For example, if you deploy a model called Llama-405 on a machine with the IP address 123.150.117.199, the Model URL would be http://123.150.117.199. You can also map this IP address to a domain name such as https://llama-405b.lighton.ai for easier access.
- Start Message Template: This field defines a list of messages added at the beginning of a conversation with the model to customize the interaction. It follows the Jinja templating style and supports conditional operations and variables. Available variables include:
  - The current date.
  - Instructions from the chat settings.
  - Instructions set by the user.
  - The user's first name.
  - The user's last name.
- Only these variables are available for now; using any other variable will result in an error when setting the start_messages_template field (a hedged example template follows this list).
- Instruction: Describes how the final model will be called, e.g. for a rewriter model: "Rephrase this question {question}"; for a generative model: "Answer this {question} according to this {context}".
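As referenced in the LoRA Model Name field, here is a sketch of starting a vLLM server with a LoRA adapter. The base model, adapter path, and adapter name (my-lora-adapter) are placeholders for illustration; the LoRA Model Name configured in Paradigm must match the name given to --lora-modules.

```bash
# Serve a base model plus a LoRA adapter with vLLM's OpenAI-compatible server.
# The model, adapter path, and adapter name are placeholders; adjust them for your deployment.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-lora \
  --lora-modules my-lora-adapter=/path/to/your/lora/adapter \
  --host 0.0.0.0 \
  --port 8000
```

Here the server listens on port 8000, so the Model URL would point at that host and port (e.g. http://123.150.117.199:8000, or just the host if a reverse proxy serves it on port 80/443).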
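And here is a hedged sketch of a Start Message Template. The Jinja structure (conditionals and {{ ... }} substitution) reflects what the field supports, but the variable names and message layout below are illustrative placeholders, not the exact identifiers; check your Paradigm instance for the exact variable names.

```jinja
{# Illustrative only: variable names and message structure are placeholders. #}
[
  {
    "role": "system",
    "content": "Today is {{ current_date }}.
      {% if user_first_name %}You are speaking with {{ user_first_name }} {{ user_last_name }}.{% endif %}
      {{ chat_settings_instructions }} {{ user_instructions }}"
  }
]
```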
Using a Custom Model
In Paradigm, you can deploy and expose custom models tailored to specific needs and benefit from Paradigm's API and chat interface.
For deploying custom models, we recommend using the vLLM library. vLLM is designed for efficient model deployment and can significantly optimize the performance of your custom models. It supports a variety of model types and deployment scenarios, making it a versatile choice for integrating custom models into Paradigm.
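As a concrete starting point, here is a hedged sketch of serving a custom model with vLLM; the model path and port are placeholders for your environment. The address the server listens on is what you will enter as the Model URL in the steps below.

```bash
# Serve a custom model (a Hugging Face model ID or a local checkpoint path).
# The path and port are placeholders; adjust them for your environment.
vllm serve /models/my-custom-model --host 0.0.0.0 --port 8000
```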
To link your custom models to the Paradigm admin interface, follow these steps:
- Go to the admin interface, and click the `API > Models` section on the left.
- Click on "Add +" and provide the necessary details: public name, model type, and deployment type of your model.
- Specify the URL for self-hosted models, ensuring that the URL points to the service serving the model.
- (Optional) Customize the model instructions to define how the model should be called, for example, "Rephrase this question {question}" for a rewriter or "Answer this {question} according to this {context}" for a generative model.
- Ensure that the `is_models_choice_on` key is enabled in the `Settings > Config Keys` section of the admin interface. This is essential for the model to be recognized and integrated properly.
Once your custom models are set up, navigate to the chat interface and select them from the available options, or send requests to the model through the API. By following these steps, you can effectively integrate and use custom models within Paradigm, enhancing the platform's capabilities to meet your specific needs.
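For the API route, here is a minimal Python sketch of calling a custom model. The base URL, endpoint path, authentication header, and payload shape are all assumptions for illustration; consult your Paradigm API reference for the exact contract. The point is the general pattern: an authenticated POST that selects your custom model by name.

```python
import requests

# Placeholders: check your Paradigm API reference for the exact base URL,
# endpoint path, auth scheme, and payload shape.
PARADIGM_BASE_URL = "https://your-paradigm-instance.example.com"
API_KEY = "your-api-key"

response = requests.post(
    f"{PARADIGM_BASE_URL}/api/v2/chat/completions",  # hypothetical endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "my-custom-model",  # the public name configured in API > Models
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```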
Published on October 14, 2024
Updated on July 31, 2025