Adding LLM models
1. Storing the API key in a connection

Most providers require an `api_key` value to authenticate with the LLM service. For that, you can use a connection to store sensitive data:
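The following is a minimal sketch of creating such a connection with the ADK CLI and storing the key in it. The application ID `llm_provider_creds` and the credential value are placeholders, and the exact flag spellings can differ between ADK versions, so treat this as an assumption rather than definitive syntax:

```bash
# Create a connection to hold the provider credentials (the app ID is a placeholder)
orchestrate connections add --app-id llm_provider_creds

# Configure it as a key_value credential set for the draft environment
orchestrate connections configure --app-id llm_provider_creds --env draft --kind key_value --type team

# Store the api_key value in the connection
orchestrate connections set-credentials --app-id llm_provider_creds --env draft -e "api_key=<your_api_key>"
```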
2. Defining a provider configuration

You must also provide a JSON string with the provider configuration, like the following example:
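As an illustration only, a provider configuration for a model hosted on watsonx.ai might look like the sketch below; the field names here are assumptions for this example, and each provider expects its own keys (see the examples page referenced next):

```json
{
  "watsonx_space_id": "<your_space_id>",
  "api_key": "<your_api_key>"
}
```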
Each provider supports different provider configurations. You can see a list of provider configurations, as well as full examples of how to use these models, in [Examples using the supported providers](./managing_llm#examples-using-the-supported-providers).

Alternatively, you can also create a model specification file, which contains all the details of your model and the provider configuration:
meta-llama-3-2-90b-vision-instruct.yaml
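The following is a sketch of what this specification file might contain, assuming a Llama 3.2 90B Vision Instruct model served through watsonx.ai; the individual fields are illustrative assumptions rather than a definitive schema:

```yaml
spec_version: v1
kind: model
name: watsonx/meta-llama/llama-3-2-90b-vision-instruct   # provider/model identifier (illustrative)
display_name: llama-3-2-90b-vision-instruct
description: Meta Llama 3.2 90B Vision Instruct served through watsonx.ai
provider_config:
  # Provider-specific fields; the name below is an illustrative placeholder.
  watsonx_space_id: <your_space_id>
```

Keeping the `api_key` out of this file and in the connection created earlier avoids committing secrets to source control.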
Although you can specify the API key directly in the provider configuration, this approach is not recommended if security is a priority.
3. Adding your model
There are two ways to add your model with the ADK:
- Recommended: You can import the model from the model specification file (see the first command in the sketch after this list).
- You can add the model by passing the JSON string and model information directly into the CLI (see the second command in the sketch after this list).
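The following sketch shows both options, reusing the specification file and connection names from the earlier steps; the flag spellings are assumptions and may differ slightly in your ADK version:

```bash
# Option 1 (recommended): import the model from its specification file
orchestrate models import --file meta-llama-3-2-90b-vision-instruct.yaml

# Option 2: pass the model name and the provider configuration JSON directly
orchestrate models add \
  --name watsonx/meta-llama/llama-3-2-90b-vision-instruct \
  --provider-config '{"watsonx_space_id": "<your_space_id>"}' \
  --app-id llm_provider_creds
```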
Adding model policies
1. Define your policy

The first thing that you need to do is to define how your models will behave. You can see a full reference for the available options in Model policies.
model_policy.yaml
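The following is a sketch of what this policy file might contain; the structure shown (a fallback strategy across two target models with a simple retry rule) is an illustrative assumption, so confirm the exact schema in the Model policies reference:

```yaml
spec_version: v1
kind: model_policy
name: llama_fallback_policy            # illustrative policy name
display_name: llama_fallback_policy
policy:
  strategy:
    mode: fallback                     # for example, fall back to the next target on failure
  retry:
    attempts: 2
    on_status_codes: [503]
  targets:
    - model_name: watsonx/meta-llama/llama-3-2-90b-vision-instruct
    - model_name: watsonx/ibm/granite-3-8b-instruct
```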
2. Import the model policy

Run the following command to import the model policy:
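For example, assuming the `model_policy.yaml` file from the previous step (the subcommand shape is a sketch and may vary between ADK versions):

```bash
orchestrate models policy import --file model_policy.yaml
```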
Next steps
- Managing LLMs: Learn how to manage your LLMs by using the ADK’s CLI.
- Model policies: Learn how to create and manage model policies.
- Examples with supported providers: See how to add OpenAI, Azure, AWS Bedrock, Ollama, and many more models from different providers.
- Integrate the Developer Edition with SaaS: Learn how to integrate the watsonx Orchestrate Developer Edition with your watsonx Orchestrate SaaS tenant for LLM inferencing.