Install Alauda AI
Alauda AI now offers flexible deployment options. Starting with Alauda AI 1.4, the Knative capability is an optional feature, allowing for a more streamlined installation if it's not needed.
To begin, you will need to deploy the Alauda AI Operator. This is the core engine for all Alauda AI products. By default, it uses the KServe Standard mode for the inference backend, which is particularly recommended for resource-intensive generative workloads. This mode provides a straightforward way to deploy models and offers robust, customizable deployment capabilities by leveraging foundational Kubernetes functionalities.
If your use case requires Knative functionality, which enables advanced features like scaling to zero on demand for cost optimization, you can optionally install the Knative Operator. It is not part of the default installation and can be added at any time.
Recommended deployment option: For generative inference workloads, the Standard approach (previously known as RawKubernetes Deployment) is recommended as it provides the most control over resource allocation and scaling.
Downloading
Operator Components:
Operator Components:

- Alauda AI Operator

  Alauda AI Operator is the main engine that powers Alauda AI products. It focuses on two core functions: model management and inference services, and provides a flexible framework that can be easily expanded.

  Download package: aml-operator.xxx.tgz

- Knative Operator

  Knative Operator provides serverless model inference.

  Download package: knative-operator.ALL.v1.x.x-yymmdd.tgz
You can download the packages named 'Alauda AI' and 'Knative Operator' from the Marketplace on the Customer Portal website.
Uploading
Upload both the Alauda AI and Knative Operator packages to the cluster where Alauda AI is to be used.
Downloading the violet tool
First, download the violet tool if it is not already present on the machine.
Log into the Web Console and switch to the Administrator view:
- Click Marketplace / Upload Packages.
- Click Download Packaging and Listing Tool.
- Locate the right OS / CPU architecture under Execution Environment.
- Click Download to download the violet tool.
- Run `chmod +x ${PATH_TO_THE_VIOLET_TOOL}` to make the tool executable.
Uploading package
Save the following script as uploading-ai-cluster-packages.sh, then update the environment variables in it as described below.
- `${PLATFORM_ADDRESS}` is your ACP platform address.
- `${PLATFORM_ADMIN_USER}` is the username of the ACP platform admin.
- `${PLATFORM_ADMIN_PASSWORD}` is the password of the ACP platform admin.
- `${CLUSTER}` is the name of the cluster to install the Alauda AI components into.
- `${AI_CLUSTER_OPERATOR_NAME}` is the path to the Alauda AI Cluster Operator package tarball.
- `${KNATIVE_OPERATOR_PKG_NAME}` is the path to the Knative Operator package tarball.
- `${REGISTRY_ADDRESS}` is the address of the external registry.
- `${REGISTRY_USERNAME}` is the username of the external registry.
- `${REGISTRY_PASSWORD}` is the password of the external registry.
After configuration, execute the script with `bash ./uploading-ai-cluster-packages.sh` to upload both Alauda AI and Knative Operator.
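The script referenced above might look like the following sketch. The violet subcommand and flag names are assumptions; check `violet --help` for the exact syntax of the tool version you downloaded.

```shell
#!/usr/bin/env bash
# uploading-ai-cluster-packages.sh -- minimal sketch; all values below are
# placeholders and the violet flags are assumptions, not authoritative.
set -euo pipefail

PLATFORM_ADDRESS="https://acp.example.com"      # placeholder
PLATFORM_ADMIN_USER="admin"                     # placeholder
PLATFORM_ADMIN_PASSWORD="<password>"
CLUSTER="<target-cluster>"
AI_CLUSTER_OPERATOR_NAME="./aml-operator.xxx.tgz"
KNATIVE_OPERATOR_PKG_NAME="./knative-operator.ALL.v1.x.x-yymmdd.tgz"
REGISTRY_ADDRESS="registry.example.com"         # placeholder
REGISTRY_USERNAME="<registry-user>"
REGISTRY_PASSWORD="<registry-password>"

# Push each package tarball to the target cluster with the violet tool;
# verify subcommand/flag names against your violet version before running.
for PKG in "${AI_CLUSTER_OPERATOR_NAME}" "${KNATIVE_OPERATOR_PKG_NAME}"; do
  violet push "${PKG}" \
    --platform-address "${PLATFORM_ADDRESS}" \
    --platform-username "${PLATFORM_ADMIN_USER}" \
    --platform-password "${PLATFORM_ADMIN_PASSWORD}" \
    --clusters "${CLUSTER}" \
    --registry-address "${REGISTRY_ADDRESS}" \
    --registry-username "${REGISTRY_USERNAME}" \
    --registry-password "${REGISTRY_PASSWORD}"
done
```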
Installing Alauda AI Operator
Procedure
In Administrator view:
- Click Marketplace / OperatorHub.
- At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install Alauda AI.
- Select Alauda AI, then click Install. The Install Alauda AI window will pop up.
- In the Install Alauda AI window:
  - Leave Channel unchanged.
  - Check whether the Version matches the Alauda AI version you want to install.
  - Leave Installation Location unchanged; it should be `aml-operator` by default.
  - Select Manual for Upgrade Strategy.
- Click Install.
Verification
Confirm that the Alauda AI tile shows one of the following states:
- Installing: installation is in progress; wait for this to change to Installed.
- Installed: installation is complete.
Installing Alauda Build of KServe Operator
For detailed installation steps, see Install KServe in Alauda Build of KServe.
Enabling Knative Functionality
Knative functionality is an optional capability that requires an additional operator and instance to be deployed.
If you plan to use Knative functionality, you MUST install the Knative Operator and create the Knative Serving instance BEFORE creating the Alauda AI instance to ensure the required CRDs are available in the cluster.
1. Installing the Knative Operator
Starting with this release of the Knative Operator, the Knative networking layer uses Kourier, so installing Istio is no longer required.
Procedure
In Administrator view:
- Click Marketplace / OperatorHub.
- At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install.
- Search for and select Knative Operator, then click Install. The Install Knative Operator window will pop up.
- In the Install Knative Operator window:
  - Leave Channel unchanged.
  - Check whether the Version matches the Knative Operator version you want to install.
  - Leave Installation Location unchanged.
  - Select Manual for Upgrade Strategy.
- Click Install.
Verification
Confirm that the Knative Operator tile shows one of the following states:
- Installing: installation is in progress; wait for this to change to Installed.
- Installed: installation is complete.
2. Creating Knative Serving Instance
Once Knative Operator is installed, you need to create the KnativeServing instance manually.
Procedure
- Create the `knative-serving` namespace.
- In the Administrator view, navigate to Operators -> Installed Operators.
- Select the Knative Operator.
- Under Provided APIs, locate KnativeServing and click Create Instance.
- Switch to YAML view.
- Replace the content with the following YAML:
- Click Create.
- Specify the version of Knative Serving to be deployed:
  - For ACP 4.0, use version 1.18.1.
  - For ACP 4.1 and above, use version 1.19.6.
- `private-registry` is a placeholder for your private registry address. You can find this in the Administrator view, then click Clusters, select your cluster, and check the Private Registry value in the Basic Info section.
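Assembled from the notes above, a KnativeServing manifest might look like the following sketch. The Kourier and registry fields follow the standard Knative Operator schema, but treat the exact values as assumptions to be adjusted for your environment.

```yaml
# Sketch of a KnativeServing instance; version and registry values are
# placeholders -- pick the version per your ACP release and substitute
# private-registry with your cluster's registry address.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: "1.19.6"                  # use 1.18.1 on ACP 4.0
  registry:
    default: private-registry/${NAME}:v1.19.6   # assumed override pattern
  ingress:
    kourier:
      enabled: true                  # Kourier networking layer, no Istio
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
```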
Creating Alauda AI Instance
Once Alauda AI Operator (and optionally, Knative Operator) is installed, you can create an Alauda AI instance.
Procedure
In Administrator view:
- Click Marketplace / OperatorHub.
- At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install the Alauda AI Operator.
- Select and click Alauda AI.
- In the Alauda AI page, click the All Instances tab.
- Click Create. The Select Instance Type window will pop up.
- Locate the AmlCluster tile in the Select Instance Type window, then click Create. The Create AmlCluster form will show up.
- Keep `default` unchanged for Name.
- Select Deploy Flavor from the dropdown:
  - `single-node` for non-HA deployments.
  - `ha-cluster` for HA cluster deployments (recommended for production).
- Set KServe Mode to Managed.
- Input a valid domain in the Domain field.

  INFO: This domain is used by the ingress gateway for exposing model serving services. Most likely, you will want to use a wildcard name, like *.example.com.

  You can specify the following certificate types by updating the Domain Certificate Type field:

  - Provided
  - SelfSigned
  - ACPDefaultIngress

  By default, the configuration uses the `SelfSigned` certificate type for securing ingress traffic to your cluster; the certificate is stored in the `knative-serving-cert` secret that is specified in the Domain Certificate Secret field.
- (Optional) If you want to enable Knative functionality, in the Serverless Configuration section, set Knative Serving Provider to Operator.

  INFO: If you installed the Knative Operator to enable Serverless functionality in the previous steps, provide the following parameters to integrate it:

  - APIVersion: `operator.knative.dev/v1beta1`
  - Kind: `KnativeServing`
  - Name: `knative-serving`
  - Namespace: `knative-serving`

  If you are not using Knative functionality, leave the Knative Serving Provider as `Removed` (or empty) and the remaining parameters blank.
- Under the Gitlab section:

  WARNING: GitLab-backed model storage is deprecated. It remains available for compatibility only and is planned for removal in a future Alauda AI release. For built-in models and new model delivery workflows, use Model Catalog with OCI model artifacts instead.

  - Type the URL of the self-hosted GitLab for Base URL.
  - Type `cpaas-system` for Admin Token Secret Namespace.
  - Type `aml-gitlab-admin-token` for Admin Token Secret Name.
- Under the Model Catalog section, configure the following parameters:

  - Database Password Secret Namespace: Namespace of the secret containing the PostgreSQL password for Model Catalog.
  - Database Password Secret Name: Name of the secret containing the PostgreSQL password for Model Catalog.
  Create the secret before creating the Alauda AI instance. If you use the following example, set Database Password Secret Namespace to `aml-operator` and Database Password Secret Name to `model-catalog`.

  - `metadata.name` is the value for Database Password Secret Name.
  - `metadata.namespace` is the value for Database Password Secret Namespace.
  - `stringData.password` is the PostgreSQL password in plain text. Kubernetes stores it as base64-encoded `data.password` after the Secret is created.
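A sketch of such a Secret, using the names above (the password value is a placeholder -- substitute your own before applying):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: model-catalog        # Database Password Secret Name
  namespace: aml-operator    # Database Password Secret Namespace
type: Opaque
stringData:
  password: "changeme"       # plain-text PostgreSQL password (placeholder)
```

After creation, fetching the Secret (for example with `kubectl get secret model-catalog -n aml-operator -o yaml`) shows the base64-encoded form `data.password: Y2hhbmdlbWU=` for the placeholder `changeme`.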
- Model OCI Registry Address: Registry address hosting model OCI artifacts for Model Catalog. The default value is `build-harbor.alauda.cn`. This registry stores the model OCI images used by Model Catalog. Use Harbor or another production-mode OCI registry with HTTPS access enabled. The Harbor project or repository used for Model Catalog must allow anonymous pull access from inference cluster nodes.
  If you cannot deploy a registry with HTTPS in the target environment, you can use an HTTP registry as a fallback. Configure the container runtime on every node in the inference cluster before deploying models. For containerd, add an insecure registry mirror for the registry address, for example by creating `/etc/containerd/certs.d/<registry-host:port>/hosts.toml`. Then restart containerd or apply the equivalent node-runtime configuration through your cluster management system. This configuration must exist on the nodes where inference service pods are scheduled; otherwise the pod image pull will fail even if Model Catalog can list the model. The exact containerd configuration path can vary by Kubernetes distribution; after applying the configuration, verify that the node can pull a Model Catalog image, for example with `crictl pull <registry-host:port>/<repository>:<tag>`.
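The hosts.toml file mentioned above might look like the following sketch, using containerd's registry host configuration format; substitute your real registry host and port.

```toml
# /etc/containerd/certs.d/<registry-host:port>/hosts.toml
# Allows pulls from an HTTP (insecure) registry.
server = "http://<registry-host:port>"

[host."http://<registry-host:port>"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
```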
- Source of PVC: Choose whether to reuse an existing PVC or create a new one. Use `CreateNew` to let the installation create the PVC.
- StorageClass Name: StorageClass used when creating a new PVC.
Review the configurations above, then click Create.
Verification
Check the status field of the AmlCluster resource named `default`. It should report Ready:
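A quick way to inspect it (the exact layout of the status fields may vary by Alauda AI version):

```shell
# Print the AmlCluster resource and inspect its status section;
# it should report Ready.
kubectl get amlcluster default -o yaml
```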
Importing Built-in Model Images for Catalog
The Catalog feature in Alauda AI ships with a set of built-in model OCI images that users can deploy as inference services from the Web Console. These images must be imported into the OCI registry configured by Model Catalog before the Catalog can serve them. Without this step, the installation completes successfully, but deploying a built-in model from the Catalog will later fail with ImagePullBackOff.
Obtaining the OCI image tarballs
Built-in model images are delivered as OCI archive tarballs (.tar files compliant with the OCI Image Layout Specification). Each tarball contains a multi-architecture image (linux/amd64 + linux/arm64) for one model.
Download the tarballs from the Customer Portal Marketplace, or contact your Alauda support representative to obtain the package matching your Alauda AI version.
Pushing to Harbor
The recommended target is Harbor. The example below uses an HTTP Harbor registry. If your Harbor registry uses HTTPS, omit --plain-http and change the API URLs from http:// to https://.
Run the commands on a node that has ctr, curl, and jq installed and can reach Harbor.
First, set the environment variables:
- Harbor registry endpoint, without the URL scheme.
- Target repository path in Harbor, in the form `<project>/<image-name>`. For example, `mlops/modelcar-qwen3.5-0.8b` uses the Harbor project `mlops` and repository `modelcar-qwen3.5-0.8b`.
- Image tag carried by the OCI archive. If you do not know it, extract it from the tarball with the command below.
- Path to the OCI archive tarball obtained in the previous step.
- Harbor credentials in the form `user:password`. Contact your platform administrator if you do not have these.
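For example (all values are placeholders; the variable names `REGISTRY` and `CREDS` are assumptions, while `REPO`, `TAG`, and `TAR` match the variables used in the commands below):

```shell
REGISTRY="harbor.example.com"           # Harbor endpoint, no URL scheme
REPO="mlops/modelcar-qwen3.5-0.8b"      # <project>/<image-name>
TAG="v0.1.0"                            # tag carried by the OCI archive
TAR="/path/to/model-archive.tar"        # OCI archive tarball
CREDS="user:password"                   # Harbor credentials
```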
The tarball usually carries its own tag (e.g. v0.1.0) inside the OCI image layout. If needed, extract it from the tarball:
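One way to read it, assuming the archive records the tag in the standard `org.opencontainers.image.ref.name` annotation of its `index.json` (an assumption about how these tarballs are built):

```shell
# Extract index.json from the OCI archive and read the recorded tag.
tar -xOf "$TAR" index.json \
  | jq -r '.manifests[0].annotations["org.opencontainers.image.ref.name"]'
```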
Check whether the image tag already exists in Harbor:
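A sketch of the check against the registry v2 API (this uses `http://` per the HTTP-registry example; switch to `https://` for TLS registries):

```shell
# Prints only the HTTP status code for the manifest request.
curl -s -o /dev/null -w '%{http_code}\n' -u "$CREDS" \
  -H 'Accept: application/vnd.oci.image.index.v1+json' \
  "http://$REGISTRY/v2/$REPO/manifests/$TAG"
# 200 -> the tag already exists; 404 -> it does not.
```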
If the Harbor project does not exist yet, create it before pushing:
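For example, via the Harbor REST API (marking the project public, which permits the anonymous pulls Model Catalog requires; adjust to your policy):

```shell
# Create the Harbor project named by the part of $REPO before the first slash.
curl -s -o /dev/null -w '%{http_code}\n' -u "$CREDS" \
  -X POST "http://$REGISTRY/api/v2.0/projects" \
  -H 'Content-Type: application/json' \
  -d "{\"project_name\": \"${REPO%%/*}\", \"metadata\": {\"public\": \"true\"}}"
# 201 -> created; an existing project returns a non-2xx code such as 409.
```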
If the project already exists, Harbor returns a non-2xx status code. After confirming the project exists, continue with the import and push.
Then run the import and push procedure:
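The procedure might look like the following sketch. The image ref registered by the import is the one recorded inside the archive; list it with `ctr images ls` and substitute it for the placeholder below if it differs from the Harbor address.

```shell
# Import the multi-arch OCI archive into containerd's local image store.
ctr images import --all-platforms "$TAR"

# Retag to the Harbor address if the recorded ref differs (placeholder ref).
ctr images tag "<ref-recorded-in-archive>" "$REGISTRY/$REPO:$TAG"

# Push the multi-arch index to Harbor. Omit --plain-http for HTTPS registries.
ctr images push --plain-http -u "$CREDS" "$REGISTRY/$REPO:$TAG"
```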
Repeat this procedure for each built-in model tarball, varying $REPO, $TAG, and $TAR per model.
--all-platforms is critical at the import step: omitting it imports only the node's host architecture, and the subsequent push will silently miss the other platform's blobs. The flag is not needed on push — pushing the multi-arch index automatically pushes all platforms it references.
Verifying the Harbor import
Confirm that Harbor now serves the image:
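A sketch of the check using the Harbor artifacts API (this assumes `$REPO` contains exactly one slash, so the project and repository halves need no URL-encoding):

```shell
# Fetch the artifact record; prints the HTTP status, then selected fields.
curl -s -o /tmp/artifact.json -w 'HTTP=%{http_code}\n' -u "$CREDS" \
  "http://$REGISTRY/api/v2.0/projects/${REPO%%/*}/repositories/${REPO#*/}/artifacts/$TAG"
jq '{digest, size, push_time, tags: [.tags[].name], platforms: [.references[]?.platform]}' \
  /tmp/artifact.json
```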
HTTP=200 means the image was successfully imported into Harbor. Expected output includes the digest, size, push time, tag, and platform information.
Now, the core capabilities of Alauda AI have been successfully deployed. If you want to quickly experience the product, please refer to the Quick Start.
FAQ
1. Configure the audit output directory for aml-skipper
The default audit output path is /cpaas/audit on the host. However, on some operating systems (e.g., MicroOS), the host root filesystem is read-only and the /cpaas directory cannot be created. In this case, you need to modify the audit output path.
To modify the audit output path, update the AmlCluster default resource and add the amlSkipper.auditLogHostPath.path configuration under spec.values. For example:
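A sketch of the override, showing only the relevant fragment of the AmlCluster spec (the replacement path is a placeholder -- use any writable host path that matches your log collection setup):

```yaml
# Fragment of the AmlCluster resource named default
# (e.g. edit with: kubectl edit amlcluster default).
spec:
  values:
    amlSkipper:
      auditLogHostPath:
        path: /var/log/cpaas/audit   # writable host path; placeholder value
```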
The specific path should be consistent with the collection configuration of Alauda Container Platform Log Collector.