Chen Peng, product architect for Tencent Cloud container services (TKE), has extensive hands-on experience with cloud-native technology. He is a contributor to Kubernetes, Istio, and other cloud-native projects, and the author of e-books including 《Kubernetes 实践指南》.
This article shows how to deploy large AI models on TKE, using DeepSeek-R1 as the example: run the model and expose an API with Ollama or vLLM, then put OpenWebUI in front as the chat interface.
Ollama exposes the Ollama API, while vLLM exposes an OpenAI-compatible API (the original deployment architecture diagrams for each are omitted here).
Large AI models are usually too big to bake into a container image, and downloading them via initContainers at startup would make startup far too slow. We therefore recommend mounting the model from shared storage: first run a Job that downloads the model into the shared storage, then mount that storage into the Pods that run the model.
On Tencent Cloud, CFS works well as the shared storage: its performance and availability are solid, a good fit for storing large models. This article uses CFS to store the model.
Different instance families use different GPU models; see the GPU Compute instances and GPU Rendering instances tables for the mapping. Ollama supports a wider range of GPU models than vLLM and has better compatibility, so first research the tool and model you plan to use, pick a suitable GPU model, and then use the tables above to choose the instance family. Also check which regions sell that family and whether it is sold out, via the CVM purchase page (select GPU under instance family).
Log in to the TKE console and create a cluster, choosing the TKE standard cluster type. See Creating a Cluster for details.
Both CFS (Cloud File Storage) and CFS Turbo (high-performance parallel file system) are supported; this article uses CFS. CFS Turbo is faster for reads and writes but also more expensive; consider it if you want the model to load and download faster.
This step involves many options, so the example creates the PVC through the TKE console. If you prefer YAML, create a test PVC in the console first and copy the generated YAML.
If you are creating a CFS-Turbo StorageClass, first create the CFS-Turbo file system in the file storage console, then reference that CFS-Turbo instance when creating the StorageClass.
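For reference, a minimal StorageClass sketch, assuming the TKE CFS CSI add-on; the provisioner and parameter names follow that plugin and may differ by version, and vpcid/subnetid are placeholders you must replace:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cfs-ai
provisioner: com.tencent.cloud.csi.cfs # TKE CFS CSI add-on
parameters:
  vers: "3"                 # NFS protocol version
  vpcid: vpc-xxxxxxxx       # placeholder: your VPC ID
  subnetid: subnet-xxxxxxxx # placeholder: your subnet ID
reclaimPolicy: Retain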
Create a CFS-backed PVC to store the AI model:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ai-model
  labels:
    app: ai-model
spec:
  storageClassName: cfs-ai
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
Make sure storageClassName matches the StorageClass created earlier. The storage size hardly matters; pick any value, since CFS bills by actual space used. Then create another PVC for OpenWebUI, which can use the same storageClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webui
  labels:
    app: webui
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cfs-ai
  resources:
    requests:
      storage: 100Gi
The GPU plugin does not need to be installed explicitly: on regular or native nodes configured with GPU instance types it is installed automatically, and super nodes do not need it at all.
Run a Job to download the AI model you need into the CFS shared storage. Below are Job examples for vLLM and Ollama. Change the LLM_MODEL environment variable to swap in a different model (the USE_MODELSCOPE environment variable controls whether the download comes from ModelScope). The vLLM model download Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: vllm-download-model
  labels:
    app: vllm-download-model
spec:
  template:
    metadata:
      name: vllm-download-model
      labels:
        app: vllm-download-model
      annotations:
        eks.tke.cloud.tencent.com/root-cbs-size: '100' # On super nodes the default system disk is only 20Gi and the unpacked vllm image will fill it; this annotation customizes the system disk size (capacity beyond 20Gi is billed).
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          env:
            - name: LLM_MODEL
              value: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
            - name: USE_MODELSCOPE
              value: "1"
          command:
            - bash
            - -c
            - |
              set -ex
              if [[ "$USE_MODELSCOPE" == "1" ]]; then
                modelscope download --local_dir=/data/$LLM_MODEL --model="$LLM_MODEL"
              else
                huggingface-cli download --local-dir=/data/$LLM_MODEL $LLM_MODEL
              fi
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ai-model
      restartPolicy: OnFailure
The Ollama model download Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: ollama-download-model
  labels:
    app: ollama-download-model
spec:
  template:
    metadata:
      name: ollama-download-model
      labels:
        app: ollama-download-model
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          env:
            - name: LLM_MODEL
              value: deepseek-r1:7b
          command:
            - bash
            - -c
            - |
              set -ex
              ollama serve &
              sleep 5 # sleep 5 seconds to wait for ollama to start
              ollama pull $LLM_MODEL
          volumeMounts:
            - name: data
              mountPath: /root/.ollama # Ollama stores model data under /root/.ollama; mount the CFS PVC at this path.
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ai-model
      restartPolicy: OnFailure
Below are examples of deploying Ollama and vLLM with a Deployment.
Declare nvidia.com/gpu resources so the Pod is scheduled onto a GPU instance type and allocated a GPU card. On super nodes, the annotation eks.tke.cloud.tencent.com/gpu-type specifies the GPU type (V100, T4, A10*PNV4, or A10*GNV4; see Specifying the GPU Model for Super Node Pods via Annotations), and the Pod annotation eks.tke.cloud.tencent.com/root-cbs-size: '100' customizes the system disk size (capacity beyond 20Gi is billed). Deploying vLLM with a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm
  labels:
    app: vllm
spec:
  selector:
    matchLabels:
      app: vllm
  replicas: 1
  template:
    metadata:
      labels:
        app: vllm
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          imagePullPolicy: Always
          env:
            - name: PYTORCH_CUDA_ALLOC_CONF
              value: expandable_segments:True
            - name: LLM_MODEL
              value: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
          command:
            - bash
            - -c
            - |
              vllm serve /data/$LLM_MODEL \
                --served-model-name $LLM_MODEL \
                --host 0.0.0.0 \
                --port 8000 \
                --trust-remote-code \
                --enable-chunked-prefill \
                --enforce-eager \
                --tensor-parallel-size 1
          securityContext:
            runAsNonRoot: false
          ports:
            - containerPort: 8000
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              cpu: 2000m
              memory: 2Gi
              nvidia.com/gpu: "1"
            limits:
              nvidia.com/gpu: "1"
          volumeMounts:
            - name: data
              mountPath: /data
            - name: shm
              mountPath: /dev/shm
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ai-model
        # vLLM needs to access the host's shared memory for tensor parallel inference.
        - name: shm
          emptyDir:
            medium: Memory
            sizeLimit: "2Gi"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-api
spec:
  selector:
    app: vllm
  type: ClusterIP
  ports:
    - name: api
      protocol: TCP
      port: 8000
      targetPort: 8000
The --served-model-name flag sets the model name; it must match the name used in the earlier download Job, so replace it accordingly. The model is read from under the /data directory (where the PVC with the downloaded model is mounted). LLM_MODEL is the model name; change it as needed. Deploying Ollama with a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  labels:
    app: ollama
spec:
  selector:
    matchLabels:
      app: ollama
  replicas: 1
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          imagePullPolicy: IfNotPresent
          command: ["ollama", "serve"]
          env:
            - name: OLLAMA_HOST
              value: ":11434"
          resources:
            requests:
              cpu: 2000m
              memory: 2Gi
              nvidia.com/gpu: "1"
            limits:
              cpu: 4000m
              memory: 4Gi
              nvidia.com/gpu: "1"
          ports:
            - containerPort: 11434
              name: ollama
          volumeMounts:
            - name: data
              mountPath: /root/.ollama
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ai-model
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  type: ClusterIP
  ports:
    - name: server
      protocol: TCP
      port: 11434
      targetPort: 11434
Ollama stores model data under the /root/.ollama directory; mount the CFS PVC containing the already-downloaded model at that path. The OLLAMA_HOST environment variable forces Ollama to listen externally on port 11434. If you need GPU resources to scale elastically, configure it as follows.
GPU Pods expose monitoring metrics (see GPU Monitoring Metrics); you can configure an HPA against them to autoscale GPU Pods, for example by GPU utilization:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vllm
spec:
  minReplicas: 1
  maxReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm
  metrics: # More GPU metrics: https://cloud.tencent.com/document/product/457/38929#gpu
    - pods:
        metric:
          name: k8s_pod_rate_gpu_used_request # GPU utilization (as a percentage of request)
        target:
          averageValue: "80"
          type: AverageValue
      type: Pods
  behavior:
    scaleDown:
      policies:
        - periodSeconds: 15
          type: Percent
          value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 300
    scaleUp:
      policies:
        - periodSeconds: 15
          type: Percent
          value: 100
        - periodSeconds: 15
          type: Pods
          value: 4
      selectPolicy: Max
      stabilizationWindowSeconds: 0
Note that GPU resources are often in short supply: once you scale in, you may not be able to buy the GPUs back. If you do not want scale-in, disable it in the HPA:
behavior:
  scaleDown:
    selectPolicy: Disabled
If you use native or regular nodes, also enable auto-scaling on the node pool; otherwise, scaled-out GPU Pods will stay Pending with no matching GPU node to land on. To enable it, edit the node pool, tick the auto-scaling option, configure the node count range, and confirm.
Deploy OpenWebUI with a Deployment and define a Service for exposing it later. The API backend can be either vLLM or Ollama; OpenWebUI deployment examples for both cases follow.
OpenWebUI stores its data (accounts and passwords, chat history, and so on) under the /app/backend/data directory; we mount the PVC at that path.
OpenWebUI deployment example with vLLM as the API backend:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      labels:
        app: webui
    spec:
      containers:
        - name: webui
          image: imroc/open-webui:main # Docker Hub mirror image, kept in sync automatically; safe to use
          env:
            - name: OPENAI_API_BASE_URL
              value: http://vllm-api:8000/v1 # vLLM's address
            - name: ENABLE_OLLAMA_API # disable the Ollama API, keep only the OpenAI API
              value: "False"
          tty: true
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"
          volumeMounts:
            - name: webui-volume
              mountPath: /app/backend/data
      volumes:
        - name: webui-volume
          persistentVolumeClaim:
            claimName: webui
---
apiVersion: v1
kind: Service
metadata:
  name: webui
  labels:
    app: webui
spec:
  type: ClusterIP
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: webui
OpenWebUI deployment example with Ollama as the API backend:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      labels:
        app: webui
    spec:
      containers:
        - name: webui
          image: imroc/open-webui:main # Docker Hub mirror image, kept in sync automatically; safe to use
          env:
            - name: OLLAMA_BASE_URL
              value: http://ollama:11434 # Ollama's address
            - name: ENABLE_OPENAI_API # disable the OpenAI API, keep only the Ollama API
              value: "False"
          tty: true
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"
          volumeMounts:
            - name: webui-volume
              mountPath: /app/backend/data
      volumes:
        - name: webui-volume
          persistentVolumeClaim:
            claimName: webui
---
apiVersion: v1
kind: Service
metadata:
  name: webui
  labels:
    app: webui
spec:
  type: ClusterIP
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: webui
For local-only testing, you can expose the service with kubectl port-forward:
kubectl port-forward service/webui 8080:8080
Then open http://127.0.0.1:8080 in your browser.
You can also expose the service via Ingress or the Gateway API. With the Gateway API, use an HTTPRoute. This requires a Gateway API implementation installed in the cluster, such as EnvoyGateway from the TKE application marketplace; see the official Gateway API documentation for usage:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: ai
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      namespace: envoy-gateway-system
      name: imroc
  hostnames:
    - "ai.imroc.cc"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: webui
          port: 8080
parentRefs references the Gateway you defined (usually one Gateway corresponds to one CLB). Replace hostnames with your own domain and make sure it resolves to the CLB address of the Gateway. backendRefs specifies the OpenWebUI Service. An example of exposing the service with Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ai
spec:
  rules:
    - host: "ai.imroc.cc"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webui
                port:
                  number: 8080
Replace host with your own domain and make sure it resolves to the CLB address of the Ingress. backend.service specifies the OpenWebUI Service. Finally, visit the corresponding address in your browser to reach the OpenWebUI page.
On first visit, OpenWebUI asks you to create an admin account. Once that is done, log in and start chatting; by default it uses the model downloaded earlier.
The CUDA version inside the official latest images of Ollama and vLLM is usually compatible with most GPU cards and drivers. But getting a model to actually run depends on CUDA, the GPU card and its driver, PyTorch (for vLLM), and the model itself, so it is hard to enumerate every combination. vLLM in particular does not support every model and depends on PyTorch; different PyTorch versions are compatible with different CUDA versions, which in turn are compatible with different GPU driver versions.
vLLM may report errors at startup or at runtime, for example:
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 230, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
ERROR 02-07 02:51:31 client.py:300] RuntimeError('Engine process (pid 20) died.')
ERROR 02-07 02:51:31 client.py:300] NoneType: None
ERROR 02-07 02:51:34 serving_chat.py:661] Error in chat completion stream generator.
ERROR 02-07 02:51:34 serving_chat.py:661] Traceback (most recent call last):
ERROR 02-07 02:51:34 serving_chat.py:661] File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_chat.py", line 359, in chat_completion_stream_generator
ERROR 02-07 02:51:34 serving_chat.py:661] async for res in result_generator:
ERROR 02-07 02:51:34 serving_chat.py:661] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 658, in _process_request
ERROR 02-07 02:51:34 serving_chat.py:661] raise request_output
ERROR 02-07 02:51:34 serving_chat.py:661] vllm.engine.multiprocessing.MQEngineDeadError: Engine loop is not running. Inspect the stacktrace to find the original error: RuntimeError('Engine process (pid 20) died.').
CRITICAL 02-07 02:51:34 launcher.py:101] MQLLMEngine is already dead, terminating server process
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]
RuntimeError: The NVIDIA driver on your system is too old (found version 11080). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 230, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
ERROR 02-06 23:41:11 engine.py:389] RuntimeError: CUDA error: no kernel image is available for execution on the device
ERROR 02-06 23:41:11 engine.py:389] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 02-06 23:41:11 engine.py:389] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
ERROR 02-06 23:41:11 engine.py:389] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
ERROR 02-06 23:41:11 engine.py:389]
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
raise e
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 380, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 123, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 75, in __init__
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 276, in __init__
self._initialize_kv_caches()
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 416, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 101, in determine_num_available_blocks
results = self.collective_rpc("determine_num_available_blocks")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 51, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2220, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
self.model_runner.profile_run()
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1235, in profile_run
self._dummy_run(max_num_batched_tokens, max_num_seqs)
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1346, in _dummy_run
self.execute_model(model_input, kv_caches, intermediate_tensors)
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1719, in execute_model
hidden_or_intermediate_states = model_executable(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 486, in forward
hidden_states = self.model(input_ids, positions, kv_caches,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 172, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 348, in forward
hidden_states, residual = layer(
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 247, in forward
hidden_states = self.self_attn(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 176, in forward
qkv, _ = self.qkv_proj(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py", line 382, in forward
output_parallel = self.quant_method.apply(self, input_, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/linear.py", line 142, in apply
return F.linear(x, layer.weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
[rank0]:[W206 23:41:12.978693132 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 230, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
When this happens, first research and confirm the various version numbers involved to see whether they are compatible. If not, try a different GPU card or a different CUDA version (the GPU driver is installed automatically and generally cannot be changed); the next section shows how to pin the best CUDA version.
If you want to control the CUDA version precisely, for best results or to work around compatibility issues, pin it as follows.
Confirm the GPU driver version: the console displays it while the GPU driver is being auto-installed in the background; if you missed it, log in to the node and run nvidia-smi to check. Then confirm the CUDA version: in NVIDIA's CUDA Toolkit and Corresponding Driver Versions table, find the CUDA version that suits the driver version you just confirmed; you will use it to pick the matching base image tag when building your image.
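For example, a quick check on the node (nvidia-smi ships with the driver):

nvidia-smi --query-gpu=driver_version --format=csv,noheader   # driver version only, one line per GPU
nvidia-smi                                                    # the default output header also shows the highest CUDA version the driver supports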
If you run the model with Ollama, build an Ollama image pinned to your CUDA version as follows.
Prepare the Dockerfile:
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
RUN apt update -y && apt install -y curl
RUN curl -fsSL https://ollama.com/install.sh | sh
The base image is nvidia/cuda; choose the tag that matches the CUDA version you confirmed earlier (the full list of tags is available here).
Build and push the image:
docker build -t ccr.ccs.tencentyun.com/imroc/ollama:cuda11.8-ubuntu22.04 .
docker push ccr.ccs.tencentyun.com/imroc/ollama:cuda11.8-ubuntu22.04
Remember to change this to your own image name.
If you run the model with vLLM, build a vLLM image pinned to your CUDA version as follows.
git clone --depth=1 https://github.com/vllm-project/vllm.git
cd vllm
docker build --build-arg CUDA_VERSION=11.8.0 -t ccr.ccs.tencentyun.com/imroc/vllm-openai:cuda-11.8.0 .
docker push ccr.ccs.tencentyun.com/imroc/vllm-openai:cuda-11.8.0
The CUDA_VERSION build arg sets the CUDA version; remember to replace the image name with your own. This method can only fine-tune the CUDA version within the same major version: if the official Dockerfile uses a 12.x CUDA_VERSION, do not go below 12, because the vLLM, PyTorch, and CUDA versions must stay within each other's compatibility ranges or you will hit compatibility problems. To build against an older CUDA, follow the official documentation instead (install a prebuilt vLLM binary for the lower version via pip) and write a corresponding Dockerfile to build the image.
Finally, in the Deployment for Ollama or vLLM, replace the image with the one you built and pushed for the pinned CUDA version; that completes pinning the best CUDA version.
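For example, with the Ollama image built above:

containers:
  - name: ollama
    image: ccr.ccs.tencentyun.com/imroc/ollama:cuda11.8-ubuntu22.04 # replace with your own image pinned to the desired CUDA version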
This is usually because public network access is not enabled; here is how to enable it.
If you use regular or native nodes, you can specify public bandwidth when creating the node pool.
If you use super nodes, Pods have no public network by default; use a NAT gateway to reach the internet (see Accessing the Internet via NAT Gateway). This approach also works for regular and native nodes.
Ollama and vLLM deploy the model onto a single GPU card by default. With multiple concurrent users, or a model too large for one card, you can configure Ollama and vLLM to spread the model across multiple GPU cards for parallel compute, improving inference speed and throughput.
First, in the Ollama or vLLM Deployment, declare more than one GPU. Example:
resources:
  requests:
    nvidia.com/gpu: "2"
  limits:
    nvidia.com/gpu: "2"
For Ollama, set the environment variable OLLAMA_SCHED_SPREAD to 1 to spread the model across all GPU cards. Example:
env:
  - name: OLLAMA_SCHED_SPREAD # multi-GPU deployment
    value: "1"
For vLLM, explicitly set the --tensor-parallel-size flag to the number of GPU cards the model should span. Example:
command:
  - bash
  - -c
  - |
    set -ex
    exec vllm serve /data/DeepSeek-R1-Distill-Qwen-7B \
      --served-model-name DeepSeek-R1-Distill-Qwen-7B \
      --host 0.0.0.0 --port 8000 \
      --trust-remote-code \
      --enable-chunked-prefill \
      --tensor-parallel-size 2 # run on N cards in parallel; must match the GPU count in requests
The multi-card deployment above is limited to cards within a single machine. If a single model is simply too large, or single-machine GPU inference is too slow, consider multi-node multi-GPU distributed deployment.
How is multi-node deployment done? Simply increasing the replica count does not let GPUs on different nodes cooperate on the same request; it only raises concurrency, not single-request inference speed. Below is the approach for multi-node multi-GPU distributed deployment; for concrete options see the linked references and adapt the example YAML in this article accordingly.
For vLLM in a Kubernetes environment, lws (LeaderWorkerSet) is the recommended way to do multi-node distributed deployment; a deployment example follows.
First, install lws into the cluster following the lws official documentation. Note that the default image registry.k8s.io/lws/lws is not reachable from networks inside China; change the image in the lws Deployment to docker.io/k8smirror/lws, a Docker Hub mirror of lws that is kept in sync automatically and safe to use (TKE environments can pull Docker Hub images directly).
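For example, one way to swap the image after applying the official manifests (a sketch: the lws-system namespace and the lws-controller-manager/manager names assume the default lws install and may differ across versions):

kubectl -n lws-system set image deployment/lws-controller-manager \
  manager=docker.io/k8smirror/lws:<tag-matching-your-install>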
Then download the ray_init.sh script (download URL at the end of this article) and build a vLLM+Ray image:
FROM docker.io/vllm/vllm-openai:latest
COPY ray_init.sh /vllm-workspace/ray_init.sh
RUN chmod +x /vllm-workspace/ray_init.sh
Build the image and push it to your registry:
docker build -t ccr.ccs.tencentyun.com/imroc/vllm-lws:latest .
docker push ccr.ccs.tencentyun.com/imroc/vllm-lws:latest
Then write the LeaderWorkerSet YAML and deploy it to the cluster:
This assumes each GPU node has at least 2 GPU cards, each Pod uses 2 cards, and leader + worker total 2 Pods.
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: vllm
spec:
  replicas: 1
  leaderWorkerTemplate:
    size: 2
    restartPolicy: RecreateGroupOnPodRestart
    leaderTemplate:
      metadata:
        labels:
          role: leader
      spec:
        containers:
          - name: vllm-leader
            image: ccr.ccs.tencentyun.com/imroc/vllm-lws:latest
            env:
              - name: RAY_CLUSTER_SIZE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.annotations['leaderworkerset.sigs.k8s.io/size']
            command:
              - sh
              - -c
              - |
                /vllm-workspace/ray_init.sh leader --ray_cluster_size=$RAY_CLUSTER_SIZE
                python3 -m vllm.entrypoints.openai.api_server \
                  --port 8000 \
                  --model /data/DeepSeek-R1-Distill-Qwen-32B \
                  --served-model-name DeepSeek-R1-Distill-Qwen-32B \
                  --tensor-parallel-size 2 \
                  --pipeline-parallel-size 2 \
                  --enforce-eager
            resources:
              limits:
                nvidia.com/gpu: "2"
            ports:
              - containerPort: 8000
            readinessProbe:
              tcpSocket:
                port: 8000
              initialDelaySeconds: 15
              periodSeconds: 10
            volumeMounts:
              - mountPath: /dev/shm
                name: dshm
              - mountPath: /data
                name: data
        volumes:
          - name: dshm
            emptyDir:
              medium: Memory
              sizeLimit: 15Gi
          - name: data
            persistentVolumeClaim:
              claimName: ai-model
    workerTemplate:
      spec:
        containers:
          - name: vllm-worker
            image: ccr.ccs.tencentyun.com/imroc/vllm-lws:latest
            command:
              - sh
              - -c
              - "/vllm-workspace/ray_init.sh worker --ray_address=$(LWS_LEADER_ADDRESS)"
            resources:
              limits:
                nvidia.com/gpu: "2"
            volumeMounts:
              - mountPath: /dev/shm
                name: dshm
              - mountPath: /data
                name: data
        volumes:
          - name: dshm
            emptyDir:
              medium: Memory
              sizeLimit: 15Gi
          - name: data
            persistentVolumeClaim:
              claimName: ai-model
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-api
spec:
  ports:
    - name: http
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    leaderworkerset.sigs.k8s.io/name: vllm
    role: leader
  type: ClusterIP
nvidia.com/gpu and --tensor-parallel-size specify how many GPU cards each node has. --pipeline-parallel-size specifies how many nodes there are. --model is the model file path inside the container. --served-model-name is the model name. Once the Pods are running, exec into the leader Pod:
kubectl exec -it vllm-0 -- bash
Test the API:
curl -v http://127.0.0.1:8000/v1/completions -H "Content-Type: application/json" -d '{
"model": "DeepSeek-R1-Distill-Qwen-32B",
"prompt": "你是谁?",
"max_tokens": 100,
"temperature": 0
}'
If you deployed OpenWebUI, make sure OPENAI_API_BASE_URL points to the address of the Service in the YAML above, e.g. http://vllm-api:8000/v1.
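For reference, the corresponding OpenWebUI environment setting (the same shape as in the earlier vLLM-backed example):

env:
  - name: OPENAI_API_BASE_URL
    value: http://vllm-api:8000/v1 # the Service selecting the lws leader Pods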
vLLM distributed multi-node deployment requires every node to have the same number of GPUs, and the node count must be planned in advance. To scale out, you have to create another GPU cluster; so how do you load-balance across different GPU clusters?
Use a single Service that selects the Pods of all the different GPU clusters.
For example, when deploying vllm with lws, keep all LeaderWorkerSet objects in the same namespace and declare the same label under every LeaderWorkerSet's leaderTemplate, e.g. role: leader:
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: vllm
spec:
  replicas: 1
  leaderWorkerTemplate:
    size: 2
    leaderTemplate:
      metadata:
        labels:
          role: leader
      spec:
Then make sure the vllm Service's selector selects that label:
apiVersion: v1
kind: Service
metadata:
  name: vllm-api
spec:
  ports:
    - name: http
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    role: leader
  type: ClusterIP
Once configured, the Service selects the leader Pods of all GPU clusters, and API requests are load-balanced across them.
If, weighing cost against performance, or to reduce complexity during testing by skipping CFS, you want to store the model directly on the local system disk, the model may be too large and you may need a system disk over 2TB. That requires OS support: only system images with UEFI in their name support system disks larger than 2TB. They are not available by default; contact the service team to have them enabled if needed.
vLLM reports this error at startup:
ERROR 02-06 18:29:55 engine.py:389] ValueError: invalid literal for int() with base 10: 'tcp://172.16.168.90:8000'
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
raise e
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 380, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 123, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 75, in __init__
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 51, in __init__
self._init_executor()
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 29, in _init_executor
get_ip(), get_open_port())
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 506, in get_open_port
port = envs.VLLM_PORT
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/envs.py", line 583, in __getattr__
return environment_variables[name]()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/envs.py", line 188, in <lambda>
lambda: int(os.getenv('VLLM_PORT', '0'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The culprit is the VLLM_PORT environment variable: vLLM parses it expecting a number, but gets something else, hence the error. I never defined this variable; Kubernetes generated it automatically from a Service and injected it into the Pod. The Service was named vllm, which produces VLLM_PORT; rename the Service to something else.
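A minimal sketch of the fix, consistent with the Service name used earlier in this article:

apiVersion: v1
kind: Service
metadata:
  name: vllm-api # avoid the name "vllm": Kubernetes injects VLLM_PORT=tcp://<ip>:<port> into Pods for a Service with that name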
vLLM reports this error at startup:
ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-SXM2-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the`dtype` flag in CLI, for example: --dtype=half.
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 230, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
The error message itself carries the fix: this GPU does not support the default --dtype (bfloat16) and it suggests specifying --dtype=half. Solution: set the value of --dtype to half in the vllm launch arguments.
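For example, based on the launch command used earlier (other flags unchanged):

vllm serve /data/$LLM_MODEL \
  --served-model-name $LLM_MODEL \
  --host 0.0.0.0 \
  --port 8000 \
  --dtype=half # V100 lacks bfloat16 support; fall back to float16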
vLLM may also be terminated while still starting up. Logs before exit:
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1512, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1505, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1379, in uvloop.loop.Loop.run_forever
File "uvloop/loop.pyx", line 557, in uvloop.loop.Loop._run
File "uvloop/handles/poll.pyx", line 216, in uvloop.loop.__on_uvpoll_event
File "uvloop/cbhandles.pyx", line 83, in uvloop.loop.Handle._run
File "uvloop/cbhandles.pyx", line 66, in uvloop.loop.Handle._run
File "uvloop/loop.pyx", line 399, in uvloop.loop.Loop._read_from_self
File "uvloop/loop.pyx", line 404, in uvloop.loop.Loop._invoke_signals
File "uvloop/loop.pyx", line 379, in uvloop.loop.Loop._ceval_process_signals
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 871, in signal_handler
raise KeyboardInterrupt("terminated")
KeyboardInterrupt: terminated
Solution: increase the livenessProbe's initialDelaySeconds so that slow-starting vLLM is not killed, or remove the livenessProbe entirely.
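If you keep the probe, a sketch reusing the /health endpoint from the earlier readinessProbe (the 300s delay is an assumption; tune it to your model's actual load time):

livenessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 300 # assumption: allow several minutes for model loading
  periodSeconds: 10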
Error log complaining that the model's max seq len exceeds KV cache capacity:
ValueError: The model's max seq len (131072) is larger than the maximum number of tokens that can be stored in KV cache (93760). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
[rank0]:[W207 01:57:35.912382100 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
Traceback (most recent call last):
File "/usr/local/bin/vllm", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 204, in main
args.dispatch_function(args)
File "/usr/local/lib/python3.12/dist-packages/vllm/scripts.py", line 44, in serve
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 230, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
Solution: set --max-model-len in the vllm launch arguments, e.g. --max-model-len 1024.
CUDA out-of-memory error log:
ERROR 02-07 03:25:19 engine.py:389] CUDA out of memory. Tried to allocate 150.00 MiB. GPU 0 has a total capacity of 14.58 GiB of which 95.56 MiB is free. Process 81610 has 14.48 GiB memory in use. Of the allocated memory 14.30 GiB is allocated by PyTorch, and 34.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 02-07 03:25:19 engine.py:389] Traceback (most recent call last):
Process SpawnProcess-1:
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 380, in run_mp_engine
ERROR 02-07 03:25:19 engine.py:389] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 123, in from_engine_args
ERROR 02-07 03:25:19 engine.py:389] return cls(ipc_path=ipc_path,
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 75, in __init__
ERROR 02-07 03:25:19 engine.py:389] self.engine = LLMEngine(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 276, in __init__
ERROR 02-07 03:25:19 engine.py:389] self._initialize_kv_caches()
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 416, in _initialize_kv_caches
ERROR 02-07 03:25:19 engine.py:389] self.model_executor.determine_num_available_blocks())
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 101, in determine_num_available_blocks
ERROR 02-07 03:25:19 engine.py:389] results = self.collective_rpc("determine_num_available_blocks")
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 51, in collective_rpc
ERROR 02-07 03:25:19 engine.py:389] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2220, in run_method
ERROR 02-07 03:25:19 engine.py:389] return func(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 02-07 03:25:19 engine.py:389] return func(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
ERROR 02-07 03:25:19 engine.py:389] self.model_runner.profile_run()
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 02-07 03:25:19 engine.py:389] return func(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1235, in profile_run
ERROR 02-07 03:25:19 engine.py:389] self._dummy_run(max_num_batched_tokens, max_num_seqs)
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1346, in _dummy_run
ERROR 02-07 03:25:19 engine.py:389] self.execute_model(model_input, kv_caches, intermediate_tensors)
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 02-07 03:25:19 engine.py:389] return func(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1775, in execute_model
ERROR 02-07 03:25:19 engine.py:389] output: SamplerOutput = self.model.sample(
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 505, in sample
ERROR 02-07 03:25:19 engine.py:389] next_tokens = self.sampler(logits, sampling_metadata)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 02-07 03:25:19 engine.py:389] return self._call_impl(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 02-07 03:25:19 engine.py:389] return forward_call(*args, **kwargs)
ERROR 02-07 03:25:19 engine.py:389] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-07 03:25:19 engine.py:389] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/sampler.py", line 271, in forward
ray_init.sh download URL: https://raw.githubusercontent.com/kubernetes-sigs/lws/refs/heads/main/docs/examples/vllm/build/ray_init.sh