

Vultr Managed Cluster
The cluster has 3 worker nodes. kubectl get nodes shows:

NAME                    STATUS   ROLES    AGE   VERSION
k8s-paas-71a68ebbc45b   Ready    <none>   12d   v1.23.14
k8s-paas-dbbd42d034e6   Ready    <none>   12d   v1.23.14
k8s-paas-f7788d4f4a38   Ready    <none>   12d   v1.23.14
A full-stack Kubernetes container-cloud PaaS solution.

Longhorn: cloud-native distributed block storage for Kubernetes.
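This walkthrough assumes Longhorn is already installed in the cluster. If it is not, a minimal sketch of installing it with the official Helm chart (default values; adjust the chart version to your cluster):

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
# wait until all longhorn-system pods are Running
kubectl -n longhorn-system get pods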

These are unofficial Kubernetes Helm charts; for large-scale throughput you need to build out dedicated microservice, middleware, and edge-storage clusters.
helm repo add sentry https://sentry-kubernetes.github.io/charts
kubectl create ns sentry
helm install sentry sentry/sentry -f values.yaml -n sentry
# or, to install with default values:
# helm install sentry sentry/sentry -n sentry
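The release can take several minutes to converge; standard helm/kubectl commands are enough to check on it:

# overall release status
helm status sentry -n sentry
# all pods should eventually be Running
kubectl -n sentry get pods
# the PostgreSQL volume we snapshot below is one of these claims
kubectl -n sentry get pvc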
Here we create 3 snapshots of the PostgreSQL data volume, each corresponding to a different state of the Sentry admin panel.
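The snapshots can be taken from the Longhorn UI. If you prefer to do it declaratively, Longhorn's CSI snapshotter understands the standard VolumeSnapshot API; a minimal sketch, assuming the snapshot CRDs are installed, a VolumeSnapshotClass backed by driver.longhorn.io named longhorn-snapshot exists, and the chart's PostgreSQL claim is data-sentry-postgresql-0 (all three names are assumptions, check your cluster):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: sentry-postgresql-state-1 # hypothetical snapshot name
  namespace: sentry
spec:
  volumeSnapshotClassName: longhorn-snapshot # assumed VolumeSnapshotClass
  source:
    persistentVolumeClaimName: data-sentry-postgresql-0 # assumed PVC name

Repeat with a different metadata.name for each of the 3 panel states.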




The Backup Target is the endpoint Longhorn uses to access the backup store; servers speaking the NFS or S3 protocol are supported.
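For an S3-compatible store, the target value has the form s3://<bucket>@<region>/<path>, and Longhorn reads the credentials from a secret in its own namespace. A minimal sketch (the secret name is arbitrary; the placeholders must be replaced):

kubectl -n longhorn-system create secret generic backup-target-secret \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key>
# Then point "Backup Target" at the s3:// URL and set
# "Backup Target Credential Secret" to backup-target-secret
# under Setting > General in the Longhorn UI.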



How long creating a backup volume takes depends on the size of your volume and your network bandwidth.

Official documentation: https://longhorn.io/docs/1.4.0/snapshots-and-backups/backup-and-restore/restore-statefulset/
Longhorn supports restoring backups, and one of this feature's use cases is restoring data used by a Kubernetes StatefulSet, which requires restoring one volume for each replica that was backed up.
To restore, follow the instructions below. The example uses a StatefulSet with one volume attached to each Pod and two replicas.
Connect to the Longhorn UI page in your web browser. Under the Backup tab, select the name of the StatefulSet volume. Click the dropdown menu of the volume entry and restore it. Name the restored volume something that the Persistent Volumes can easily reference later. Repeat this for each volume you need restored. For example, if restoring a StatefulSet whose two replicas had volumes named pvc-01a and pvc-02b, the restores might look like this:

| Backup Name | Restored Volume   |
|-------------|-------------------|
| pvc-01a     | statefulset-vol-0 |
| pvc-02b     | statefulset-vol-1 |
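Once the restores complete, the volumes exist in Longhorn in the detached state. Besides the UI, they can be confirmed from the CLI, assuming Longhorn runs in the longhorn-system namespace:

kubectl -n longhorn-system get volumes.longhorn.io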
In Kubernetes, create a Persistent Volume for each restored Longhorn volume. Name the volumes something that the Persistent Volume Claims can easily reference later. The storage capacity, numberOfReplicas, storageClassName, and volumeHandle below must be replaced. In this example, we reference statefulset-vol-0 and statefulset-vol-1 in Longhorn and use longhorn as our storageClassName.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-vol-0
spec:
  capacity:
    storage: <size> # must match size of Longhorn volume
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: driver.longhorn.io # driver must match this
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: <replicas> # must match Longhorn volume value
      staleReplicaTimeout: '30' # in minutes
    volumeHandle: statefulset-vol-0 # must match volume name from Longhorn
  storageClassName: longhorn # must be same name that we will use later
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-vol-1
spec:
  capacity:
    storage: <size> # must match size of Longhorn volume
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: driver.longhorn.io # driver must match this
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: <replicas> # must match Longhorn volume value
      staleReplicaTimeout: '30' # in minutes
    volumeHandle: statefulset-vol-1 # must match volume name from Longhorn
  storageClassName: longhorn # must be same name that we will use later
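Apply the manifest and confirm both Persistent Volumes show up before moving on:

kubectl apply -f statefulset-pv.yaml # hypothetical file name for the manifest above
kubectl get pv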
In the namespace that the StatefulSet will be deployed in, create a Persistent Volume Claim for each Persistent Volume. The name of each Persistent Volume Claim must follow this naming scheme: <name of Volume Claim Template>-<name of StatefulSet>-<index>
StatefulSet Pods are zero-indexed. In this example, the name of the Volume Claim Template is data, the name of the StatefulSet is webapp, and there are two replicas, at indexes 0 and 1.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-webapp-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi # must match size from earlier
  storageClassName: longhorn # must match name from earlier
  volumeName: statefulset-vol-0 # must reference Persistent Volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-webapp-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi # must match size from earlier
  storageClassName: longhorn # must match name from earlier
  volumeName: statefulset-vol-1 # must reference Persistent Volume
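Apply the claims in the StatefulSet's namespace and check that each binds to its matching Persistent Volume:

kubectl apply -f statefulset-pvc.yaml # hypothetical file name for the manifest above
kubectl get pvc # both claims should report Bound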
Finally, create the StatefulSet:

apiVersion: apps/v1 # apps/v1beta2 has been removed; use apps/v1
kind: StatefulSet
metadata:
  name: webapp # match this with the PersistentVolumeClaim naming scheme
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data # match this with the PersistentVolumeClaim naming scheme
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: longhorn # must match name from earlier
        resources:
          requests:
            storage: 2Gi # must match size from earlier
Result: the restored data should now be accessible from inside the StatefulSet Pods.
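Given the mount path from the example, a quick check:

kubectl exec webapp-0 -- ls /usr/share/nginx/html
kubectl exec webapp-1 -- ls /usr/share/nginx/html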
# delete the release
helm uninstall sentry -n sentry
# delete the namespace
kubectl delete ns sentry
kubectl get ns confirms that sentry is gone.


After restoring the backup as statefulset-vol-sentry-postgresql-02, Longhorn automatically schedules the volume's replicas onto different nodes, which keeps the volume highly available.

Note: we need to re-create the sentry namespace here:
kubectl create ns sentry
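Before re-installing the chart, wire the restored Longhorn volume back in with a Persistent Volume and a matching claim, following the same pattern as the StatefulSet restore above. A minimal sketch, assuming the restored volume is named statefulset-vol-sentry-postgresql-02, the chart's PostgreSQL claim is data-sentry-postgresql-0, and the volume is 8Gi with 3 replicas (all of these are assumptions; check the Longhorn UI and the chart's values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-vol-sentry-postgresql-02
spec:
  capacity:
    storage: 8Gi # must match the size of the restored Longhorn volume (assumed)
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: '3' # must match the Longhorn volume value (assumed)
      staleReplicaTimeout: '30'
    volumeHandle: statefulset-vol-sentry-postgresql-02 # must match the restored volume name
  storageClassName: longhorn
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-sentry-postgresql-0 # assumed claim name used by the chart
  namespace: sentry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi # must match the Persistent Volume above
  storageClassName: longhorn
  volumeName: statefulset-vol-sentry-postgresql-02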

helm install sentry sentry/sentry -f values.yaml -n sentry


OK, the restore succeeded.
K8S-PaaS Cloud-Native Middleware: https://k8s-paas.hacker-linner.com/