Compare commits
7 Commits
mergeDevCo...docs/kuber

| Author | SHA1 | Date |
|---|---|---|
| | a12c0eca04 | |
| | a65e368522 | |
| | 4a0b04af17 | |
| | 8c98fc96d7 | |
| | 130bb879f2 | |
| | 9071a754f4 | |
| | 95db97af94 | |
@@ -96,4 +96,4 @@ jobs:
    - name: unit-tests
      run: make test-backend
      env:
        GITEA_I_AM_BEING_UNSAFE_RUNNING_AS_ROOT: "true"
@@ -65,6 +65,7 @@ After building, a binary file named `gitea` will be generated in the root of the
./gitea web

> [!NOTE]
> devcontainer-related features do not work under the localhost domain; for a debugging environment, change it to an IP address in custom/conf/app.ini
> If you're interested in using our APIs, we have experimental support with [documentation](https://docs.gitea.com/api).

Start from Container Image:
@@ -73,7 +73,7 @@ RUN chown git:git /var/lib/gitea /etc/gitea
COPY --from=build-env /tmp/local /
COPY --from=build-env --chown=root:root /go/src/code.gitea.io/gitea/gitea /app/gitea/gitea
COPY --from=build-env --chown=root:root /go/src/code.gitea.io/gitea/environment-to-ini /usr/local/bin/environment-to-ini
COPY --from=build-env --chown=root:root /go/src/code.gitea.io/gitea/webTerminal.sh /app/gitea/webTerminal.sh

# git:git
USER 1000:1000
ENV GITEA_WORK_DIR=/var/lib/gitea
docs/kubernetes/README.md (new file, 87 lines)
@@ -0,0 +1,87 @@
## Kubernetes documentation entry point

This directory provides a from-scratch Kubernetes cluster installation guide together with the commonly used scripts. Suggested order: read the overview and quick start in this README first, then consult the detailed documents as needed.

### Document index

- **Kubernetes installation**: `k8s-installtion.md` (step-by-step instructions, full commands, and troubleshooting)
- **Istio configuration**: `istio-hostnetwork-notes.md` (guide for switching the Istio IngressGateway to hostNetwork mode)

### Quick start

On the master node:

```bash
./k8s-step1-prepare-env.sh
./k8s-step2-install-containerd.sh
./k8s-step3-install-components.sh
./k8s-step4-init-cluster.sh
./k8s-step5-install-flannel.sh
```

Join the worker nodes (see `node-join-command.txt`, or run the step 6 script):

```bash
./k8s-step6-join-nodes.sh
```

Verify:

```bash
kubectl get nodes -o wide
kubectl get pods -A
```
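
If the cluster came up correctly, all three nodes should report `Ready`. Illustrative output (a sketch; ages and trailing columns will vary):

```bash
kubectl get nodes -o wide
# NAME     STATUS   ROLES           AGE   VERSION   INTERNAL-IP   ...
# master   Ready    control-plane   20m   v1.32.3   172.17.0.15   ...
# node1    Ready    <none>          10m   v1.32.3   172.17.0.43   ...
# node2    Ready    <none>          10m   v1.32.3   172.17.0.34   ...
```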

### Script overview

- Installation flow
  - `k8s-step1-prepare-env.sh`: environment preparation (disable swap, kernel parameters, base tools)
  - `k8s-step2-install-containerd.sh`: install and configure containerd
  - `k8s-step3-install-components.sh`: install kubeadm/kubelet/kubectl
  - `k8s-step4-init-cluster.sh`: initialize the cluster on the master
  - `k8s-step5-install-flannel.sh`: install the Flannel CNI (or simply `kubectl apply -f kube-flannel.yml`)
  - `k8s-step6-join-nodes.sh`: join the worker nodes to the cluster (uses `node-join-command.txt`)
  - `k8s-install-all.sh`: run all of the steps above in order (use once you are familiar with the flow)

- Network and tooling
  - `setup-master-gateway.sh`: example master gateway/NAT configuration (adjust as needed)
  - `setup-node1.sh`, `setup-node2.sh`: example node routing setup
  - `k8s-image-pull-and-import.sh`: image pre-pull/import for offline or slow-network scenarios (usage sketch below)
  - `install-kubectl-nodes.sh`: install and configure kubectl on the other nodes
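
The image pre-pull script takes a single image argument; its own header documents the invocation:

```bash
chmod +x k8s-image-pull-and-import.sh
./k8s-image-pull-and-import.sh beppeb/devstar-controller-manager:3.0.0.without_istio
```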

### Common issues

- Node `NotReady`: check whether the CNI is up (`kubectl -n kube-flannel get pods`), confirm `swapoff -a` has been applied, and inspect `journalctl -u kubelet -f`.
- Images cannot be pulled: check the network and registry mirrors; `k8s-image-pull-and-import.sh` can pre-pull them.
- `kubectl` cannot connect: check the `$HOME/.kube/config` file and its permissions (see the snippet below).
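
On the master, the kubeconfig is set up from `admin.conf` exactly as in `k8s-installtion.md`; the other nodes receive a copy of the same file via `install-kubectl-nodes.sh` / `scp`:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```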

### Istio service mesh configuration

Once the Kubernetes cluster from this directory is up, refer to the following if you want to use Istio as the service mesh and ingress gateway:

#### Istio hostNetwork mode configuration

**When it applies**:
- Only the master node has a public IP
- The Istio IngressGateway is meant to replace nginx-ingress-controller
- Istio needs to listen directly on the host's ports 80/443

**Detailed guide**: see [`istio-hostnetwork-notes.md`](./istio-hostnetwork-notes.md)

**Quick overview**:
1. Install Istio (via `istioctl install` or Helm)
2. Follow the guide to switch `istio-ingressgateway` to hostNetwork mode
3. Configure a Gateway and VirtualService for traffic routing
4. Configure the TLS certificate Secret

**Notes**:
- Make sure nginx or anything else occupying ports 80/443 is stopped before migrating
- The TLS certificate Secret must be copied into the `istio-system` namespace (see the sketch below)
- In hostNetwork mode the Service type can be either `ClusterIP` or `LoadBalancer`
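
Copying the certificate Secret into `istio-system` can be done by rewriting its namespace, as shown in the detailed guide (replace the placeholders with your own names):

```bash
kubectl get secret <your-tls-secret> -n <source-namespace> -o yaml | \
  sed "s/namespace: <source-namespace>/namespace: istio-system/" | \
  kubectl apply -f -
```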

#### Other Istio documentation

- Istio official documentation: https://istio.io/latest/docs/
- Istio installation guide: https://istio.io/latest/docs/setup/install/

docs/kubernetes/install-kubectl-nodes.sh (new file, 98 lines)
@@ -0,0 +1,98 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# 为 node1 和 node2 安装 kubectl 脚本
|
||||
# 功能: 从 master 传输 kubectl 二进制文件到其他节点
|
||||
|
||||
echo "==== 为 node1 和 node2 安装 kubectl ===="
|
||||
|
||||
# 定义节点列表
|
||||
NODES=("172.17.0.15:master" "172.17.0.43:node1" "172.17.0.34:node2")
|
||||
|
||||
# 本机 IP 与 SSH 选项
|
||||
LOCAL_IP=$(ip route get 1 | awk '{print $7; exit}')
|
||||
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
|
||||
# SSH 私钥(可用环境变量 SSH_KEY 覆盖),存在则自动携带
|
||||
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
|
||||
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""
|
||||
|
||||
# 函数:在指定节点执行命令
|
||||
execute_on_node() {
|
||||
local ip="$1"
|
||||
local hostname="$2"
|
||||
local command="$3"
|
||||
local description="$4"
|
||||
|
||||
echo "==== $description on $hostname ($ip) ===="
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
bash -lc "$command"
|
||||
else
|
||||
ssh $SSH_OPTS $SSH_ID ubuntu@$ip "$command"
|
||||
fi
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 函数:传输文件到指定节点
|
||||
copy_to_node() {
|
||||
local ip="$1"
|
||||
local hostname="$2"
|
||||
local file="$3"
|
||||
echo "传输 $file 到 $hostname ($ip)"
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
cp -f "$file" ~/
|
||||
else
|
||||
scp $SSH_OPTS $SSH_ID "$file" ubuntu@$ip:~/
|
||||
fi
|
||||
}
|
||||
|
||||
# 创建 kubectl 安装脚本
|
||||
cat > kubectl-install.sh << 'EOF_INSTALL'
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "==== 安装 kubectl ===="
|
||||
|
||||
# 1. 检查是否已安装
|
||||
if command -v kubectl &> /dev/null; then
|
||||
echo "kubectl 已安装,版本: $(kubectl version --client 2>/dev/null | grep 'Client Version' || echo 'unknown')"
|
||||
echo "跳过安装"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# 2. 安装 kubectl
|
||||
echo "安装 kubectl..."
|
||||
sudo apt update
|
||||
sudo apt install -y apt-transport-https ca-certificates curl
|
||||
|
||||
# Add the Kubernetes official GPG key (ensure the keyrings directory exists first)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
|
||||
|
||||
# 添加 Kubernetes apt 仓库
|
||||
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
|
||||
|
||||
# 更新包列表并安装 kubectl
|
||||
sudo apt update
|
||||
sudo apt install -y kubectl
|
||||
|
||||
# 3. 验证安装
|
||||
echo "验证 kubectl 安装..."
|
||||
kubectl version --client
|
||||
|
||||
echo "==== kubectl 安装完成 ===="
|
||||
EOF_INSTALL
|
||||
|
||||
chmod +x kubectl-install.sh
|
||||
|
||||
# 为 node1 和 node2 安装 kubectl
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
if [ "$hostname" != "master" ]; then
|
||||
copy_to_node "$ip" "$hostname" "kubectl-install.sh"
|
||||
execute_on_node "$ip" "$hostname" "./kubectl-install.sh" "安装 kubectl"
|
||||
fi
|
||||
done
|
||||
|
||||
# 清理临时文件
|
||||
rm -f kubectl-install.sh
|
||||
|
||||
echo "==== 所有节点 kubectl 安装完成 ===="
|
||||
docs/kubernetes/istio-hostnetwork-notes.md (new file, 454 lines)
@@ -0,0 +1,454 @@
|
||||
# Istio IngressGateway 切换为 hostNetwork 模式指南
|
||||
|
||||
## 概述
|
||||
|
||||
本指南适用于以下场景:
|
||||
- 只有 master 节点有公网 IP
|
||||
- 需要 Istio IngressGateway 替代 nginx-ingress-controller
|
||||
- 需要 Istio 直接监听宿主机的 80/443 端口
|
||||
|
||||
### 为什么选择 hostNetwork?
|
||||
|
||||
1. **公网 IP 限制**:只有 master 节点有公网 IP,流量入口必须在 master
|
||||
2. **端口一致性**:需要监听标准端口 80/443,与 nginx 保持一致
|
||||
3. **无缝迁移**:无需修改 DNS 或负载均衡器配置
|
||||
|
||||
## 安装 Istio 1.27.1
|
||||
|
||||
### 1. 下载 istioctl
|
||||
|
||||
```bash
|
||||
# 下载 Istio 1.27.1
|
||||
# 根据系统架构选择:x86_64 或 arm64
|
||||
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.27.1 TARGET_ARCH=x86_64 sh -
|
||||
|
||||
# 进入目录
|
||||
cd istio-1.27.1
|
||||
|
||||
# 临时添加到 PATH(当前会话有效)
|
||||
export PATH=$PWD/bin:$PATH
|
||||
|
||||
# 或永久安装到系统路径
|
||||
sudo cp bin/istioctl /usr/local/bin/
|
||||
sudo chmod +x /usr/local/bin/istioctl
|
||||
|
||||
# 验证安装
|
||||
istioctl version
|
||||
```
|
||||
|
||||
**说明**:
|
||||
- `TARGET_ARCH` 根据系统架构选择:`x86_64`(Intel/AMD)或 `arm64`(ARM)
|
||||
- 如果使用临时 PATH,每次新终端会话都需要重新设置
|
||||
- 推荐将 `istioctl` 复制到 `/usr/local/bin` 以便全局使用
|
||||
|
||||
### 2. 安装 Istio
|
||||
|
||||
使用 `default` profile 安装 Istio:
|
||||
|
||||
```bash
|
||||
# 安装 Istio(使用 default profile)
|
||||
istioctl install --set profile=default -y
|
||||
|
||||
# 验证安装
|
||||
kubectl get pods -n istio-system
|
||||
kubectl get svc -n istio-system
|
||||
```
|
||||
|
||||
**预期输出**:
|
||||
- `istiod` Pod 应该处于 `Running` 状态
|
||||
- `istio-ingressgateway` Pod 应该处于 `Running` 状态
|
||||
- `istio-egressgateway` Pod 应该处于 `Running` 状态(可选)
|
||||
|
||||
### 3. 验证安装
|
||||
|
||||
```bash
|
||||
# 检查 Istio 组件状态
|
||||
istioctl verify-install
|
||||
|
||||
# 查看 Istio 版本
|
||||
istioctl version
|
||||
|
||||
# 检查所有命名空间的 Istio 资源
|
||||
kubectl get crd | grep istio
|
||||
```
|
||||
|
||||
|
||||
### 4. 卸载 Istio(如需要)
|
||||
|
||||
如果需要卸载 Istio:
|
||||
|
||||
```bash
|
||||
# 卸载 Istio
|
||||
istioctl uninstall --purge -y
|
||||
|
||||
# 删除命名空间
|
||||
kubectl delete namespace istio-system
|
||||
|
||||
# 删除 CRD(可选,会删除所有 Istio 配置)
|
||||
kubectl get crd | grep istio | awk '{print $1}' | xargs kubectl delete crd
|
||||
```
|
||||
|
||||
## 前置检查
|
||||
|
||||
**注意**:如果尚未安装 Istio,请先完成上述"安装 Istio 1.27.1"章节的步骤。
|
||||
|
||||
### 1. 确认集群状态
|
||||
|
||||
```bash
|
||||
# 检查节点
|
||||
kubectl get nodes
|
||||
|
||||
# 检查 Istio 组件(如果已安装)
|
||||
kubectl get pods -n istio-system
|
||||
|
||||
# 检查当前 Service 配置(如果已安装)
|
||||
kubectl get svc istio-ingressgateway -n istio-system
|
||||
|
||||
# 检查 Deployment 配置(如果已安装)
|
||||
kubectl get deploy istio-ingressgateway -n istio-system -o yaml | head -n 50
|
||||
```
|
||||
|
||||
### 2. 释放端口(避免冲突)
|
||||
|
||||
**k3s 环境**:
|
||||
- 如有 traefik,需要停止或释放 80/443
|
||||
- 检查是否有其他服务占用端口:`ss -tlnp | grep -E ':(80|443) '`
|
||||
|
||||
**标准 Kubernetes 环境**:
|
||||
```bash
|
||||
# 停止 nginx-ingress-controller(如果存在)
|
||||
kubectl scale deployment my-release-nginx-ingress-controller \
|
||||
-n nginx-ingress-controller --replicas=0
|
||||
|
||||
# 验证端口已释放
|
||||
ss -tlnp | grep -E ':(80|443) ' || echo "80/443 not listening"
|
||||
```
|
||||
|
||||
## 完整操作步骤
|
||||
|
||||
### 步骤 1:调整 Service(可选)
|
||||
|
||||
如果后续需要接真实 LB,可保留 `LoadBalancer` 类型;为便于本地测试,可先改为 `ClusterIP`:
|
||||
|
||||
```bash
|
||||
# 修改 Service 类型为 ClusterIP
|
||||
kubectl patch svc istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"replace","path":"/spec/type","value":"ClusterIP"}]'
|
||||
|
||||
# 调整端口映射(直通 80/443/15021)
|
||||
kubectl patch svc istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"replace","path":"/spec/ports","value":[
|
||||
{"name":"http","port":80,"targetPort":80,"protocol":"TCP"},
|
||||
{"name":"https","port":443,"targetPort":443,"protocol":"TCP"},
|
||||
{"name":"status-port","port":15021,"targetPort":15021,"protocol":"TCP"}]}]'
|
||||
```
|
||||
|
||||
### 步骤 2:启用 hostNetwork 模式
|
||||
|
||||
```bash
|
||||
# 1. 启用 hostNetwork
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/hostNetwork","value":true}]'
|
||||
|
||||
# 2. 设置 DNS 策略
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/dnsPolicy","value":"ClusterFirstWithHostNet"}]'
|
||||
|
||||
# 3. 绑定到 master 节点(根据实际节点名调整)
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/nodeSelector","value":{"kubernetes.io/hostname":"master"}}]'
|
||||
|
||||
# 4. 添加容忍(如果 master 节点有 control-plane taint)
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/tolerations","value":[{"key":"node-role.kubernetes.io/control-plane","operator":"Exists","effect":"NoSchedule"}]}]'
|
||||
```
|
||||
|
||||
### 步骤 3:配置容器端口
|
||||
|
||||
```bash
|
||||
# 让容器直接监听宿主机的 80/443/15021
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"replace","path":"/spec/template/spec/containers/0/ports","value":[
|
||||
{"containerPort":80,"hostPort":80,"protocol":"TCP","name":"http"},
|
||||
{"containerPort":443,"hostPort":443,"protocol":"TCP","name":"https"},
|
||||
{"containerPort":15021,"hostPort":15021,"protocol":"TCP","name":"status-port"},
|
||||
{"containerPort":15090,"protocol":"TCP","name":"http-envoy-prom"}]}]'
|
||||
```
|
||||
|
||||
### 步骤 4:配置安全上下文(解决权限问题)
|
||||
|
||||
```bash
|
||||
# 1. 添加 NET_BIND_SERVICE 能力
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext/capabilities/add","value":["NET_BIND_SERVICE"]}]'
|
||||
|
||||
# 2. 以 root 身份运行(允许绑定特权端口)
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"replace","path":"/spec/template/spec/securityContext/runAsNonRoot","value":false},\
|
||||
{"op":"replace","path":"/spec/template/spec/securityContext/runAsUser","value":0},\
|
||||
{"op":"replace","path":"/spec/template/spec/securityContext/runAsGroup","value":0}]'
|
||||
|
||||
# 3. 设置环境变量(告知 Istio 这是特权 Pod)
|
||||
kubectl set env deployment/istio-ingressgateway -n istio-system ISTIO_META_UNPRIVILEGED_POD=false
|
||||
```
|
||||
|
||||
### 步骤 5:重启 Deployment
|
||||
|
||||
```bash
|
||||
# 先缩容到 0,避免 hostPort 冲突
|
||||
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=0
|
||||
|
||||
# 等待 Pod 完全终止
|
||||
kubectl rollout status deployment/istio-ingressgateway -n istio-system --timeout=60s || true
|
||||
sleep 3
|
||||
|
||||
# 扩容到 1
|
||||
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=1
|
||||
|
||||
# 等待新 Pod 就绪
|
||||
kubectl rollout status deployment/istio-ingressgateway -n istio-system --timeout=120s
|
||||
```
|
||||
|
||||
## 验证配置
|
||||
|
||||
### 1. 检查 Pod 状态
|
||||
|
||||
```bash
|
||||
# 查看 Pod 状态和 IP(hostNetwork 模式下 IP 应为节点 IP)
|
||||
kubectl get pods -n istio-system -o wide
|
||||
|
||||
# 确认 hostNetwork 已启用
|
||||
kubectl get pod -n istio-system -l app=istio-ingressgateway \
|
||||
-o jsonpath='{.items[0].spec.hostNetwork}'
|
||||
# 应该输出: true
|
||||
```
|
||||
|
||||
### 2. 检查端口监听
|
||||
|
||||
```bash
|
||||
# 在 master 节点上检查端口监听
|
||||
ss -tlnp | grep -E ':(80|443|15021) '
|
||||
|
||||
# 或在 Pod 内部检查
|
||||
kubectl exec -n istio-system deploy/istio-ingressgateway -- \
|
||||
ss -tlnp | grep -E ':(80|443|15021) '
|
||||
```
|
||||
|
||||
### 3. 检查 Istio 配置
|
||||
|
||||
```bash
|
||||
# 查看 Envoy listener 配置
|
||||
istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
|
||||
|
||||
# 检查配置分析
|
||||
istioctl analyze -A
|
||||
```
|
||||
|
||||
## 配置 Gateway 和 VirtualService
|
||||
|
||||
### 1. 准备 TLS 证书 Secret
|
||||
|
||||
如果证书 Secret 在其他命名空间,需要复制到 `istio-system`:
|
||||
|
||||
```bash
|
||||
# 复制 Secret(示例)
|
||||
kubectl get secret <your-tls-secret> -n <source-namespace> -o yaml | \
|
||||
sed "s/namespace: <source-namespace>/namespace: istio-system/" | \
|
||||
kubectl apply -f -
|
||||
|
||||
# 验证
|
||||
kubectl get secret <your-tls-secret> -n istio-system
|
||||
```
|
||||
|
||||
**Note**: it is normal for the certificate file (`.crt`) to contain multiple `BEGIN CERTIFICATE` blocks; that is the certificate chain (server certificate + intermediate certificates). Both Kubernetes Secrets and Istio Gateways accept this format.
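
A quick way to check how many certificates a chain file carries (a sketch; substitute your own file name):

```bash
# Each certificate in the chain contributes one BEGIN CERTIFICATE line
grep -c 'BEGIN CERTIFICATE' tls.crt
# 2 or more means server certificate plus intermediate(s)
```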
|
||||
|
||||
### 2. 创建 Gateway
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.istio.io/v1beta1
|
||||
kind: Gateway
|
||||
metadata:
|
||||
name: devstar-gateway
|
||||
namespace: istio-system
|
||||
spec:
|
||||
selector:
|
||||
istio: ingressgateway
|
||||
servers:
|
||||
- port:
|
||||
number: 80
|
||||
name: http
|
||||
protocol: HTTP
|
||||
hosts:
|
||||
- devstar.cn
|
||||
- www.devstar.cn
|
||||
- port:
|
||||
number: 443
|
||||
name: https
|
||||
protocol: HTTPS
|
||||
tls:
|
||||
mode: SIMPLE
|
||||
credentialName: devstar-studio-tls-secret-devstar-cn
|
||||
hosts:
|
||||
- devstar.cn
|
||||
- www.devstar.cn
|
||||
```
|
||||
|
||||
### 3. 创建 VirtualService
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.istio.io/v1beta1
|
||||
kind: VirtualService
|
||||
metadata:
|
||||
name: devstar-studio-gitea
|
||||
namespace: devstar-studio-ns
|
||||
spec:
|
||||
hosts:
|
||||
- devstar.cn
|
||||
- www.devstar.cn
|
||||
gateways:
|
||||
- istio-system/devstar-gateway
|
||||
http:
|
||||
# www.devstar.cn 重定向到 devstar.cn (308 永久重定向)
|
||||
- match:
|
||||
- headers:
|
||||
host:
|
||||
exact: www.devstar.cn
|
||||
redirect:
|
||||
authority: devstar.cn
|
||||
redirectCode: 308
|
||||
# devstar.cn 路由到后端服务
|
||||
- match:
|
||||
- uri:
|
||||
prefix: /
|
||||
route:
|
||||
- destination:
|
||||
host: devstar-studio-gitea-http
|
||||
port:
|
||||
number: 3000
|
||||
```
|
||||
|
||||
### 4. 验证 Gateway 和 VirtualService
|
||||
|
||||
```bash
|
||||
# 检查 Gateway
|
||||
kubectl get gateway -n istio-system
|
||||
|
||||
# 检查 VirtualService
|
||||
kubectl get virtualservice -A
|
||||
|
||||
# 查看详细配置
|
||||
kubectl describe gateway devstar-gateway -n istio-system
|
||||
kubectl describe virtualservice devstar-studio-gitea -n devstar-studio-ns
|
||||
```
|
||||
|
||||
## 测试访问
|
||||
|
||||
```bash
|
||||
# HTTP 测试
|
||||
curl -H "Host: devstar.cn" http://<master-ip> -I
|
||||
|
||||
# HTTPS 测试
|
||||
curl -k --resolve devstar.cn:443:<master-ip> https://devstar.cn -I
|
||||
|
||||
# 测试重定向(www.devstar.cn -> devstar.cn)
|
||||
curl -I -H "Host: www.devstar.cn" http://<master-ip>
|
||||
# 应该返回: HTTP/1.1 308 Permanent Redirect
|
||||
```
|
||||
|
||||
## 启用服务网格(可选)
|
||||
|
||||
如果需要为其他命名空间启用自动 sidecar 注入:
|
||||
|
||||
```bash
|
||||
# 为命名空间启用自动注入
|
||||
kubectl label namespace <namespace> istio-injection=enabled
|
||||
|
||||
# 验证
|
||||
kubectl get namespace -L istio-injection
|
||||
|
||||
# 重启现有 Pod 以注入 sidecar
|
||||
kubectl rollout restart deployment -n <namespace>
|
||||
```
|
||||
|
||||
## 常见问题
|
||||
|
||||
### 1. Pod 一直 Pending
|
||||
|
||||
**原因**:旧 Pod 仍占用 hostPort,新 Pod 无法调度。
|
||||
|
||||
**解决**:
|
||||
```bash
|
||||
# 手动删除旧 Pod
|
||||
kubectl delete pod -n istio-system -l app=istio-ingressgateway
|
||||
|
||||
# 或先缩容再扩容
|
||||
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=0
|
||||
kubectl scale deployment istio-ingressgateway -n istio-system --replicas=1
|
||||
```
|
||||
|
||||
### 2. Envoy 报 "Permission denied" 无法绑定 80/443
|
||||
|
||||
**原因**:容器没有足够权限绑定特权端口。
|
||||
|
||||
**解决**:
|
||||
- 确认已添加 `NET_BIND_SERVICE` capability
|
||||
- 确认 `runAsUser: 0` 和 `runAsNonRoot: false`
|
||||
- 确认 `ISTIO_META_UNPRIVILEGED_POD=false`
|
||||
|
||||
### 3. Istiod 日志显示 "skipping privileged gateway port"
|
||||
|
||||
**原因**:Istio 认为 Pod 是无特权模式。
|
||||
|
||||
**解决**:
|
||||
```bash
|
||||
kubectl set env deployment/istio-ingressgateway -n istio-system ISTIO_META_UNPRIVILEGED_POD=false
|
||||
kubectl rollout restart deployment istio-ingressgateway -n istio-system
|
||||
```
|
||||
|
||||
### 4. Gateway 冲突(IST0145)
|
||||
|
||||
**原因**:多个 Gateway 使用相同的 selector 和端口,但 hosts 冲突。
|
||||
|
||||
**解决**:
|
||||
- 合并多个 Gateway 到一个,在 `hosts` 中列出所有域名
|
||||
- 或确保不同 Gateway 的 `hosts` 不重叠
|
||||
|
||||
## 回滚方案
|
||||
|
||||
如果需要回滚到默认配置:
|
||||
|
||||
```bash
|
||||
# 1. 恢复 nginx(如果之前使用)
|
||||
kubectl scale deployment my-release-nginx-ingress-controller \
|
||||
-n nginx-ingress-controller --replicas=1
|
||||
|
||||
# 2. 恢复 Istio 为默认配置
|
||||
istioctl install --set profile=default -y
|
||||
|
||||
# 3. 或手动删除 hostNetwork 相关配置
|
||||
kubectl patch deployment istio-ingressgateway -n istio-system --type='json' \
|
||||
-p='[{"op":"remove","path":"/spec/template/spec/hostNetwork"}]'
|
||||
```
|
||||
|
||||
## Port mapping notes

### Istio's default port configuration

- **In-container ports**: by default Istio has Envoy listen on 8080 (HTTP) and 8443 (HTTPS)
- **Service port mapping**: the Service maps port 80 to container port 8080 (targetPort: 8080) and 443 to 8443
- **Why not 80/443**: this is Istio's design, avoiding conflicts with other services on the host

### Port configuration in hostNetwork mode

When hostNetwork mode is used:
- The container uses the host network directly and has to listen on the host's ports 80/443
- The container port configuration therefore needs to be changed so that the container listens on 80/443 instead of 8080/8443
- In addition, the IstioOperator values need to be configured so that Envoy actually listens on 80/443 (see the illustration below)
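
To make the mapping concrete, the two situations look roughly like this at the Service level (illustrative excerpts; port names and ordering may differ slightly in your install):

```yaml
# Default install (excerpt): host-facing 80/443 forwarded to Envoy on 8080/8443
ports:
- name: http2
  port: 80
  targetPort: 8080
- name: https
  port: 443
  targetPort: 8443

# After the hostNetwork changes in this guide (excerpt): ports passed straight through
ports:
- name: http
  port: 80
  targetPort: 80
- name: https
  port: 443
  targetPort: 443
```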
|
||||
## 注意事项
|
||||
|
||||
1. **端口冲突**:迁移前确保停止 nginx 或其他占用 80/443 的服务
|
||||
2. **Sidecar 资源**:每个 Pod 会增加 ~100MB 内存和 ~100m CPU
|
||||
3. **TLS 证书**:需要将证书 Secret 复制到 istio-system 命名空间,或通过 Gateway 配置指定命名空间
|
||||
4. **性能影响**:sidecar 会增加少量延迟(通常 <1ms)
|
||||
5. **Service 类型**:hostNetwork 模式下,Service 类型可以是 `ClusterIP` 或 `LoadBalancer`,不影响功能
|
||||
docs/kubernetes/k8s-image-pull-and-import.sh (new file, 122 lines)
@@ -0,0 +1,122 @@
|
||||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
# 说明:
|
||||
# 在 master、node1、node2 三台节点上分别拉取指定镜像, 并导入到 containerd (k8s.io 命名空间)
|
||||
# 不通过主机分发镜像归档, 而是每台节点各自拉取/导入。
|
||||
#
|
||||
# 使用示例:
|
||||
# chmod +x k8s-image-pull-and-import.sh
|
||||
# ./k8s-image-pull-and-import.sh beppeb/devstar-controller-manager:3.0.0.without_istio
|
||||
#
|
||||
# 可选环境变量:
|
||||
# SSH_KEY 指定私钥路径 (默认: ~/.ssh/id_rsa, 若存在自动携带)
|
||||
|
||||
echo "==== K8s 镜像拉取并导入 containerd ===="
|
||||
|
||||
if [ $# -lt 1 ]; then
|
||||
echo "用法: $0 <IMAGE[:TAG]>"
|
||||
echo "示例: $0 beppeb/devstar-controller-manager:3.0.0.without_istio"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
IMAGE_INPUT="$1"
|
||||
|
||||
# 规范化镜像名, 若无 registry 前缀则补全 docker.io/
|
||||
normalize_image() {
|
||||
local img="$1"
|
||||
if [[ "$img" != */*/* ]]; then
|
||||
# 只有一个斜杠(如 library/nginx 或 beppeb/devstar-...): 仍可能缺少 registry
|
||||
# Docker 的默认 registry 是 docker.io
|
||||
echo "docker.io/${img}"
|
||||
else
|
||||
echo "$img"
|
||||
fi
|
||||
}
|
||||
|
||||
CANONICAL_IMAGE=$(normalize_image "$IMAGE_INPUT")
|
||||
echo "目标镜像: ${CANONICAL_IMAGE}"
|
||||
|
||||
# 节点列表: 与 k8s-step1-prepare-env.sh 风格一致
|
||||
NODES=("172.17.0.15:master" "172.17.0.43:node1" "172.17.0.34:node2")
|
||||
|
||||
# 本机 IP 与 SSH 选项
|
||||
LOCAL_IP=$(ip route get 1 | awk '{print $7; exit}')
|
||||
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
|
||||
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
|
||||
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""
|
||||
|
||||
run_remote() {
|
||||
local ip="$1"; shift
|
||||
local cmd="$*"
|
||||
if [ "$ip" = "$LOCAL_IP" ]; then
|
||||
bash -lc "$cmd"
|
||||
else
|
||||
ssh $SSH_OPTS $SSH_ID ubuntu@"$ip" "$cmd"
|
||||
fi
|
||||
}
|
||||
|
||||
# 在远端节点执行: 使用 docker 或 containerd 拉取镜像, 并确保导入到 containerd k8s.io
|
||||
remote_pull_and_import_cmd() {
|
||||
local image="$1"
|
||||
# 注意: 使用单引号包裹, 传到远端后再展开变量
|
||||
cat <<'EOF_REMOTE'
|
||||
set -euo pipefail
|
||||
|
||||
IMAGE_REMOTE="$IMAGE_PLACEHOLDER"
|
||||
|
||||
has_cmd() { command -v "$1" >/dev/null 2>&1; }
|
||||
|
||||
echo "[\"$(hostname)\"] 处理镜像: ${IMAGE_REMOTE}"
|
||||
|
||||
# 优先尝试 docker 拉取, 成功后直接导入 containerd (无需落盘)
|
||||
if has_cmd docker; then
|
||||
echo "[\"$(hostname)\"] 使用 docker pull"
|
||||
sudo docker pull "${IMAGE_REMOTE}"
|
||||
echo "[\"$(hostname)\"] 导入到 containerd (k8s.io)"
|
||||
sudo docker save "${IMAGE_REMOTE}" | sudo ctr -n k8s.io images import - >/dev/null
|
||||
else
|
||||
echo "[\"$(hostname)\"] 未检测到 docker, 尝试使用 containerd 拉取"
|
||||
# containerd 直接拉取到 k8s.io 命名空间
|
||||
sudo ctr -n k8s.io images pull --all-platforms "${IMAGE_REMOTE}"
|
||||
fi
|
||||
|
||||
# 规范化 tag: 若镜像缺少 docker.io 前缀, 在 containerd 内补齐一份别名
|
||||
NEED_PREFIX=0
|
||||
if [[ "${IMAGE_REMOTE}" != docker.io/* ]]; then
|
||||
NEED_PREFIX=1
|
||||
fi
|
||||
|
||||
if [ "$NEED_PREFIX" -eq 1 ]; then
|
||||
# 仅当不存在 docker.io/ 前缀时, 补一个 docker.io/ 的 tag, 方便与清单匹配
|
||||
# 计算补齐后的名字
|
||||
if [[ "${IMAGE_REMOTE}" == */*/* ]]; then
|
||||
# 已有显式 registry, 不重复打 tag
|
||||
:
|
||||
else
|
||||
FIXED="docker.io/${IMAGE_REMOTE}"
|
||||
echo "[\"$(hostname)\"] 为 containerd 打标签: ${FIXED}"
|
||||
sudo ctr -n k8s.io images tag "${IMAGE_REMOTE}" "${FIXED}" || true
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "[\"$(hostname)\"] 验证镜像是否存在于 containerd:"
|
||||
sudo ctr -n k8s.io images ls | grep -E "$(printf '%s' "${IMAGE_REMOTE}" | sed 's/[\/.\-]/\\&/g')" || true
|
||||
EOF_REMOTE
|
||||
}
|
||||
|
||||
# 遍历节点执行
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
echo "==== 在 ${hostname} (${ip}) 执行镜像拉取与导入 ===="
|
||||
# 将占位符替换为实际镜像并远程执行
|
||||
remote_script=$(remote_pull_and_import_cmd "$CANONICAL_IMAGE")
|
||||
# 安全替换占位符为镜像名
|
||||
remote_script=${remote_script//\$IMAGE_PLACEHOLDER/$CANONICAL_IMAGE}
|
||||
run_remote "$ip" "$remote_script"
|
||||
echo ""
|
||||
done
|
||||
|
||||
echo "==== 完成 ===="
|
||||
|
||||
|
||||
docs/kubernetes/k8s-install-all.sh (new file, 69 lines)
@@ -0,0 +1,69 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Kubernetes 集群一键安装脚本
|
||||
# 功能: 按顺序执行所有安装步骤
|
||||
|
||||
echo "==== Kubernetes 集群一键安装 ===="
|
||||
echo "集群信息:"
|
||||
echo "- Master: 172.17.0.15"
|
||||
echo "- Node1: 172.17.0.43"
|
||||
echo "- Node2: 172.17.0.34"
|
||||
echo "- Kubernetes 版本: v1.32.3"
|
||||
echo "- 网络插件: Flannel"
|
||||
echo "- 容器运行时: containerd"
|
||||
echo ""
|
||||
|
||||
# 检查脚本文件是否存在
|
||||
SCRIPTS=(
|
||||
"k8s-step1-prepare-env.sh"
|
||||
"k8s-step2-install-containerd.sh"
|
||||
"k8s-step3-install-components.sh"
|
||||
"k8s-step4-init-cluster.sh"
|
||||
"k8s-step5-install-flannel.sh"
|
||||
"k8s-step6-join-nodes.sh"
|
||||
)
|
||||
|
||||
for script in "${SCRIPTS[@]}"; do
|
||||
if [ ! -f "$script" ]; then
|
||||
echo "错误: 找不到脚本文件 $script"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
echo "所有脚本文件检查完成,开始安装..."
|
||||
echo ""
|
||||
|
||||
# 执行安装步骤
|
||||
echo "==== 步骤 1: 环境准备 ===="
|
||||
./k8s-step1-prepare-env.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 步骤 2: 安装 containerd ===="
|
||||
./k8s-step2-install-containerd.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 步骤 3: 安装 Kubernetes 组件 ===="
|
||||
./k8s-step3-install-components.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 步骤 4: 初始化集群 ===="
|
||||
./k8s-step4-init-cluster.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 步骤 5: 安装 Flannel 网络插件 ===="
|
||||
./k8s-step5-install-flannel.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 步骤 6: 节点加入集群 ===="
|
||||
./k8s-step6-join-nodes.sh
|
||||
echo ""
|
||||
|
||||
echo "==== 安装完成 ===="
|
||||
echo "集群状态:"
|
||||
kubectl get nodes
|
||||
echo ""
|
||||
kubectl get pods -A
|
||||
echo ""
|
||||
echo "集群已就绪,可以开始部署应用!"
|
||||
|
||||
docs/kubernetes/k8s-installtion.md (new file, 524 lines)
@@ -0,0 +1,524 @@
|
||||
# Kubernetes 集群安装文档
|
||||
|
||||
## 📋 集群信息
|
||||
- **Master**: 172.17.0.15 (master)
|
||||
- **Node1**: 172.17.0.43 (node1)
|
||||
- **Node2**: 172.17.0.34 (node2)
|
||||
- **Kubernetes 版本**: v1.32.3
|
||||
- **容器运行时**: containerd
|
||||
- **网络插件**: Flannel
|
||||
- **镜像仓库**: 阿里云镜像
|
||||
|
||||
## 🎯 安装方式
|
||||
**模块化安装**: 每个脚本功能清晰,可以单独执行或按顺序执行
|
||||
|
||||
## 📋 安装脚本
|
||||
|
||||
### 🔧 脚本列表
|
||||
1. **`k8s-step1-prepare-env.sh`** - 环境准备 (所有节点)
|
||||
2. **`k8s-step2-install-containerd.sh`** - 容器运行时安装 (所有节点)
|
||||
3. **`k8s-step3-install-components.sh`** - Kubernetes 组件安装 (所有节点)
|
||||
4. **`k8s-step4-init-cluster.sh`** - 集群初始化 (Master 节点)
|
||||
5. **`k8s-step5-install-flannel.sh`** - 网络插件安装 (Master 节点)
|
||||
6. **`k8s-step6-join-nodes.sh`** - 节点加入集群 (Node1, Node2)
|
||||
7. **`k8s-install-all.sh`** - 主控制脚本 (按顺序执行所有步骤)
|
||||
|
||||
### 🌐 网络配置脚本
|
||||
- **`setup-master-gateway.sh`** - Master 节点网关配置
|
||||
- **`setup-node1.sh`** - Node1 网络路由配置
|
||||
- **`setup-node2.sh`** - Node2 网络路由配置
|
||||
|
||||
### 🔧 辅助工具脚本
|
||||
- **`install-kubectl-nodes.sh`** - 为其他节点安装 kubectl
|
||||
|
||||
### 🚀 使用方法
|
||||
|
||||
#### 方法 1: 一键安装
|
||||
```bash
|
||||
# 在 Master 节点运行
|
||||
./k8s-install-all.sh
|
||||
```
|
||||
|
||||
#### 方法 2: 分步安装
|
||||
```bash
|
||||
# 按顺序执行每个步骤
|
||||
./k8s-step1-prepare-env.sh
|
||||
./k8s-step2-install-containerd.sh
|
||||
./k8s-step3-install-components.sh
|
||||
./k8s-step4-init-cluster.sh
|
||||
./k8s-step5-install-flannel.sh
|
||||
./k8s-step6-join-nodes.sh
|
||||
|
||||
# 可选:为其他节点安装 kubectl
|
||||
./install-kubectl-nodes.sh
|
||||
```
|
||||
|
||||
## 📋 安装步骤
|
||||
|
||||
### ✅ 步骤 1: 环境准备(已完成)
|
||||
- [x] 云主机重装系统:确认系统盘数据清空,无残留 kube 目录与服务
|
||||
- [x] 主机名设置:`master`、`node1`、`node2`(不在节点脚本中写入 hosts)
|
||||
- [x] Master 配置 NAT 网关:开启 `net.ipv4.ip_forward`,设置 `iptables` MASQUERADE 并持久化
|
||||
- [x] 基础内核与网络:开启 `overlay`、`br_netfilter`;`sysctl` 应用桥接与转发参数
|
||||
- [x] 关闭 swap:禁用并注释 `/etc/fstab` 对应项
|
||||
- [x] 防火墙:禁用 `ufw`,确保必要端口不被拦截
|
||||
- [x] SSH 信任:在 master 生成密钥并分发到 `node1/node2`,验证免密可达
|
||||
|
||||
### ✅ 步骤 2: 容器运行时准备(所有节点,已完成)
|
||||
- [x] 更新系统包,安装依赖工具:`curl`、`wget`、`gnupg`、`ca-certificates`、`apt-transport-https` 等
|
||||
- [x] 安装 containerd 并生成默认配置 `/etc/containerd/config.toml`
|
||||
- [x] 配置镜像加速:docker.io/quay.io 使用腾讯云镜像,其他使用高校镜像
|
||||
- [x] 安装 CNI 插件 v1.3.0(在 master 预下载并分发至 node1/node2)
|
||||
- [x] 启用并开机自启 `containerd`,确认服务状态正常
|
||||
|
||||
### ✅ 步骤 3: 安装 Kubernetes 组件(所有节点,已完成)
|
||||
- [x] 添加 Kubernetes APT 仓库(pkgs.k8s.io v1.32),修复 GPG key 与源配置问题
|
||||
- [x] 安装并锁定版本:`kubelet`、`kubeadm`、`kubectl` 为 `v1.32.3`
|
||||
- [x] 配置 kubelet:使用 `systemd` cgroup,与 containerd 对齐,写入完整配置文件
|
||||
- [x] 启用并启动 `kubelet` 服务
|
||||
|
||||
### ✅ 步骤 4: 集群初始化(Master 节点,已完成)
|
||||
- [x] 执行 `kubeadm init` 完成初始化:包含 `controlPlaneEndpoint=172.17.0.15:6443`、Networking(ServiceCIDR `10.96.0.0/12`、PodCIDR `10.244.0.0/16`)、`imageRepository`(Aliyun)
|
||||
- [x] 拷贝 `admin.conf` 到 `~/.kube/config` 并验证控制面组件:`etcd`、`kube-apiserver`、`kube-controller-manager`、`kube-scheduler`、`kube-proxy` 均 Running;`coredns` Pending(等待安装网络插件)
|
||||
- [x] 生成并使用 `kubeadm token create --print-join-command` 生成 join 命令
|
||||
|
||||
### ✅ 步骤 5: 网络插件安装 (Master 节点,已完成)
|
||||
- [x] 下载并应用 Flannel v0.27.4 清单
|
||||
- [x] 匹配 Pod CIDR `10.244.0.0/16`,等待组件 Ready
|
||||
- [x] 配置 Flannel 使用国内镜像源(registry-k8s-io.mirrors.sjtug.sjtu.edu.cn、ghcr.tencentcloudcr.com)
|
||||
- [x] 预拉取所有 Flannel 镜像并打标签
|
||||
- [x] 等待所有网络组件就绪:kube-flannel-ds、coredns
|
||||
|
||||
### ✅ 步骤 6: 节点加入集群(已完成)
|
||||
- [x] 读取 `node-join-command.txt` 文件中的 join 命令
|
||||
- [x] 在 `node1/node2` 执行 join,加入成功后验证 `Ready`
|
||||
- [x] 验证所有节点状态:master (Ready, control-plane)、node1 (Ready)、node2 (Ready)
|
||||
|
||||
### ✅ 步骤 7: 集群验证(已完成)
|
||||
- [x] `kubectl get nodes/pods -A` 基线检查
|
||||
- [x] 所有 Pod 状态为 Running:控制面组件、网络组件、系统组件
|
||||
- [x] 集群完全就绪,可以部署应用
|
||||
|
||||
### ✅ 步骤 8: 为其他节点安装 kubectl(已完成)
|
||||
- [x] 在 node1 和 node2 上安装 kubectl v1.32.3
|
||||
- [x] 复制 master 的 kubeconfig 配置文件到其他节点
|
||||
- [x] 验证所有节点都能正常访问 Kubernetes 集群
|
||||
|
||||
## 📝 详细安装过程记录
|
||||
|
||||
### 步骤 1: 系统环境准备
|
||||
|
||||
#### 1.1 系统重装与清理
|
||||
- 腾讯云服务器实例重装系统,确保硬盘完全清空
|
||||
- 验证无残留 Kubernetes 相关目录和服务
|
||||
|
||||
#### 1.2 主机名配置
|
||||
```bash
|
||||
# Master 节点
|
||||
sudo hostnamectl set-hostname master
|
||||
|
||||
# Node1 节点
|
||||
sudo hostnamectl set-hostname node1
|
||||
|
||||
# Node2 节点
|
||||
sudo hostnamectl set-hostname node2
|
||||
```
|
||||
|
||||
#### 1.3 网络配置
|
||||
|
||||
> **提示**: 可以使用提供的脚本自动配置网络:
|
||||
> - `./setup-master-gateway.sh` - 在 Master 节点执行
|
||||
> - `./setup-node1.sh` - 在 Node1 节点执行
|
||||
> - `./setup-node2.sh` - 在 Node2 节点执行
|
||||
|
||||
**Master 节点配置为 NAT 网关:**
|
||||
```bash
|
||||
# 启用 IP 转发
|
||||
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
|
||||
sudo sysctl -p
|
||||
|
||||
# 清空现有 iptables 规则
|
||||
sudo iptables -F
|
||||
sudo iptables -t nat -F
|
||||
sudo iptables -t mangle -F
|
||||
sudo iptables -X
|
||||
sudo iptables -t nat -X
|
||||
sudo iptables -t mangle -X
|
||||
|
||||
# 设置默认策略
|
||||
sudo iptables -P INPUT ACCEPT
|
||||
sudo iptables -P FORWARD ACCEPT
|
||||
sudo iptables -P OUTPUT ACCEPT
|
||||
|
||||
# 配置 NAT 规则 - 允许内网节点通过 master 访问外网
|
||||
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/20 -o eth0 -j MASQUERADE
|
||||
|
||||
# 允许转发来自内网的流量
|
||||
sudo iptables -A FORWARD -s 172.17.0.0/20 -j ACCEPT
|
||||
sudo iptables -A FORWARD -d 172.17.0.0/20 -j ACCEPT
|
||||
|
||||
# 保存 iptables 规则
|
||||
sudo apt update && sudo apt install -y iptables-persistent
|
||||
sudo netfilter-persistent save
|
||||
```
|
||||
|
||||
**Node1 和 Node2 配置路由:**
|
||||
```bash
|
||||
# 删除默认网关(如果存在)
|
||||
sudo ip route del default 2>/dev/null || true
|
||||
|
||||
# 添加默认网关指向 master
|
||||
sudo ip route add default via 172.17.0.15
|
||||
|
||||
# 验证网络连通性
|
||||
ping -c 2 172.17.0.15 && echo "✓ 可以访问 master" || echo "✗ 无法访问 master"
|
||||
ping -c 2 8.8.8.8 && echo "✓ 可以访问外网" || echo "✗ 无法访问外网"
|
||||
```
|
||||
|
||||
#### 1.4 SSH 密钥配置
|
||||
```bash
|
||||
# Master 节点生成 SSH 密钥
|
||||
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
|
||||
|
||||
# 将公钥复制到 Node1 和 Node2
|
||||
ssh-copy-id ubuntu@172.17.0.43
|
||||
ssh-copy-id ubuntu@172.17.0.34
|
||||
```
|
||||
|
||||
### 步骤 2: 基础环境准备(所有节点)
|
||||
|
||||
#### 2.1 系统更新
|
||||
```bash
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
sudo apt install -y curl wget vim net-tools gnupg lsb-release ca-certificates apt-transport-https
|
||||
```
|
||||
|
||||
#### 2.2 内核参数配置
|
||||
```bash
|
||||
# 加载内核模块
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
# 配置内核参数
|
||||
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF
|
||||
|
||||
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
EOF
|
||||
|
||||
sudo sysctl --system
|
||||
```
|
||||
|
||||
#### 2.3 禁用 Swap
|
||||
```bash
|
||||
sudo swapoff -a
|
||||
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
|
||||
```
|
||||
|
||||
#### 2.4 防火墙配置
|
||||
```bash
|
||||
sudo ufw disable
|
||||
```
|
||||
|
||||
### 步骤 3: 容器运行时安装(所有节点)
|
||||
|
||||
#### 3.1 安装 containerd
|
||||
```bash
|
||||
# 安装 containerd
|
||||
sudo apt update
|
||||
sudo apt install -y containerd
|
||||
|
||||
# ① 停止 containerd
|
||||
sudo systemctl stop containerd
|
||||
|
||||
# ② 生成默认配置
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
|
||||
|
||||
# ③ 注入镜像加速配置(docker.io/quay.io:腾讯云,其它:高校镜像优先)
|
||||
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/a\
|
||||
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n endpoint = ["https://mirror.ccs.tencentyun.com"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]\n endpoint = ["https://quay.tencentcloudcr.com"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]\n endpoint = ["https://ghcr.nju.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]\n endpoint = ["https://gcr.nju.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]\n endpoint = ["https://registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]\n endpoint = ["https://gcr.nju.edu.cn"]' /etc/containerd/config.toml
|
||||
|
||||
# ④ 重新加载并启动 containerd
|
||||
sudo systemctl daemon-reexec
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl restart containerd
|
||||
|
||||
# ⑤ 检查服务状态
|
||||
sudo systemctl status containerd --no-pager -l
|
||||
```
|
||||
|
||||
#### 3.2 安装 CNI 插件
|
||||
```bash
|
||||
# 下载 CNI 插件
|
||||
CNI_VERSION="v1.3.0"
|
||||
CNI_TGZ="cni-plugins-linux-amd64-${CNI_VERSION}.tgz"
|
||||
|
||||
# 下载 CNI 插件
|
||||
curl -L --fail --retry 3 --connect-timeout 10 \
|
||||
-o "$CNI_TGZ" \
|
||||
"https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/$CNI_TGZ"
|
||||
|
||||
# 安装 CNI 插件
|
||||
sudo mkdir -p /opt/cni/bin
|
||||
sudo tar -xzf "$CNI_TGZ" -C /opt/cni/bin/
|
||||
rm -f "$CNI_TGZ"
|
||||
```
|
||||
|
||||
### 步骤 4: Kubernetes 组件安装(所有节点)
|
||||
|
||||
#### 4.1 添加 Kubernetes 仓库
|
||||
```bash
|
||||
# 添加 Kubernetes 仓库 (pkgs.k8s.io v1.32)
|
||||
# 确保 keyrings 目录存在并可读
|
||||
sudo install -m 0755 -d /etc/apt/keyrings
|
||||
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
|
||||
sudo chmod a+r /etc/apt/keyrings/kubernetes-apt-keyring.gpg
|
||||
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list >/dev/null
|
||||
|
||||
# 更新包列表
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
#### 4.2 安装 Kubernetes 组件
|
||||
```bash
|
||||
# 安装 kubelet, kubeadm, kubectl
|
||||
sudo apt install -y kubelet kubeadm kubectl
|
||||
|
||||
# 锁定版本防止自动更新
|
||||
sudo apt-mark hold kubelet kubeadm kubectl
|
||||
```
|
||||
|
||||
#### 4.3 配置 kubelet
|
||||
```bash
|
||||
# 配置 kubelet
|
||||
sudo mkdir -p /var/lib/kubelet
|
||||
cat <<EOF | sudo tee /var/lib/kubelet/config.yaml
|
||||
apiVersion: kubelet.config.k8s.io/v1beta1
|
||||
kind: KubeletConfiguration
|
||||
authentication:
|
||||
anonymous:
|
||||
enabled: false
|
||||
webhook:
|
||||
enabled: true
|
||||
x509:
|
||||
clientCAFile: /etc/kubernetes/pki/ca.crt
|
||||
authorization:
|
||||
mode: Webhook
|
||||
clusterDomain: cluster.local
|
||||
clusterDNS:
|
||||
- 10.96.0.10
|
||||
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
|
||||
cgroupDriver: systemd
|
||||
failSwapOn: false
|
||||
hairpinMode: promiscuous-bridge
|
||||
healthzBindAddress: 127.0.0.1
|
||||
healthzPort: 10248
|
||||
httpCheckFrequency: 20s
|
||||
imageMinimumGCAge: 2m0s
|
||||
imageGCHighThresholdPercent: 85
|
||||
imageGCLowThresholdPercent: 80
|
||||
iptablesDropBit: 15
|
||||
iptablesMasqueradeBit: 15
|
||||
kubeAPIBurst: 10
|
||||
kubeAPIQPS: 5
|
||||
makeIPTablesUtilChains: true
|
||||
maxOpenFiles: 1000000
|
||||
maxPods: 110
|
||||
nodeStatusUpdateFrequency: 10s
|
||||
oomScoreAdj: -999
|
||||
podCIDR: 10.244.0.0/16
|
||||
registryBurst: 10
|
||||
registryPullQPS: 5
|
||||
resolvConf: /etc/resolv.conf
|
||||
rotateCertificates: true
|
||||
runtimeRequestTimeout: 2m0s
|
||||
serializeImagePulls: true
|
||||
serverTLSBootstrap: true
|
||||
streamingConnectionIdleTimeout: 4h0m0s
|
||||
syncFrequency: 1m0s
|
||||
volumeStatsAggPeriod: 1m0s
|
||||
EOF
|
||||
|
||||
# 启动 kubelet
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable kubelet
|
||||
sudo systemctl start kubelet
|
||||
```
|
||||
|
||||
### 步骤 5: 集群初始化(Master 节点)
|
||||
|
||||
#### 5.1 初始化集群
|
||||
```bash
|
||||
# 初始化 Kubernetes 集群
|
||||
sudo kubeadm init \
|
||||
--apiserver-advertise-address=172.17.0.15 \
|
||||
--control-plane-endpoint=172.17.0.15:6443 \
|
||||
--kubernetes-version=v1.32.3 \
|
||||
--service-cidr=10.96.0.0/12 \
|
||||
--pod-network-cidr=10.244.0.0/16 \
|
||||
--image-repository=registry.aliyuncs.com/google_containers \
|
||||
--upload-certs \
|
||||
--ignore-preflight-errors=Swap
|
||||
|
||||
# 配置 kubectl
|
||||
mkdir -p $HOME/.kube
|
||||
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
|
||||
sudo chown $(id -u):$(id -g) $HOME/.kube/config
|
||||
```
|
||||
|
||||
#### 5.2 生成节点加入命令
|
||||
```bash
|
||||
# 生成节点加入命令
|
||||
JOIN_COMMAND=$(kubeadm token create --print-join-command)
|
||||
echo "节点加入命令:"
|
||||
echo "$JOIN_COMMAND"
|
||||
echo "$JOIN_COMMAND" > node-join-command.txt
|
||||
```
|
||||
|
||||
### 步骤 6: 网络插件安装(Master 节点)
|
||||
|
||||
#### 6.1 下载 Flannel 清单
|
||||
```bash
|
||||
# 下载 Flannel v0.27.4
|
||||
FLANNEL_VER="v0.27.4"
|
||||
curl -fsSL https://raw.githubusercontent.com/flannel-io/flannel/${FLANNEL_VER}/Documentation/kube-flannel.yml -O
|
||||
|
||||
# Confirm the Pod CIDR in the Flannel manifest (left at the default 10.244.0.0/16, matching kubeadm init --pod-network-cidr)
sed -i 's|"Network": "10.244.0.0/16"|"Network": "10.244.0.0/16"|g' kube-flannel.yml
|
||||
```
|
||||
|
||||
#### 6.2 预拉取 Flannel 镜像
|
||||
```bash
|
||||
# 预拉取并打标签
|
||||
REGISTRY_K8S_MIRROR="registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"
|
||||
GHCR_MIRROR="ghcr.tencentcloudcr.com"
|
||||
|
||||
# 预拉取 pause 镜像
|
||||
sudo ctr -n k8s.io images pull ${REGISTRY_K8S_MIRROR}/pause:3.8 || true
|
||||
sudo ctr -n k8s.io images tag ${REGISTRY_K8S_MIRROR}/pause:3.8 registry.k8s.io/pause:3.8 || true
|
||||
|
||||
# 预拉取 flannel 镜像
|
||||
sudo ctr -n k8s.io images pull ${GHCR_MIRROR}/flannel-io/flannel:${FLANNEL_VER} || true
|
||||
sudo ctr -n k8s.io images tag ${GHCR_MIRROR}/flannel-io/flannel:${FLANNEL_VER} ghcr.io/flannel-io/flannel:${FLANNEL_VER} || true
|
||||
```
|
||||
|
||||
#### 6.3 安装 Flannel
|
||||
```bash
|
||||
# 安装 Flannel
|
||||
kubectl apply -f kube-flannel.yml
|
||||
|
||||
# 等待 Flannel 组件就绪
|
||||
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=600s
|
||||
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=600s
|
||||
|
||||
# 等待 CoreDNS 就绪
|
||||
kubectl -n kube-system rollout status deploy/coredns --timeout=600s
|
||||
```
|
||||
|
||||
### 步骤 7: 节点加入集群
|
||||
|
||||
#### 7.1 节点加入
|
||||
```bash
|
||||
# 检查是否存在加入命令文件
|
||||
if [ ! -f "node-join-command.txt" ]; then
|
||||
echo "错误: 找不到 node-join-command.txt 文件"
|
||||
echo "请先运行 k8s-step4-init-cluster.sh 初始化集群"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# 读取加入命令
|
||||
JOIN_COMMAND=$(cat node-join-command.txt)
|
||||
echo "使用加入命令: $JOIN_COMMAND"
|
||||
|
||||
# Node1 加入集群
|
||||
ssh ubuntu@172.17.0.43 "sudo $JOIN_COMMAND"
|
||||
|
||||
# Node2 加入集群
|
||||
ssh ubuntu@172.17.0.34 "sudo $JOIN_COMMAND"
|
||||
|
||||
# 等待节点加入
|
||||
sleep 30
|
||||
|
||||
# 验证集群状态
|
||||
kubectl get nodes
|
||||
kubectl get pods -n kube-system
|
||||
kubectl get pods -n kube-flannel
|
||||
```
|
||||
|
||||
### 步骤 8: 集群验证
|
||||
|
||||
#### 8.1 验证节点状态
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
#### 8.2 验证 Pod 状态
|
||||
```bash
|
||||
kubectl get pods -A
|
||||
```
|
||||
|
||||
#### 8.3 验证集群功能
|
||||
```bash
|
||||
# 检查集群信息
|
||||
kubectl cluster-info
|
||||
|
||||
# 检查节点详细信息
|
||||
kubectl describe nodes
|
||||
```
|
||||
|
||||
### 步骤 9: 为其他节点安装 kubectl
|
||||
|
||||
#### 9.1 在 node1 和 node2 安装 kubectl
|
||||
```bash
|
||||
# 检查是否已安装
|
||||
if command -v kubectl &> /dev/null; then
|
||||
echo "kubectl 已安装,版本: $(kubectl version --client 2>/dev/null | grep 'Client Version' || echo 'unknown')"
|
||||
echo "跳过安装"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# 安装 kubectl
|
||||
sudo apt update
|
||||
sudo apt install -y apt-transport-https ca-certificates curl
|
||||
|
||||
# Add the Kubernetes official GPG key (ensure the keyrings directory exists first)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
|
||||
|
||||
# 添加 Kubernetes apt 仓库
|
||||
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
|
||||
|
||||
# 更新包列表并安装 kubectl
|
||||
sudo apt update
|
||||
sudo apt install -y kubectl
|
||||
```
|
||||
|
||||
#### 9.2 复制 kubeconfig 配置文件
|
||||
```bash
|
||||
# 在 master 节点执行
|
||||
# 为 node1 创建 .kube 目录
|
||||
ssh ubuntu@172.17.0.43 "mkdir -p ~/.kube"
|
||||
|
||||
# 为 node2 创建 .kube 目录
|
||||
ssh ubuntu@172.17.0.34 "mkdir -p ~/.kube"
|
||||
|
||||
# 复制 kubeconfig 到 node1
|
||||
scp ~/.kube/config ubuntu@172.17.0.43:~/.kube/config
|
||||
|
||||
# 复制 kubeconfig 到 node2
|
||||
scp ~/.kube/config ubuntu@172.17.0.34:~/.kube/config
|
||||
```
|
||||
|
||||
#### 9.3 验证 kubectl 连接
|
||||
```bash
|
||||
# 验证 node1 kubectl 连接
|
||||
ssh ubuntu@172.17.0.43 "kubectl get nodes"
|
||||
|
||||
# 验证 node2 kubectl 连接
|
||||
ssh ubuntu@172.17.0.34 "kubectl get nodes"
|
||||
```
|
||||
|
||||
docs/kubernetes/k8s-prepare-env.sh (new file, 48 lines)
@@ -0,0 +1,48 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "==== Kubernetes 环境准备 ===="
|
||||
|
||||
# 1. 更新系统包
|
||||
echo "更新系统包..."
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
|
||||
# 2. 安装必要的工具
|
||||
echo "安装必要工具..."
|
||||
sudo apt install -y curl wget gnupg lsb-release ca-certificates apt-transport-https software-properties-common
|
||||
|
||||
# 3. 禁用 swap
|
||||
echo "禁用 swap..."
|
||||
sudo swapoff -a
|
||||
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
|
||||
|
||||
# 4. 配置内核参数
|
||||
echo "配置内核参数..."
|
||||
cat <<EOF_MODULES | sudo tee /etc/modules-load.d/k8s.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF_MODULES
|
||||
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
# 5. 配置 sysctl 参数
|
||||
echo "配置 sysctl 参数..."
|
||||
cat <<EOF_SYSCTL | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
EOF_SYSCTL
|
||||
|
||||
sudo sysctl --system
|
||||
|
||||
# 6. 配置防火墙
|
||||
echo "配置防火墙..."
|
||||
sudo ufw --force disable || true
|
||||
|
||||
# 按你的要求,不在节点上修改 /etc/hosts
|
||||
|
||||
echo "==== 环境准备完成 ===="
|
||||
echo "当前主机名: $(hostname)"
|
||||
echo "当前 IP: $(ip route get 1 | awk '{print $7; exit}')"
|
||||
echo "Swap 状态: $(swapon --show | wc -l) 个 swap 分区"
|
||||
docs/kubernetes/k8s-step1-prepare-env.sh (new file, 109 lines)
@@ -0,0 +1,109 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Kubernetes 环境准备脚本
|
||||
# 功能: 在所有节点准备 Kubernetes 运行环境
|
||||
|
||||
echo "==== Kubernetes 环境准备 ===="
|
||||
|
||||
# 定义节点列表
|
||||
NODES=("172.17.0.15:master" "172.17.0.43:node1" "172.17.0.34:node2")
|
||||
|
||||
# 本机 IP 与 SSH 选项
|
||||
LOCAL_IP=$(ip route get 1 | awk '{print $7; exit}')
|
||||
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
|
||||
# SSH 私钥(可用环境变量 SSH_KEY 覆盖),存在则自动携带
|
||||
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
|
||||
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""
|
||||
|
||||
# 函数:在所有节点执行命令
|
||||
execute_on_all_nodes() {
|
||||
local command="$1"
|
||||
local description="$2"
|
||||
|
||||
echo "==== $description ===="
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
echo "在 $hostname ($ip) 执行: $command"
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
bash -lc "$command"
|
||||
else
|
||||
ssh $SSH_OPTS $SSH_ID ubuntu@$ip "$command"
|
||||
fi
|
||||
done
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 函数:传输文件到所有节点
|
||||
copy_to_all_nodes() {
|
||||
local file="$1"
|
||||
echo "==== 传输文件 $file 到所有节点 ===="
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
echo "传输到 $hostname ($ip)"
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
cp -f "$file" ~/
|
||||
else
|
||||
scp $SSH_OPTS $SSH_ID "$file" ubuntu@$ip:~/
|
||||
fi
|
||||
done
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 创建环境准备脚本
|
||||
cat > k8s-prepare-env.sh << 'EOF_OUTER'
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "==== Kubernetes 环境准备 ===="
|
||||
|
||||
# 1. 更新系统包
|
||||
echo "更新系统包..."
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
|
||||
# 2. 安装必要的工具
|
||||
echo "安装必要工具..."
|
||||
sudo apt install -y curl wget gnupg lsb-release ca-certificates apt-transport-https software-properties-common
|
||||
|
||||
# 3. 禁用 swap
|
||||
echo "禁用 swap..."
|
||||
sudo swapoff -a
|
||||
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
|
||||
|
||||
# 4. 配置内核参数
|
||||
echo "配置内核参数..."
|
||||
cat <<EOF_MODULES | sudo tee /etc/modules-load.d/k8s.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF_MODULES
|
||||
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
# 5. 配置 sysctl 参数
|
||||
echo "配置 sysctl 参数..."
|
||||
cat <<EOF_SYSCTL | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
EOF_SYSCTL
|
||||
|
||||
sudo sysctl --system
|
||||
|
||||
# 6. 配置防火墙
|
||||
echo "配置防火墙..."
|
||||
sudo ufw --force disable || true
|
||||
|
||||
# 按你的要求,不在节点上修改 /etc/hosts
|
||||
|
||||
echo "==== 环境准备完成 ===="
|
||||
echo "当前主机名: $(hostname)"
|
||||
echo "当前 IP: $(ip route get 1 | awk '{print $7; exit}')"
|
||||
echo "Swap 状态: $(swapon --show | wc -l) 个 swap 分区"
|
||||
EOF_OUTER
|
||||
|
||||
chmod +x k8s-prepare-env.sh
|
||||
copy_to_all_nodes k8s-prepare-env.sh
|
||||
execute_on_all_nodes "./k8s-prepare-env.sh" "环境准备"
|
||||
|
||||
echo "==== 环境准备完成 ===="
|
||||
docs/kubernetes/k8s-step2-install-containerd.sh (new file, 133 lines)
@@ -0,0 +1,133 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Kubernetes 容器运行时安装脚本
|
||||
# 功能: 在所有节点安装 containerd 和 CNI 插件
|
||||
|
||||
echo "==== 安装容器运行时 (containerd) ===="
|
||||
|
||||
# 定义节点列表
|
||||
NODES=("172.17.0.15:master" "172.17.0.43:node1" "172.17.0.34:node2")
|
||||
|
||||
# 本机 IP 与 SSH 选项
|
||||
LOCAL_IP=$(ip route get 1 | awk '{print $7; exit}')
|
||||
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
|
||||
# SSH 私钥(可用环境变量 SSH_KEY 覆盖),存在则自动携带
|
||||
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
|
||||
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""
|
||||
|
||||
# 统一的工件目录与文件名(在 master 上下载一次后分发)
|
||||
ARTIFACTS_DIR="$HOME/k8s-artifacts"
|
||||
CNI_VERSION="v1.3.0"
|
||||
CNI_TGZ="cni-plugins-linux-amd64-${CNI_VERSION}.tgz"
|
||||
|
||||
# 函数:在所有节点执行命令
|
||||
execute_on_all_nodes() {
|
||||
local command="$1"
|
||||
local description="$2"
|
||||
|
||||
echo "==== $description ===="
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
echo "在 $hostname ($ip) 执行: $command"
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
bash -lc "$command"
|
||||
else
|
||||
ssh $SSH_OPTS $SSH_ID ubuntu@$ip "$command"
|
||||
fi
|
||||
done
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 函数:传输文件到所有节点
|
||||
copy_to_all_nodes() {
|
||||
local file="$1"
|
||||
echo "==== 传输文件 $file 到所有节点 ===="
|
||||
for node in "${NODES[@]}"; do
|
||||
IFS=':' read -r ip hostname <<< "$node"
|
||||
echo "传输到 $hostname ($ip)"
|
||||
if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
|
||||
cp -f "$file" ~/
|
||||
else
|
||||
scp $SSH_OPTS $SSH_ID "$file" ubuntu@$ip:~/
|
||||
fi
|
||||
done
|
||||
echo ""
|
||||
}
|
||||
|
||||
# 创建容器运行时安装脚本
|
||||
cat > k8s-install-containerd.sh << 'EOF_OUTER'
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "==== 安装容器运行时 (containerd) ===="
|
||||
|
||||
# 1. 安装 containerd
|
||||
echo "安装 containerd..."
|
||||
sudo apt update
|
||||
sudo apt install -y containerd
|
||||
|
||||
# 2. 配置 containerd
|
||||
echo "配置 containerd..."
|
||||
# ① 停止 containerd
|
||||
sudo systemctl stop containerd
|
||||
|
||||
# ② 生成默认配置
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
|
||||
|
||||
# ③ 注入镜像加速配置(docker.io/quay.io:腾讯云,其它:高校镜像优先)
|
||||
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/a\
|
||||
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n endpoint = ["https://mirror.ccs.tencentyun.com"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]\n endpoint = ["https://quay.tencentcloudcr.com"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]\n endpoint = ["https://ghcr.nju.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]\n endpoint = ["https://gcr.nju.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]\n endpoint = ["https://registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"]\n [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]\n endpoint = ["https://gcr.nju.edu.cn"]' /etc/containerd/config.toml
|
||||
|
||||
# ④ 重新加载并启动 containerd
|
||||
sudo systemctl daemon-reexec
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl restart containerd
|
||||
|
||||
## 4. 在 master 预下载 CNI 压缩包并分发到各节点
|
||||
echo "准备 CNI 工件并分发..."
|
||||
if [ "$LOCAL_IP" = "172.17.0.15" ]; then
|
||||
mkdir -p "$ARTIFACTS_DIR"
|
||||
if [ ! -f "$ARTIFACTS_DIR/$CNI_TGZ" ]; then
|
||||
echo "在 master 下载 $CNI_TGZ ..."
|
||||
curl -L --fail --retry 3 --connect-timeout 10 \
|
||||
-o "$ARTIFACTS_DIR/$CNI_TGZ" \
|
||||
"https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/$CNI_TGZ"
|
||||
else
|
||||
echo "已存在 $ARTIFACTS_DIR/$CNI_TGZ,跳过下载"
|
||||
fi
|
||||
# 分发到所有节点 home 目录
|
||||
copy_to_all_nodes "$ARTIFACTS_DIR/$CNI_TGZ"
|
||||
fi
|
||||
|
||||
# 5. 安装 CNI 插件(优先使用已分发的本地文件)
|
||||
echo "安装 CNI 插件..."
|
||||
sudo mkdir -p /opt/cni/bin
|
||||
if [ -f "$CNI_TGZ" ]; then
|
||||
echo "使用已分发的 $CNI_TGZ 进行安装"
|
||||
sudo tar -xzf "$CNI_TGZ" -C /opt/cni/bin/
|
||||
rm -f "$CNI_TGZ"
|
||||
else
|
||||
echo "未找到本地 $CNI_TGZ,尝试在线下载(网络慢时可能用时较长)..."
|
||||
curl -L --fail --retry 3 --connect-timeout 10 \
|
||||
-o "$CNI_TGZ" \
|
||||
"https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/$CNI_TGZ"
|
||||
sudo tar -xzf "$CNI_TGZ" -C /opt/cni/bin/
|
||||
rm -f "$CNI_TGZ"
|
||||
fi
|
||||
|
||||
# 6. 验证安装
|
||||
echo "==== 验证 containerd 安装 ===="
|
||||
sudo systemctl status containerd --no-pager -l
|
||||
sudo ctr version
|
||||
|
||||
echo "==== containerd 安装完成 ===="
|
||||
EOF_OUTER
|
||||
|
||||
chmod +x k8s-install-containerd.sh
|
||||
copy_to_all_nodes k8s-install-containerd.sh
|
||||
execute_on_all_nodes "./k8s-install-containerd.sh" "安装容器运行时"
|
||||
|
||||
echo "==== 容器运行时安装完成 ===="
|
||||
|
||||
docs/kubernetes/k8s-step3-install-components.sh (new file, 149 lines)
@@ -0,0 +1,149 @@
#!/bin/bash
set -e

# Kubernetes 组件安装脚本
# 功能: 在所有节点安装 kubelet, kubeadm, kubectl

echo "==== 安装 Kubernetes 组件 ===="

# 定义节点列表
NODES=("172.17.0.15:master" "172.17.0.43:node1" "172.17.0.34:node2")

# 本机 IP 与 SSH 选项
LOCAL_IP=$(ip route get 1 | awk '{print $7; exit}')
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
# SSH 私钥(可用环境变量 SSH_KEY 覆盖),存在则自动携带
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""

# 函数:在所有节点执行命令
execute_on_all_nodes() {
    local command="$1"
    local description="$2"

    echo "==== $description ===="
    for node in "${NODES[@]}"; do
        IFS=':' read -r ip hostname <<< "$node"
        echo "在 $hostname ($ip) 执行: $command"
        if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
            bash -lc "$command"
        else
            ssh $SSH_OPTS $SSH_ID ubuntu@$ip "$command"
        fi
    done
    echo ""
}

# 函数:传输文件到所有节点
copy_to_all_nodes() {
    local file="$1"
    echo "==== 传输文件 $file 到所有节点 ===="
    for node in "${NODES[@]}"; do
        IFS=':' read -r ip hostname <<< "$node"
        echo "传输到 $hostname ($ip)"
        if [ "$ip" = "$LOCAL_IP" ] || [ "$hostname" = "master" ]; then
            cp -f "$file" ~/
        else
            scp $SSH_OPTS $SSH_ID "$file" ubuntu@$ip:~/
        fi
    done
    echo ""
}

# 创建 Kubernetes 组件安装脚本
cat > k8s-install-components.sh << 'EOF_OUTER'
#!/bin/bash
set -e

echo "==== 安装 Kubernetes 组件 ===="

# 1. 添加 Kubernetes 仓库
echo "添加 Kubernetes 仓库 (pkgs.k8s.io v1.32)..."
# 确保 keyrings 目录存在并可读
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod a+r /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list >/dev/null

# 2. 更新包列表
echo "更新包列表..."
sudo apt update

# 3. 安装 Kubernetes 组件(使用 v1.32 通道的最新补丁版本)
echo "安装 Kubernetes 组件..."
sudo apt install -y kubelet kubeadm kubectl

# 4. 锁定版本防止自动更新
echo "锁定 Kubernetes 版本..."
sudo apt-mark hold kubelet kubeadm kubectl

# 5. 配置 kubelet
echo "配置 kubelet..."
sudo mkdir -p /var/lib/kubelet
cat <<EOF_KUBELET | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
cgroupDriver: systemd
failSwapOn: false
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
iptablesDropBit: 15
iptablesMasqueradeBit: 15
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podCIDR: 10.244.0.0/16
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
serverTLSBootstrap: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF_KUBELET

# 6. 启动 kubelet
echo "启动 kubelet..."
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet

# 7. 验证安装
echo "==== 验证 Kubernetes 组件安装 ===="
kubelet --version
kubeadm version
kubectl version --client

echo "==== Kubernetes 组件安装完成 ===="
EOF_OUTER

chmod +x k8s-install-components.sh
copy_to_all_nodes k8s-install-components.sh
execute_on_all_nodes "./k8s-install-components.sh" "安装 Kubernetes 组件"

echo "==== Kubernetes 组件安装完成 ===="
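The script above installs whatever the newest v1.32 patch release is, while step 4 pins `--kubernetes-version=v1.32.3`; once the repository moves past 1.32.3 the two can drift apart. A hedged sketch for pinning an exact patch on every node (the `1.32.3-1.1` revision suffix is an assumption; check `apt-cache madison kubeadm` for the real string):

```bash
# See which package revisions the v1.32 channel offers
apt-cache madison kubeadm | head -n 5

# Install one specific patch release and hold it (revision suffix may differ)
VERSION="1.32.3-1.1"
sudo apt install -y kubelet="$VERSION" kubeadm="$VERSION" kubectl="$VERSION"
sudo apt-mark hold kubelet kubeadm kubectl
```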
docs/kubernetes/k8s-step4-init-cluster.sh (Normal file, 40 lines added)
@@ -0,0 +1,40 @@
#!/bin/bash
set -e

# Kubernetes 集群初始化脚本
# 功能: 在 Master 节点初始化 Kubernetes 集群

echo "==== 初始化 Kubernetes 集群 ===="

# 1. 初始化集群
echo "初始化 Kubernetes 集群..."
sudo kubeadm init \
    --apiserver-advertise-address=172.17.0.15 \
    --control-plane-endpoint=172.17.0.15:6443 \
    --kubernetes-version=v1.32.3 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --upload-certs \
    --ignore-preflight-errors=Swap

# 2. 配置 kubectl
echo "配置 kubectl..."
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. 生成节点加入命令
echo "生成节点加入命令..."
JOIN_COMMAND=$(kubeadm token create --print-join-command)
echo "节点加入命令:"
echo "$JOIN_COMMAND"
echo "$JOIN_COMMAND" > node-join-command.txt

# 4. 验证集群状态
echo "==== 验证集群状态 ===="
kubectl get nodes
kubectl get pods -n kube-system

echo "==== 集群初始化完成 ===="
echo "请保存节点加入命令,稍后用于将 node1 和 node2 加入集群"
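One follow-up that is easy to miss: the kubelet configuration written in step 3 enables `serverTLSBootstrap: true` (kubeadm init/join normally regenerates `/var/lib/kubelet/config.yaml`, so whether the setting survives depends on the final config). If it does take effect, each kubelet's serving-certificate request must be approved by hand, otherwise `kubectl logs`/`kubectl exec` and metrics scraping can fail with TLS errors. A minimal sketch for checking and approving pending CSRs on the master:

```bash
# List certificate signing requests; kubelet serving CSRs show up as Pending
kubectl get csr

# Approve everything still pending (review the list first on a real cluster)
kubectl get csr -o name | xargs -r kubectl certificate approve
```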
docs/kubernetes/k8s-step5-install-flannel.sh (Normal file, 77 lines added)
@@ -0,0 +1,77 @@
#!/bin/bash
set -e

# Kubernetes 网络插件安装脚本
# 功能: 在 Master 节点安装 Flannel 网络插件

echo "==== 安装 Flannel 网络插件 ===="

# 1. 下载 Flannel 配置文件
echo "下载 Flannel 配置文件..."
FLANNEL_VER="v0.27.4"
curl -fsSL https://raw.githubusercontent.com/flannel-io/flannel/${FLANNEL_VER}/Documentation/kube-flannel.yml -O

# 2. 确认 Flannel Pod 网段(需与 kubeadm --pod-network-cidr 一致,默认 10.244.0.0/16 时无需改动)
echo "确认 Flannel Pod 网段..."
# 默认网段已与集群一致,此处为原样替换;若集群使用其他网段,请改写右侧的值
sed -i 's|"Network": "10.244.0.0/16"|"Network": "10.244.0.0/16"|g' kube-flannel.yml

echo "预拉取 Flannel 相关镜像(优先国内镜像域名,拉取后回标官方名)..."
DOCKER_MIRROR="docker.m.daocloud.io"
REGISTRY_K8S_MIRROR="registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"
GHCR_MIRROR="ghcr.tencentcloudcr.com"

IMAGES=(
    "registry.k8s.io/pause:3.8"
    "ghcr.io/flannel-io/flannel:${FLANNEL_VER}"
)

pull_and_tag() {
    local origin_ref="$1"   # e.g. registry.k8s.io/pause:3.8
    local mirror_ref="$2"   # e.g. registry-k8s-io.mirrors.sjtug.sjtu.edu.cn/pause:3.8
    echo "尝试从镜像 ${mirror_ref} 预拉取..."
    for i in $(seq 1 5); do
        if sudo ctr -n k8s.io images pull "${mirror_ref}"; then
            echo "打官方标签: ${origin_ref} <- ${mirror_ref}"
            sudo ctr -n k8s.io images tag "${mirror_ref}" "${origin_ref}" || true
            return 0
        fi
        echo "pull 失败,重试 ${i}/5..."; sleep 2
    done
    return 1
}

# 预拉取 pause 镜像
echo "预拉取: registry.k8s.io/pause:3.8"
if pull_and_tag "registry.k8s.io/pause:3.8" "${REGISTRY_K8S_MIRROR}/pause:3.8"; then
    echo "pause 镜像拉取成功"
else
    echo "WARN: pause 镜像拉取失败,将由 kubelet 重试"
fi

# 预拉取 flannel 镜像
echo "预拉取: ghcr.io/flannel-io/flannel:${FLANNEL_VER}"
if pull_and_tag "ghcr.io/flannel-io/flannel:${FLANNEL_VER}" "${GHCR_MIRROR}/flannel-io/flannel:${FLANNEL_VER}"; then
    echo "flannel 镜像拉取成功"
else
    echo "WARN: flannel 镜像拉取失败,将由 kubelet 重试"
fi

# 3. 安装 Flannel
echo "安装 Flannel..."
kubectl apply -f kube-flannel.yml

# 4. 等待 Flannel 启动
echo "等待 Flannel 组件就绪..."
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=600s || true
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=600s || true

echo "等待 CoreDNS 由 Pending 变为 Ready..."
kubectl -n kube-system rollout status deploy/coredns --timeout=600s || true

# 5. 验证网络插件
echo "==== 验证 Flannel 安装 ===="
kubectl get pods -n kube-flannel
kubectl get nodes

echo "==== Flannel 网络插件安装完成 ===="
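The `Network` value in Flannel's net-conf.json must match the `--pod-network-cidr` passed to `kubeadm init`; with the defaults used in step 4 nothing needs to change, which is why the `sed` above is effectively a no-op. A hedged example for a cluster initialized with a different CIDR (the `10.200.0.0/16` value is only an illustration):

```bash
# Rewrite the Flannel network to the CIDR actually passed to kubeadm init
POD_CIDR="10.200.0.0/16"
sed -i "s|\"Network\": \"10.244.0.0/16\"|\"Network\": \"${POD_CIDR}\"|" kube-flannel.yml

# After apply, every node should carry a podCIDR from that range
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```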
docs/kubernetes/k8s-step6-join-nodes.sh (Normal file, 53 lines added)
@@ -0,0 +1,53 @@
#!/bin/bash
set -e

# Kubernetes 节点加入脚本
# 功能: 将 Node1 和 Node2 加入 Kubernetes 集群

echo "==== 将节点加入 Kubernetes 集群 ===="

# 检查是否存在加入命令文件
if [ ! -f "node-join-command.txt" ]; then
    echo "错误: 找不到 node-join-command.txt 文件"
    echo "请先运行 k8s-step4-init-cluster.sh 初始化集群"
    exit 1
fi

# 读取加入命令
JOIN_COMMAND=$(cat node-join-command.txt)

# SSH 选项与密钥
SSH_OPTS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes'
SSH_KEY_PATH=${SSH_KEY:-$HOME/.ssh/id_rsa}
[ -f "$SSH_KEY_PATH" ] && SSH_ID="-i $SSH_KEY_PATH" || SSH_ID=""
echo "使用加入命令: $JOIN_COMMAND"

# 定义节点列表
NODES=("172.17.0.43:node1" "172.17.0.34:node2")

# 将节点加入集群
for node in "${NODES[@]}"; do
    IFS=':' read -r ip hostname <<< "$node"
    echo "==== 将 $hostname ($ip) 加入集群 ===="
    ssh $SSH_OPTS $SSH_ID ubuntu@$ip "sudo $JOIN_COMMAND"
    echo "$hostname 加入完成"
done

# 等待节点加入
echo "==== 等待节点加入集群 ===="
sleep 30

# 验证集群状态
echo "==== 验证集群状态 ===="
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n kube-flannel

echo "==== 节点加入完成 ===="
echo "集群信息:"
echo "- Master: 172.17.0.15"
echo "- Node1: 172.17.0.43"
echo "- Node2: 172.17.0.34"
echo "- Kubernetes 版本: v1.32.3"
echo "- 网络插件: Flannel"
echo "- 容器运行时: containerd"
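Right after joining, new workers usually need a minute or two to pull the Flannel and pause images, and they appear without a role label. The snippet below is an optional convenience, not something the scripts require; the label key is just the usual convention:

```bash
# Block until every node reports Ready (or give up after 5 minutes)
kubectl wait --for=condition=Ready node --all --timeout=300s

# Optional: make `kubectl get nodes` show a worker role for node1/node2
kubectl label node node1 node2 node-role.kubernetes.io/worker= --overwrite
kubectl get nodes -o wide
```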
docs/kubernetes/kube-flannel.yml (Normal file, 211 lines added)
@@ -0,0 +1,211 @@
|
||||
---
|
||||
kind: Namespace
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: kube-flannel
|
||||
labels:
|
||||
k8s-app: flannel
|
||||
pod-security.kubernetes.io/enforce: privileged
|
||||
---
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: flannel
|
||||
name: flannel
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes/status
|
||||
verbs:
|
||||
- patch
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: flannel
|
||||
name: flannel
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: flannel
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: flannel
|
||||
namespace: kube-flannel
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: flannel
|
||||
name: flannel
|
||||
namespace: kube-flannel
|
||||
---
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: kube-flannel-cfg
|
||||
namespace: kube-flannel
|
||||
labels:
|
||||
tier: node
|
||||
k8s-app: flannel
|
||||
app: flannel
|
||||
data:
|
||||
cni-conf.json: |
|
||||
{
|
||||
"name": "cbr0",
|
||||
"cniVersion": "0.3.1",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "flannel",
|
||||
"delegate": {
|
||||
"hairpinMode": true,
|
||||
"isDefaultGateway": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"capabilities": {
|
||||
"portMappings": true
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
net-conf.json: |
|
||||
{
|
||||
"Network": "10.244.0.0/16",
|
||||
"EnableNFTables": false,
|
||||
"Backend": {
|
||||
"Type": "vxlan"
|
||||
}
|
||||
}
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: kube-flannel-ds
|
||||
namespace: kube-flannel
|
||||
labels:
|
||||
tier: node
|
||||
app: flannel
|
||||
k8s-app: flannel
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: flannel
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
tier: node
|
||||
app: flannel
|
||||
spec:
|
||||
affinity:
|
||||
nodeAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: kubernetes.io/os
|
||||
operator: In
|
||||
values:
|
||||
- linux
|
||||
hostNetwork: true
|
||||
priorityClassName: system-node-critical
|
||||
tolerations:
|
||||
- operator: Exists
|
||||
effect: NoSchedule
|
||||
serviceAccountName: flannel
|
||||
initContainers:
|
||||
- name: install-cni-plugin
|
||||
image: ghcr.io/flannel-io/flannel-cni-plugin:v1.8.0-flannel1
|
||||
command:
|
||||
- cp
|
||||
args:
|
||||
- -f
|
||||
- /flannel
|
||||
- /opt/cni/bin/flannel
|
||||
volumeMounts:
|
||||
- name: cni-plugin
|
||||
mountPath: /opt/cni/bin
|
||||
- name: install-cni
|
||||
image: ghcr.io/flannel-io/flannel:v0.27.4
|
||||
command:
|
||||
- cp
|
||||
args:
|
||||
- -f
|
||||
- /etc/kube-flannel/cni-conf.json
|
||||
- /etc/cni/net.d/10-flannel.conflist
|
||||
volumeMounts:
|
||||
- name: cni
|
||||
mountPath: /etc/cni/net.d
|
||||
- name: flannel-cfg
|
||||
mountPath: /etc/kube-flannel/
|
||||
containers:
|
||||
- name: kube-flannel
|
||||
image: ghcr.io/flannel-io/flannel:v0.27.4
|
||||
command:
|
||||
- /opt/bin/flanneld
|
||||
args:
|
||||
- --ip-masq
|
||||
- --kube-subnet-mgr
|
||||
resources:
|
||||
requests:
|
||||
cpu: "100m"
|
||||
memory: "50Mi"
|
||||
securityContext:
|
||||
privileged: false
|
||||
capabilities:
|
||||
add: ["NET_ADMIN", "NET_RAW"]
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
- name: EVENT_QUEUE_DEPTH
|
||||
value: "5000"
|
||||
- name: CONT_WHEN_CACHE_NOT_READY
|
||||
value: "false"
|
||||
volumeMounts:
|
||||
- name: run
|
||||
mountPath: /run/flannel
|
||||
- name: flannel-cfg
|
||||
mountPath: /etc/kube-flannel/
|
||||
- name: xtables-lock
|
||||
mountPath: /run/xtables.lock
|
||||
volumes:
|
||||
- name: run
|
||||
hostPath:
|
||||
path: /run/flannel
|
||||
- name: cni-plugin
|
||||
hostPath:
|
||||
path: /opt/cni/bin
|
||||
- name: cni
|
||||
hostPath:
|
||||
path: /etc/cni/net.d
|
||||
- name: flannel-cfg
|
||||
configMap:
|
||||
name: kube-flannel-cfg
|
||||
- name: xtables-lock
|
||||
hostPath:
|
||||
path: /run/xtables.lock
|
||||
type: FileOrCreate
|
||||
docs/kubernetes/setup-master-gateway.sh (Normal file, 51 lines added)
@@ -0,0 +1,51 @@
#!/bin/bash
set -e

echo "==== 配置 Master 节点作为网关 ===="

# 1. 启用 IP 转发
echo "启用 IP 转发..."
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# 2. 配置 iptables NAT 规则
echo "配置 iptables NAT 规则..."
# 清空现有规则
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -X
sudo iptables -t nat -X
sudo iptables -t mangle -X

# 设置默认策略
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

# 配置 NAT 规则 - 允许内网节点通过 master 访问外网
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/20 -o eth0 -j MASQUERADE

# 允许转发来自内网的流量
sudo iptables -A FORWARD -s 172.17.0.0/20 -j ACCEPT
sudo iptables -A FORWARD -d 172.17.0.0/20 -j ACCEPT

# 3. 保存 iptables 规则
echo "保存 iptables 规则..."
sudo apt update
sudo apt install -y iptables-persistent
sudo netfilter-persistent save

# 4. 验证配置
echo "==== 验证配置 ===="
echo "IP 转发状态:"
cat /proc/sys/net/ipv4/ip_forward

echo "当前 iptables NAT 规则:"
sudo iptables -t nat -L -n -v

echo "当前 iptables FORWARD 规则:"
sudo iptables -L FORWARD -n -v

echo "==== Master 网关配置完成 ===="
echo "Master 节点现在可以作为内网节点的网关使用"
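The MASQUERADE rule above assumes the master's outbound interface is `eth0`; on other images it may be `ens3`/`ens5`, so the interface name is worth double-checking before trusting the gateway. The real test should then run from one of the internal nodes (after `setup-node1.sh`/`setup-node2.sh`), not from the master:

```bash
# On the master: confirm which interface actually carries the default route
ip route show default

# On node1/node2: the default route should point at the master, and DNS + HTTPS should work through the NAT
ip route show default
getent hosts pkgs.k8s.io || echo "DNS lookup failed"
curl -fsSI https://pkgs.k8s.io >/dev/null && echo "outbound HTTPS OK" || echo "outbound HTTPS failed"
```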
docs/kubernetes/setup-node1.sh (Normal file, 26 lines added)
@@ -0,0 +1,26 @@
#!/bin/bash
set -e

echo "==== 配置 Node1 (172.17.0.43) 网络路由 ===="

echo "==== 当前状态 ===="
echo "当前主机名: $(hostname)"
echo "当前 IP: $(ip route get 1 | awk '{print $7; exit}')"

# 配置网络路由 - 通过 master 访问外网
echo "配置网络路由..."
# 删除默认网关(如果存在)
sudo ip route del default 2>/dev/null || true

# 添加默认网关指向 master
sudo ip route add default via 172.17.0.15

echo "==== 验证网络配置 ===="
echo "当前路由表:"
ip route show

echo "测试网络连通性:"
ping -c 2 172.17.0.15 && echo "✓ 可以访问 master" || echo "✗ 无法访问 master"
ping -c 2 8.8.8.8 && echo "✓ 可以访问外网" || echo "✗ 无法访问外网"

echo "==== Node1 网络路由配置完成 ===="
docs/kubernetes/setup-node2.sh (Normal file, 26 lines added)
@@ -0,0 +1,26 @@
#!/bin/bash
set -e

echo "==== 配置 Node2 (172.17.0.34) 网络路由 ===="

echo "==== 当前状态 ===="
echo "当前主机名: $(hostname)"
echo "当前 IP: $(ip route get 1 | awk '{print $7; exit}')"

# 配置网络路由 - 通过 master 访问外网
echo "配置网络路由..."
# 删除默认网关(如果存在)
sudo ip route del default 2>/dev/null || true

# 添加默认网关指向 master
sudo ip route add default via 172.17.0.15

echo "==== 验证网络配置 ===="
echo "当前路由表:"
ip route show

echo "测试网络连通性:"
ping -c 2 172.17.0.15 && echo "✓ 可以访问 master" || echo "✗ 无法访问 master"
ping -c 2 8.8.8.8 && echo "✓ 可以访问外网" || echo "✗ 无法访问外网"

echo "==== Node2 网络路由配置完成 ===="
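One caveat for both node scripts: a route added with `ip route add` does not survive a reboot. On Ubuntu the usual way to persist it is netplan. The sketch below is assumption-heavy (the interface name `eth0`, the drop-in file name, and the absence of a conflicting default route in other netplan files are all placeholders to adapt), so treat it as a starting point rather than a drop-in:

```bash
# Persist the master as default gateway across reboots (adjust interface/file names first)
sudo tee /etc/netplan/99-default-via-master.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      routes:
        - to: 0.0.0.0/0
          via: 172.17.0.15
EOF
sudo netplan apply
ip route show default
```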
go.mod (2 lines changed)
@@ -40,6 +40,7 @@ require (
|
||||
github.com/dimiro1/reply v0.0.0-20200315094148-d0136a4c9e21
|
||||
github.com/djherbis/buffer v1.2.0
|
||||
github.com/djherbis/nio/v3 v3.0.1
|
||||
github.com/docker/go-connections v0.4.0
|
||||
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5
|
||||
github.com/dustin/go-humanize v1.0.1
|
||||
github.com/editorconfig/editorconfig-core-go/v2 v2.6.3
|
||||
@@ -139,7 +140,6 @@ require (
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
|
||||
github.com/distribution/reference v0.5.0 // indirect
|
||||
github.com/docker/distribution v2.8.3+incompatible // indirect
|
||||
github.com/docker/go-connections v0.4.0 // indirect
|
||||
github.com/docker/go-units v0.5.0 // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
||||
github.com/go-logr/logr v1.4.2 // indirect
|
||||
|
||||
@@ -1,7 +1,13 @@
|
||||
package devcontainer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"net/http"
|
||||
|
||||
"code.gitea.io/gitea/models/db"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
)
|
||||
|
||||
type Devcontainer struct {
|
||||
@@ -29,7 +35,122 @@ type DevcontainerOutput struct {
|
||||
DevcontainerId int64 `xorm:"BIGINT NOT NULL FK('devcontainer_id') REFERENCES devcontainer(id) ON DELETE CASCADE 'devcontainer_id' comment('devcontainer表主键')"`
|
||||
}
|
||||
|
||||
type DevcontainerScript struct {
|
||||
Id int64 `xorm:"BIGINT pk NOT NULL autoincr 'id' comment('主键,devContainerId')"`
|
||||
RepoId int64 `xorm:"BIGINT NOT NULL 'repo_id' comment('repository表主键')"`
|
||||
UserId int64 `xorm:"BIGINT NOT NULL 'user_id' comment('user表主键')"`
|
||||
VariableName string `xorm:"NOT NULL 'variable_name' comment('变量名')"`
|
||||
}
|
||||
|
||||
func init() {
|
||||
|
||||
db.RegisterModel(new(Devcontainer))
|
||||
db.RegisterModel(new(DevcontainerScript))
|
||||
db.RegisterModel(new(DevcontainerOutput))
|
||||
}
|
||||
func GetScript(ctx context.Context, userId, repoID int64) (map[string]string, error) {
|
||||
variables := make(map[string]string)
|
||||
var devstarVariables []*DevcontainerVariable
|
||||
var name []string
|
||||
// Devstar level
|
||||
// 从远程获取Devstar变量
|
||||
client := &http.Client{}
|
||||
req, err := http.NewRequest("GET", "http://devstar.cn/variables/export", nil)
|
||||
if err != nil {
|
||||
log.Error("Failed to create request for devstar variables: %v", err)
|
||||
} else {
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
log.Error("Failed to fetch devstar variables: %v", err)
|
||||
} else {
|
||||
defer resp.Body.Close()
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
log.Error("Failed to read devstar variables response: %v", err)
|
||||
} else {
|
||||
|
||||
err = json.Unmarshal(body, &devstarVariables)
|
||||
if err != nil {
|
||||
log.Error("Failed to unmarshal devstar variables: %v", err)
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Global
|
||||
err = db.GetEngine(ctx).
|
||||
Select("variable_name").
|
||||
Table("devcontainer_script").
|
||||
Where("user_id = ? AND repo_id = ?", 0, 0).
|
||||
Find(&name)
|
||||
|
||||
globalVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{})
|
||||
if err != nil {
|
||||
log.Error("find global variables: %v", err)
|
||||
return nil, err
|
||||
}
|
||||
// 过滤出name在variableNames中的变量
|
||||
globalVariables = append(devstarVariables, globalVariables...)
|
||||
var filteredGlobalVars []*DevcontainerVariable
|
||||
for _, v := range globalVariables {
|
||||
if contains(name, v.Name) {
|
||||
filteredGlobalVars = append(filteredGlobalVars, v)
|
||||
}
|
||||
}
|
||||
|
||||
// Org / User level
|
||||
err = db.GetEngine(ctx).
|
||||
Select("variable_name").
|
||||
Table("devcontainer_script").
|
||||
Where("user_id = ? AND repo_id = ?", userId, 0).
|
||||
Find(&name)
|
||||
ownerVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{OwnerID: userId})
|
||||
if err != nil {
|
||||
log.Error("find variables of org: %d, error: %v", userId, err)
|
||||
return nil, err
|
||||
}
|
||||
// 过滤出name在variableNames中的变量
|
||||
ownerVariables = append(devstarVariables, ownerVariables...)
|
||||
var filteredOwnerVars []*DevcontainerVariable
|
||||
for _, v := range ownerVariables {
|
||||
if contains(name, v.Name) {
|
||||
filteredOwnerVars = append(filteredOwnerVars, v)
|
||||
}
|
||||
}
|
||||
// Repo level
|
||||
err = db.GetEngine(ctx).
|
||||
Select("variable_name").
|
||||
Table("devcontainer_script").
|
||||
Where("repo_id = ?", repoID).
|
||||
Find(&name)
|
||||
repoVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{RepoID: repoID})
|
||||
if err != nil {
|
||||
log.Error("find variables of repo: %d, error: %v", repoID, err)
|
||||
return nil, err
|
||||
}
|
||||
// 过滤出name在variableNames中的变量
|
||||
repoVariables = append(devstarVariables, repoVariables...)
|
||||
var filteredRepoVars []*DevcontainerVariable
|
||||
for _, v := range repoVariables {
|
||||
if contains(name, v.Name) {
|
||||
filteredRepoVars = append(filteredRepoVars, v)
|
||||
}
|
||||
}
|
||||
// Level precedence: Org / User > Repo > Global
|
||||
for _, v := range append(filteredGlobalVars, append(filteredRepoVars, filteredOwnerVars...)...) {
|
||||
variables[v.Name] = v.Data
|
||||
}
|
||||
|
||||
return variables, nil
|
||||
}
|
||||
|
||||
// contains 检查字符串切片中是否包含指定的字符串
|
||||
func contains(slice []string, item string) bool {
|
||||
for _, s := range slice {
|
||||
if s == item {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
models/devcontainer/variable.go (Normal file, 156 lines added)
@@ -0,0 +1,156 @@
|
||||
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||
// SPDX-License-Identifier: MIT
|
||||
|
||||
package devcontainer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strings"
|
||||
|
||||
"code.gitea.io/gitea/models/db"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
"code.gitea.io/gitea/modules/timeutil"
|
||||
"code.gitea.io/gitea/modules/util"
|
||||
|
||||
"xorm.io/builder"
|
||||
)
|
||||
|
||||
// DevcontainerVariable represents a variable that can be used in actions
|
||||
//
|
||||
// It can be:
|
||||
// 1. global variable, OwnerID is 0 and RepoID is 0
|
||||
// 2. org/user level variable, OwnerID is org/user ID and RepoID is 0
|
||||
// 3. repo level variable, OwnerID is 0 and RepoID is repo ID
|
||||
//
|
||||
// Please note that it's not acceptable to have both OwnerID and RepoID to be non-zero,
|
||||
// or it will be complicated to find variables belonging to a specific owner.
|
||||
// For example, conditions like `OwnerID = 1` will also return variable {OwnerID: 1, RepoID: 1},
|
||||
// but it's a repo level variable, not an org/user level variable.
|
||||
// To avoid this, make it clear with {OwnerID: 0, RepoID: 1} for repo level variables.
|
||||
type DevcontainerVariable struct {
|
||||
ID int64 `xorm:"pk autoincr"`
|
||||
OwnerID int64 `xorm:"UNIQUE(owner_repo_name)"`
|
||||
RepoID int64 `xorm:"INDEX UNIQUE(owner_repo_name)"`
|
||||
Name string `xorm:"UNIQUE(owner_repo_name) NOT NULL"`
|
||||
Data string `xorm:"LONGTEXT NOT NULL"`
|
||||
Description string `xorm:"TEXT"`
|
||||
CreatedUnix timeutil.TimeStamp `xorm:"created NOT NULL"`
|
||||
UpdatedUnix timeutil.TimeStamp `xorm:"updated"`
|
||||
}
|
||||
|
||||
const (
|
||||
VariableDescriptionMaxLength = 4096
|
||||
)
|
||||
|
||||
func init() {
|
||||
db.RegisterModel(new(DevcontainerVariable))
|
||||
}
|
||||
|
||||
func InsertVariable(ctx context.Context, ownerID, repoID int64, name, data, description string) (*DevcontainerVariable, error) {
|
||||
if ownerID != 0 && repoID != 0 {
|
||||
// It's trying to create a variable that belongs to a repository, but OwnerID has been set accidentally.
|
||||
// Remove OwnerID to avoid confusion; it's not worth returning an error here.
|
||||
ownerID = 0
|
||||
}
|
||||
|
||||
description = util.TruncateRunes(description, VariableDescriptionMaxLength)
|
||||
|
||||
variable := &DevcontainerVariable{
|
||||
OwnerID: ownerID,
|
||||
RepoID: repoID,
|
||||
Name: strings.ToUpper(name),
|
||||
Data: data,
|
||||
Description: description,
|
||||
}
|
||||
return variable, db.Insert(ctx, variable)
|
||||
}
|
||||
|
||||
type FindVariablesOpts struct {
|
||||
db.ListOptions
|
||||
IDs []int64
|
||||
RepoID int64
|
||||
OwnerID int64 // it will be ignored if RepoID is set
|
||||
Name string
|
||||
}
|
||||
|
||||
func (opts FindVariablesOpts) ToConds() builder.Cond {
|
||||
cond := builder.NewCond()
|
||||
|
||||
if len(opts.IDs) > 0 {
|
||||
if len(opts.IDs) == 1 {
|
||||
cond = cond.And(builder.Eq{"id": opts.IDs[0]})
|
||||
} else {
|
||||
cond = cond.And(builder.In("id", opts.IDs))
|
||||
}
|
||||
}
|
||||
|
||||
// Since we now support instance-level variables,
|
||||
// there is no need to check for null values for `owner_id` and `repo_id`
|
||||
cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
|
||||
if opts.RepoID != 0 { // if RepoID is set
|
||||
// ignore OwnerID and treat it as 0
|
||||
cond = cond.And(builder.Eq{"owner_id": 0})
|
||||
} else {
|
||||
cond = cond.And(builder.Eq{"owner_id": opts.OwnerID})
|
||||
}
|
||||
|
||||
if opts.Name != "" {
|
||||
cond = cond.And(builder.Eq{"name": strings.ToUpper(opts.Name)})
|
||||
}
|
||||
return cond
|
||||
}
|
||||
|
||||
func FindVariables(ctx context.Context, opts FindVariablesOpts) ([]*DevcontainerVariable, error) {
|
||||
return db.Find[DevcontainerVariable](ctx, opts)
|
||||
}
|
||||
|
||||
func UpdateVariableCols(ctx context.Context, variable *DevcontainerVariable, cols ...string) (bool, error) {
|
||||
|
||||
variable.Description = util.TruncateRunes(variable.Description, VariableDescriptionMaxLength)
|
||||
|
||||
variable.Name = strings.ToUpper(variable.Name)
|
||||
count, err := db.GetEngine(ctx).
|
||||
ID(variable.ID).
|
||||
Cols(cols...).
|
||||
Update(variable)
|
||||
return count != 0, err
|
||||
}
|
||||
|
||||
func DeleteVariable(ctx context.Context, id int64) error {
|
||||
if _, err := db.DeleteByID[DevcontainerVariable](ctx, id); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func GetVariables(ctx context.Context, userId, repoID int64) (map[string]string, error) {
|
||||
variables := map[string]string{}
|
||||
|
||||
// Global
|
||||
globalVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{})
|
||||
if err != nil {
|
||||
log.Error("find global variables: %v", err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Org / User level
|
||||
ownerVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{OwnerID: userId})
|
||||
if err != nil {
|
||||
log.Error("find variables of org: %d, error: %v", userId, err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Repo level
|
||||
repoVariables, err := db.Find[DevcontainerVariable](ctx, FindVariablesOpts{RepoID: repoID})
|
||||
if err != nil {
|
||||
log.Error("find variables of repo: %d, error: %v", repoID, err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Level precedence: Org / User > Repo > Global
|
||||
for _, v := range append(globalVariables, append(repoVariables, ownerVariables...)...) {
|
||||
variables[v.Name] = v.Data
|
||||
}
|
||||
|
||||
return variables, nil
|
||||
}
|
||||
@@ -1072,8 +1072,11 @@ visibility.private_tooltip = Visible only to members of organizations you have j
|
||||
dev_container = Dev Container
|
||||
dev_container_empty = Oops, it looks like there is no Dev Container Setting in this repository.
|
||||
dev_container_invalid_config_prompt = Invalid Dev Container Configuration: Please upload a valid 'devcontainer.json' file to the default branch, and ensure that this repository is NOT archived.
|
||||
dev_container_control = Container Management
|
||||
dev_container_control.update = Save Dev Container
|
||||
dev_container_control.create = Create Dev Container
|
||||
dev_container_control.stop = Stop Dev Container
|
||||
dev_container_control.start = Start Dev Container
|
||||
dev_container_control.creation_success_for_user = The Dev Container has been created successfully for user '%s'.
|
||||
dev_container_control.creation_failed_for_user = Failed to create the Dev Container for user '%s'.
|
||||
dev_container_control.delete = Delete Dev Container
|
||||
@@ -3026,6 +3029,7 @@ config_summary = Summary
|
||||
config_settings = Settings
|
||||
notices = System Notices
|
||||
monitor = Monitoring
|
||||
devcontainer = devcontainer
|
||||
first_page = First
|
||||
last_page = Last
|
||||
total = Total: %d
|
||||
@@ -3838,6 +3842,24 @@ deletion.success = The secret has been removed.
|
||||
deletion.failed = Failed to remove secret.
|
||||
management = Secrets Management
|
||||
|
||||
[devcontainer]
|
||||
variables = Variables
|
||||
variables.management = Variables Management
|
||||
variables.creation = Add Variable
|
||||
variables.none = There are no variables yet.
|
||||
variables.deletion = Remove variable
|
||||
variables.deletion.description = Removing a variable is permanent and cannot be undone. Continue?
|
||||
variables.description = 1. As a variable: "$variable_name" can be referenced in variable values and in the script specified by devcontainer.json; when names collide, the priority is User > Repository > Administration. <br>2. As a script: adding a variable name in Script Management makes its value part of the dev container's initialization script.
|
||||
variables.id_not_exist = Variable with ID %d does not exist.
|
||||
variables.edit = Edit Variable
|
||||
variables.deletion.failed = Failed to remove variable.
|
||||
variables.deletion.success = The variable has been removed.
|
||||
variables.creation.failed = Failed to add variable.
|
||||
variables.creation.success = The variable "%s" has been added.
|
||||
variables.update.failed = Failed to edit variable.
|
||||
variables.update.success = The variable has been edited.
|
||||
scripts=Script Management
|
||||
scripts.description=Adding a variable name here makes its value part of the dev container's initialization script; when names collide, the priority is User > Repository > Administration.
|
||||
[actions]
|
||||
actions = Actions
|
||||
|
||||
|
||||
@@ -1064,6 +1064,9 @@ visibility.private_tooltip=仅对您已加入的组织的成员可见。
|
||||
dev_container = 开发容器
|
||||
dev_container_empty = 本仓库没有开发容器配置
|
||||
dev_container_invalid_config_prompt = 开发容器配置无效:需要上传有效的 devcontainer.json 至默认分支,且确保仓库未处于存档状态
|
||||
dev_container_control = 容器管理
|
||||
dev_container_control.stop = 停止开发容器
|
||||
dev_container_control.start = 启动开发容器
|
||||
dev_container_control.update = 保存开发容器
|
||||
dev_container_control.create = 创建开发容器
|
||||
dev_container_control.creation_success_for_user = 用户 '%s' 已成功创建开发容器
|
||||
@@ -3015,11 +3018,13 @@ config_summary=摘要
|
||||
config_settings=设置
|
||||
notices=系统提示
|
||||
monitor=监控面板
|
||||
devcontainer=开发容器
|
||||
first_page=首页
|
||||
last_page=末页
|
||||
total=总计:%d
|
||||
settings=管理设置
|
||||
|
||||
|
||||
dashboard.new_version_hint=Gitea %s 现已可用,您正在运行 %s。查看 <a target="_blank" rel="noreferrer" href="%s">博客</a> 了解详情。
|
||||
dashboard.statistic=摘要
|
||||
dashboard.maintenance_operations=运维
|
||||
@@ -3809,7 +3814,7 @@ description=密钥将被传给特定的工作流,其它情况无法读取。
|
||||
none=还没有密钥。
|
||||
|
||||
; These keys are also for "edit secret", the keys are kept as-is to avoid unnecessary re-translation
|
||||
creation.description=组织描述
|
||||
creation.description=描述
|
||||
creation.name_placeholder=不区分大小写,仅限字母数字或下划线且不能以 GITEA_ 或 GITHUB_ 开头
|
||||
creation.value_placeholder=输入任何内容,开头和结尾的空白将会被忽略
|
||||
creation.description_placeholder=输入简短描述(可选)
|
||||
@@ -3825,6 +3830,24 @@ deletion.success=此密钥已删除。
|
||||
deletion.failed=删除密钥失败。
|
||||
management=密钥管理
|
||||
|
||||
[devcontainer]
|
||||
variables=变量
|
||||
variables.management=变量管理
|
||||
variables.creation=添加变量
|
||||
variables.none=目前还没有变量。
|
||||
variables.deletion=删除变量
|
||||
variables.deletion.description=删除变量是永久性的,无法撤消。继续吗?
|
||||
variables.description=1.作为变量使用:「$变量名」可以在变量值和devcontainer.json指定的脚本中引用,同名变量优先级:用户>仓库>管理后台。<br>2.作为脚本使用:脚本管理添加变量名成为开发容器的初始化脚本内容。
|
||||
variables.id_not_exist=ID为 %d 的变量不存在。
|
||||
variables.edit=编辑变量
|
||||
variables.deletion.failed=变量删除失败。
|
||||
variables.deletion.success=变量已删除。
|
||||
variables.creation.failed=变量添加失败。
|
||||
variables.creation.success=变量「%s」添加成功。
|
||||
variables.update.failed=变量编辑失败。
|
||||
variables.update.success=变量已编辑。
|
||||
scripts=脚本管理
|
||||
scripts.description=添加变量名成为开发容器的初始化脚本内容,同名脚本优先级:用户>仓库>管理后台。
|
||||
[actions]
|
||||
actions=工作流
|
||||
|
||||
|
||||
@@ -971,7 +971,7 @@ owner.settings.cleanuprules.enabled=已啟用
|
||||
[secrets]
|
||||
|
||||
; These keys are also for "edit secret", the keys are kept as-is to avoid unnecessary re-translation
|
||||
creation.description=組織描述
|
||||
creation.description=描述
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -86,6 +86,13 @@ function install {
|
||||
sudo docker pull devstar.cn/devstar/$IMAGE_NAME:$VERSION
|
||||
IMAGE_REGISTRY_USER=devstar.cn/devstar
|
||||
fi
|
||||
if sudo docker pull devstar.cn/devstar/webterminal:latest; then
|
||||
success "Successfully pulled devstar.cn/devstar/webterminal:latest"
|
||||
else
|
||||
sudo docker pull mengning997/webterminal:latest
|
||||
success "Successfully pulled mengning997/webterminal:latest renamed to devstar.cn/devstar/webterminal:latest"
|
||||
sudo docker tag mengning997/webterminal:latest devstar.cn/devstar/webterminal:latest
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to start
|
||||
|
||||
@@ -648,7 +648,7 @@ func SubmitInstall(ctx *context.Context) {
|
||||
} else {
|
||||
err = devcontainer_service.RegistWebTerminal(otherCtx)
|
||||
if err != nil {
|
||||
ctx.RenderWithErr(ctx.Tr("install.web_terminal_failed", err), tplInstall, &form)
|
||||
log.Error("Unable to shutdown the install server! Error: %v", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
@@ -183,25 +183,31 @@ func CreateDevContainerConfiguration(ctx *context.Context) {
|
||||
if err != nil {
|
||||
log.Info(err.Error())
|
||||
ctx.Flash.Error(err.Error(), true)
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
return
|
||||
}
|
||||
if hasDevContainerConfiguration {
|
||||
ctx.Flash.Error("Already exist", true)
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
return
|
||||
}
|
||||
isAdmin, err := devcontainer_service.IsAdmin(ctx, ctx.Doer, ctx.Repo.Repository.ID)
|
||||
if err != nil {
|
||||
log.Info(err.Error())
|
||||
ctx.Flash.Error(err.Error(), true)
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
return
|
||||
}
|
||||
if !isAdmin {
|
||||
ctx.Flash.Error("permisson denied", true)
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
return
|
||||
}
|
||||
err = devcontainer_service.CreateDevcontainerConfiguration(ctx.Repo.Repository, ctx.Doer)
|
||||
if err != nil {
|
||||
log.Info(err.Error())
|
||||
ctx.Flash.Error(err.Error(), true)
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
return
|
||||
}
|
||||
ctx.Redirect(path.Join(ctx.Repo.RepoLink, "/devcontainer"))
|
||||
@@ -256,7 +262,7 @@ func RestartDevContainer(ctx *context.Context) {
|
||||
if err != nil {
|
||||
ctx.Flash.Error(err.Error(), true)
|
||||
}
|
||||
ctx.JSON(http.StatusOK, "")
|
||||
ctx.JSON(http.StatusOK, map[string]string{"status": "6"})
|
||||
}
|
||||
func StopDevContainer(ctx *context.Context) {
|
||||
hasDevContainer, err := devcontainer_service.HasDevContainer(ctx, ctx.Doer.ID, ctx.Repo.Repository.ID)
|
||||
@@ -273,7 +279,7 @@ func StopDevContainer(ctx *context.Context) {
|
||||
if err != nil {
|
||||
ctx.Flash.Error(err.Error(), true)
|
||||
}
|
||||
ctx.JSON(http.StatusOK, "")
|
||||
ctx.JSON(http.StatusOK, map[string]string{"status": "7"})
|
||||
}
|
||||
func UpdateDevContainer(ctx *context.Context) {
|
||||
hasDevContainer, err := devcontainer_service.HasDevContainer(ctx, ctx.Doer.ID, ctx.Repo.Repository.ID)
|
||||
|
||||
routers/web/devcontainer/variables.go (Normal file, 425 lines added)
@@ -0,0 +1,425 @@
|
||||
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||
// SPDX-License-Identifier: MIT
|
||||
|
||||
package devcontainer
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"code.gitea.io/gitea/models/db"
|
||||
devcontainer_model "code.gitea.io/gitea/models/devcontainer"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
"code.gitea.io/gitea/modules/setting"
|
||||
"code.gitea.io/gitea/modules/templates"
|
||||
"code.gitea.io/gitea/modules/web"
|
||||
shared_user "code.gitea.io/gitea/routers/web/shared/user"
|
||||
"code.gitea.io/gitea/services/context"
|
||||
devcontainer_service "code.gitea.io/gitea/services/devcontainer"
|
||||
"code.gitea.io/gitea/services/forms"
|
||||
)
|
||||
|
||||
const (
|
||||
tplRepoVariables templates.TplName = "repo/settings/devcontainer"
|
||||
tplOrgVariables templates.TplName = "org/settings/devcontainer"
|
||||
tplUserVariables templates.TplName = "user/settings/devcontainer"
|
||||
tplAdminVariables templates.TplName = "admin/devcontainer"
|
||||
)
|
||||
|
||||
type variablesCtx struct {
|
||||
OwnerID int64
|
||||
RepoID int64
|
||||
IsRepo bool
|
||||
IsOrg bool
|
||||
IsUser bool
|
||||
IsGlobal bool
|
||||
VariablesTemplate templates.TplName
|
||||
RedirectLink string
|
||||
}
|
||||
|
||||
func getVariablesCtx(ctx *context.Context) (*variablesCtx, error) {
|
||||
if ctx.Data["PageIsRepoSettings"] == true {
|
||||
return &variablesCtx{
|
||||
OwnerID: 0,
|
||||
RepoID: ctx.Repo.Repository.ID,
|
||||
IsRepo: true,
|
||||
VariablesTemplate: tplRepoVariables,
|
||||
RedirectLink: ctx.Repo.RepoLink + "/settings/devcontainer/variables",
|
||||
}, nil
|
||||
}
|
||||
|
||||
if ctx.Data["PageIsOrgSettings"] == true {
|
||||
if _, err := shared_user.RenderUserOrgHeader(ctx); err != nil {
|
||||
ctx.ServerError("RenderUserOrgHeader", err)
|
||||
return nil, nil
|
||||
}
|
||||
return &variablesCtx{
|
||||
OwnerID: ctx.ContextUser.ID,
|
||||
RepoID: 0,
|
||||
IsOrg: true,
|
||||
VariablesTemplate: tplOrgVariables,
|
||||
RedirectLink: ctx.Org.OrgLink + "/settings/devcontainer/variables",
|
||||
}, nil
|
||||
}
|
||||
|
||||
if ctx.Data["PageIsUserSettings"] == true {
|
||||
return &variablesCtx{
|
||||
OwnerID: ctx.Doer.ID,
|
||||
RepoID: 0,
|
||||
IsUser: true,
|
||||
VariablesTemplate: tplUserVariables,
|
||||
RedirectLink: setting.AppSubURL + "/user/settings/devcontainer/variables",
|
||||
}, nil
|
||||
}
|
||||
|
||||
if ctx.Data["PageIsAdmin"] == true {
|
||||
return &variablesCtx{
|
||||
OwnerID: 0,
|
||||
RepoID: 0,
|
||||
IsGlobal: true,
|
||||
VariablesTemplate: tplAdminVariables,
|
||||
RedirectLink: setting.AppSubURL + "/-/admin/devcontainer/variables",
|
||||
}, nil
|
||||
}
|
||||
|
||||
return nil, errors.New("unable to set Variables context")
|
||||
}
|
||||
|
||||
func Variables(ctx *context.Context) {
|
||||
ctx.Data["Title"] = ctx.Tr("devcontainer.variables")
|
||||
ctx.Data["PageType"] = "variables"
|
||||
ctx.Data["PageIsSharedSettingsDevcontainerVariables"] = true
|
||||
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
variables, err := db.Find[devcontainer_model.DevcontainerVariable](ctx, devcontainer_model.FindVariablesOpts{
|
||||
OwnerID: vCtx.OwnerID,
|
||||
RepoID: vCtx.RepoID,
|
||||
})
|
||||
if err != nil {
|
||||
ctx.ServerError("FindVariables", err)
|
||||
return
|
||||
}
|
||||
|
||||
var tags []string
|
||||
|
||||
// 使用JOIN查询,关联DevcontainerScript表和devcontainer_variable表
|
||||
err = db.GetEngine(ctx).
|
||||
Select("variable_name").
|
||||
Table("devcontainer_script").
|
||||
Where("user_id = ? AND repo_id = ?", vCtx.OwnerID, vCtx.RepoID).
|
||||
Find(&tags)
|
||||
|
||||
// 将tags转换为JSON格式的字符串
|
||||
tagsJSON, err := json.Marshal(tags)
|
||||
if err != nil {
|
||||
ctx.ServerError("Marshal tags", err)
|
||||
return
|
||||
}
|
||||
// 确保tagsJSON不为null
|
||||
tagsJSONStr := string(tagsJSON)
|
||||
if tagsJSONStr == "null" {
|
||||
tagsJSONStr = "[]"
|
||||
}
|
||||
// 创建一个新的请求
|
||||
req, err := http.NewRequest("GET", "http://devstar.cn/variables/export", nil)
|
||||
if err != nil {
|
||||
ctx.Data["DevstarVariables"] = []*devcontainer_model.DevcontainerVariable{}
|
||||
} else {
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
ctx.Data["DevstarVariables"] = []*devcontainer_model.DevcontainerVariable{}
|
||||
} else {
|
||||
defer resp.Body.Close()
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
ctx.Data["DevstarVariables"] = []*devcontainer_model.DevcontainerVariable{}
|
||||
} else {
|
||||
var devstarVariables []*devcontainer_model.DevcontainerVariable
|
||||
err = json.Unmarshal(body, &devstarVariables)
|
||||
if err != nil {
|
||||
ctx.Data["DevstarVariables"] = []*devcontainer_model.DevcontainerVariable{}
|
||||
} else {
|
||||
// 创建一个本地变量名称的映射,用于快速查找
|
||||
localVariableNames := make(map[string]bool)
|
||||
for _, variable := range variables {
|
||||
localVariableNames[variable.Name] = true
|
||||
}
|
||||
|
||||
// 筛选出不与本地变量同名的devstar变量
|
||||
var filteredDevstarVariables []*devcontainer_model.DevcontainerVariable
|
||||
for _, devstarVar := range devstarVariables {
|
||||
if !localVariableNames[devstarVar.Name] {
|
||||
filteredDevstarVariables = append(filteredDevstarVariables, devstarVar)
|
||||
}
|
||||
}
|
||||
|
||||
ctx.Data["DevstarVariables"] = filteredDevstarVariables
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
ctx.Data["Variables"] = variables
|
||||
ctx.Data["Tags"] = tagsJSONStr
|
||||
ctx.Data["DescriptionMaxLength"] = devcontainer_model.VariableDescriptionMaxLength
|
||||
ctx.HTML(http.StatusOK, vCtx.VariablesTemplate)
|
||||
}
|
||||
func GetExportVariables(ctx *context.Context) {
|
||||
globalVariables, err := db.Find[devcontainer_model.DevcontainerVariable](ctx, devcontainer_model.FindVariablesOpts{})
|
||||
if err != nil {
|
||||
ctx.ServerError("Get Global Variables", err)
|
||||
return
|
||||
}
|
||||
// 筛选出键以"DEVSTAR_"开头的脚本
|
||||
var devstarVariables []devcontainer_model.DevcontainerVariable
|
||||
for _, value := range globalVariables {
|
||||
if strings.HasPrefix(value.Name, "DEVSTAR_") {
|
||||
devstarVariables = append(devstarVariables, *value)
|
||||
}
|
||||
}
|
||||
ctx.JSON(http.StatusOK, devstarVariables)
|
||||
}
|
||||
func VariableCreate(ctx *context.Context) {
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
if ctx.HasError() { // form binding validation error
|
||||
ctx.JSONError(ctx.GetErrMsg())
|
||||
return
|
||||
}
|
||||
|
||||
form := web.GetForm(ctx).(*forms.EditVariableForm)
|
||||
|
||||
v, err := devcontainer_service.CreateVariable(ctx, vCtx.OwnerID, vCtx.RepoID, form.Name, form.Data, form.Description)
|
||||
if err != nil {
|
||||
log.Error("CreateVariable: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
ctx.Flash.Success(ctx.Tr("actions.variables.creation.success", v.Name))
|
||||
ctx.JSONRedirect(vCtx.RedirectLink)
|
||||
}
|
||||
func ScriptCreate(ctx *context.Context) {
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
if ctx.HasError() { // form binding validation error
|
||||
ctx.JSONError(ctx.GetErrMsg())
|
||||
return
|
||||
}
|
||||
query := ctx.Req.URL.Query()
|
||||
var script *devcontainer_model.DevcontainerScript
|
||||
// 首先检查变量是否在DevcontainerVariable表中存在
|
||||
exists, err := db.GetEngine(ctx).
|
||||
Table("devcontainer_variable").
|
||||
Where("(owner_id = 0 AND repo_id = 0) OR (owner_id = ? AND repo_id = 0) OR (owner_id = 0 AND repo_id = ?)", vCtx.OwnerID, vCtx.RepoID).
|
||||
And("name = ?", strings.ToUpper(query.Get("name"))).
|
||||
Exist()
|
||||
if err != nil {
|
||||
log.Error("Check variable existence: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
if !exists {
|
||||
// 创建一个新的请求来获取devstar变量
|
||||
req, err := http.NewRequest("GET", "http://devstar.cn/variables/export", nil)
|
||||
if err != nil {
|
||||
log.Error("Failed to create request for devstar variables: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
log.Error("Failed to fetch devstar variables: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
defer resp.Body.Close()
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
log.Error("Failed to read devstar variables response: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
var devstarVariables []*devcontainer_model.DevcontainerVariable
|
||||
err = json.Unmarshal(body, &devstarVariables)
|
||||
if err != nil {
|
||||
log.Error("Failed to unmarshal devstar variables: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
// 查找是否有匹配的devstar变量
|
||||
foundInDevstar := false
|
||||
searchName := strings.ToUpper(query.Get("name"))
|
||||
for _, devstarVar := range devstarVariables {
|
||||
if devstarVar.Name == searchName {
|
||||
foundInDevstar = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !foundInDevstar {
|
||||
log.Error("Variable %s does not exist", query.Get("name"))
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
}
|
||||
// 创建devcontainer_script记录
|
||||
script = &devcontainer_model.DevcontainerScript{
|
||||
UserId: vCtx.OwnerID,
|
||||
RepoId: vCtx.RepoID,
|
||||
VariableName: strings.ToUpper(query.Get("name")),
|
||||
}
|
||||
|
||||
_, err = db.GetEngine(ctx).Insert(script)
|
||||
if err != nil {
|
||||
log.Error("CreateScript: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
}
|
||||
func VariableUpdate(ctx *context.Context) {
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
if ctx.HasError() { // form binding validation error
|
||||
ctx.JSONError(ctx.GetErrMsg())
|
||||
return
|
||||
}
|
||||
|
||||
id := ctx.PathParamInt64("variable_id")
|
||||
|
||||
variable := findActionsVariable(ctx, id, vCtx)
|
||||
if ctx.Written() {
|
||||
return
|
||||
}
|
||||
|
||||
form := web.GetForm(ctx).(*forms.EditVariableForm)
|
||||
variable.Name = form.Name
|
||||
variable.Data = form.Data
|
||||
variable.Description = form.Description
|
||||
|
||||
if ok, err := devcontainer_service.UpdateVariableNameData(ctx, variable); err != nil || !ok {
|
||||
log.Error("UpdateVariable: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.update.failed"))
|
||||
return
|
||||
}
|
||||
ctx.Flash.Success(ctx.Tr("actions.variables.update.success"))
|
||||
ctx.JSONRedirect(vCtx.RedirectLink)
|
||||
}
|
||||
|
||||
func findActionsVariable(ctx *context.Context, id int64, vCtx *variablesCtx) *devcontainer_model.DevcontainerVariable {
|
||||
opts := devcontainer_model.FindVariablesOpts{
|
||||
IDs: []int64{id},
|
||||
}
|
||||
switch {
|
||||
case vCtx.IsRepo:
|
||||
opts.RepoID = vCtx.RepoID
|
||||
if opts.RepoID == 0 {
|
||||
panic("RepoID is 0")
|
||||
}
|
||||
case vCtx.IsOrg, vCtx.IsUser:
|
||||
opts.OwnerID = vCtx.OwnerID
|
||||
if opts.OwnerID == 0 {
|
||||
panic("OwnerID is 0")
|
||||
}
|
||||
case vCtx.IsGlobal:
|
||||
// do nothing
|
||||
default:
|
||||
panic("invalid actions variable")
|
||||
}
|
||||
got, err := devcontainer_model.FindVariables(ctx, opts)
|
||||
if err != nil {
|
||||
ctx.ServerError("FindVariables", err)
|
||||
return nil
|
||||
} else if len(got) == 0 {
|
||||
ctx.NotFound(nil)
|
||||
return nil
|
||||
}
|
||||
return got[0]
|
||||
}
|
||||
|
||||
func VariableDelete(ctx *context.Context) {
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
id := ctx.PathParamInt64("variable_id")
|
||||
|
||||
variable := findActionsVariable(ctx, id, vCtx)
|
||||
if ctx.Written() {
|
||||
return
|
||||
}
|
||||
|
||||
if err := devcontainer_service.DeleteVariableByID(ctx, variable.ID); err != nil {
|
||||
log.Error("Delete variable [%d] failed: %v", id, err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.deletion.failed"))
|
||||
return
|
||||
}
|
||||
// 删除相应的script记录,根据repoId、userId和name
|
||||
script := &devcontainer_model.DevcontainerScript{
|
||||
UserId: vCtx.OwnerID,
|
||||
RepoId: vCtx.RepoID,
|
||||
VariableName: variable.Name,
|
||||
}
|
||||
_, err = db.GetEngine(ctx).Delete(script)
|
||||
if err != nil {
|
||||
log.Error("Delete script for variable [%d] failed: %v", id, err)
|
||||
// 注意:这里我们记录错误但不中断变量删除过程
|
||||
}
|
||||
ctx.Flash.Success(ctx.Tr("actions.variables.deletion.success"))
|
||||
ctx.JSONRedirect(vCtx.RedirectLink)
|
||||
}
|
||||
func ScriptDelete(ctx *context.Context) {
|
||||
vCtx, err := getVariablesCtx(ctx)
|
||||
if err != nil {
|
||||
ctx.ServerError("getVariablesCtx", err)
|
||||
return
|
||||
}
|
||||
|
||||
if ctx.HasError() { // form binding validation error
|
||||
ctx.JSONError(ctx.GetErrMsg())
|
||||
return
|
||||
}
|
||||
query := ctx.Req.URL.Query()
|
||||
// 删除devcontainer_script记录
|
||||
script := &devcontainer_model.DevcontainerScript{
|
||||
UserId: vCtx.OwnerID,
|
||||
RepoId: vCtx.RepoID,
|
||||
VariableName: query.Get("name"),
|
||||
}
|
||||
_, err = db.GetEngine(ctx).Delete(script)
|
||||
if err != nil {
|
||||
log.Error("DeleteScript: %v", err)
|
||||
ctx.JSONError(ctx.Tr("actions.variables.creation.failed"))
|
||||
return
|
||||
}
|
||||
|
||||
}
|
||||
@@ -8,14 +8,14 @@ import (
|
||||
)
|
||||
|
||||
const (
|
||||
// TplDevstarHome 显示 DevStar Home 页面 templates/vscode-home.tmpl
|
||||
TplDevstarHome templates.TplName = "repo/devcontainer/vscode-home"
|
||||
// TplVscodeHome 显示 DevStar Home 页面 templates/vscode-home.tmpl
|
||||
TplVscodeHome templates.TplName = "repo/devcontainer/vscode-home"
|
||||
)
|
||||
|
||||
// DevstarHome 渲染适配于 VSCode 插件的 DevStar Home 页面
|
||||
func DevstarHome(ctx *gitea_web_context.Context) {
|
||||
// VscodeHome 渲染适配于 VSCode 插件的 DevStar Home 页面
|
||||
func VscodeHome(ctx *gitea_web_context.Context) {
|
||||
ctx.Data["Title"] = ctx.Tr("home")
|
||||
ctx.Resp.Header().Del("X-Frame-Options")
|
||||
//ctx.Resp.Header().Set("Content-Security-Policy", "frame-ancestors *")
|
||||
ctx.HTML(http.StatusOK, TplDevstarHome)
|
||||
ctx.HTML(http.StatusOK, TplVscodeHome)
|
||||
}
|
||||
@@ -11,11 +11,13 @@ import (
|
||||
asymkey_model "code.gitea.io/gitea/models/asymkey"
|
||||
"code.gitea.io/gitea/models/db"
|
||||
user_model "code.gitea.io/gitea/models/user"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
"code.gitea.io/gitea/modules/setting"
|
||||
"code.gitea.io/gitea/modules/templates"
|
||||
"code.gitea.io/gitea/modules/web"
|
||||
asymkey_service "code.gitea.io/gitea/services/asymkey"
|
||||
"code.gitea.io/gitea/services/context"
|
||||
devcontainer_service "code.gitea.io/gitea/services/devcontainer"
|
||||
"code.gitea.io/gitea/services/forms"
|
||||
)
|
||||
|
||||
@@ -208,6 +210,13 @@ func KeysPost(ctx *context.Context) {
|
||||
}
|
||||
return
|
||||
}
|
||||
// 将公钥添加到所有打开的容器中
|
||||
log.Info("将公钥添加到所有打开的容器中")
|
||||
err = devcontainer_service.AddPublicKeyToAllRunningDevContainer(ctx, ctx.Doer.ID, content)
|
||||
if err != nil {
|
||||
ctx.ServerError("AddPublicKey To Devcontainer", err)
|
||||
return
|
||||
}
|
||||
ctx.Flash.Success(ctx.Tr("settings.add_key_success", form.Title))
|
||||
ctx.Redirect(setting.AppSubURL + "/user/settings/keys")
|
||||
case "verify_ssh":
|
||||
|
||||
@@ -457,6 +457,20 @@ func registerWebRoutes(m *web.Router) {
|
||||
})
|
||||
}
|
||||
|
||||
addSettingsDevcontainerVariablesRoutes := func() {
|
||||
m.Group("/variables", func() {
|
||||
m.Get("", devcontainer_web.Variables)
|
||||
m.Post("/new", web.Bind(forms.EditVariableForm{}), devcontainer_web.VariableCreate)
|
||||
m.Post("/{variable_id}/edit", web.Bind(forms.EditVariableForm{}), devcontainer_web.VariableUpdate)
|
||||
m.Post("/{variable_id}/delete", devcontainer_web.VariableDelete)
|
||||
m.Group("/script", func() {
|
||||
m.Get("/new", devcontainer_web.ScriptCreate)
|
||||
m.Get("/delete", devcontainer_web.ScriptDelete)
|
||||
})
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
addSettingsSecretsRoutes := func() {
|
||||
m.Group("/secrets", func() {
|
||||
m.Get("", repo_setting.Secrets)
|
||||
@@ -489,6 +503,7 @@ func registerWebRoutes(m *web.Router) {
|
||||
// Especially some AJAX requests, we can reduce middleware number to improve performance.
|
||||
|
||||
m.Get("/", Home)
|
||||
m.Get("/variables/export", devcontainer_web.GetExportVariables)
|
||||
m.Get("/sitemap.xml", sitemapEnabled, optExploreSignIn, HomeSitemap)
|
||||
m.Group("/.well-known", func() {
|
||||
m.Get("/openid-configuration", auth.OIDCWellKnown)
|
||||
@@ -684,6 +699,9 @@ func registerWebRoutes(m *web.Router) {
|
||||
addSettingsSecretsRoutes()
|
||||
addSettingsVariablesRoutes()
|
||||
}, actions.MustEnableActions)
|
||||
m.Group("/devcontainer", func() {
|
||||
addSettingsDevcontainerVariablesRoutes()
|
||||
})
|
||||
|
||||
m.Get("/organization", user_setting.Organization)
|
||||
m.Get("/repos", user_setting.Repos)
|
||||
@@ -762,6 +780,9 @@ func registerWebRoutes(m *web.Router) {
|
||||
})
|
||||
m.Get("/diagnosis", admin.MonitorDiagnosis)
|
||||
})
|
||||
m.Group("/devcontainer", func() {
|
||||
addSettingsDevcontainerVariablesRoutes()
|
||||
})
|
||||
|
||||
m.Group("/users", func() {
|
||||
m.Get("", admin.Users)
|
||||
@@ -989,6 +1010,9 @@ func registerWebRoutes(m *web.Router) {
|
||||
addSettingsVariablesRoutes()
|
||||
}, actions.MustEnableActions)
|
||||
|
||||
m.Group("/devcontainer", func() {
|
||||
addSettingsDevcontainerVariablesRoutes()
|
||||
})
|
||||
m.Post("/rename", web.Bind(forms.RenameOrgForm{}), org.SettingsRenamePost)
|
||||
m.Post("/delete", org.SettingsDeleteOrgPost)
|
||||
|
||||
@@ -1181,6 +1205,10 @@ func registerWebRoutes(m *web.Router) {
|
||||
addSettingsSecretsRoutes()
|
||||
addSettingsVariablesRoutes()
|
||||
}, actions.MustEnableActions)
|
||||
|
||||
m.Group("/devcontainer", func() {
|
||||
addSettingsDevcontainerVariablesRoutes()
|
||||
})
|
||||
// the follow handler must be under "settings", otherwise this incomplete repo can't be accessed
|
||||
m.Group("/migrate", func() {
|
||||
m.Post("/retry", repo.MigrateRetryPost)
|
||||
@@ -1411,7 +1439,8 @@ func registerWebRoutes(m *web.Router) {
|
||||
// 具有code读取权限
|
||||
context.RepoAssignment, reqUnitCodeReader,
|
||||
)
|
||||
m.Get("/devstar-home", devcontainer_web.DevstarHome)
|
||||
m.Get("/devstar-home", devcontainer_web.VscodeHome) // 旧地址,保留兼容性
|
||||
m.Get("/vscode-home", devcontainer_web.VscodeHome)
|
||||
m.Group("/api/devcontainer", func() {
|
||||
// 获取 某用户在某仓库中的 DevContainer 细节(包括SSH连接信息),默认不会等待 (wait = false)
|
||||
// 请求方式: GET /api/devcontainer?repoId=${repoId}&wait=true // 无需传入 userId,直接从 token 中提取
|
||||
|
||||
@@ -8,7 +8,8 @@ import (
|
||||
"math"
|
||||
"net"
|
||||
"net/url"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -17,10 +18,12 @@ import (
|
||||
devcontainer_models "code.gitea.io/gitea/models/devcontainer"
|
||||
"code.gitea.io/gitea/models/repo"
|
||||
"code.gitea.io/gitea/models/user"
|
||||
"code.gitea.io/gitea/modules/docker"
|
||||
docker_module "code.gitea.io/gitea/modules/docker"
|
||||
"code.gitea.io/gitea/modules/git"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
"code.gitea.io/gitea/modules/setting"
|
||||
"code.gitea.io/gitea/modules/templates"
|
||||
gitea_context "code.gitea.io/gitea/services/context"
|
||||
files_service "code.gitea.io/gitea/services/repository/files"
|
||||
"github.com/docker/docker/api/types"
|
||||
@@ -106,28 +109,17 @@ func HasDevContainerDockerFile(ctx context.Context, repo *gitea_context.Reposito
|
||||
}
|
||||
}
|
||||
func CreateDevcontainerConfiguration(repo *repo.Repository, doer *user.User) error {
|
||||
jsonString := `{
|
||||
"name":"template",
|
||||
"image":"mcr.microsoft.com/devcontainers/base:dev-ubuntu-20.04",
|
||||
"forwardPorts": ["8080"],
|
||||
"containerEnv": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"initializeCommand": "echo \"init\";",
|
||||
"postCreateCommand": [
|
||||
"echo \"created\"",
|
||||
"echo \"test\""
|
||||
],
|
||||
"runArgs": [
|
||||
"-p 8888"
|
||||
]
|
||||
}`
|
||||
_, err := files_service.ChangeRepoFiles(db.DefaultContext, repo, doer, &files_service.ChangeRepoFilesOptions{
|
||||
|
||||
jsonContent, err := templates.AssetFS().ReadFile("repo/devcontainer/default_devcontainer.json")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = files_service.ChangeRepoFiles(db.DefaultContext, repo, doer, &files_service.ChangeRepoFilesOptions{
|
||||
Files: []*files_service.ChangeRepoFile{
|
||||
{
|
||||
Operation: "create",
|
||||
TreePath: ".devcontainer/devcontainer.json",
|
||||
ContentReader: bytes.NewReader([]byte(jsonString)),
|
||||
ContentReader: bytes.NewReader([]byte(jsonContent)),
|
||||
},
|
||||
},
|
||||
OldBranch: "main",
|
||||
@@ -554,29 +546,19 @@ func GetTerminalCommand(ctx context.Context, userID string, repo *repo.Repositor
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
if status == "running" {
|
||||
if status == "created" {
|
||||
//添加脚本文件
|
||||
if cfg.Section("k8s").Key("ENABLE").Value() == "true" {
|
||||
|
||||
} else {
|
||||
|
||||
var scriptContent []byte
|
||||
_, err = os.Stat("webTerminal.sh")
|
||||
if os.IsNotExist(err) {
|
||||
_, err = os.Stat("/app/gitea/webTerminal.sh")
|
||||
if os.IsNotExist(err) {
|
||||
return "", "", err
|
||||
} else {
|
||||
scriptContent, err = os.ReadFile("/app/gitea/webTerminal.sh")
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
scriptContent, err = os.ReadFile("webTerminal.sh")
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
userNum, err := strconv.ParseInt(userID, 10, 64)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
var scriptContent string
|
||||
scriptContent, err = GetCommandContent(ctx, userNum, repo)
|
||||
log.Info("command: %s", scriptContent)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
// 创建 tar 归档文件
|
||||
var buf bytes.Buffer
|
||||
@@ -602,6 +584,7 @@ func GetTerminalCommand(ctx context.Context, userID string, repo *repo.Repositor
|
||||
}
|
||||
}
|
||||
realTimeStatus = 3
|
||||
|
||||
}
|
||||
}
|
||||
break
|
||||
@@ -610,7 +593,6 @@ func GetTerminalCommand(ctx context.Context, userID string, repo *repo.Repositor
|
||||
if cfg.Section("k8s").Key("ENABLE").Value() == "true" {
|
||||
//k8s的逻辑
|
||||
} else {
|
||||
|
||||
status, err := CheckDirExistsFromDocker(ctx, devContainerInfo.Name, devContainerInfo.DevcontainerWorkDir)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
@@ -891,3 +873,184 @@ func Get_IDE_TerminalURL(ctx *gitea_context.Context, doer *user.User, repo *gite
|
||||
"&devstar_username=" + repo.Repository.OwnerName +
|
||||
"&devstar_domain=" + cfg.Section("server").Key("ROOT_URL").Value(), nil
|
||||
}
|
||||
func GetCommandContent(ctx context.Context, userId int64, repo *repo.Repository) (string, error) {
|
||||
configurationString, err := GetDevcontainerConfigurationString(ctx, repo)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
configurationModel, err := UnmarshalDevcontainerConfigContent(configurationString)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
onCreateCommand := strings.TrimSpace(strings.Join(configurationModel.ParseCommand(configurationModel.OnCreateCommand), "\n"))
|
||||
if _, ok := configurationModel.OnCreateCommand.(map[string]interface{}); ok {
|
||||
// 是 map[string]interface{} 类型
|
||||
cmdObj := configurationModel.OnCreateCommand.(map[string]interface{})
|
||||
if pathValue, hasPath := cmdObj["path"]; hasPath {
|
||||
fileCommand, err := GetFileContentByPath(ctx, repo, ".devcontainer/"+pathValue.(string))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
onCreateCommand += "\n" + fileCommand
|
||||
}
|
||||
}
|
||||
updateCommand := strings.TrimSpace(strings.Join(configurationModel.ParseCommand(configurationModel.UpdateContentCommand), "\n"))
|
||||
if _, ok := configurationModel.UpdateContentCommand.(map[string]interface{}); ok {
|
||||
// 是 map[string]interface{} 类型
|
||||
cmdObj := configurationModel.UpdateContentCommand.(map[string]interface{})
|
||||
if pathValue, hasPath := cmdObj["path"]; hasPath {
|
||||
fileCommand, err := GetFileContentByPath(ctx, repo, ".devcontainer/"+pathValue.(string))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
updateCommand += "\n" + fileCommand
|
||||
}
|
||||
}
|
||||
postCreateCommand := strings.TrimSpace(strings.Join(configurationModel.ParseCommand(configurationModel.PostCreateCommand), "\n"))
|
||||
if _, ok := configurationModel.PostCreateCommand.(map[string]interface{}); ok {
|
||||
// 是 map[string]interface{} 类型
|
||||
cmdObj := configurationModel.PostCreateCommand.(map[string]interface{})
|
||||
if pathValue, hasPath := cmdObj["path"]; hasPath {
|
||||
fileCommand, err := GetFileContentByPath(ctx, repo, ".devcontainer/"+pathValue.(string))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
postCreateCommand += "\n" + fileCommand
|
||||
}
|
||||
}
|
||||
|
||||
postStartCommand := strings.TrimSpace(strings.Join(configurationModel.ParseCommand(configurationModel.PostStartCommand), "\n"))
|
||||
if _, ok := configurationModel.PostStartCommand.(map[string]interface{}); ok {
|
||||
// 是 map[string]interface{} 类型
|
||||
cmdObj := configurationModel.PostStartCommand.(map[string]interface{})
|
||||
if pathValue, hasPath := cmdObj["path"]; hasPath {
|
||||
fileCommand, err := GetFileContentByPath(ctx, repo, ".devcontainer/"+pathValue.(string))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
postStartCommand += "\n" + fileCommand
|
||||
}
|
||||
}
|
||||
var script []string
|
||||
scripts, err := devcontainer_models.GetScript(ctx, userId, repo.ID)
|
||||
for _, v := range scripts {
|
||||
script = append(script, v)
|
||||
}
|
||||
scriptCommand := strings.TrimSpace(strings.Join(script, "\n"))
|
||||
|
||||
userCommand := scriptCommand + "\n" + onCreateCommand + "\n" + updateCommand + "\n" + postCreateCommand + "\n" + postStartCommand + "\n"
|
||||
assetFS := templates.AssetFS()
|
||||
Content_tmpl, err := assetFS.ReadFile("repo/devcontainer/devcontainer_tmpl.sh")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
Content_start, err := assetFS.ReadFile("repo/devcontainer/devcontainer_start.sh")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
Content_restart, err := assetFS.ReadFile("repo/devcontainer/devcontainer_restart.sh")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
final_command := string(Content_tmpl)
|
||||
re1 := regexp.MustCompile(`\$\{` + regexp.QuoteMeta("START") + `\}|` + `\$` + regexp.QuoteMeta("START") + `\b`)
|
||||
escapedContentStart := strings.ReplaceAll(string(Content_start), `$`, `$$`)
|
||||
escapedUserCommand := strings.ReplaceAll(userCommand, `$`, `$$`)
|
||||
final_command = re1.ReplaceAllString(final_command, escapedContentStart+"\n"+escapedUserCommand)
|
||||
|
||||
re1 = regexp.MustCompile(`\$RESTART\b`)
|
||||
escapedContentRestart := strings.ReplaceAll(string(Content_restart), `$`, `$$`)
|
||||
escapedPostStartCommand := strings.ReplaceAll(postStartCommand, `$`, `$$`)
|
||||
final_command = re1.ReplaceAllString(final_command, escapedContentRestart+"\n"+escapedPostStartCommand)
|
||||
return parseCommand(ctx, final_command, userId, repo)
|
||||
}
|
||||
func AddPublicKeyToAllRunningDevContainer(ctx context.Context, userId int64, publicKey string) error {
|
||||
// 加载配置文件
|
||||
cfg, err := setting.NewConfigProviderFromFile(setting.CustomConf)
|
||||
if err != nil {
|
||||
log.Error("Get_IDE_TerminalURL: 加载配置文件失败: %v", err)
|
||||
return err
|
||||
}
|
||||
if cfg.Section("k8s").Key("ENABLE").Value() == "true" {
|
||||
|
||||
} else {
|
||||
cli, err := docker.CreateDockerClient(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cli.Close()
|
||||
var devcontainerList []devcontainer_models.Devcontainer
|
||||
// 查询所有打开的容器
|
||||
err = db.GetEngine(ctx).
|
||||
Table("devcontainer").
|
||||
Where("user_id = ? AND devcontainer_status = ?", userId, 4).
|
||||
Find(&devcontainerList)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(devcontainerList) > 0 {
|
||||
// 将公钥写入这些打开的容器中
|
||||
for _, repoDevContainer := range devcontainerList {
|
||||
containerID, err := docker.GetContainerID(cli, repoDevContainer.Name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Info("container id: %s, name: %s", containerID, repoDevContainer.Name)
|
||||
// 检查容器状态
|
||||
containerStatus, err := docker.GetContainerStatus(cli, containerID)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
if containerStatus == "running" {
|
||||
// 只为处于运行状态的容器添加公钥
|
||||
_, err = docker.ExecCommandInContainer(ctx, cli, repoDevContainer.Name, fmt.Sprintf("echo '%s' >> ~/.ssh/authorized_keys", publicKey))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("unknown agent")
|
||||
|
||||
}
|
||||
func parseCommand(ctx context.Context, command string, userId int64, repo *repo.Repository) (string, error) {
|
||||
variables, err := devcontainer_models.GetVariables(ctx, userId, repo.ID)
|
||||
|
||||
var variablesName []string
|
||||
variablesCircle := checkEachVariable(variables)
|
||||
for key := range variables {
|
||||
if !variablesCircle[key] {
|
||||
variablesName = append(variablesName, key)
|
||||
}
|
||||
}
|
||||
for ContainsAnySubstring(command, variablesName) {
|
||||
for key, value := range variables {
|
||||
if variablesCircle[key] == true {
|
||||
continue
|
||||
}
|
||||
log.Info("key: %s, value: %s", key, value)
|
||||
re1 := regexp.MustCompile(`\$\{` + regexp.QuoteMeta(key) + `\}|` + `\$` + regexp.QuoteMeta(key) + `\b`)
|
||||
|
||||
escapedValue := strings.ReplaceAll(value, `$`, `$$`)
|
||||
command = re1.ReplaceAllString(command, escapedValue)
|
||||
variablesName = append(variablesName, key)
|
||||
}
|
||||
}
|
||||
|
||||
var userSSHPublicKeyList []string
|
||||
err = db.GetEngine(ctx).
|
||||
Table("public_key").
|
||||
Select("content").
|
||||
Where("owner_id = ?", userId).
|
||||
Find(&userSSHPublicKeyList)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
re1 := regexp.MustCompile(`\$\{` + regexp.QuoteMeta("PUBLIC_KEY_LIST") + `\}|` + `\$` + regexp.QuoteMeta("PUBLIC_KEY_LIST") + `\b`)
|
||||
command = re1.ReplaceAllString(command, strings.Join(userSSHPublicKeyList, "\n"))
|
||||
return command, nil
|
||||
}
|
||||
|
||||
@@ -385,8 +385,16 @@ func (c *DevContainerConfiguration) validateLifecycleCommands() []error {
|
||||
// 验证命令对象结构
|
||||
cmdObj := cmd.(map[string]interface{})
|
||||
if _, ok := cmdObj["command"]; !ok {
|
||||
errors = append(errors,
|
||||
fmt.Errorf("%s: command object requires 'command' property", name))
|
||||
if pathValue, hasPath := cmdObj["path"]; !hasPath {
|
||||
errors = append(errors,
|
||||
fmt.Errorf("%s: command object requires either 'command' or 'path' property", name))
|
||||
} else if hasPath {
|
||||
// 如果存在 path,检查它是否为字符串
|
||||
if _, ok := pathValue.(string); !ok {
|
||||
errors = append(errors,
|
||||
fmt.Errorf("%s: 'path' must be a string, got %T", name, pathValue))
|
||||
}
|
||||
}
|
||||
}
|
||||
default:
|
||||
errors = append(errors,
|
||||
|
||||
@@ -233,3 +233,69 @@ func AddFileToTar(tw *tar.Writer, filename string, content string, mode int64) e
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func buildDependencyGraph(variables map[string]string) map[string][]string {
|
||||
graph := make(map[string][]string)
|
||||
varRefRegex := regexp.MustCompile(`\$[a-zA-Z_][a-zA-Z0-9_]*\b`)
|
||||
|
||||
for varName, varValue := range variables {
|
||||
graph[varName] = []string{}
|
||||
matches := varRefRegex.FindAllString(varValue, -1)
|
||||
for _, match := range matches {
|
||||
refVarName := strings.TrimPrefix(match, "$")
|
||||
if _, exists := variables[refVarName]; exists {
|
||||
graph[varName] = append(graph[varName], refVarName)
|
||||
}
|
||||
}
|
||||
}
|
||||
return graph
|
||||
}
|
||||
|
||||
func dfsDetectCycle(node string, graph map[string][]string, visited, inStack map[string]bool, path *[]string) bool {
|
||||
visited[node] = true
|
||||
inStack[node] = true
|
||||
*path = append(*path, node)
|
||||
|
||||
for _, neighbor := range graph[node] {
|
||||
if !visited[neighbor] {
|
||||
if dfsDetectCycle(neighbor, graph, visited, inStack, path) {
|
||||
return true
|
||||
}
|
||||
} else if inStack[neighbor] {
|
||||
// Found cycle, complete the cycle path
|
||||
*path = append(*path, neighbor)
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
inStack[node] = false
|
||||
*path = (*path)[:len(*path)-1]
|
||||
return false
|
||||
}
|
||||
func checkEachVariable(variables map[string]string) map[string]bool {
|
||||
results := make(map[string]bool)
|
||||
graph := buildDependencyGraph(variables)
|
||||
|
||||
for varName := range variables {
|
||||
visited := make(map[string]bool)
|
||||
inStack := make(map[string]bool)
|
||||
var cyclePath []string
|
||||
|
||||
hasCycle := dfsDetectCycle(varName, graph, visited, inStack, &cyclePath)
|
||||
results[varName] = hasCycle
|
||||
|
||||
if hasCycle {
|
||||
fmt.Printf("变量 %s 存在循环引用: %v\n", varName, cyclePath)
|
||||
}
|
||||
}
|
||||
|
||||
return results
|
||||
}
|
||||
func ContainsAnySubstring(s string, substrList []string) bool {
|
||||
for _, substr := range substrList {
|
||||
hasSubstr, _ := regexp.MatchString(`\$`+substr+`\b`, s)
|
||||
if hasSubstr {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
97
services/devcontainer/devcontainer_variables.go
Normal file
@@ -0,0 +1,97 @@
|
||||
// Copyright 2024 The Gitea Authors. All rights reserved.
|
||||
// SPDX-License-Identifier: MIT
|
||||
|
||||
package devcontainer
|
||||
|
||||
import (
|
||||
"context"
|
||||
"regexp"
|
||||
|
||||
devcontainer_model "code.gitea.io/gitea/models/devcontainer"
|
||||
"code.gitea.io/gitea/modules/log"
|
||||
"code.gitea.io/gitea/modules/util"
|
||||
secret_service "code.gitea.io/gitea/services/secrets"
|
||||
)
|
||||
|
||||
func CreateVariable(ctx context.Context, ownerID, repoID int64, name, data, description string) (*devcontainer_model.DevcontainerVariable, error) {
|
||||
if err := secret_service.ValidateName(name); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := envNameCIRegexMatch(name); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
v, err := devcontainer_model.InsertVariable(ctx, ownerID, repoID, name, util.ReserveLineBreakForTextarea(data), description)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return v, nil
|
||||
}
|
||||
|
||||
func UpdateVariableNameData(ctx context.Context, variable *devcontainer_model.DevcontainerVariable) (bool, error) {
|
||||
if err := secret_service.ValidateName(variable.Name); err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
if err := envNameCIRegexMatch(variable.Name); err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
variable.Data = util.ReserveLineBreakForTextarea(variable.Data)
|
||||
|
||||
return devcontainer_model.UpdateVariableCols(ctx, variable, "name", "data", "description")
|
||||
}
|
||||
|
||||
func DeleteVariableByID(ctx context.Context, variableID int64) error {
|
||||
return devcontainer_model.DeleteVariable(ctx, variableID)
|
||||
}
|
||||
|
||||
func DeleteVariableByName(ctx context.Context, ownerID, repoID int64, name string) error {
|
||||
if err := secret_service.ValidateName(name); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := envNameCIRegexMatch(name); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
v, err := GetVariable(ctx, devcontainer_model.FindVariablesOpts{
|
||||
OwnerID: ownerID,
|
||||
RepoID: repoID,
|
||||
Name: name,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return devcontainer_model.DeleteVariable(ctx, v.ID)
|
||||
}
|
||||
|
||||
func GetVariable(ctx context.Context, opts devcontainer_model.FindVariablesOpts) (*devcontainer_model.DevcontainerVariable, error) {
|
||||
vars, err := devcontainer_model.FindVariables(ctx, opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(vars) != 1 {
|
||||
return nil, util.NewNotExistErrorf("variable not found")
|
||||
}
|
||||
return vars[0], nil
|
||||
}
|
||||
|
||||
// some regular expression of `variables` and `secrets`
|
||||
// reference to:
|
||||
// https://docs.github.com/en/actions/learn-github-actions/variables#naming-conventions-for-configuration-variables
|
||||
// https://docs.github.com/en/actions/security-guides/encrypted-secrets#naming-your-secrets
|
||||
var (
|
||||
forbiddenEnvNameCIRx = regexp.MustCompile("(?i)^CI")
|
||||
)
|
||||
|
||||
func envNameCIRegexMatch(name string) error {
|
||||
if forbiddenEnvNameCIRx.MatchString(name) {
|
||||
log.Error("Env Name cannot be ci")
|
||||
return util.NewInvalidArgumentErrorf("env name cannot be ci")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -100,7 +100,7 @@ func CreateDevContainerByDockerAPI(ctx context.Context, newDevcontainer *devcont
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Info("ExecCommandInContainerExecCommandInContainerExecCommandInContainerExecCommandInContainerExecCommandInContainer")
|
||||
|
||||
output, err := docker_module.ExecCommandInContainer(ctx, cli, newDevcontainer.Name,
|
||||
`echo "`+newDevcontainer.DevcontainerHost+` host.docker.internal" | tee -a /etc/hosts;apt update;apt install -y git ;git clone `+strings.TrimSuffix(setting.AppURL, "/")+repo.Link()+" "+newDevcontainer.DevcontainerWorkDir+"/"+repo.Name+`; apt install -y ssh;echo "PubkeyAuthentication yes `+"\n"+`PermitRootLogin yes `+"\n"+`" | tee -a /etc/ssh/sshd_config;rm -f /etc/ssh/ssh_host_*; ssh-keygen -A; service ssh restart;mkdir -p ~/.ssh;chmod 700 ~/.ssh;echo "`+strings.Join(publicKeyList, "\n")+`" > ~/.ssh/authorized_keys;chmod 600 ~/.ssh/authorized_keys;`,
|
||||
)
|
||||
@@ -199,7 +199,7 @@ func CreateDevContainerByDockerCommand(ctx context.Context, newDevcontainer *dev
|
||||
if err != nil {
|
||||
return imageName, err
|
||||
}
|
||||
var startCommand string = `docker -H ` + dockerSocket + ` run --restart=always -d --name ` + newDevcontainer.Name
|
||||
var startCommand string = `docker -H ` + dockerSocket + ` create --restart=always --name ` + newDevcontainer.Name
|
||||
|
||||
// 将每个端口转换为 "-p <port>" 格式
|
||||
var portFlags string = " -p 22 "
|
||||
@@ -209,10 +209,11 @@ func CreateDevContainerByDockerCommand(ctx context.Context, newDevcontainer *dev
|
||||
portFlags = portFlags + fmt.Sprintf(" -p %d ", port)
|
||||
}
|
||||
startCommand += portFlags
|
||||
|
||||
var envFlags string = ` -e RepoLink="` + strings.TrimSuffix(cfg.Section("server").Key("ROOT_URL").Value(), `/`) + repo.Link() + `" ` +
|
||||
` -e DevstarHost="` + newDevcontainer.DevcontainerHost + `"` +
|
||||
` -e WorkSpace="` + newDevcontainer.DevcontainerWorkDir + `/` + repo.Name + `"` +
|
||||
` -e PublicKeyList="` + strings.Join(publicKeyList, "\n") + `" `
|
||||
` -e WorkSpace="` + newDevcontainer.DevcontainerWorkDir + `/` + repo.Name + `" ` +
|
||||
` -e DEVCONTAINER_STATUS="start" `
|
||||
// 遍历 ContainerEnv 映射中的每个环境变量
|
||||
for name, value := range configurationModel.ContainerEnv {
|
||||
// 将每个环境变量转换为 "-e name=value" 格式
|
||||
@@ -225,15 +226,14 @@ func CreateDevContainerByDockerCommand(ctx context.Context, newDevcontainer *dev
|
||||
if configurationModel.Privileged {
|
||||
startCommand += " --privileged "
|
||||
}
|
||||
|
||||
var capAddFlags string
|
||||
// 遍历 CapAdd 列表中的每个能力
|
||||
for _, capability := range configurationModel.CapAdd {
|
||||
// 将每个能力转换为 --cap-add=capability 格式
|
||||
capAddFlags = capAddFlags + fmt.Sprintf(" --cap-add %s ", capability)
|
||||
}
|
||||
startCommand += capAddFlags
|
||||
|
||||
startCommand += capAddFlags
|
||||
var securityOptFlags string
|
||||
// 遍历 SecurityOpt 列表中的每个安全选项
|
||||
for _, option := range configurationModel.SecurityOpt {
|
||||
@@ -246,34 +246,31 @@ func CreateDevContainerByDockerCommand(ctx context.Context, newDevcontainer *dev
|
||||
startCommand += fmt.Sprintf(" -w %s ", configurationModel.WorkspaceFolder)
|
||||
}
|
||||
startCommand += " " + strings.Join(configurationModel.RunArgs, " ") + " "
|
||||
overrideCommand := ""
|
||||
if !configurationModel.OverrideCommand {
|
||||
overrideCommand = ` sh -c "/home/webTerminal.sh" `
|
||||
startCommand += ` --entrypoint="" `
|
||||
}
|
||||
//创建并运行容器的命令
|
||||
if _, err := dbEngine.Table("devcontainer_output").Insert(&devcontainer_models.DevcontainerOutput{
|
||||
Output: "",
|
||||
Status: "waitting",
|
||||
UserId: newDevcontainer.UserId,
|
||||
RepoId: newDevcontainer.RepoId,
|
||||
Command: startCommand + imageName + ` sh -c "tail -f /dev/null"` + "\n",
|
||||
Command: startCommand + imageName + overrideCommand + "\n",
|
||||
ListId: 2,
|
||||
DevcontainerId: newDevcontainer.Id,
|
||||
}); err != nil {
|
||||
log.Info("Failed to insert record: %v", err)
|
||||
return imageName, err
|
||||
}
|
||||
//安装基本工具的命令
|
||||
onCreateCommand := strings.TrimSpace(strings.Join(configurationModel.ParseCommand(configurationModel.OnCreateCommand), ";"))
|
||||
if !strings.HasSuffix(onCreateCommand, ";") {
|
||||
onCreateCommand += ";"
|
||||
}
|
||||
if onCreateCommand == ";" {
|
||||
onCreateCommand = ""
|
||||
}
|
||||
|
||||
if _, err := dbEngine.Table("devcontainer_output").Insert(&devcontainer_models.DevcontainerOutput{
|
||||
Output: "",
|
||||
Status: "waitting",
|
||||
UserId: newDevcontainer.UserId,
|
||||
RepoId: newDevcontainer.RepoId,
|
||||
Command: `docker -H ` + dockerSocket + ` exec ` + newDevcontainer.Name + ` /home/webTerminal.sh start; ` +
|
||||
onCreateCommand + "\n",
|
||||
Output: "",
|
||||
Status: "waitting",
|
||||
UserId: newDevcontainer.UserId,
|
||||
RepoId: newDevcontainer.RepoId,
|
||||
Command: `docker -H ` + dockerSocket + ` start -a ` + newDevcontainer.Name + "\n",
|
||||
ListId: 3,
|
||||
DevcontainerId: newDevcontainer.Id,
|
||||
}); err != nil {
|
||||
@@ -471,7 +468,6 @@ func UpdateDevContainerByDocker(ctx context.Context, devContainerInfo *devcontai
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// 定义正则表达式来匹配 image 字段
|
||||
re := regexp.MustCompile(`"image"\s*:\s*"([^"]+)"`)
|
||||
// 使用正则表达式查找并替换 image 字段的值
|
||||
@@ -504,6 +500,7 @@ func ImageExists(ctx context.Context, imageName string) (bool, error) {
|
||||
}
|
||||
return true, nil // 镜像存在
|
||||
}
|
||||
|
||||
func CheckDirExistsFromDocker(ctx context.Context, containerName, dirPath string) (bool, error) {
|
||||
// 上下文
|
||||
// 创建 Docker 客户端
|
||||
@@ -545,6 +542,47 @@ func CheckDirExistsFromDocker(ctx context.Context, containerName, dirPath string
|
||||
exitCode = resp.ExitCode
|
||||
return exitCode == 0, nil // 退出码为 0 表示目录存在
|
||||
}
|
||||
func CheckFileExistsFromDocker(ctx context.Context, containerName, filePath string) (bool, error) {
|
||||
// 上下文
|
||||
// 创建 Docker 客户端
|
||||
cli, err := docker_module.CreateDockerClient(ctx)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
// 获取容器 ID
|
||||
containerID, err := docker_module.GetContainerID(cli, containerName)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
// 创建 exec 配置
|
||||
execConfig := types.ExecConfig{
|
||||
Cmd: []string{"test", "-e", filePath}, // 检查文件是否存在
|
||||
AttachStdout: true,
|
||||
AttachStderr: true,
|
||||
}
|
||||
|
||||
// 创建 exec 实例
|
||||
execResp, err := cli.ContainerExecCreate(context.Background(), containerID, execConfig)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// 执行命令
|
||||
var exitCode int
|
||||
err = cli.ContainerExecStart(context.Background(), execResp.ID, types.ExecStartCheck{})
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// 获取命令执行结果
|
||||
resp, err := cli.ContainerExecInspect(context.Background(), execResp.ID)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
exitCode = resp.ExitCode
|
||||
return exitCode == 0, nil // 退出码为 0 表示目录存在
|
||||
}
|
||||
func RegistWebTerminal(ctx context.Context) error {
|
||||
log.Info("开始构建WebTerminal...")
|
||||
cli, err := docker_module.CreateDockerClient(ctx)
|
||||
|
||||
7
templates/admin/devcontainer.tmpl
Normal file
@@ -0,0 +1,7 @@
|
||||
{{template "admin/layout_head" (dict "ctxData" . "pageClass" "admin actions")}}
|
||||
<div class="admin-setting-content">
|
||||
{{if eq .PageType "variables"}}
|
||||
{{template "shared/devcontainer/variable_list" .}}
|
||||
{{end}}
|
||||
</div>
|
||||
{{template "admin/layout_footer" .}}
|
||||
@@ -112,5 +112,14 @@
|
||||
</a>
|
||||
</div>
|
||||
</details>
|
||||
|
||||
<details class="item toggleable-item" {{if or .PageIsSharedSettingsDevcontainerVariables}}open{{end}}>
|
||||
<summary>{{ctx.Locale.Tr "admin.devcontainer"}}</summary>
|
||||
<div class="menu">
|
||||
<a class="{{if .PageIsSharedSettingsDevcontainerVariables}}active {{end}}item" href="{{AppSubUrl}}/-/admin/devcontainer/variables">
|
||||
{{ctx.Locale.Tr "devcontainer.variables"}}
|
||||
</a>
|
||||
</div>
|
||||
</details>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
7
templates/org/settings/devcontainer.tmpl
Normal file
@@ -0,0 +1,7 @@
|
||||
{{template "org/settings/layout_head" (dict "ctxData" . "pageClass" "organization settings actions")}}
|
||||
<div class="org-setting-content">
|
||||
{{if eq .PageType "variables"}}
|
||||
{{template "shared/devcontainer/variable_list" .}}
|
||||
{{end}}
|
||||
</div>
|
||||
{{template "org/settings/layout_footer" .}}
|
||||
@@ -41,5 +41,13 @@
|
||||
</div>
|
||||
</details>
|
||||
{{end}}
|
||||
<details class="item toggleable-item" {{if or .PageIsSharedSettingsDevcontainerVariables}}open{{end}}>
|
||||
<summary>{{ctx.Locale.Tr "admin.devcontainer"}}</summary>
|
||||
<div class="menu">
|
||||
<a class="{{if .PageIsSharedSettingsDevcontainerVariables}}active {{end}}item" href="{{.OrgLink}}/settings/devcontainer/variables">
|
||||
{{ctx.Locale.Tr "devcontainer.variables"}}
|
||||
</a>
|
||||
</div>
|
||||
</details>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
20
templates/repo/devcontainer/default_devcontainer.json
Normal file
@@ -0,0 +1,20 @@
|
||||
{
|
||||
"name":"template",
|
||||
"image":"mcr.microsoft.com/devcontainers/base:dev-ubuntu-20.04",
|
||||
"forwardPorts": ["8080"],
|
||||
"containerEnv": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"initializeCommand": "echo \"initializeCommand\";",
|
||||
"onCreateCommand": [
|
||||
"echo \"onCreateCommand\";",
|
||||
"echo \"onCreateCommand\";"
|
||||
],
|
||||
"postCreateCommand": [
|
||||
"echo \"postCreateCommand\"",
|
||||
"echo \"OK\""
|
||||
],
|
||||
"runArgs": [
|
||||
"-p 8888"
|
||||
]
|
||||
}
|
||||
@@ -44,7 +44,7 @@
|
||||
|
||||
<!-- 开始:Dev Container 正文内容 - 右侧展示区 -->
|
||||
<div class="issue-content-right ui segment">
|
||||
<strong>Options</strong>
|
||||
<strong>{{ctx.Locale.Tr "repo.dev_container_control"}}</strong>
|
||||
<div class="ui relaxed list">
|
||||
|
||||
{{if .HasDevContainer}}
|
||||
@@ -53,8 +53,8 @@
|
||||
<div style=" display: none;" id="updateContainer" class="item"><a class="delete-button flex-text-inline" style="color:black; " data-modal-id="updatemodal" href="#">{{svg "octicon-database"}}{{ctx.Locale.Tr "repo.dev_container_control.update"}}</a></div>
|
||||
{{end}}
|
||||
|
||||
<div style=" display: none;" id="restartContainer" class="item"><button class="flex-text-inline" style="color:black; " >{{svg "octicon-terminal" 14 "tw-mr-2"}} Restart Dev Container</button></div>
|
||||
<div style=" display: none;" id="stopContainer" class="item"><button class="flex-text-inline" style="color:black; " >{{svg "octicon-terminal" 14 "tw-mr-2"}} Stop Dev Container</button></div>
|
||||
<div style=" display: none;" id="restartContainer" class="item"><button class="flex-text-inline" style="color:black; " >{{svg "octicon-terminal" 14 "tw-mr-2"}}{{ctx.Locale.Tr "repo.dev_container_control.start"}}</button></div>
|
||||
<div style=" display: none;" id="stopContainer" class="item"><button class="flex-text-inline" style="color:black; " >{{svg "octicon-terminal" 14 "tw-mr-2"}}{{ctx.Locale.Tr "repo.dev_container_control.stop"}} </button></div>
|
||||
|
||||
<div style=" display: none;" id="webTerminal" class="item"><a class="flex-text-inline" style="color:black; " href="{{.WebSSHUrl}}" target="_blank">{{svg "octicon-code" 14}}open with WebTerminal</a></div>
|
||||
<div style=" display: none;" id="vsTerminal" class="item"><a class="flex-text-inline" style="color:black; " onclick="window.location.href = '{{.VSCodeUrl}}'">{{svg "octicon-code" 14}}open with VSCode</a ></div>
|
||||
@@ -230,6 +230,15 @@ function getStatus() {
|
||||
)
|
||||
.then(response => response.json())
|
||||
.then(data => {
|
||||
if(status !== '9' && status !== '-1' && data.status == '9'){
|
||||
window.location.reload();
|
||||
}
|
||||
if(status !== '-1' && data.status == '-1'){
|
||||
window.location.reload();
|
||||
}
|
||||
if(status !== '4' && status !== '-1' && data.status == '4'){
|
||||
window.location.reload();
|
||||
}
|
||||
if (data.status == '-1' || data.status == '') {
|
||||
if (loadingElement) {
|
||||
loadingElement.style.display = 'none';
|
||||
@@ -265,6 +274,9 @@ function getStatus() {
|
||||
if (loadingElement) {
|
||||
loadingElement.style.display = 'none';
|
||||
}
|
||||
if (restartContainer) {
|
||||
restartContainer.style.display = 'none';
|
||||
}
|
||||
clearInterval(intervalID);
|
||||
}else if (data.status == '5') {
|
||||
concealElement();
|
||||
@@ -314,12 +326,7 @@ function getStatus() {
|
||||
loadingElement.style.display = 'block';
|
||||
}
|
||||
}
|
||||
if(status !== '9' && status !== '-1' && data.status == '9'){
|
||||
window.location.reload();
|
||||
}
|
||||
if(status !== '-1' && data.status == '-1'){
|
||||
window.location.reload();
|
||||
}
|
||||
|
||||
status = data.status
|
||||
})
|
||||
.catch(error => {
|
||||
@@ -331,6 +338,9 @@ if (restartContainer) {
|
||||
restartContainer.addEventListener('click', function(event) {
|
||||
// 处理点击逻辑
|
||||
concealElement();
|
||||
if (loadingElement) {
|
||||
loadingElement.style.display = 'block';
|
||||
}
|
||||
fetch('{{.Repository.Link}}' + '/devcontainer/restart')
|
||||
.then(response => {intervalID = setInterval(getStatus, 3000);})
|
||||
});
|
||||
@@ -338,6 +348,9 @@ if (restartContainer) {
|
||||
if (stopContainer) {
|
||||
stopContainer.addEventListener('click', function(event) {
|
||||
concealElement();
|
||||
if (loadingElement) {
|
||||
loadingElement.style.display = 'block';
|
||||
}
|
||||
// 处理点击逻辑
|
||||
fetch('{{.Repository.Link}}' + '/devcontainer/stop')
|
||||
.then(response => {intervalID = setInterval(getStatus, 3000);})
|
||||
|
||||
14
templates/repo/devcontainer/devcontainer_restart.sh
Normal file
@@ -0,0 +1,14 @@
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
service ssh restart;
|
||||
;;
|
||||
centos)
|
||||
;;
|
||||
fedora)
|
||||
;;
|
||||
*)
|
||||
failure "Unsupported OS: $OS_ID"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
# 重启服务的命令
|
||||
59
templates/repo/devcontainer/devcontainer_start.sh
Normal file
@@ -0,0 +1,59 @@
|
||||
# 启动服务的命令
|
||||
echo "$DevstarHost host.docker.internal" | tee -a /etc/hosts;
|
||||
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
apt-get update -y
|
||||
# 检查 SSH 是否已安装
|
||||
if ! dpkg -l | grep -q "^ii.*openssh-server"; then
|
||||
echo "SSH 未安装,将进行安装"
|
||||
apt-get install ssh -y
|
||||
else
|
||||
echo "SSH 已安装"
|
||||
fi
|
||||
# 检查 Git 是否已安装
|
||||
if ! dpkg -l | grep -q "^ii.*git"; then
|
||||
echo "Git 未安装,将进行安装"
|
||||
apt-get install git -y
|
||||
else
|
||||
echo "Git 已安装"
|
||||
fi
|
||||
|
||||
;;
|
||||
centos)
|
||||
# sudo yum update -y
|
||||
# sudo yum install -y epel-release
|
||||
# sudo yum groupinstall -y "Development Tools"
|
||||
# sudo yum install -y yaml-cpp yaml-cpp-devel
|
||||
;;
|
||||
fedora)
|
||||
# sudo dnf update -y
|
||||
# sudo dnf group install -y "Development Tools"
|
||||
# sudo dnf install -y yaml-cpp yaml-cpp-devel
|
||||
;;
|
||||
*)
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
echo "PubkeyAuthentication yes\nPermitRootLogin yes\n" | tee -a /etc/ssh/sshd_config;
|
||||
rm -f /etc/ssh/ssh_host_*;
|
||||
ssh-keygen -A;
|
||||
mkdir -p ~/.ssh;
|
||||
chmod 700 ~/.ssh;
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
service ssh restart;
|
||||
;;
|
||||
centos)
|
||||
;;
|
||||
fedora)
|
||||
;;
|
||||
*)
|
||||
failure "Unsupported OS: $OS_ID"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
echo "$PUBLIC_KEY_LIST" > ~/.ssh/authorized_keys;
|
||||
chmod 600 ~/.ssh/authorized_keys
|
||||
|
||||
34
templates/repo/devcontainer/devcontainer_tmpl.sh
Executable file
@@ -0,0 +1,34 @@
|
||||
#!/bin/sh
|
||||
# 获取参数
|
||||
OS_ID=$(grep '^ID=' /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
|
||||
# 根据参数执行不同命令
|
||||
case $DEVCONTAINER_STATUS in
|
||||
"restart")
|
||||
echo "Restarting service..."
|
||||
$RESTART
|
||||
sh -c "tail -f /dev/null"
|
||||
;;
|
||||
"start")
|
||||
echo "Starting service..."
|
||||
$START
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
echo 'DEVCONTAINER_STATUS="restart"' | tee -a /etc/environment
|
||||
;;
|
||||
centos)
|
||||
;;
|
||||
fedora)
|
||||
;;
|
||||
*)
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
git clone $RepoLink $WorkSpace
|
||||
sh -c "tail -f /dev/null"
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $0 {start|stop|restart}"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
7
templates/repo/settings/devcontainer.tmpl
Normal file
@@ -0,0 +1,7 @@
|
||||
{{template "repo/settings/layout_head" (dict "ctxData" . "pageClass" "repository settings actions")}}
|
||||
<div class="repo-setting-content">
|
||||
{{if eq .PageType "variables"}}
|
||||
{{template "shared/devcontainer/variable_list" .}}
|
||||
{{end}}
|
||||
</div>
|
||||
{{template "repo/settings/layout_footer" .}}
|
||||
@@ -54,5 +54,13 @@
|
||||
</div>
|
||||
</details>
|
||||
{{end}}
|
||||
<details class="item toggleable-item" {{if or .PageIsSharedSettingsDevcontainerVariables}}open{{end}}>
|
||||
<summary>{{ctx.Locale.Tr "admin.devcontainer"}}</summary>
|
||||
<div class="menu">
|
||||
<a class="{{if .PageIsSharedSettingsDevcontainerVariables}}active {{end}}item" href="{{.RepoLink}}/settings/devcontainer/variables">
|
||||
{{ctx.Locale.Tr "devcontainer.variables"}}
|
||||
</a>
|
||||
</div>
|
||||
</details>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
363
templates/shared/devcontainer/variable_list.tmpl
Normal file
@@ -0,0 +1,363 @@
|
||||
<h4 class="ui top attached header">
|
||||
{{ctx.Locale.Tr "devcontainer.scripts"}}
|
||||
</h4>
|
||||
<div class="ui attached segment">
|
||||
{{ctx.Locale.Tr "devcontainer.scripts.description"}}
|
||||
{{if or .Variables .DevstarVariables}}
|
||||
<div class="dynamic-tags" data-tags='{{.Tags}}'></div>
|
||||
{{end}}
|
||||
</div>
|
||||
<h4 class="ui top attached header">
|
||||
{{ctx.Locale.Tr "devcontainer.variables.management"}}
|
||||
<div class="ui right">
|
||||
<button class="ui primary tiny button show-modal"
|
||||
data-modal="#edit-variable-modal"
|
||||
data-modal-form.action="{{.Link}}/new"
|
||||
data-modal-header="{{ctx.Locale.Tr "devcontainer.variables.creation"}}"
|
||||
data-modal-dialog-variable-name=""
|
||||
data-modal-dialog-variable-data=""
|
||||
data-modal-dialog-variable-description=""
|
||||
>
|
||||
{{ctx.Locale.Tr "devcontainer.variables.creation"}}
|
||||
</button>
|
||||
</div>
|
||||
</h4>
|
||||
<div class="ui attached segment">
|
||||
{{if or .Variables .DevstarVariables}}
|
||||
<div class="flex-list">
|
||||
{{range .Variables}}
|
||||
<div class="flex-item tw-items-center">
|
||||
<div class="flex-item-leading">
|
||||
{{svg "octicon-pencil" 32}}
|
||||
</div>
|
||||
<div class="flex-item-main">
|
||||
<div class="flex-item-title">
|
||||
{{.Name}}
|
||||
</div>
|
||||
<div class="flex-item-body">
|
||||
{{if .Description}}{{.Description}}{{else}}-{{end}}
|
||||
</div>
|
||||
<div class="flex-item-body">
|
||||
{{.Data}}
|
||||
</div>
|
||||
</div>
|
||||
<div class="flex-item-trailing">
|
||||
<span class="color-text-light-2">
|
||||
{{ctx.Locale.Tr "settings.added_on" (DateUtils.AbsoluteShort .CreatedUnix)}}
|
||||
</span>
|
||||
<button class="btn interact-bg tw-p-2 show-modal"
|
||||
data-tooltip-content="{{ctx.Locale.Tr "devcontainer.variables.edit"}}"
|
||||
data-modal="#edit-variable-modal"
|
||||
data-modal-form.action="{{$.Link}}/{{.ID}}/edit"
|
||||
data-modal-header="{{ctx.Locale.Tr "devcontainer.variables.edit"}}"
|
||||
data-modal-dialog-variable-name="{{.Name}}"
|
||||
data-modal-dialog-variable-data="{{.Data}}"
|
||||
data-modal-dialog-variable-description="{{.Description}}"
|
||||
>
|
||||
{{svg "octicon-pencil"}}
|
||||
</button>
|
||||
<button class="btn interact-bg tw-p-2 link-action"
|
||||
data-tooltip-content="{{ctx.Locale.Tr "devcontainer.variables.deletion"}}"
|
||||
data-url="{{$.Link}}/{{.ID}}/delete"
|
||||
data-modal-confirm="{{ctx.Locale.Tr "devcontainer.variables.deletion.description"}}"
|
||||
>
|
||||
{{svg "octicon-trash"}}
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
{{end}}
|
||||
{{range .DevstarVariables}}
|
||||
<div class="flex-item tw-items-center">
|
||||
<div class="flex-item-leading">
|
||||
{{svg "octicon-pencil" 32}}
|
||||
</div>
|
||||
<div class="flex-item-main">
|
||||
<div class="flex-item-title">
|
||||
{{.Name}}
|
||||
</div>
|
||||
<div class="flex-item-body">
|
||||
{{if .Description}}{{.Description}}{{else}}-{{end}}
|
||||
</div>
|
||||
<div class="flex-item-body">
|
||||
{{.Data}}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{{end}}
|
||||
</div>
|
||||
{{else}}
|
||||
{{ctx.Locale.Tr "devcontainer.variables.none"}}
|
||||
{{end}}
|
||||
</div>
|
||||
|
||||
{{/** Edit variable dialog */}}
|
||||
<div class="ui small modal" id="edit-variable-modal">
|
||||
<div class="header"></div>
|
||||
<form class="ui form form-fetch-action" method="post">
|
||||
<div class="content">
|
||||
{{.CsrfTokenHtml}}
|
||||
<div class="field">
|
||||
{{ctx.Locale.Tr "devcontainer.variables.description"}}
|
||||
</div>
|
||||
<div class="field">
|
||||
<label for="dialog-variable-name">{{ctx.Locale.Tr "name"}}</label>
|
||||
<input autofocus required
|
||||
name="name"
|
||||
id="dialog-variable-name"
|
||||
value="{{.name}}"
|
||||
pattern="^(?!GITEA_|GITHUB_)[a-zA-Z_][a-zA-Z0-9_]*$"
|
||||
placeholder="{{ctx.Locale.Tr "secrets.creation.name_placeholder"}}"
|
||||
>
|
||||
</div>
|
||||
<div class="field">
|
||||
<label for="dialog-variable-data">{{ctx.Locale.Tr "value"}}</label>
|
||||
<textarea required
|
||||
name="data"
|
||||
id="dialog-variable-data"
|
||||
placeholder="{{ctx.Locale.Tr "secrets.creation.value_placeholder"}}"
|
||||
></textarea>
|
||||
</div>
|
||||
<div class="field">
|
||||
<label for="dialog-variable-description">{{ctx.Locale.Tr "secrets.creation.description"}}</label>
|
||||
<textarea
|
||||
name="description"
|
||||
id="dialog-variable-description"
|
||||
rows="2"
|
||||
maxlength="{{.DescriptionMaxLength}}"
|
||||
placeholder="{{ctx.Locale.Tr "secrets.creation.description_placeholder"}}"
|
||||
></textarea>
|
||||
</div>
|
||||
</div>
|
||||
{{template "base/modal_actions_confirm" (dict "ModalButtonTypes" "confirm")}}
|
||||
</form>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
function initDynamicTags() {
|
||||
// 查找所有具有 dynamic-tags 类的元素
|
||||
const elements = document.querySelectorAll('.dynamic-tags');
|
||||
|
||||
elements.forEach(el => {
|
||||
// 获取标签数据
|
||||
let tags = [];
|
||||
try {
|
||||
tags = JSON.parse(el.getAttribute('data-tags') || '[]');
|
||||
} catch (e) {
|
||||
console.error('Invalid tags data:', el.getAttribute('data-tags'));
|
||||
}
|
||||
|
||||
// 创建容器
|
||||
const container = document.createElement('div');
|
||||
container.className = 'dynamic-tags-container';
|
||||
|
||||
// 创建标签列表容器
|
||||
const tagList = document.createElement('div');
|
||||
tagList.className = 'dynamic-tags-list';
|
||||
|
||||
// 渲染标签
|
||||
function renderTags() {
|
||||
// 清空标签列表
|
||||
tagList.innerHTML = '';
|
||||
|
||||
// 添加每个标签
|
||||
tags.forEach(tag => {
|
||||
const tagElement = document.createElement('span');
|
||||
tagElement.className = 'tag-item';
|
||||
tagElement.innerHTML = `
|
||||
${tag}
|
||||
<button class="tag-close" data-tag="${tag}">×</button>
|
||||
`;
|
||||
tagList.appendChild(tagElement);
|
||||
});
|
||||
|
||||
// 添加"新增标签"按钮或输入框
|
||||
const inputContainer = document.createElement('span');
|
||||
inputContainer.className = 'tag-input-container';
|
||||
inputContainer.innerHTML = `
|
||||
<button class="tag-add-button">+ New Script</button>
|
||||
<input type="text" class="tag-input" style="display: none;" placeholder="Enter tag">
|
||||
`;
|
||||
|
||||
tagList.appendChild(inputContainer);
|
||||
|
||||
// 绑定事件
|
||||
bindEvents();
|
||||
}
|
||||
|
||||
// 绑定事件
|
||||
function bindEvents() {
|
||||
// 删除标签事件
|
||||
tagList.querySelectorAll('.tag-close').forEach(button => {
|
||||
button.addEventListener('click', (e) => {
|
||||
const tag = e.target.getAttribute('data-tag');
|
||||
// 删除标签时访问 /script/delete
|
||||
fetch('{{.Link}}/script/delete?name=' + encodeURIComponent(tag), {
|
||||
method: 'GET',
|
||||
headers: {
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
},
|
||||
}).then(response => {
|
||||
if (response.ok) {
|
||||
tags = tags.filter(t => t !== tag);
|
||||
renderTags();
|
||||
console.log('Successfully deleted script for variable: ' + tag);
|
||||
} else {
|
||||
console.error('Failed to delete script for variable: ' + tag);
|
||||
}
|
||||
}).catch(error => {
|
||||
console.error('Error deleting script:', error);
|
||||
});
|
||||
|
||||
});
|
||||
});
|
||||
|
||||
// 显示输入框事件
|
||||
const addButton = tagList.querySelector('.tag-add-button');
|
||||
const tagInput = tagList.querySelector('.tag-input');
|
||||
|
||||
addButton.addEventListener('click', () => {
|
||||
addButton.style.display = 'none';
|
||||
tagInput.style.display = 'inline-block';
|
||||
tagInput.focus();
|
||||
});
|
||||
|
||||
// 添加标签事件
|
||||
tagInput.addEventListener('keyup', (e) => {
|
||||
if (e.key === 'Enter') {
|
||||
const value = e.target.value.trim();
|
||||
if (value && !tags.includes(value)) {
|
||||
// 当按下 Enter 键时,发送请求到 /script/new
|
||||
fetch('{{.Link}}/script/new?name=' + encodeURIComponent(value), {
|
||||
method: 'GET',
|
||||
headers: {
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
},
|
||||
}).then(response => {
|
||||
if (response.ok) {
|
||||
console.log('Successfully created script for variable: ' + value);
|
||||
tags.push(value);
|
||||
renderTags();
|
||||
} else {
|
||||
console.error('Failed to create script for variable: ' + value);
|
||||
}
|
||||
}).catch(error => {
|
||||
console.error('Error creating script:', error);
|
||||
});
|
||||
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
// 失去焦点时隐藏输入框
|
||||
tagInput.addEventListener('blur', () => {
|
||||
const value = tagInput.value.trim();
|
||||
if (value && !tags.includes(value)) {
|
||||
fetch('{{.Link}}/script/new?name=' + encodeURIComponent(value), {
|
||||
method: 'GET',
|
||||
headers: {
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
},
|
||||
}).then(response => {
|
||||
if (response.ok) {
|
||||
console.log('Successfully created script for variable: ' + value);
|
||||
tags.push(value);
|
||||
renderTags();
|
||||
} else {
|
||||
console.error('Failed to create script for variable: ' + value);
|
||||
}
|
||||
}).catch(error => {
|
||||
console.error('Error creating script:', error);
|
||||
});
|
||||
}
|
||||
addButton.style.display = 'inline-flex';
|
||||
tagInput.style.display = 'none';
|
||||
tagInput.value = '';
|
||||
renderTags();
|
||||
});
|
||||
}
|
||||
|
||||
// 初始渲染
|
||||
renderTags();
|
||||
container.appendChild(tagList);
|
||||
el.innerHTML = '';
|
||||
el.appendChild(container);
|
||||
});
|
||||
}
|
||||
|
||||
// 页面加载完成后初始化
|
||||
document.addEventListener('DOMContentLoaded', function() {
|
||||
initDynamicTags();
|
||||
});
|
||||
</script>
|
||||
<style>
|
||||
.dynamic-tags-container {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
flex-wrap: wrap;
|
||||
gap: 10px;
|
||||
}
|
||||
|
||||
.tag-item {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
background-color: #007bff;
|
||||
color: white;
|
||||
border-radius: 4px;
|
||||
padding: 4px 8px;
|
||||
font-size: 12px;
|
||||
margin: 2px;
|
||||
}
|
||||
|
||||
.tag-close {
|
||||
margin-left: 6px;
|
||||
cursor: pointer;
|
||||
opacity: 0.8;
|
||||
background: none;
|
||||
border: none;
|
||||
color: white;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.tag-close:hover {
|
||||
opacity: 1;
|
||||
}
|
||||
|
||||
.tag-add-button {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 4px;
|
||||
background: transparent;
|
||||
border: 1px dashed #6c757d;
|
||||
color: #6c757d;
|
||||
border-radius: 4px;
|
||||
padding: 4px 8px;
|
||||
font-size: 12px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.tag-add-button:hover {
|
||||
border-color: #007bff;
|
||||
color: #007bff;
|
||||
}
|
||||
|
||||
.tag-input-container {
|
||||
display: inline-block;
|
||||
width: 90px;
|
||||
}
|
||||
|
||||
.tag-input {
|
||||
width: 100%;
|
||||
padding: 4px 8px;
|
||||
font-size: 12px;
|
||||
border: 1px solid #6c757d;
|
||||
border-radius: 4px;
|
||||
background: #fff;
|
||||
color: #333;
|
||||
}
|
||||
|
||||
.tag-input:focus {
|
||||
outline: none;
|
||||
border-color: #007bff;
|
||||
box-shadow: 0 0 0 2px rgba(0, 123, 255, 0.25);
|
||||
}
|
||||
</style>
|
||||
8
templates/user/settings/devcontainer.tmpl
Normal file
@@ -0,0 +1,8 @@
|
||||
{{template "user/settings/layout_head" (dict "ctxData" . "pageClass" "user settings actions")}}
|
||||
<div class="user-setting-content">
|
||||
{{if eq .PageType "variables"}}
|
||||
{{template "shared/devcontainer/variable_list" .}}
|
||||
{{end}}
|
||||
</div>
|
||||
|
||||
{{template "user/settings/layout_footer" .}}
|
||||
@@ -65,5 +65,13 @@
|
||||
<a class="{{if .PageIsSettingsRepos}}active {{end}}item" href="{{AppSubUrl}}/user/settings/repos">
|
||||
{{ctx.Locale.Tr "settings.repos"}}
|
||||
</a>
|
||||
<details class="item toggleable-item" {{if or .PageIsSharedSettingsDevcontainerVariables}}open{{end}}>
|
||||
<summary>{{ctx.Locale.Tr "admin.devcontainer"}}</summary>
|
||||
<div class="menu">
|
||||
<a class="{{if .PageIsSharedSettingsDevcontainerVariables}}active {{end}}item" href="{{AppSubUrl}}/user/settings/devcontainer/variables">
|
||||
{{ctx.Locale.Tr "devcontainer.variables"}}
|
||||
</a>
|
||||
</div>
|
||||
</details>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -1,70 +0,0 @@
|
||||
#!/bin/bash
|
||||
# 获取参数
|
||||
ACTION=$1
|
||||
OS_ID=$(grep '^ID=' /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
|
||||
|
||||
|
||||
# 根据参数执行不同命令
|
||||
case $ACTION in
|
||||
"start")
|
||||
echo "Starting service..."
|
||||
# 启动服务的命令
|
||||
echo "$DevstarHost host.docker.internal" | tee -a /etc/hosts;
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
apt-get update -y
|
||||
apt-get install ssh git -y
|
||||
;;
|
||||
centos)
|
||||
# sudo yum update -y
|
||||
# sudo yum install -y epel-release
|
||||
# sudo yum groupinstall -y "Development Tools"
|
||||
# sudo yum install -y yaml-cpp yaml-cpp-devel
|
||||
;;
|
||||
fedora)
|
||||
# sudo dnf update -y
|
||||
# sudo dnf group install -y "Development Tools"
|
||||
# sudo dnf install -y yaml-cpp yaml-cpp-devel
|
||||
;;
|
||||
*)
|
||||
failure "Unsupported OS: $OS_ID"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
echo -e "PubkeyAuthentication yes\nPermitRootLogin yes\n" | tee -a /etc/ssh/sshd_config;
|
||||
rm -f /etc/ssh/ssh_host_*;
|
||||
ssh-keygen -A;
|
||||
mkdir -p ~/.ssh;
|
||||
chmod 700 ~/.ssh;
|
||||
case $OS_ID in
|
||||
ubuntu|debian)
|
||||
service ssh restart;
|
||||
;;
|
||||
centos)
|
||||
;;
|
||||
fedora)
|
||||
;;
|
||||
*)
|
||||
failure "Unsupported OS: $OS_ID"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
echo "$PublicKeyList" > ~/.ssh/authorized_keys;
|
||||
chmod 600 ~/.ssh/authorized_keys
|
||||
git clone $RepoLink $WorkSpace
|
||||
;;
|
||||
"stop")
|
||||
echo "Stopping service..."
|
||||
# 停止服务的命令
|
||||
;;
|
||||
"restart")
|
||||
echo "Restarting service..."
|
||||
# 重启服务的命令
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $0 {start|stop|restart}"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||