Helm

What is Helm?

Helm is a client-side command-line tool. With Helm you can fetch from a repository all the manifest files an application needs to run, and apply them to a Kubernetes cluster.

For example, deploying an Nginx typically takes two manifests, a Service and a Deployment, plus an HPA manifest if needed.

Together these manifests make up one application, which Helm calls a Chart.
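As a sketch, the two manifests for that Nginx example might look like the following (names, labels, and the image tag are illustrative, not taken from any particular chart):

```yaml
# Deployment: runs the nginx pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
          ports:
            - containerPort: 80
---
# Service: exposes the pods inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
```

A chart bundles manifests like these as templates, so the names, replica counts, and image tags become parameters instead of hard-coded values.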

How does Helm interact with the APIServer?

In everyday use, resources are created by talking to the APIServer directly. Helm is different: the client talks to Tiller, a server-side daemon that handles Helm's resource-creation requests and in turn connects to the APIServer to create the resources.

When Helm creates a new application, it first pulls the Chart to the local machine and only then issues the create request. Once a Chart has been deployed into a Kubernetes cluster it is no longer called a Chart but a Release; a Chart is like a class, and a Release is an instantiated object.

Chart -> Config -> Release
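The Config in the middle of this pipeline is just a set of values that override the chart's defaults at install time. A hypothetical override file (the key names depend on the chart being installed):

```yaml
# my-values.yaml -- hypothetical values overriding a chart's defaults
replicaCount: 2        # run two replicas instead of the chart default
image:
  tag: "1.15"          # pin a specific image tag
```

Passed with helm install -f my-values.yaml <REPO>/<CHART_NAME>, these values are merged with the chart's templates by Tiller to render the Release.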

Helm terminology

Helm:

  • Terminology:
    • Chart: a Helm package
    • Repository: a charts repository, served over http/https
    • Release: an instance of a specific chart deployed to a target cluster
  • Architecture:
    • Helm: the client; manages the local chart repository, talks to the Tiller server, and sends charts for install, query, uninstall, and other operations
    • Tiller: the server side; receives the Charts and Config sent by Helm, merges them to generate a Release, and completes the deployment.

Helm chart repositories

Helm repository hub: https://hub.kubeapps.com/

stable: released charts; incubator: pre-release charts

Deploying Helm

Helm can be installed on any host, but make sure ~/.kube/config exists there, since Helm reads the APIServer address and credentials from it.

# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
# tar xf helm-v2.9.1-linux-amd64.tar.gz
# mv linux-amd64/helm /bin/

Deploying Tiller

Create the tiller service account

# cat rbac-tiller-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

# kubectl apply -f rbac-tiller-config.yaml

Initialize Tiller

# helm init --service-account tiller
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Verify that Tiller was installed successfully

# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Pull the remote chart repository index to the local machine

# helm repo update

During the update the request may well be blocked by the firewall, with the message: Unable to get an update from the "stable" chart repository

You can switch to the Alibaba Cloud mirror, but that repository has not been updated for a long time; it is better to configure an HTTPS proxy and keep using the official repository.
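A minimal sketch of the proxy approach, assuming a local HTTPS proxy listening on 127.0.0.1:8118 (both the address and the NO_PROXY ranges are assumptions; adjust to your environment):

```shell
# Hypothetical proxy endpoint; replace with your own
export HTTPS_PROXY=http://127.0.0.1:8118
# Keep local and in-cluster traffic off the proxy
export NO_PROXY=127.0.0.1,localhost,10.96.0.0/12
# Then refresh the repository index through the proxy:
#   helm repo update
```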

# helm repo remove stable
# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# helm repo update
# helm search

Common Helm commands

Repository commands

  • helm repo list
  • helm repo remove <NAME>
  • helm repo add <NAME> <URL>
  • helm repo update
  • helm search <KEYWORD>
  • helm serve

Release commands

  • helm install <NAME>
  • helm delete <NAME>
  • helm upgrade <NAME> (e.g. helm upgrade solid-warthog -f redis/values.yaml stable/redis)
  • helm list <NAME>
  • helm rollback <NAME>
  • helm history <NAME>
  • helm status <NAME>

Chart commands

  • helm create
  • helm inspect <REPO>/<CHART_NAME>
  • helm fetch <REPO>/<CHART_NAME>
  • helm package
  • helm get
  • helm verify

Anatomy of a chart's file structure

https://docs.helm.sh/developing_charts/#charts
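For reference, a chart's layout can be sketched by hand; this mirrors the skeleton that helm create scaffolds (the file contents below are placeholders, not a working chart):

```shell
# Build the skeleton of a chart named "mychart"
mkdir -p mychart/templates mychart/charts
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
EOF
: > mychart/values.yaml                 # default configuration values
: > mychart/templates/deployment.yaml   # templated Kubernetes manifests
: > mychart/templates/service.yaml
: > mychart/templates/NOTES.txt         # usage notes printed after install
ls -R mychart
```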

Deploying Memcached with Helm

# helm install --name mem1 stable/memcached
NAME:   mem1
LAST DEPLOYED: Mon Oct 15 14:33:56 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME            TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)    AGE
mem1-memcached  ClusterIP  None        <none>       11211/TCP  0s

==> v1beta1/StatefulSet
NAME            DESIRED  CURRENT  AGE
mem1-memcached  3        1        0s

==> v1beta1/PodDisruptionBudget
NAME            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
mem1-memcached  3              N/A              0                    0s

==> v1/Pod(related)
NAME              READY  STATUS   RESTARTS  AGE
mem1-memcached-0  0/1    Pending  0         0s


NOTES:
Memcached can be accessed via port 11211 on the following DNS name from within your cluster:
mem1-memcached.default.svc.cluster.local

If you'd like to test your instance, forward the port locally:

  export POD_NAME=$(kubectl get pods --namespace default -l "app=mem1-memcached" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward $POD_NAME 11211

In another tab, attempt to set a key:

  $ echo -e 'set mykey 0 60 5\r\nhello\r' | nc localhost 11211

You should see:

  STORED

Deploying EFK

Create the efk namespace

# kubectl create ns efk

Download the Elasticsearch chart

# helm fetch incubator/elasticsearch --version 1.4.1

Tweak the configuration so it can run in a low-resource environment

# egrep -v "#|^$" elasticsearch/values.yaml
appVersion: "6.2.4"
image:
  repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
  tag: "6.3.1"
  pullPolicy: "IfNotPresent"
cluster:
  name: "elasticsearch"
  kubernetesDomain: cluster.local
  xpackEnable: false
  config:
  env:
    MINIMUM_MASTER_NODES: "1"
client:
  name: client
  replicas: 1
  serviceType: ClusterIP
  heapSize: "256m"
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: {}
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "25m"
      memory: "256Mi"
  priorityClassName: ""
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
master:
  name: master
  exposeHttp: false
  replicas: 1
  heapSize: "256m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: {}
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "25m"
      memory: "256Mi"
  priorityClassName: ""
  podDisruptionBudget:
    enabled: false
data:
  name: data
  exposeHttp: false
  replicas: 1
  heapSize: "512m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "30Gi"
  terminationGracePeriodSeconds: 3600
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: {}
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "25m"
      memory: "512Mi"
  priorityClassName: ""
  podDisruptionBudget:
    enabled: false
    maxUnavailable: 1

Create Elasticsearch with Helm

Either directly from the remote repository:

# helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch --version 1.4.1

or from the locally fetched package:

# helm install --name els1 --namespace=efk -f ./elasticsearch/values.yaml ./elasticsearch-1.4.1.tgz

Verify that ES is running

# kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
# curl els1-elasticsearch-client.efk.svc.cluster.local:9200

Download the fluentd chart

# helm fetch stable/fluentd-elasticsearch --version 1.0.0

Configure fluentd's taint tolerations and the ES address

  • elasticsearch.host
  • tolerations
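A hypothetical excerpt of fluentd-elasticsearch/values.yaml with those two settings filled in; the key names follow the chart's structure, and the toleration lets fluentd be scheduled on master nodes as well:

```yaml
elasticsearch:
  host: 'els1-elasticsearch-client.efk.svc.cluster.local'  # the ES client Service created above
  port: 9200
tolerations:
  - key: node-role.kubernetes.io/master   # allow running on master nodes
    operator: Exists
    effect: NoSchedule
```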

Start fluentd with Helm

# helm install --name flu1 --namespace=efk -f fluentd-elasticsearch/values.yaml ./fluentd-elasticsearch-1.0.0.tgz

Download the Kibana chart

# helm fetch stable/kibana --version 0.10.0

Configure the Service type and the ES address

$ egrep -v "#|^$" kibana/values.yaml 
image:
  repository: "docker.elastic.co/kibana/kibana-oss"
  tag: "6.3.1"
  pullPolicy: "IfNotPresent"
commandline:
  args:
env:
files:
  kibana.yml:
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://els1-elasticsearch-client.efk.svc.cluster.local:9200
service:
  type: NodePort
  externalPort: 443
  internalPort: 5601
  annotations:
  labels:
ingress:
  enabled: false
resources: {}
priorityClassName: ""
tolerations: []
nodeSelector: {}
podAnnotations: {}
replicaCount: 1

Start Kibana with Helm

# helm install --name kib1 --namespace=efk -f ./kibana/values.yaml ./kibana-0.10.0.tgz
