Managing a k8s Cluster with Kubeadm, Part 09: EFK Log Collection System

cplinux98
2022-09-06

00: Introduction

The EFK log collection stack in Kubernetes.

Compared with the traditional ELK stack, EFK is more lightweight and better suited to a k8s cluster environment.

This time we deploy fluentd and fluent-bit separately and compare the results.

Both fluentd and fluent-bit are open-source projects sponsored by Treasure Data, and both aim to solve log collection, processing, and forwarding.

                 fluentd                     fluent-bit
Scope            Containers / Servers        Containers / Servers
Language         C and Ruby                  C
Size             ~40 MB                      ~450 KB
Performance      High performance            High performance
Plugin support   650+                        30+
Official site    https://www.fluentd.org/    https://fluentbit.io/

For further details, see the official websites.

01: Elasticsearch Installation

Here I simply install it via apt; a single node is enough.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/8.1/deb.html#deb-repo (note that the 7.x apt repository is what is actually used below)

If you need a cluster, see: https://linux98.com/#/operation/soft/elk/

1.1: Install

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-7.x.list

apt-get update && apt-get install elasticsearch
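
On a minimal Debian/Ubuntu system the apt-transport-https package may also be needed before the repository can be used (the official deb install guide mentions this; skip it if apt already handles https):

apt-get install -y apt-transport-https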

1.2: Configure

Edit /etc/elasticsearch/elasticsearch.yml:

# Name of the elasticsearch cluster
cluster.name: elastic.linux98.com

# Node name (the master-eligible node of the cluster)
node.name: 192.168.31.51
# node.name: node-1

# Storage directories for elasticsearch: data and logs
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Allow any host to reach elasticsearch
network.host: 0.0.0.0

# Port elasticsearch exposes to the outside
http.port: 9200

# Host discovery
discovery.seed_hosts: ["192.168.31.51"] # hostname or IP address; a hostname must be resolvable between nodes
cluster.initial_master_nodes: ["192.168.31.51"] # must match node.name (e.g. node-1)

# Enable cross-origin (CORS) access; default is false
http.cors.enabled: true

# Origins allowed for cross-origin access ("*" allows all; a regex can also be used)
http.cors.allow-origin: "*"

1.3: Start the service

systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
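
To confirm Elasticsearch is actually up, a quick sanity check (assuming the 7.x apt defaults, where security is off and the node answers plain HTTP on port 9200):

curl http://192.168.31.51:9200   # should return a JSON banner containing cluster_name and version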

1.4: Install plugins

/usr/share/elasticsearch/bin/elasticsearch-plugin   # plugin management tool; run with no arguments to see usage

/usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-smartcn
/usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu
systemctl restart elasticsearch.service
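
To verify that both analyzers were installed, list the plugins (either the CLI tool or the _cat API works):

/usr/share/elasticsearch/bin/elasticsearch-plugin list
curl http://192.168.31.51:9200/_cat/plugins?v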

1.5: Test

curl -X POST 'http://192.168.31.51:9200/_analyze?pretty=true' -H 'content-type:application/json' -d '{
"analyzer": "icu_analyzer",
"text": "中华人民共和国国歌"
}'

curl -X POST 'http://192.168.31.51:9200/_analyze?pretty=true' -H 'content-type:application/json' -d '{
"analyzer": "smartcn",
"text": "中华人民共和国国歌"
}'

02: Kibana Installation

Kibana is deployed on the same server as Elasticsearch.

2.1: Install

apt install kibana

2.2: Configure

Edit /etc/kibana/kibana.yml:

# Port kibana listens on
server.port: 5601
# Address kibana binds to (0.0.0.0 allows access from any host)
server.host: "0.0.0.0"
# Elasticsearch host address
elasticsearch.hosts: ["http://localhost:9200"]
# Index kibana stores its own data in
kibana.index: ".kibana"
# Display language
i18n.locale: "zh-CN"

2.3: Start

systemctl start kibana.service 
systemctl status kibana.service

netstat -nutlp | grep 5601
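
Kibana takes a little while to initialize; besides watching the port, its status API can be polled (a minimal check, assuming Kibana is reachable on the same host):

curl -s http://192.168.31.51:5601/api/status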

03: fluent-bit Installation

fluent-bit is installed and configured with helm.

Reference: https://docs.fluentbit.io/manual/installation/kubernetes

Reference: https://www.icode9.com/content-3-1048191.html
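
If the fluent Helm repository has not been added locally yet, it needs to be registered first (a minimal sketch; the repository URL is the one published by the Fluent Bit project):

helm repo add fluent https://fluent.github.io/helm-charts
helm repo update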

3.1: Configure the values file

fluent-bit-values.yaml

Here, myapp-nginx-demo.* is a custom tag for our own business application.

# kind -- DaemonSet or Deployment
kind: DaemonSet

image:
  repository: fluent/fluent-bit
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 2020
  annotations:
    prometheus.io/path: "/api/v1/metrics/prometheus"
    prometheus.io/port: "2020"
    prometheus.io/scrape: "true"

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  #requests:
  #  cpu: 100m
  #  memory: 128Mi

tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule

config:
  service: |
    [SERVICE]
        Flush 3
        Daemon Off
        #Log_Level info
        Log_Level debug
        Parsers_File custom_parsers.conf
        Parsers_File parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020

  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
        Refresh_Interval  10
    [INPUT]
        Name tail
        Path /var/log/containers/myapp-nginx-demo*.log
        Parser docker
        Tag myapp-nginx-demo.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
        Refresh_Interval  10
    [INPUT]
        Name tail
        Path /var/log/containers/ingress-nginx-controller*.log
        Parser docker
        Tag ingress-nginx-controller.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On
        Refresh_Interval  10

  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Exclude On
        K8S-Logging.Parser On
    [FILTER]
        Name                kubernetes
        Match               ingress-nginx-controller.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Parser        ingress-nginx
        Keep_Log            Off
        K8S-Logging.Exclude On
        K8S-Logging.Parser On


  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host 192.168.31.51 
        Logstash_Format On
        Logstash_Prefix k8s-cluster
        Type  flb_type
        Replace_Dots On

    [OUTPUT]
        Name es
        Match myapp-nginx-demo.*
        Host 192.168.31.51
        Logstash_Format On
        Logstash_Prefix myapp-nginx-demo
        Type  flb_type
        Replace_Dots On
    [OUTPUT]
        Name es
        Match ingress-nginx-controller.*
        Host 192.168.31.51 
        Logstash_Format On
        Logstash_Prefix ingress-nginx-controller
        Type  flb_type
        Replace_Dots On


  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

    [PARSER]
        Name        ingress-nginx
        Format      regex
        Regex       ^(?<message>(?<remote>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] \[(?<proxy_alternative_upstream_name>[^ ]*)\] (?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<req_id>[^ ]*).*)$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

3.2: Install the chart

helm install fluent-bit -f fluent-bit-values.yaml fluent/fluent-bit
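
After installing the release it is worth confirming that the DaemonSet pods are running on every node before generating traffic (a quick check; the resource name is assumed to follow the release name fluent-bit used above):

kubectl get daemonset fluent-bit
kubectl get pods -o wide | grep fluent-bit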

3.3: Create a test pod

kubectl run myapp-nginx-demo --image=nginx
POD_IP=$(kubectl get pod myapp-nginx-demo -o jsonpath='{.status.podIP}')
curl $POD_IP   # hit it a few times to generate access logs
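
Before creating the index pattern, it can be confirmed that the index actually reached Elasticsearch (a minimal check against the ES host configured in the OUTPUT sections):

curl 'http://192.168.31.51:9200/_cat/indices?v' | grep myapp-nginx-demo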

3.4: Check the result

Create the index pattern myapp-nginx-demo-* in Kibana.


04: fluentd Installation

Search for and pull the chart:

helm pull az-stable/fluentd-elasticsearch
tar -xf fluentd-elasticsearch-2.0.7.tgz 
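
The az-stable alias above is assumed to be a locally configured mirror of the old stable chart repository; if it is not set up yet, something along these lines can be used (the mirror URL is an assumption, any mirror of the deprecated stable repo will do):

helm repo add az-stable https://mirror.azure.cn/kubernetes/charts/
helm repo update
helm search repo fluentd-elasticsearch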

4.1: Configuration file

Modify the elasticsearch address in the chart's values.yaml:

elasticsearch:
  host: '192.168.31.51' 

4.2: Install

helm install test-fluentd ./fluentd-elasticsearch
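
As with fluent-bit, a quick check that the fluentd pods are up and that indices are being written (the pod name prefix is assumed from the release name test-fluentd):

kubectl get pods -o wide | grep fluentd
curl 'http://192.168.31.51:9200/_cat/indices?v' | grep logstash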

4.3: Create the index pattern

Create the index pattern logstash-* in Kibana.

4.4: Check the result


05: FAQ

5.1: Logs get lost when the volume is too large

When the volume of logs inside the cluster becomes too large, elasticsearch cannot keep up and some logs may be dropped. In that case, depending on the architecture of the logging system, an MQ should be placed in the middle of the E-L-F or E-F pipeline; this MQ is usually kafka.
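
As an illustration, Fluent Bit ships a kafka output plugin, so the pipeline can become fluent-bit -> kafka -> consumer -> elasticsearch by replacing an es OUTPUT block with something like the sketch below (the broker address and topic name are hypothetical; a consumer such as a fluentd aggregator or Logstash then reads the topic and writes to Elasticsearch):

[OUTPUT]
    Name    kafka
    Match   kube.*
    Brokers 192.168.31.52:9092   # hypothetical Kafka broker address
    Topics  k8s-cluster-logs     # hypothetical topic name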
