How to Monitor Kubernetes Container Logs with Loki

Contents
  • I. Introduction
    • 1. Loki Architecture
    • 2. How Loki Works
  • II. Monitoring Container Logs with Loki
    • 1. Install Loki
    • 2. Install Promtail
    • 3. Install Grafana
    • 4. Verification

I. Introduction

Loki, developed by the Grafana Labs team and written in Go, is a horizontally scalable, highly available, multi-tenant log aggregation system. It is designed to be cost-effective and easy to operate: instead of indexing the contents of the logs, it indexes a set of labels for each log stream. The Loki project was inspired by Prometheus.

The official tagline is "Like Prometheus, but for logs": a Prometheus-style system, but for log data.

1. Loki Architecture

  • Loki: the main server, responsible for storing logs and processing queries.
  • Promtail: the agent that collects logs and forwards them to Loki.
  • Grafana: provides the web UI for visualizing, querying, and alerting on the log data.

2. How Loki Works

Promtail first collects the logs and sends them to the Distributor component. The Distributor validates the incoming log streams and forwards the validated logs to the Ingester component in parallel batches. The Ingester builds the received log streams into chunks, compresses them, and writes them to the configured backend storage.

The Querier component receives HTTP query requests and forwards them to the Ingesters to return the data still held in Ingester memory. If no matching data is found there, the Querier queries the backend storage directly (with built-in deduplication).
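
To make this data path concrete, the commands below push a test log line into Loki and query it back over its HTTP API. This is only a sketch: /loki/api/v1/push and /loki/api/v1/query_range are standard Loki endpoints, but the node IP and NodePort (192.168.1.1:30100) assume the Service created later in this article, and the job label value is an arbitrary example.

# Push one log line (Distributor -> Ingester path); the timestamp is in nanoseconds.
curl -s -X POST "http://192.168.1.1:30100/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  -d '{"streams":[{"stream":{"job":"manual-test"},"values":[["'$(date +%s%N)'","hello from curl"]]}]}'

# Query it back (Querier path: Ingester memory first, then the object store).
curl -s -G "http://192.168.1.1:30100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="manual-test"}'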

II. Monitoring Container Logs with Loki

1. Install Loki

1) Create the RBAC resources

[root@k8s-master01 ~]# cat <<END > loki-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loki
  namespace: logging
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: [loki]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loki
  namespace: logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: loki
subjects:
- kind: ServiceAccount
  name: loki
  namespace: logging
END
[root@k8s-master01 ~]# kubectl create -f loki-rbac.yaml
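
As an optional sanity check, confirm that the namespace, ServiceAccount, Role, and RoleBinding were created (output will vary by cluster):

kubectl get namespace logging
kubectl -n logging get serviceaccount,role,rolebinding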

2) Create the ConfigMap

[root@k8s-master01 ~]# cat <<END > loki-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
data:
  loki.yaml: |
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: "2022-05-15"
        store: boltdb-shipper
        object_store: filesystem
        schema: v11
        index:
          prefix: index_
          period: 24h
    server:
      http_listen_port: 3100
    storage_config:
      boltdb_shipper:
        active_index_directory: /data/loki/boltdb-shipper-active
        cache_location: /data/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: filesystem
      filesystem:
        directory: /data/loki/chunks
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: true
      retention_period: 48h
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: filesystem
END
[root@k8s-master01 ~]# kubectl create -f loki-configmap.yaml
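
Optionally, verify that the ConfigMap exists and that the embedded loki.yaml looks the way you expect before the StatefulSet mounts it:

kubectl -n logging describe configmap loki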

3) Create the StatefulSet

[root@k8s-master01 ~]# cat <<END > loki-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  type: NodePort
  ports:
    - port: 3100
      protocol: TCP
      name: http-metrics
      targetPort: http-metrics
      nodePort: 30100
  selector:
    app: loki
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: logging
  labels:
    app: loki
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
  serviceName: loki
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
    spec:
      serviceAccountName: loki
      initContainers:
      - name: chmod-data
        image: busybox:1.28.4
        imagePullPolicy: IfNotPresent
        command: ["chmod","-R","777","/loki/data"]
        volumeMounts:
        - name: storage
          mountPath: /loki/data
      containers:
        - name: loki
          image: grafana/loki:2.3.0
          imagePullPolicy: IfNotPresent
          args:
            - -config.file=/etc/loki/loki.yaml
          volumeMounts:
            - name: config
              mountPath: /etc/loki
            - name: storage
              mountPath: /data
          ports:
            - name: http-metrics
              containerPort: 3100
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 45
          securityContext:
            readOnlyRootFilesystem: true
      terminationGracePeriodSeconds: 4800
      volumes:
        - name: config
          configMap:
            name: loki
        - name: storage
          hostPath:
            path: /app/loki
END
[root@k8s-master01 ~]# kubectl create -f loki-statefulset.yaml
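
Once the pod is scheduled, check that Loki is up and answering on the NodePort. The /ready endpoint should eventually return "ready" (it can take up to the 45-second initialDelaySeconds configured above); replace 192.168.1.1 with one of your node IPs:

kubectl -n logging get statefulset,pod,svc -l app=loki
curl http://192.168.1.1:30100/ready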

2. Install Promtail

1) Create the RBAC resources

[root@k8s-master01 ~]# cat <<END > promtail-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loki-promtail
  labels:
    app: promtail
  namespace: logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: promtail
  name: promtail-clusterrole
  namespace: logging
rules:
- apiGroups: [""]
  resources: ["nodes","nodes/proxy","services","endpoints","pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail-clusterrolebinding
  labels:
    app: promtail
  namespace: logging
subjects:
  - kind: ServiceAccount
    name: loki-promtail
    namespace: logging
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
END
[root@k8s-master01 ~]# kubectl create -f promtail-rbac.yaml
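
Again as an optional check, confirm the ServiceAccount and the cluster-level RBAC objects are in place:

kubectl -n logging get serviceaccount loki-promtail
kubectl get clusterrole promtail-clusterrole
kubectl get clusterrolebinding promtail-clusterrolebinding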

2) Create the ConfigMap

For the full list of Promtail configuration options, see the official documentation.

[root@k8s-master01 ~]# cat <<"END" > promtail-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
data:
  promtail.yaml: |
    client:
      backoff_config:
        max_period: 5m
        max_retries: 10
        min_period: 500ms
      batchsize: 1048576
      batchwait: 1s
      external_labels: {}
      timeout: 10s
    positions:
      filename: /run/promtail/positions.yaml
    server:
      http_listen_port: 3101
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-app
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        source_labels:
        - __meta_kubernetes_pod_label_name
      - source_labels:
        - __meta_kubernetes_pod_label_app
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-direct-controllers
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: drop
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-indirect-controller
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: .+
        separator: ''
        source_labels:
        - __meta_kubernetes_pod_label_name
        - __meta_kubernetes_pod_label_app
      - action: keep
        regex: '[0-9a-z-.]+-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
      - action: replace
        regex: '([0-9a-z-.]+)-[0-9a-f]{8,10}'
        source_labels:
        - __meta_kubernetes_pod_controller_name
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: kubernetes-pods-static
      pipeline_stages:
        - docker: {}
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: drop
        regex: ''
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_label_component
        target_label: __service__
      - source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: __host__
      - action: drop
        regex: ''
        source_labels:
        - __service__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - __meta_kubernetes_namespace
        - __service__
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror
        - __meta_kubernetes_pod_container_name
        target_label: __path__
END
[root@k8s-master01 ~]# kubectl create -f promtail-configmap.yaml
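
One practical note: Promtail only reads this configuration at startup, so if you change the ConfigMap later you will need to restart the DaemonSet pods (created in the next step) for the change to take effect, for example:

kubectl -n logging rollout restart daemonset loki-promtail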

3) Create the DaemonSet

[root@k8s-master01 ~]# cat <<END > promtail-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loki-promtail
  namespace: logging
  labels:
    app: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: loki-promtail
      containers:
        - name: promtail
          image: grafana/promtail:2.3.0
          imagePullPolicy: IfNotPresent
          args:
          - -config.file=/etc/promtail/promtail.yaml
          - -client.url=http://192.168.1.1:30100/loki/api/v1/push
          env:
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          volumeMounts:
          - mountPath: /etc/promtail
            name: config
          - mountPath: /run/promtail
            name: run
          - mountPath: /data/k8s/docker/data/containers
            name: docker
            readOnly: true
          - mountPath: /var/log/pods
            name: pods
            readOnly: true
          ports:
          - containerPort: 3101
            name: http-metrics
            protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsUser: 0
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /ready
              port: http-metrics
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
        - name: config
          configMap:
            name: loki-promtail
        - name: run
          hostPath:
            path: /run/promtail
            type: ""
        - name: docker
          hostPath:
            path: /data/k8s/docker/data/containers
        - name: pods
          hostPath:
            path: /var/log/pods
END
[root@k8s-master01 ~]# kubectl create -f promtail-daemonset.yaml
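
After the DaemonSet is created you should see one Promtail pod per node (the toleration above also allows scheduling on master nodes). To see which log files Promtail has actually discovered, you can port-forward to its HTTP port and open the built-in targets page; this is just a local debugging aid, not part of the deployment, and the pod name below is a placeholder:

kubectl -n logging get daemonset loki-promtail
kubectl -n logging get pods -l app=promtail -o wide

# Pick one pod name from the output above, expose its HTTP port locally,
# then browse http://localhost:3101/targets to see the discovered log files.
kubectl -n logging port-forward pod/<promtail-pod-name> 3101:3101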

4) Key Promtail configuration

    volumeMounts:
    - mountPath: /data/k8s/docker/data/containers
      name: docker
      readOnly: true
    - mountPath: /var/log/pods
      name: pods
      readOnly: true
volumes:
- name: docker
  hostPath:
    path: /data/k8s/docker/data/containers
- name: pods
  hostPath:
    path: /var/log/pods

Note that the hostPath and the mountPath must point to the same path (that is, the path inside the Promtail container must match the container log directory on the host), because when Promtail tails container logs it resolves the log file locations from the pod information returned by the Kubernetes API, and those paths are the ones that exist on the host. If the two paths differ, the httpGet readiness check will fail.

If you see the message "Not ready: Unable to find any logs to tail. Please verify permissions, volumes, scrape_config, etc.", the most likely cause is a mount path mismatch that prevents Promtail from tailing the log files. You can temporarily disable the readiness probe and inspect the Promtail container logs to confirm.
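
If you hit that state, the following commands are a reasonable way to narrow it down: inspect the Promtail logs, and compare the mount paths inside the pod with the real log directories on the host (the paths shown are the ones used in this article; adjust them to your docker data root):

# Promtail's own logs usually name the path it is trying to tail.
kubectl -n logging logs -l app=promtail --tail=50

# On the node: confirm the host-side directories actually contain log files.
ls /var/log/pods
ls /data/k8s/docker/data/containers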

3. Install Grafana

[root@k8s-master01 ~]# cat <<END > grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:8.4.7
        imagePullPolicy: IfNotPresent
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            cpu: '1'
            memory: 2Gi
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
        volumeMounts:
        - name: storage
          mountPath: /var/lib/grafana
      volumes:
      - name: storage
        hostPath:
          path: /app/grafana
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  labels:
    app: grafana
  namespace: logging
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30030
  selector:
    app: grafana
END
[root@k8s-master01 ~]# mkdir -p /app/grafana
[root@k8s-master01 ~]# chmod -R 777 /app/grafana
[root@k8s-master01 ~]# kubectl create -f grafana-deploy.yaml
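
A quick, optional check that Grafana came up before moving on:

kubectl -n logging get deployment,pod,svc -l app=grafana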

4. Verification

Open the Grafana console at http://192.168.1.1:30030 (default credentials: admin/admin).

1) Add a data source

2) Select the Loki data source type

3) Configure the Loki URL
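
Since Grafana runs in the same logging namespace as Loki, the in-cluster Service address is the simplest choice; the NodePort address used elsewhere in this article also works. Both of the following are examples to adapt to your environment:

http://loki:3100
http://192.168.1.1:30100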

4) Verify
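
In Grafana's Explore view, select the Loki data source and run a simple LogQL query to confirm that logs are flowing. The label names come from the relabel_configs in the Promtail ConfigMap above; the values are examples, so substitute a namespace and container that exist in your cluster:

{namespace="logging", container="loki"}
{namespace="kube-system"} |= "error"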

This concludes this walkthrough of monitoring Kubernetes container logs with Loki.
