Implementing log search with Elasticsearch + Logstash and Java code

To keep project logs from being exposed, Kibana is not used for data display.

1. Environment preparation

1.1 Create a regular user

# create the user
useradd queylog

# set a password
passwd queylog

# grant sudo privileges
# locate the sudoers file
whereis sudoers
# make the file writable
chmod -v u+w /etc/sudoers
# edit the file (add a line such as: queylog ALL=(ALL) ALL below the root entry)
vi /etc/sudoers
# revoke write permission again
chmod -v u-w /etc/sudoers
# the first use of sudo prints a notice:

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

 #1) Respect the privacy of others.
 #2) Think before you type.
 #3) With great power comes great responsibility.

The user is now created.

1.2 Install the JDK

su queylog
cd /home/queylog
# unpack jdk-8u191-linux-x64.tar.gz
tar -zxvf jdk-8u191-linux-x64.tar.gz
sudo mv jdk1.8.0_191 /opt/jdk1.8
# edit /etc/profile
vi /etc/profile
export JAVA_HOME=/opt/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
# reload the profile
source /etc/profile
# check the JDK version
java -version

1.3 Firewall settings

# allow a specific IP (note the single quotes around the rule, so the inner double quotes survive)
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="172.16.110.55" accept'
# reload the rules
firewall-cmd --reload

2. Install Elasticsearch

2.1 Elasticsearch configuration

Note: Elasticsearch must be started as a non-root user, otherwise it refuses to start with an error.

su queylog
cd /home/queylog
# unpack elasticsearch-6.5.4.tar.gz
tar -zxvf elasticsearch-6.5.4.tar.gz
sudo mv elasticsearch-6.5.4 /opt/elasticsearch
# edit the ES config file
vi /opt/elasticsearch/config/elasticsearch.yml
# set the cluster name
cluster.name: elastic
# set the bind address
network.host: 192.168.8.224
# set the HTTP port
http.port: 9200

# switch to root
su root
# append the following to /etc/security/limits.conf
vi /etc/security/limits.conf
* hard nofile 655360
* soft nofile 131072
* hard nproc 4096
* soft nproc 2048

# append the following to /etc/sysctl.conf:
vi /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655360

# save, then reload:
sysctl -p

# switch back to the regular user
su queylog
# start Elasticsearch
/opt/elasticsearch/bin/elasticsearch
# test
curl http://192.168.8.224:9200
# the console prints:
#控制台会打印
{
  "name" : "L_dA6oi",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "eS7yP6fVTvC8KMhLutOz6w",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

2.2 Manage Elasticsearch as a systemd service

# switch to root
su root
# write the service unit file
vi /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/opt/elasticsearch/config
EnvironmentFile=/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=queylog
Group=queylog
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of process
LimitNPROC=4096

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

vi /etc/sysconfig/elasticsearch

################################
# Elasticsearch                #
################################

# Elasticsearch home directory
ES_HOME=/opt/elasticsearch

# Elasticsearch Java path
JAVA_HOME=/opt/jdk1.8
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib

# Elasticsearch configuration directory
ES_PATH_CONF=/opt/elasticsearch/config

# Elasticsearch PID directory
PID_DIR=/opt/elasticsearch/config

#############################
# Elasticsearch Service #
#############################

# SysV init.d
# The number of seconds to wait before checking if elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5

################################
# Elasticsearch Properties #
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using Systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMA(Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144

# reload systemd units
systemctl daemon-reload
# switch to the regular user
su queylog
# start elasticsearch
sudo systemctl start elasticsearch
# enable start on boot
sudo systemctl enable elasticsearch

3. Install Logstash

3.1 Logstash configuration

su queylog
cd /home/queylog
# unpack logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
sudo mv logstash-6.5.4 /opt/logstash
# edit the logstash config file
vi /opt/logstash/config/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["http://192.168.8.224:9200"]
# create logstash.conf in the bin directory
vi /opt/logstash/bin/logstash.conf
input {
  # read from a file
  file {
    # log file path
    path => "/opt/tomcat/logs/catalina.out"
    start_position => "beginning" # (end, beginning)
    type => "isp"
  }
}
#filter {
  # define the data format; parse the log with grok patterns (filter/collect fields as needed)
  #grok {
  # match => { "message" => "%{IPV4:clientIP}|%{GREEDYDATA:request}|%{NUMBER:duration}"}
  #}
  # convert field types as needed
  #mutate { convert => { "duration" => "integer" }}
#}
# define the output
output {
  elasticsearch {
    hosts => "192.168.43.211:9200" # Elasticsearch default port
    index => "ind"
    document_type => "isp"
  }
}
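For reference, here is what the commented-out filter would look like if enabled, assuming the application also emitted pipe-delimited lines such as 10.12.1.20|/api/login|35 (a made-up example — catalina.out itself does not have this shape). One pitfall: the literal "|" separators must be escaped, because | means alternation in the regex grok compiles to.

```conf
filter {
  grok {
    # parses lines like: 10.12.1.20|/api/login|35 into clientIP, request, duration
    # the literal pipes are escaped, since an unescaped | is regex alternation
    match => { "message" => "%{IPV4:clientIP}\|%{GREEDYDATA:request}\|%{NUMBER:duration}" }
  }
  # duration arrives as a string; convert it so ES can do numeric range queries
  mutate { convert => { "duration" => "integer" } }
}
```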
# give the user ownership of the directory
sudo chown -R queylog:queylog /opt/logstash
# start logstash
cd /opt/logstash/bin
./logstash -f logstash.conf

# after logstash is configured and started, check that the ES index has been created
curl http://192.168.8.224:9200/_cat/indices

4. The Java code

I had previously resolved an exception that occurs when Spring Boot integrates both ElasticSearch and Redis.

After some research, this explanation of the cause seems the most plausible:
another part of the program (here, the Redis client) also uses Netty, so Netty's number of available processors is initialized before the Elasticsearch transport client is created. When the transport client is then instantiated, it tries to initialize the processor count again; Netty guards against double initialization, so the first instantiation fails with an IllegalStateException.

Solution

Add the following to the Spring Boot main class:

System.setProperty("es.set.netty.runtime.available.processors", "false");
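The property must be set before the first Elasticsearch transport client is created, so in practice it goes at the top of main. A minimal sketch (the class name matches the bootstrap class referenced in the test section; the SpringApplication line is commented out here so the snippet stays self-contained):

```java
// Sketch of the Spring Boot main class: the system property is set
// before anything that touches Netty (Redis, the ES transport client)
// gets a chance to initialize.
public class CbeiIspApplication {

    public static void main(String[] args) {
        // must run before the transport client is instantiated
        System.setProperty("es.set.netty.runtime.available.processors", "false");
        // SpringApplication.run(CbeiIspApplication.class, args);
        System.out.println(System.getProperty("es.set.netty.runtime.available.processors"));
        // → false
    }
}
```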

4.1 Add the pom dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

4.2 Modify the configuration file

spring.data.elasticsearch.cluster-name=elastic
# the REST API uses port 9200
# Java programs use the transport port 9300
spring.data.elasticsearch.cluster-nodes=192.168.43.211:9300

4.3 The interface and its implementation classes

import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;

@Document(indexName = "ind", type = "isp")
public class Bean {

    @Field
    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "Bean{message='" + message + "'}";
    }
}
import java.util.Map;

public interface IElasticSearchService {

    Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) throws Exception;

    // escape characters that are special in the query syntax
    default String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); ++i) {
            char c = s.charAt(i);
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')' || c == ':' || c == '^' || c == '[' || c == ']' || c == '"' || c == '{' || c == '}' || c == '~' || c == '*' || c == '?' || c == '|' || c == '&' || c == '/') {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
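To see what escape() does to a search term, here is a standalone sketch that copies the same logic (the class name EscapeDemo is made up for illustration): each Lucene query-syntax character is prefixed with a backslash so user input is matched literally instead of being parsed as query operators.

```java
// Standalone demo of the escape() logic above. The character set is the
// same one listed in the interface's default method, written as a lookup
// string for brevity.
public class EscapeDemo {

    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); ++i) {
            char c = s.charAt(i);
            // backslash-escape Lucene query-syntax characters
            if ("\\+-!():^[]\"{}~*?|&/".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // a log search term containing query-syntax characters
        System.out.println(escape("java.io.IOException: (connection+reset)"));
        // → java.io.IOException\: \(connection\+reset\)
    }
}
```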
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * ElasticSearch service implementation
 */
@Service
public class ElasticSearchServiceImpl implements IElasticSearchService {

    Logger log = LoggerFactory.getLogger(ElasticSearchServiceImpl.class);

    @Autowired
    ElasticsearchTemplate elasticsearchTemplate;

    @Resource
    HighlightResultHelper highlightResultHelper;

    @Override
    public Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) {
        keywords = escape(keywords);
        currentPage = Math.max(currentPage - 1, 0);

        // configure highlighting so matched keywords are wrapped in a red span
        List<HighlightBuilder.Field> highlightFields = new ArrayList<>();
        HighlightBuilder.Field message = new HighlightBuilder.Field("message")
                .fragmentOffset(80000)
                .numOfFragments(0)
                .requireFieldMatch(false)
                .preTags("<span style='color:red'>")
                .postTags("</span>");
        highlightFields.add(message);

        HighlightBuilder.Field[] highlightFieldsAry =
                highlightFields.toArray(new HighlightBuilder.Field[highlightFields.size()]);

        // build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        queryBuilder.withPageable(PageRequest.of(currentPage, pageSize));
        // when the query string is not empty, match it against the message field
        if (!MyStringUtils.isEmpty(keywords)) {
            boolQueryBuilder.must(QueryBuilders.queryStringQuery(keywords).field("message"));
        }

        queryBuilder.withQuery(boolQueryBuilder);
        queryBuilder.withHighlightFields(highlightFieldsAry);

        log.info("query: {}", queryBuilder.build().getQuery().toString());

        // run the query
        AggregatedPage<Bean> result = elasticsearchTemplate.queryForPage(queryBuilder.build(),
                Bean.class, highlightResultHelper);
        // unpack the results
        long total = result.getTotalElements();
        int totalPage = result.getTotalPages();
        List<Bean> blogList = result.getContent();
        Map<String, Object> map = new HashMap<>();
        map.put("total", total);
        map.put("totalPage", totalPage);
        map.put("pageSize", pageSize);
        map.put("currentPage", currentPage + 1);
        map.put("blogList", blogList);
        return map;
    }
}
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.beanutils.PropertyUtils;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.core.SearchResultMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;

import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;

/**
 * ElasticSearch highlight result mapper
 */
@Component
public class HighlightResultHelper implements SearchResultMapper {

 Logger log = LoggerFactory.getLogger(HighlightResultHelper.class);

 @Override
 public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable) {
  List<T> results = new ArrayList<>();
  for (SearchHit hit : response.getHits()) {
   if (hit != null) {
    T result = null;
    if (StringUtils.hasText(hit.getSourceAsString())) {
     result = JSONObject.parseObject(hit.getSourceAsString(), clazz);
    }
    // populate the highlighted fields on the mapped object
    for (HighlightField field : hit.getHighlightFields().values()) {
     try {
      PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
     } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
      log.error("failed to set highlight field: {}", e.getMessage(), e);
     }
    }
    results.add(result);
   }
  }
  return new AggregatedPageImpl<T>(results, pageable, response.getHits().getTotalHits(), response
    .getAggregations(), response.getScrollId());
 }

 public <T> T mapSearchHit(SearchHit searchHit, Class<T> clazz) {
  T result = null;
  if (StringUtils.hasText(searchHit.getSourceAsString())) {
   result = JSONObject.parseObject(searchHit.getSourceAsString(), clazz);
  }
  // populate the highlighted fields on the mapped object
  for (HighlightField field : searchHit.getHighlightFields().values()) {
   try {
    PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
   } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
    log.error("failed to set highlight field: {}", e.getMessage(), e);
   }
  }
  return result;
 }

 private String concat(Text[] texts) {
  StringBuffer sb = new StringBuffer();
  for (Text text : texts) {
   sb.append(text.toString());
  }
  return sb.toString();
 }
}
import com.alibaba.fastjson.JSON;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.Map;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = CbeiIspApplication.class)
public class ElasticSearchServiceTest {

 private static Logger logger = LoggerFactory.getLogger(ElasticSearchServiceTest.class);

 @Autowired
 private IElasticSearchService elasticSearchService;

 @Test
 public void getLog() {
  try {
   Map<String, Object> search = elasticSearchService.search("Exception", 1, 10);
   logger.info(JSON.toJSONString(search));
  } catch (Exception e) {
   e.printStackTrace();
  }
 }
}

That's all for today. This article is only a brief introduction to using Elasticsearch and Logstash; if anything here is inaccurate, feel free to point it out in the comments.

