Redis Status Monitoring and Performance Tuning in Detail

Preface

Any application service or component needs a complete and reliable monitoring solution.

This is especially true for Redis, a latency-sensitive, purely in-memory, high-concurrency service: a solid monitoring and alerting setup is a prerequisite for fine-grained operations.

This article walks through Redis status monitoring and performance tuning, shared here for reference and study. Without further ado, let's get into the details.

1. redis-benchmark

Redis benchmarking: a built-in tool for checking the performance of a Redis server.

For example:

Benchmark the local instance on port 6379 with 100 concurrent connections and 100,000 requests:

redis-benchmark -h localhost -p 6379 -c 100 -n 100000 
[root@redis-server ~]# redis-benchmark -h localhost -p 6379 -c 100 -n 100000
====== PING_INLINE ======
  100000 requests completed in 1.29 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

81.97% <= 1 milliseconds
97.69% <= 2 milliseconds
99.79% <= 3 milliseconds
99.94% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
77639.75 requests per second

====== PING_BULK ======
  100000 requests completed in 1.49 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

73.04% <= 1 milliseconds
97.46% <= 2 milliseconds
99.62% <= 3 milliseconds
99.97% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
67204.30 requests per second

====== SET ======
  100000 requests completed in 1.30 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

81.09% <= 1 milliseconds
97.16% <= 2 milliseconds
99.43% <= 3 milliseconds
99.75% <= 4 milliseconds
99.80% <= 5 milliseconds
99.82% <= 7 milliseconds
99.83% <= 8 milliseconds
99.85% <= 9 milliseconds
99.87% <= 10 milliseconds
99.89% <= 11 milliseconds
99.89% <= 12 milliseconds
99.90% <= 13 milliseconds
99.90% <= 14 milliseconds
99.90% <= 15 milliseconds
99.91% <= 16 milliseconds
99.93% <= 17 milliseconds
99.94% <= 18 milliseconds
99.95% <= 19 milliseconds
99.96% <= 20 milliseconds
99.98% <= 21 milliseconds
99.99% <= 22 milliseconds
100.00% <= 23 milliseconds
100.00% <= 23 milliseconds
76687.12 requests per second

====== GET ======
  100000 requests completed in 1.91 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

49.74% <= 1 milliseconds
93.92% <= 2 milliseconds
99.37% <= 3 milliseconds
99.95% <= 4 milliseconds
99.97% <= 5 milliseconds
99.98% <= 6 milliseconds
100.00% <= 6 milliseconds
52273.91 requests per second

====== INCR ======
  100000 requests completed in 1.60 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

66.32% <= 1 milliseconds
96.55% <= 2 milliseconds
99.61% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 5 milliseconds
62344.14 requests per second

====== LPUSH ======
  100000 requests completed in 1.27 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

73.84% <= 1 milliseconds
95.61% <= 2 milliseconds
99.36% <= 3 milliseconds
99.96% <= 4 milliseconds
99.99% <= 5 milliseconds
100.00% <= 5 milliseconds
78492.93 requests per second

====== RPUSH ======
  100000 requests completed in 1.31 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

80.47% <= 1 milliseconds
96.93% <= 2 milliseconds
99.56% <= 3 milliseconds
99.98% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
76103.50 requests per second

====== LPOP ======
  100000 requests completed in 1.30 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

74.91% <= 1 milliseconds
95.50% <= 2 milliseconds
99.29% <= 3 milliseconds
99.95% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
77101.00 requests per second

====== RPOP ======
  100000 requests completed in 1.40 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

77.99% <= 1 milliseconds
97.07% <= 2 milliseconds
99.61% <= 3 milliseconds
99.97% <= 4 milliseconds
99.98% <= 5 milliseconds
100.00% <= 6 milliseconds
100.00% <= 6 milliseconds
71377.59 requests per second

====== SADD ======
  100000 requests completed in 1.32 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

80.83% <= 1 milliseconds
97.14% <= 2 milliseconds
99.57% <= 3 milliseconds
99.95% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
75757.57 requests per second

====== HSET ======
  100000 requests completed in 1.30 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

80.25% <= 1 milliseconds
96.83% <= 2 milliseconds
99.49% <= 3 milliseconds
99.97% <= 4 milliseconds
100.00% <= 4 milliseconds
76923.08 requests per second

====== SPOP ======
  100000 requests completed in 1.48 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

73.97% <= 1 milliseconds
96.91% <= 2 milliseconds
99.55% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
67567.57 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 1.35 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

71.03% <= 1 milliseconds
95.36% <= 2 milliseconds
99.29% <= 3 milliseconds
99.97% <= 4 milliseconds
100.00% <= 5 milliseconds
100.00% <= 5 milliseconds
73909.83 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 2.91 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

14.30% <= 1 milliseconds
80.30% <= 2 milliseconds
94.42% <= 3 milliseconds
96.88% <= 4 milliseconds
98.34% <= 5 milliseconds
99.39% <= 6 milliseconds
99.78% <= 7 milliseconds
99.93% <= 8 milliseconds
99.97% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 11 milliseconds
100.00% <= 11 milliseconds
34317.09 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 5.88 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 2 milliseconds
85.83% <= 3 milliseconds
94.17% <= 4 milliseconds
96.10% <= 5 milliseconds
97.90% <= 6 milliseconds
98.68% <= 7 milliseconds
98.70% <= 8 milliseconds
99.30% <= 9 milliseconds
99.49% <= 10 milliseconds
99.76% <= 11 milliseconds
99.79% <= 12 milliseconds
99.83% <= 13 milliseconds
99.85% <= 14 milliseconds
99.87% <= 15 milliseconds
99.89% <= 16 milliseconds
99.91% <= 17 milliseconds
99.92% <= 19 milliseconds
99.93% <= 20 milliseconds
99.94% <= 21 milliseconds
99.95% <= 22 milliseconds
99.96% <= 23 milliseconds
99.97% <= 24 milliseconds
99.99% <= 25 milliseconds
99.99% <= 26 milliseconds
100.00% <= 27 milliseconds
17006.80 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 8.16 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 2 milliseconds
0.01% <= 3 milliseconds
80.98% <= 4 milliseconds
90.89% <= 5 milliseconds
95.60% <= 6 milliseconds
97.20% <= 7 milliseconds
98.23% <= 8 milliseconds
98.53% <= 9 milliseconds
99.06% <= 10 milliseconds
99.09% <= 11 milliseconds
99.46% <= 12 milliseconds
99.53% <= 13 milliseconds
99.65% <= 14 milliseconds
99.75% <= 15 milliseconds
99.79% <= 16 milliseconds
99.81% <= 17 milliseconds
99.82% <= 18 milliseconds
99.84% <= 19 milliseconds
99.85% <= 20 milliseconds
99.86% <= 21 milliseconds
99.87% <= 22 milliseconds
99.88% <= 23 milliseconds
99.89% <= 24 milliseconds
99.90% <= 25 milliseconds
99.91% <= 26 milliseconds
99.93% <= 27 milliseconds
99.93% <= 28 milliseconds
99.94% <= 29 milliseconds
99.95% <= 30 milliseconds
99.96% <= 31 milliseconds
99.98% <= 32 milliseconds
99.98% <= 33 milliseconds
99.99% <= 34 milliseconds
99.99% <= 35 milliseconds
100.00% <= 36 milliseconds
100.00% <= 36 milliseconds
12260.91 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 10.15 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 3 milliseconds
0.01% <= 4 milliseconds
84.84% <= 5 milliseconds
93.41% <= 6 milliseconds
96.43% <= 7 milliseconds
97.71% <= 8 milliseconds
97.75% <= 9 milliseconds
98.32% <= 10 milliseconds
98.79% <= 11 milliseconds
99.19% <= 12 milliseconds
99.22% <= 13 milliseconds
99.25% <= 14 milliseconds
99.48% <= 15 milliseconds
99.56% <= 16 milliseconds
99.60% <= 17 milliseconds
99.68% <= 18 milliseconds
99.74% <= 19 milliseconds
99.77% <= 20 milliseconds
99.79% <= 21 milliseconds
99.82% <= 22 milliseconds
99.83% <= 23 milliseconds
99.85% <= 24 milliseconds
99.86% <= 25 milliseconds
99.86% <= 26 milliseconds
99.87% <= 27 milliseconds
99.88% <= 28 milliseconds
99.89% <= 29 milliseconds
99.90% <= 30 milliseconds
99.90% <= 31 milliseconds
99.91% <= 32 milliseconds
99.91% <= 33 milliseconds
99.92% <= 34 milliseconds
99.94% <= 35 milliseconds
99.95% <= 36 milliseconds
99.95% <= 37 milliseconds
99.96% <= 38 milliseconds
99.96% <= 39 milliseconds
99.96% <= 40 milliseconds
99.97% <= 41 milliseconds
99.98% <= 42 milliseconds
99.98% <= 43 milliseconds
99.99% <= 44 milliseconds
99.99% <= 45 milliseconds
99.99% <= 46 milliseconds
100.00% <= 47 milliseconds
100.00% <= 47 milliseconds
9851.25 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 1.89 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 1 milliseconds
75.00% <= 2 milliseconds
89.85% <= 3 milliseconds
95.38% <= 4 milliseconds
98.52% <= 5 milliseconds
99.34% <= 6 milliseconds
99.60% <= 7 milliseconds
99.83% <= 8 milliseconds
99.98% <= 9 milliseconds
100.00% <= 9 milliseconds
52994.17 requests per second

[root@redis-server ~]#
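
By default redis-benchmark uses a single fixed key for most tests and sends one command per round trip, so the numbers above are conservative compared with clients that batch requests. As a sketch of more targeted runs (assuming a standard redis-benchmark build; -t, -q, -r and -P are documented options of the tool):

# Benchmark only SET and GET, printing a one-line summary per test
redis-benchmark -h localhost -p 6379 -c 100 -n 100000 -t set,get -q

# Spread operations over 100000 random keys and pipeline 16 commands per round trip
redis-benchmark -h localhost -p 6379 -c 100 -n 100000 -t set,get -r 100000 -P 16 -q

Pipelining (-P) and randomized keys (-r) usually give figures closer to what a production workload with client-side batching would see.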

2. redis-cli

Example 1: Monitor data operations on the local instance on port 6379, i.e., Redis connections and read/write commands.

redis-cli -h localhost -p 6379 monitor 

First, open terminal 1 to run the Redis monitor:

[root@redis-server ~]# redis-cli -h localhost -p 6379 monitor
OK
1504689350.635365 [0 127.0.0.1:57996] "COMMAND"
1504689361.944610 [0 127.0.0.1:57996] "set" "a" "1"
1504689369.782029 [0 127.0.0.1:57996] "get" "a"

Then open a second terminal (terminal 2) and perform some operations:

[root@redis-server ~]# redis-cli -p 6379
127.0.0.1:6379> set a 1
OK
127.0.0.1:6379> get a
"1"
127.0.0.1:6379>

You can see that the data operations performed in terminal 2 are recorded in terminal 1.
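
Note that MONITOR streams every command the server processes and adds noticeable overhead on a busy instance, so it is best run briefly and, where possible, filtered. A small sketch (assuming a standard redis-cli and grep; the key name "mykey" is only a placeholder):

# Watch only the commands that touch one key during a short troubleshooting session
redis-cli -h localhost -p 6379 monitor | grep '"mykey"'

# Lighter-weight alternatives for continuous observation
redis-cli -h localhost -p 6379 --stat     # rolling summary of keys, memory, clients, ops/sec
redis-cli -h localhost -p 6379 --latency  # continuously sample server round-trip latency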

Example 2: Query information about the local Redis instance on port 6379.

redis-cli -h localhost -p 6379 info 

Note: the same query can also be issued from inside the redis-cli prompt.
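
For instance (a minimal illustration; INFO optionally accepts a section name, and the section names match the headers in the full report below):

127.0.0.1:6379> info memory
127.0.0.1:6379> info stats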

[root@redis-server ~]# redis-cli -h localhost -p 6379 info
# Server
redis_version:3.2.10
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:eae5a0b8746eb6ce
redis_mode:standalone
os:Linux 2.6.32-431.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:6003
run_id:0057d03b2e908ee036c2aa1c3531e8aa051d7468
tcp_port:6379
uptime_in_seconds:159221
uptime_in_days:1
hz:10
lru_clock:11517636
executable:/usr/local/redis/bin/redis-server
config_file:/usr/local/redis/conf/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:1828104
used_memory_human:1.74M
used_memory_rss:4050944
used_memory_rss_human:3.86M
used_memory_peak:8439360
used_memory_peak_human:8.05M
total_system_memory:1960443904
total_system_memory_human:1.83G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.22
mem_allocator:jemalloc-4.0.3

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1504689256
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:3603
total_commands_processed:3600007
instantaneous_ops_per_sec:0
total_net_input_bytes:192800186
total_net_output_bytes:2634476722
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:1000003
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:408
migrate_cached_sockets:0

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:99.45
used_cpu_user:108.88
used_cpu_sys_children:0.01
used_cpu_user_children:0.01

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=7,expires=0,avg_ttl=0
[root@redis-server ~]#
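
For monitoring, a handful of fields from this report are usually tracked continuously: used_memory against maxmemory, mem_fragmentation_ratio, connected_clients, instantaneous_ops_per_sec, rejected_connections, and the keyspace hit ratio derived from keyspace_hits and keyspace_misses. A rough sketch of how an agent or cron script might pull these out (assuming standard redis-cli, grep and awk; any alert thresholds are up to you):

# Grab a single field from INFO (the report consists of "field:value" lines)
redis-cli -h localhost -p 6379 info memory | grep '^mem_fragmentation_ratio'

# Derive the keyspace hit ratio from the Stats section
redis-cli -h localhost -p 6379 info stats \
  | awk -F: '/^keyspace_hits:/{h=$2} /^keyspace_misses:/{m=$2} END{if (h+m > 0) print "hit_ratio", h/(h+m)}'

With the values above (keyspace_hits:1000003, keyspace_misses:0) the hit ratio is 1.0, which is what you want to see for a cache-style workload.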

Summary

That's all for this article. I hope it is of some help to your study or work. If you have any questions, feel free to leave a comment. Thank you for your support.
