Implementing HTTP Request Rate Limiting in Go (with Examples)

When building a high-concurrency system, there are three classic tools for protecting it: caching, degradation, and rate limiting. To keep an online system reasonably elastic and stable during peak business hours, the most effective approach is service degradation, and rate limiting is one of the techniques a degradation strategy relies on most often.

A solid open-source option is https://github.com/didip/tollbooth. However, if you want something simple and lightweight, or simply want to learn how it works, writing your own rate-limiting middleware is not difficult. In this article we will look at how to implement such a middleware ourselves.

First we need a dependency that provides a token bucket implementation; the tollbooth library mentioned above is also built on top of it:

$ go get golang.org/x/time/rate

Let's start with the demo code:

limit.go

package main

import (
  "net/http"

  "golang.org/x/time/rate"
)

// Allow an average of 2 events per second, with a burst of at most 5.
var limiter = rate.NewLimiter(2, 5)

func limit(next http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    if !limiter.Allow() {
      http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
      return
    }

    next.ServeHTTP(w, r)
  })
}

main.go

package main

import (
  "net/http"
)

func main() {
  mux := http.NewServeMux()
  mux.HandleFunc("/", okHandler)

  // Wrap the servemux with the limit middleware.
  http.ListenAndServe(":4000", limit(mux))
}

func okHandler(w http.ResponseWriter, r *http.Request) {
  w.Write([]byte("OK"))
}
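
Before digging into the library itself, it helps to watch the limiter in action. The following self-contained sketch is my own addition (not part of the original demo): it mirrors the middleware above with an inline handler and uses net/http/httptest to fire ten back-to-back requests. With rate.NewLimiter(2, 5) the bucket starts full, so roughly the first five requests succeed and the rest receive 429 until tokens refill at 2 per second.

package main

import (
  "fmt"
  "net/http"
  "net/http/httptest"

  "golang.org/x/time/rate"
)

func main() {
  // Same parameters as the demo: refill 2 tokens per second, burst of 5.
  limiter := rate.NewLimiter(2, 5)

  handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    if !limiter.Allow() {
      http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
      return
    }
    w.Write([]byte("OK"))
  })

  // Serve from an in-memory test server instead of binding a real port.
  srv := httptest.NewServer(handler)
  defer srv.Close()

  // Fire 10 requests back to back. The bucket starts full (5 tokens),
  // so roughly the first 5 return 200 and the rest return 429.
  for i := 0; i < 10; i++ {
    resp, err := http.Get(srv.URL)
    if err != nil {
      panic(err)
    }
    fmt.Printf("request %d -> %d\n", i+1, resp.StatusCode)
    resp.Body.Close()
  }
}

Running it with go run, you should see the status codes flip from 200 to 429 once the initial burst is used up.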

Now let's take a look at the source code behind rate.NewLimiter (the rate package):

// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package rate provides a rate limiter.
package rate

import (
 "fmt"
 "math"
 "sync"
 "time"

 "golang.org/x/net/context"
)

// Limit defines the maximum frequency of some events.
// Limit is represented as number of events per second.
// A zero Limit allows no events.
type Limit float64

// Inf is the infinite rate limit; it allows all events (even if burst is zero).
const Inf = Limit(math.MaxFloat64)

// Every converts a minimum time interval between events to a Limit.
func Every(interval time.Duration) Limit {
 if interval <= 0 {
  return Inf
 }
 return 1 / Limit(interval.Seconds())
}

// A Limiter controls how frequently events are allowed to happen.
// It implements a "token bucket" of size b, initially full and refilled
// at rate r tokens per second.
// Informally, in any large enough time interval, the Limiter limits the
// rate to r tokens per second, with a maximum burst size of b events.
// As a special case, if r == Inf (the infinite rate), b is ignored.
// See https://en.wikipedia.org/wiki/Token_bucket for more about token buckets.
//
// The zero value is a valid Limiter, but it will reject all events.
// Use NewLimiter to create non-zero Limiters.
//
// Limiter has three main methods, Allow, Reserve, and Wait.
// Most callers should use Wait.
//
// Each of the three methods consumes a single token.
// They differ in their behavior when no token is available.
// If no token is available, Allow returns false.
// If no token is available, Reserve returns a reservation for a future token
// and the amount of time the caller must wait before using it.
// If no token is available, Wait blocks until one can be obtained
// or its associated context.Context is canceled.
//
// The methods AllowN, ReserveN, and WaitN consume n tokens.
type Limiter struct {
 limit Limit
 burst int

 mu   sync.Mutex
 tokens float64
 // last is the last time the limiter's tokens field was updated
 last time.Time
 // lastEvent is the latest time of a rate-limited event (past or future)
 lastEvent time.Time
}

// Limit returns the maximum overall event rate.
func (lim *Limiter) Limit() Limit {
 lim.mu.Lock()
 defer lim.mu.Unlock()
 return lim.limit
}

// Burst returns the maximum burst size. Burst is the maximum number of tokens
// that can be consumed in a single call to Allow, Reserve, or Wait, so higher
// Burst values allow more events to happen at once.
// A zero Burst allows no events, unless limit == Inf.
func (lim *Limiter) Burst() int {
 return lim.burst
}

// NewLimiter returns a new Limiter that allows events up to rate r and permits
// bursts of at most b tokens.
func NewLimiter(r Limit, b int) *Limiter {
 return &Limiter{
  limit: r,
  burst: b,
 }
}

// Allow is shorthand for AllowN(time.Now(), 1).
func (lim *Limiter) Allow() bool {
 return lim.AllowN(time.Now(), 1)
}

// AllowN reports whether n events may happen at time now.
// Use this method if you intend to drop / skip events that exceed the rate limit.
// Otherwise use Reserve or Wait.
func (lim *Limiter) AllowN(now time.Time, n int) bool {
 return lim.reserveN(now, n, 0).ok
}

// A Reservation holds information about events that are permitted by a Limiter to happen after a delay.
// A Reservation may be canceled, which may enable the Limiter to permit additional events.
type Reservation struct {
 ok    bool
 lim    *Limiter
 tokens  int
 timeToAct time.Time
 // This is the Limit at reservation time, it can change later.
 limit Limit
}

// OK returns whether the limiter can provide the requested number of tokens
// within the maximum wait time. If OK is false, Delay returns InfDuration, and
// Cancel does nothing.
func (r *Reservation) OK() bool {
 return r.ok
}

// Delay is shorthand for DelayFrom(time.Now()).
func (r *Reservation) Delay() time.Duration {
 return r.DelayFrom(time.Now())
}

// InfDuration is the duration returned by Delay when a Reservation is not OK.
const InfDuration = time.Duration(1<<63 - 1)

// DelayFrom returns the duration for which the reservation holder must wait
// before taking the reserved action. Zero duration means act immediately.
// InfDuration means the limiter cannot grant the tokens requested in this
// Reservation within the maximum wait time.
func (r *Reservation) DelayFrom(now time.Time) time.Duration {
 if !r.ok {
  return InfDuration
 }
 delay := r.timeToAct.Sub(now)
 if delay < 0 {
  return 0
 }
 return delay
}

// Cancel is shorthand for CancelAt(time.Now()).
func (r *Reservation) Cancel() {
 r.CancelAt(time.Now())
 return
}

// CancelAt indicates that the reservation holder will not perform the reserved action
// and reverses the effects of this Reservation on the rate limit as much as possible,
// considering that other reservations may have already been made.
func (r *Reservation) CancelAt(now time.Time) {
 if !r.ok {
  return
 }

 r.lim.mu.Lock()
 defer r.lim.mu.Unlock()

 if r.lim.limit == Inf || r.tokens == 0 || r.timeToAct.Before(now) {
  return
 }

 // calculate tokens to restore
 // The duration between lim.lastEvent and r.timeToAct tells us how many tokens were reserved
 // after r was obtained. These tokens should not be restored.
 restoreTokens := float64(r.tokens) - r.limit.tokensFromDuration(r.lim.lastEvent.Sub(r.timeToAct))
 if restoreTokens <= 0 {
  return
 }
 // advance time to now
 now, _, tokens := r.lim.advance(now)
 // calculate new number of tokens
 tokens += restoreTokens
 if burst := float64(r.lim.burst); tokens > burst {
  tokens = burst
 }
 // update state
 r.lim.last = now
 r.lim.tokens = tokens
 if r.timeToAct == r.lim.lastEvent {
  prevEvent := r.timeToAct.Add(r.limit.durationFromTokens(float64(-r.tokens)))
  if !prevEvent.Before(now) {
   r.lim.lastEvent = prevEvent
  }
 }

 return
}

// Reserve is shorthand for ReserveN(time.Now(), 1).
func (lim *Limiter) Reserve() *Reservation {
 return lim.ReserveN(time.Now(), 1)
}

// ReserveN returns a Reservation that indicates how long the caller must wait before n events happen.
// The Limiter takes this Reservation into account when allowing future events.
// ReserveN returns false if n exceeds the Limiter's burst size.
// Usage example:
//  r, ok := lim.ReserveN(time.Now(), 1)
//  if !ok {
//   // Not allowed to act! Did you remember to set lim.burst to be > 0 ?
//  }
//  time.Sleep(r.Delay())
//  Act()
// Use this method if you wish to wait and slow down in accordance with the rate limit without dropping events.
// If you need to respect a deadline or cancel the delay, use Wait instead.
// To drop or skip events exceeding rate limit, use Allow instead.
func (lim *Limiter) ReserveN(now time.Time, n int) *Reservation {
 r := lim.reserveN(now, n, InfDuration)
 return &r
}

// Wait is shorthand for WaitN(ctx, 1).
func (lim *Limiter) Wait(ctx context.Context) (err error) {
 return lim.WaitN(ctx, 1)
}

// WaitN blocks until lim permits n events to happen.
// It returns an error if n exceeds the Limiter's burst size, the Context is
// canceled, or the expected wait time exceeds the Context's Deadline.
func (lim *Limiter) WaitN(ctx context.Context, n int) (err error) {
 if n > lim.burst {
  return fmt.Errorf("rate: Wait(n=%d) exceeds limiter's burst %d", n, lim.burst)
 }
 // Check if ctx is already cancelled
 select {
 case <-ctx.Done():
  return ctx.Err()
 default:
 }
 // Determine wait limit
 now := time.Now()
 waitLimit := InfDuration
 if deadline, ok := ctx.Deadline(); ok {
  waitLimit = deadline.Sub(now)
 }
 // Reserve
 r := lim.reserveN(now, n, waitLimit)
 if !r.ok {
  return fmt.Errorf("rate: Wait(n=%d) would exceed context deadline", n)
 }
 // Wait
 t := time.NewTimer(r.DelayFrom(now))
 defer t.Stop()
 select {
 case <-t.C:
  // We can proceed.
  return nil
 case <-ctx.Done():
  // Context was canceled before we could proceed. Cancel the
  // reservation, which may permit other events to proceed sooner.
  r.Cancel()
  return ctx.Err()
 }
}

// SetLimit is shorthand for SetLimitAt(time.Now(), newLimit).
func (lim *Limiter) SetLimit(newLimit Limit) {
 lim.SetLimitAt(time.Now(), newLimit)
}

// SetLimitAt sets a new Limit for the limiter. The new Limit, and Burst, may be violated
// or underutilized by those which reserved (using Reserve or Wait) but did not yet act
// before SetLimitAt was called.
func (lim *Limiter) SetLimitAt(now time.Time, newLimit Limit) {
 lim.mu.Lock()
 defer lim.mu.Unlock()

 now, _, tokens := lim.advance(now)

 lim.last = now
 lim.tokens = tokens
 lim.limit = newLimit
}

// reserveN is a helper method for AllowN, ReserveN, and WaitN.
// maxFutureReserve specifies the maximum reservation wait duration allowed.
// reserveN returns Reservation, not *Reservation, to avoid allocation in AllowN and WaitN.
func (lim *Limiter) reserveN(now time.Time, n int, maxFutureReserve time.Duration) Reservation {
 lim.mu.Lock()
 defer lim.mu.Unlock()

 if lim.limit == Inf {
  return Reservation{
   ok:    true,
   lim:    lim,
   tokens:  n,
   timeToAct: now,
  }
 }

 now, last, tokens := lim.advance(now)

 // Calculate the remaining number of tokens resulting from the request.
 tokens -= float64(n)

 // Calculate the wait duration
 var waitDuration time.Duration
 if tokens < 0 {
  waitDuration = lim.limit.durationFromTokens(-tokens)
 }

 // Decide result
 ok := n <= lim.burst && waitDuration <= maxFutureReserve

 // Prepare reservation
 r := Reservation{
  ok:  ok,
  lim:  lim,
  limit: lim.limit,
 }
 if ok {
  r.tokens = n
  r.timeToAct = now.Add(waitDuration)
 }

 // Update state
 if ok {
  lim.last = now
  lim.tokens = tokens
  lim.lastEvent = r.timeToAct
 } else {
  lim.last = last
 }

 return r
}

// advance calculates and returns an updated state for lim resulting from the passage of time.
// lim is not changed.
func (lim *Limiter) advance(now time.Time) (newNow time.Time, newLast time.Time, newTokens float64) {
 last := lim.last
 if now.Before(last) {
  last = now
 }

 // Avoid making delta overflow below when last is very old.
 maxElapsed := lim.limit.durationFromTokens(float64(lim.burst) - lim.tokens)
 elapsed := now.Sub(last)
 if elapsed > maxElapsed {
  elapsed = maxElapsed
 }

 // Calculate the new number of tokens, due to time that passed.
 delta := lim.limit.tokensFromDuration(elapsed)
 tokens := lim.tokens + delta
 if burst := float64(lim.burst); tokens > burst {
  tokens = burst
 }

 return now, last, tokens
}

// durationFromTokens is a unit conversion function from the number of tokens to the duration
// of time it takes to accumulate them at a rate of limit tokens per second.
func (limit Limit) durationFromTokens(tokens float64) time.Duration {
 seconds := tokens / float64(limit)
 return time.Nanosecond * time.Duration(1e9*seconds)
}

// tokensFromDuration is a unit conversion function from a time duration to the number of tokens
// which could be accumulated during that duration at a rate of limit tokens per second.
func (limit Limit) tokensFromDuration(d time.Duration) float64 {
 return d.Seconds() * float64(limit)
}
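
As the comments above explain, Allow, Reserve, and Wait each consume one token but differ in what happens when no token is available: Allow drops the event, Reserve hands back a delay, and Wait blocks. Here is a minimal sketch of the three calling styles; it is my own example and assumes a recent version of golang.org/x/time/rate in which Wait accepts a standard context.Context.

package main

import (
  "context"
  "fmt"
  "time"

  "golang.org/x/time/rate"
)

func main() {
  lim := rate.NewLimiter(1, 1) // 1 token per second, burst of 1

  // Allow: non-blocking; returns false if no token is available right now.
  fmt.Println("allow:", lim.Allow())

  // Reserve: returns a reservation (as long as n <= burst); the caller is
  // expected to sleep for the suggested delay before acting.
  r := lim.Reserve()
  if r.OK() {
    time.Sleep(r.Delay())
    fmt.Println("reserved event may act now")
  }

  // Wait: blocks until a token is available or the context is done.
  ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
  defer cancel()
  if err := lim.Wait(ctx); err != nil {
    fmt.Println("wait failed:", err)
  } else {
    fmt.Println("wait succeeded")
  }
}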

Algorithm description:

Given a configured average rate r, one token is added to the bucket every 1/r seconds (that is, r tokens per second), and the bucket holds at most b tokens. If the bucket is already full when a token arrives, that token is discarded.
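
In code, the two arguments of rate.NewLimiter map directly onto this description: r is the refill rate in tokens per second and b is the bucket (burst) size. If it is more natural to think in terms of the interval between tokens, rate.Every converts an interval into a Limit. A quick sketch of the equivalence (my own example):

package main

import (
  "fmt"
  "time"

  "golang.org/x/time/rate"
)

func main() {
  // Two equivalent ways to say "refill 2 tokens per second, bucket size 5".
  a := rate.NewLimiter(2, 5)
  b := rate.NewLimiter(rate.Every(500*time.Millisecond), 5) // one token every 1/r = 0.5s

  fmt.Println(a.Limit(), a.Burst()) // 2 5
  fmt.Println(b.Limit(), b.Burst()) // 2 5
}

Both limiters refill at the same rate and allow the same burst; the demo at the top of this article could use either form.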

Implementing per-user rate limiting

A single global rate limiter is useful in some situations, but another common requirement is to rate-limit each user individually, keyed by an identifier such as an IP address or an API key. Here we will use the IP address as the identifier. A simple implementation looks like this:

package main
import (
  "net/http"
  "sync"
  "time"

  "golang.org/x/time/rate"
)

// Create a custom visitor struct which holds the rate limiter for each
// visitor and the last time that the visitor was seen.
type visitor struct {
  limiter *rate.Limiter
  lastSeen time.Time
}

// Change the map to hold values of the type visitor.
var visitors = make(map[string]*visitor)
var mtx sync.Mutex
// Run a background goroutine to remove old entries from the visitors map.
func init() {
  go cleanupVisitors()
}

func addVisitor(ip string) *rate.Limiter {
  limiter := rate.NewLimiter(2, 5)
  mtx.Lock()
  // Include the current time when creating a new visitor.
  visitors[ip] = &visitor{limiter, time.Now()}
  mtx.Unlock()
  return limiter
}

func getVisitor(ip string) *rate.Limiter {
  mtx.Lock()
  v, exists := visitors[ip]
  if !exists {
    mtx.Unlock()
    return addVisitor(ip)
  }
  // Update the last seen time for the visitor.
  v.lastSeen = time.Now()
  mtx.Unlock()
  return v.limiter
}

// Every minute check the map for visitors that haven't been seen for
// more than 3 minutes and delete the entries.
func cleanupVisitors() {
  for {
    time.Sleep(time.Minute)
    mtx.Lock()
    for ip, v := range visitors {
      if time.Since(v.lastSeen) > 3*time.Minute {
        delete(visitors, ip)
      }
    }
    mtx.Unlock()
  }
}

func limit(next http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    // Look up (or create) the rate limiter for this client. Note that
    // r.RemoteAddr is an "ip:port" pair; see the note after this example
    // about stripping the port.
    limiter := getVisitor(r.RemoteAddr)
    if !limiter.Allow() {
      http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
      return
    }
    next.ServeHTTP(w, r)
  })
}
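
One practical detail: r.RemoteAddr is an "ip:port" pair, so as written a client that reconnects from a new source port can end up with a fresh limiter entry. The sketch below shows one way to wire up the server and strip the port with net.SplitHostPort; it is my own addition, assumes it lives in the same package main as the code above, and ignores proxy headers such as X-Forwarded-For, which a production gateway would usually need to consider.

package main

import (
  "log"
  "net"
  "net/http"
)

func main() {
  mux := http.NewServeMux()
  mux.HandleFunc("/", okHandler)

  // Wrap the mux with the per-visitor limit middleware defined above.
  log.Fatal(http.ListenAndServe(":4000", limit(mux)))
}

func okHandler(w http.ResponseWriter, r *http.Request) {
  w.Write([]byte("OK"))
}

// clientIP strips the port from r.RemoteAddr so that all connections
// from the same address share one limiter.
func clientIP(r *http.Request) string {
  ip, _, err := net.SplitHostPort(r.RemoteAddr)
  if err != nil {
    return r.RemoteAddr // fall back to the raw value
  }
  return ip
}

With this helper in place, the middleware would call getVisitor(clientIP(r)) instead of getVisitor(r.RemoteAddr).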

Of course, this is only a simple implementation. Implementing rate limiting in a microservice API gateway involves many more considerations; the source code of https://github.com/didip/tollbooth is well worth reading.

That's all for this article. I hope it helps with your learning, and thank you for your continued support.
