Go's Finalizer

What is a Finalizer

In computer science, a finalizer or finalize method is a special method that performs finalization, generally some form of cleanup.

A finalizer is typically used to perform some form of cleanup.

Finalizer vs. destructor

The terminology of "finalizer" and "finalization" versus "destructor" and "destruction" varies between authors and is sometimes unclear.

In common usage, a destructor is a method called deterministically on object destruction, and the archetype is C++ destructors; while a finalizer is called non-deterministically by the garbage collector, and the archetype is Java finalize methods.

A destructor's invocation point is deterministic. In C++, for example, when an object leaves the scope of its declaration, or when `delete` is called explicitly, its destructor (if defined) runs immediately on the current thread.

A finalizer's invocation time is not deterministic: the garbage collector decides when to run it. What is guaranteed is that the registered finalize method runs before the object is actually reclaimed; which thread it runs on is also unspecified.

Finalizer in Go

How to use it

Recommended usage

package main

import (
	"log"
	"runtime"
	"time"
)

type test int

func findRoad(t *test) {
	// resource cleanup is typically done here
	log.Println("test:", *t)
}

func entry() {
	var rd test = test(1111)
	r := &rd
	// when r becomes unreachable, a later GC runs findRoad, removes the
	// association, and reclaims r in the following cycle
	runtime.SetFinalizer(r, findRoad)
}

func main() {
	entry()
	for i := 0; i < 5; i++ {
		time.Sleep(time.Second)
		runtime.GC()
	}
}

// OUTPUT:
// 2021/08/26 23:22:29 test: 1111

The runtime package provides runtime.SetFinalizer for registering a finalizer on an object.

It takes two arguments: obj, a pointer to the object, and finalizer, a function with a single parameter to which obj's type can be assigned (and with arbitrary, ignored return values). Passing nil as finalizer clears any finalizer already set on obj.

Of course, you can also use it for some tricks.

A less recommended usage

As described above, a finalizer runs after the object has been determined to be dead, just before it would be reclaimed. That means the finalizer can bring the doomed object "back to life" by storing a reference to it somewhere reachable:

package main

import (
	"log"
	"runtime"
	"time"
)

type test int

func findRoad(t *test) {
	log.Println("test:", *t)
	*t = 3
	global = t
}

var global interface{}

func entry() {
	var rd test = test(1)
	r := &rd
	go func() {
		rd = 2
	}()
	runtime.SetFinalizer(r, findRoad)
}

func main() {
	entry()
	for i := 0; i < 5; i++ {
		time.Sleep(time.Second)
		runtime.GC()
	}
	log.Println("main:", *(global).(*test))
}

// OUTPUT:
// 2021/08/26 23:28:57 test: 2
// 2021/08/26 23:29:00 main: 3

Forbidden usage

Of course, you can also call methods that block the goroutine inside a finalizer, such as time.Sleep() or sync.Mutex.Lock().

Note: this example only demonstrates that Go runs finalizers in a goroutine context. In practice a finalizer should do nothing but quick, simple cleanup; anything slow blocks the execution of all other finalizers (the runtime source confirms this).

package main

import (
	"log"
	"runtime"
	"sync"
	"time"
)

type test int

func findRoad(t *test) {
	log.Println("test:", *t)
	time.Sleep(time.Second)
	log.Println("Sleep Done")
}

func entry() {
	var rd test = test(1)
	r := &rd
	runtime.SetFinalizer(r, findRoad)
}

func main() {
	entry()
	for i := 0; i < 5; i++ {
		time.Sleep(time.Second)
		runtime.GC()
	}
}

// OUTPUT:
// 2021/08/26 23:42:35 test: 1
// 2021/08/26 23:42:36 Sleep Done

Usage in the standard library?

Implementation

TODO: summary

Setting a finalizer

TODO: a brief introduction to mspan

// $GOROOT/src/runtime/mfinal.go
func SetFinalizer(obj interface{}, finalizer interface{}) {
	e := efaceOf(&obj)
	// argument validation elided; it also derives the fint and ot used below
	f := efaceOf(&finalizer)
	ftyp := f._type
	ft := (*functype)(unsafe.Pointer(ftyp))

	// compute size needed for return parameters
	nret := uintptr(0)
	for _, t := range ft.out() {
		nret = alignUp(nret, uintptr(t.align)) + uintptr(t.size)
	}
	nret = alignUp(nret, sys.PtrSize)

	// make sure the finalizer goroutine has been created
	createfing()
	// switch to the system stack to run addfinalizer
	systemstack(func() {
		if !addfinalizer(e.data, (*funcval)(f.data), nret, fint, ot) {
			throw("runtime.SetFinalizer: finalizer already set")
		}
	})
}


func createfing() {
	// use CAS to win the race, ensuring only one finalizer goroutine is created
	if fingCreate == 0 && atomic.Cas(&fingCreate, 0, 1) {
		go runfinq()
	}
}

// $GOROOT/src/runtime/mheap.go

// Adds a finalizer to the object p. Returns true if it succeeded.
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool {
	lock(&mheap_.speciallock)
	// allocate a specialfinalizer
	s := (*specialfinalizer)(mheap_.specialfinalizeralloc.alloc())
	unlock(&mheap_.speciallock)
	// initialize it
	s.special.kind = _KindSpecialFinalizer
	s.fn = f
	s.nret = nret
	s.fint = fint
	s.ot = ot
	if addspecial(p, &s.special) {
		// if SetFinalizer is called during GC, keep marking the object's fields
		if gcphase != _GCoff {
			base, _, _ := findObject(uintptr(p), 0, 0)
			mp := acquirem()
			gcw := &mp.p.ptr().gcw
			// Mark everything reachable from the object
			// so it's retained for the finalizer.
			scanobject(base, gcw)
			// Mark the finalizer itself, since the
			// special isn't part of the GC'd heap.
			scanblock(uintptr(unsafe.Pointer(&s.fn)), sys.PtrSize, &oneptrmask[0], gcw, nil)
			releasem(mp)
		}
		return true
	}

	// There was an old finalizer
	lock(&mheap_.speciallock)
	mheap_.specialfinalizeralloc.free(unsafe.Pointer(s))
	unlock(&mheap_.speciallock)
	return false
}

func addspecial(p unsafe.Pointer, s *special) bool {
	// find the mspan corresponding to p
	span := spanOfHeap(uintptr(p))

	mp := acquirem()
	span.ensureSwept()
	// compute the offset within the mspan; the object can be looked up again from it
	offset := uintptr(p) - span.base()
	kind := s.kind

	lock(&span.speciallock)

	// Find splice point, check for existing record.
	// insert into the span.specials linked list
	t := &span.specials
	for {
		x := *t
		if x == nil {
			break
		}
		if offset == uintptr(x.offset) && kind == x.kind {
			unlock(&span.speciallock)
			releasem(mp)
			return false // already exists
		}
		if offset < uintptr(x.offset) || (offset == uintptr(x.offset) && kind < x.kind) {
			break
		}
		t = &x.next
	}

	// Splice in record, fill in offset.
	s.offset = uint16(offset)
	s.next = *t
	*t = s
	spanHasSpecials(span)
	unlock(&span.speciallock)
	releasem(mp)

	return true
}

Triggering

Both the GC mark and sweep phases need to handle finalizers.

Mark: each mspan's specials list is a GC root. The fields of an object with a finalizer are scanned and marked, but the object itself is not marked during root scanning; whether it is alive is determined by reachability from other roots.

Its fields are marked because, if the object turns out to be unreachable in this GC cycle, the registered finalizer callback will run with the object itself passed in as the argument. The callback is queued during the sweep phase, so the object's fields must be kept alive until then. This is also the cause of the later problem, "combining reference cycles with finalizers leaks memory".

// $GOROOT/src/runtime/mgcmark.go

func markroot(gcw *gcWork, i uint32) {
	// Note: if you add a case here, please also update heapdump.go:dumproots.
	switch {

	case work.baseSpans <= i && i < work.baseStacks:
		// mark mspan.specials
		markrootSpans(gcw, int(i-work.baseSpans))

	default:
	}
}


func markrootSpans(gcw *gcWork, shard int) {
	// locate the corresponding mspan (setup elided)
	// Construct slice of bitmap which we'll iterate over.
	specialsbits := ha.pageSpecials[arenaPage/8:]
	specialsbits = specialsbits[:pagesPerSpanRoot/8]
	for i := range specialsbits {
		// Find set bits, which correspond to spans with specials.
		specials := atomic.Load8(&specialsbits[i])
		if specials == 0 {
			continue
		}
		for j := uint(0); j < 8; j++ {
			// Lock the specials to prevent a special from being
			// removed from the list while we're traversing it.
			lock(&s.speciallock)
			// walk the mspan's specials list
			for sp := s.specials; sp != nil; sp = sp.next {
				// only _KindSpecialFinalizer matters here
				if sp.kind != _KindSpecialFinalizer {
					continue
				}
				// don't mark finalized object, but scan it so we
				// retain everything it points to.
				spf := (*specialfinalizer)(unsafe.Pointer(sp))
				// A finalizer can be set for an inner byte of an object, find object beginning.
				p := s.base() + uintptr(spf.special.offset)/s.elemsize*s.elemsize

				// Mark everything that can be reached from
				// the object (but *not* the object itself or
				// we'll never collect it).
				scanobject(p, gcw)

				// The special itself is a root.
				scanblock(uintptr(unsafe.Pointer(&spf.fn)), sys.PtrSize, &oneptrmask[0], gcw, nil)
			}
			unlock(&s.speciallock)
		}
	}
}
func (sl *sweepLocked) sweep(preserve bool) bool {
	// It's critical that we enter this function with preemption disabled,
	// GC must not start while we are in the middle of this function.
	_g_ := getg()

	s := sl.mspan

	hadSpecials := s.specials != nil
	siter := newSpecialsIter(s)
	// walk the specials list
	for siter.valid() {
		// A finalizer can be set for an inner byte of an object, find object beginning.
		// use the offset to look up the object's position within the mspan
		objIndex := uintptr(siter.s.offset) / size
		p := s.base() + objIndex*size
		mbits := s.markBitsForIndex(objIndex)
		// if the object is not marked, it is unreachable in this GC cycle
		// and can be reclaimed: trigger the finalizer machinery
		if !mbits.isMarked() {
			// This object is not marked and has at least one special record.
			// Pass 1: see if it has at least one finalizer.
			hasFin := false
			endOffset := p - s.base() + size
			for tmp := siter.s; tmp != nil && uintptr(tmp.offset) < endOffset; tmp = tmp.next {
				if tmp.kind == _KindSpecialFinalizer {
					// Stop freeing of object if it has a finalizer.
					// mark the object so it survives one more cycle
					mbits.setMarkedNonAtomic()
					hasFin = true
					break
				}
			}
			// Pass 2: queue all finalizers _or_ handle profile record.
			// remove the record from the specials list; by the next GC
			// the object no longer has a finalizer
			for siter.valid() && uintptr(siter.s.offset) < endOffset {
				// Find the exact byte for which the special was setup
				// (as opposed to object beginning).
				special := siter.s
				p := s.base() + uintptr(special.offset)
				if special.kind == _KindSpecialFinalizer || !hasFin {
					siter.unlinkAndNext()
					freeSpecial(special, unsafe.Pointer(p), size)
				} else {
					// The object has finalizers, so we're keeping it alive.
					// All other specials only apply when an object is freed,
					// so just keep the special record.
					siter.next()
				}
			}
		} else {
			// object is still live
			if siter.s.kind == _KindSpecialReachable {
				// used by tests
				special := siter.unlinkAndNext()
				(*specialReachable)(unsafe.Pointer(special)).reachable = true
				freeSpecial(special, unsafe.Pointer(p), size)
			} else {
				// object is still alive: keep the finalizer and continue
				// keep special record
				siter.next()
			}
		}
	}
	if hadSpecials && s.specials == nil {
		spanHasNoSpecials(s)
	}
}

// freeSpecial performs any cleanup on special s and deallocates it.
// s must already be unlinked from the specials list.
func freeSpecial(s *special, p unsafe.Pointer, size uintptr) {
	switch s.kind {
	case _KindSpecialFinalizer:
		sf := (*specialfinalizer)(unsafe.Pointer(s))
		queuefinalizer(p, sf.fn, sf.nret, sf.fint, sf.ot)
		lock(&mheap_.speciallock)
		mheap_.specialfinalizeralloc.free(unsafe.Pointer(sf))
		unlock(&mheap_.speciallock)
	case _KindSpecialProfile:
		sp := (*specialprofile)(unsafe.Pointer(s))
		mProf_Free(sp.b, size)
		lock(&mheap_.speciallock)
		mheap_.specialprofilealloc.free(unsafe.Pointer(sp))
		unlock(&mheap_.speciallock)
	case _KindSpecialReachable:
		sp := (*specialReachable)(unsafe.Pointer(s))
		sp.done = true
		// The creator frees these.
	default:
		throw("bad special kind")
		panic("not reached")
	}
}

// push onto the finq queue and set fingwake
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype) {
	if gcphase != _GCoff {
		// Currently we assume that the finalizer queue won't
		// grow during marking so we don't have to rescan it
		// during mark termination. If we ever need to lift
		// this assumption, we can do it by adding the
		// necessary barriers to queuefinalizer (which it may
		// have automatically).
		throw("queuefinalizer during GC")
	}

	lock(&finlock)
	if finq == nil || finq.cnt == uint32(len(finq.fin)) {
		if finc == nil {
			finc = (*finblock)(persistentalloc(_FinBlockSize, 0, &memstats.gcMiscSys))
			finc.alllink = allfin
			allfin = finc
			if finptrmask[0] == 0 {
				// Build pointer mask for Finalizer array in block.
				// Check assumptions made in finalizer1 array above.
				if (unsafe.Sizeof(finalizer{}) != 5*sys.PtrSize ||
					unsafe.Offsetof(finalizer{}.fn) != 0 ||
					unsafe.Offsetof(finalizer{}.arg) != sys.PtrSize ||
					unsafe.Offsetof(finalizer{}.nret) != 2*sys.PtrSize ||
					unsafe.Offsetof(finalizer{}.fint) != 3*sys.PtrSize ||
					unsafe.Offsetof(finalizer{}.ot) != 4*sys.PtrSize) {
					throw("finalizer out of sync")
				}
				for i := range finptrmask {
					finptrmask[i] = finalizer1[i%len(finalizer1)]
				}
			}
		}
		block := finc
		finc = block.next
		block.next = finq
		finq = block
	}
	f := &finq.fin[finq.cnt]
	atomic.Xadd(&finq.cnt, +1) // Sync with markroots
	f.fn = fn
	f.nret = nret
	f.fint = fint
	f.ot = ot
	f.arg = p
	fingwake = true
	unlock(&finlock)
}

Execution

// Finds a runnable goroutine to execute.
// Tries to steal from other P's, get g from local or global queue, poll network.
func findrunnable() (gp *g, inheritTime bool) {
	_g_ := getg()

	// The conditions here and in handoffp must agree: if
	// findrunnable would return a G to run, handoffp must start
	// an M.

top:
	_p_ := _g_.m.p.ptr()
	// if the scheduler finds pending finalizers, wake the runfinq()
	// goroutine created earlier
	if fingwait && fingwake {
		if gp := wakefing(); gp != nil {
			ready(gp, 0, true)
		}
	}
}

// This is the goroutine that runs all of the finalizers
func runfinq() {
	var (
		frame    unsafe.Pointer
		framecap uintptr
		argRegs  int
	)

	for {
		lock(&finlock)
		fb := finq
		finq = nil
		if fb == nil {
			gp := getg()
			fing = gp
			fingwait = true
			goparkunlock(&finlock, waitReasonFinalizerWait, traceEvGoBlock, 1)
			continue
		}
		argRegs = intArgRegs
		unlock(&finlock)
		if raceenabled {
			racefingo()
		}
		for fb != nil {
			for i := fb.cnt; i > 0; i-- {
				// take the next pending finalizer from the queue
				f := &fb.fin[i-1]

				var regs abi.RegArgs
				var framesz uintptr
				if argRegs > 0 {
					// The args can always be passed in registers if they're
					// available, because platforms we support always have no
					// argument registers available, or more than 2.
					//
					// But unfortunately because we can have an arbitrary
					// amount of returns and it would be complex to try and
					// figure out how many of those can get passed in registers,
					// just conservatively assume none of them do.
					framesz = f.nret
				} else {
					// Need to pass arguments on the stack too.
					framesz = unsafe.Sizeof((interface{})(nil)) + f.nret
				}
				if framecap < framesz {
					// The frame does not contain pointers interesting for GC,
					// all not yet finalized objects are stored in finq.
					// If we do not mark it as FlagNoScan,
					// the last finalized object is not collected.
					frame = mallocgc(framesz, nil, true)
					framecap = framesz
				}

				if f.fint == nil {
					throw("missing type in runfinq")
				}
				r := frame
				if argRegs > 0 {
					r = unsafe.Pointer(&regs.Ints)
				} else {
					// frame is effectively uninitialized
					// memory. That means we have to clear
					// it before writing to it to avoid
					// confusing the write barrier.
					*(*[2]uintptr)(frame) = [2]uintptr{}
				}
				switch f.fint.kind & kindMask {
				case kindPtr:
					// direct use of pointer
					*(*unsafe.Pointer)(r) = f.arg
				case kindInterface:
					ityp := (*interfacetype)(unsafe.Pointer(f.fint))
					// set up with empty interface
					(*eface)(r)._type = &f.ot.typ
					(*eface)(r).data = f.arg
					if len(ityp.mhdr) != 0 {
						// convert to interface with methods
						// this conversion is guaranteed to succeed - we checked in SetFinalizer
						(*iface)(r).tab = assertE2I(ityp, (*eface)(r)._type)
					}
				default:
					throw("bad kind in runfinq")
				}
				fingRunning = true
				// call the registered callback via reflection.
				// Note that this single background goroutine runs all finalizer
				// callbacks sequentially, which is why calling a blocking
				// function inside a finalizer is so dangerous.
				reflectcall(nil, unsafe.Pointer(f.fn), frame, uint32(framesz), uint32(framesz), uint32(framesz), &regs)
				fingRunning = false

				// Drop finalizer queue heap references
				// before hiding them from markroot.
				// This also ensures these will be
				// clear if we reuse the finalizer.
				f.fn = nil
				f.arg = nil
				f.ot = nil
				atomic.Store(&fb.cnt, i-1)
			}
			next := fb.next
			lock(&finlock)
			fb.next = finc
			finc = fb
			unlock(&finlock)
			fb = next
		}
	}
}

Why does combining finalizers with reference cycles leak memory?

This arguably belongs in the forbidden category as well.

package main

import (
	"log"
	"runtime"
	"time"
)

type X struct {
	data [1 << 20][10]byte // large array so the object escapes to the heap
	ptr  *X
}

func test() {
	var a, b X
	a.ptr = &b
	b.ptr = &a

	runtime.SetFinalizer(&a, func(*X) { log.Println("Finalizer a") })
	runtime.SetFinalizer(&b, func(*X) { log.Println("Finalizer b") })
}

func main() {
	for i := 0; i < 10; i++ {
		test()
		runtime.GC()
		time.Sleep(time.Second)
	}
}

// OUTPUT (run with GODEBUG=gctrace=1):
// gc 1 @0.001s 6%: 0.009+1.3+0.002 ms clock, 0.072+0.10/1.3/1.2+0.018 ms cpu, 10->10->10 MB, 11 MB goal, 8 P
// gc 2 @0.003s 15%: 0.007+3.8+0.002 ms clock, 0.062+0/7.4/18+0.016 ms cpu, 20->20->60 MB, 21 MB goal, 8 P
// gc 3 @0.007s 17%: 0.015+2.7+0.002 ms clock, 0.12+0/5.4/5.3+0.021 ms cpu, 60->60->40 MB, 120 MB goal, 8 P (forced)
// gc 4 @1.021s 0%: 0.068+5.6+0.002 ms clock, 0.54+0/11/16+0.019 ms cpu, 60->60->80 MB, 80 MB goal, 8 P (forced)
// gc 5 @2.032s 0%: 0.054+10+0.002 ms clock, 0.43+0/21/61+0.021 ms cpu, 100->100->120 MB, 160 MB goal, 8 P (forced)

As the trace shows, the finalizers are never executed, and the GC cannot reclaim the locals created in test().

Note: breaking the cycle between a and b, or not calling SetFinalizer on them, lets both be reclaimed normally; this is not a GC bug.

Finalizers are run in dependency order: if A points at B, both have finalizers, and they are otherwise unreachable, only the finalizer for A runs; once A is freed, the finalizer for B can run. If a cyclic structure includes a block with a finalizer, that cycle is not guaranteed to be garbage collected and the finalizer is not guaranteed to run, because there is no ordering that respects the dependencies.

The root cause is that finalizer records also act as GC roots and mark the fields of their objects. In this scenario a and b are therefore alive in every GC cycle (a is kept marked via b's finalizer, and b via a's), which leaks the memory. And since sweep never queues finalizers for live objects, the callbacks never run either.
