append
The append function grows the underlying array intelligently. While a slice holds fewer than about 1000 elements (1024 in the runtime source), each reallocation doubles the capacity. Past that threshold the growth factor drops to 1.25, i.e. each reallocation adds roughly 25% more capacity. As the language evolves, this growth algorithm may change.
Test code & results
package main

import "fmt"

func main() {
	l1 := []int{0: 1}
	k := 1
	last := 0
	for k < 2000 {
		l1 = append(l1, k)
		k++
		// Print the capacity each time a reallocation changes it.
		if cap(l1) != last {
			fmt.Println(k, cap(l1))
			last = cap(l1)
		}
	}
}
Slices passed to a function share the underlying array
The slice header is copied, but the copy points at the same backing array, so writes made by the callee are visible to the caller.
Test code & results
package main

import "fmt"

func foo(list []int) {
	// Writes through the slice modify the caller's backing array.
	for i := 0; i < len(list); i++ {
		list[i] = 10 + i
	}
}

func main() {
	list := []int{0, 1, 2}
	foo(list)
	fmt.Printf("%v", list)
}
// Result
[10 11 12]
Slice and pointers
On a 64-bit machine, a slice header requires 24 bytes of memory: 8 bytes for the pointer to the underlying array, and 8 bytes each for the length and capacity fields.
## Method sets

Values | Method Receivers
--- | ---
T | (t T)
*T | (t T) and (t *T)

The method set of a value of type T contains only the methods declared with a value receiver. The method set of a pointer of type *T contains both the value-receiver and the pointer-receiver method declarations.
Concurrency
By default, the Go runtime assigns one logical processor to each physical processor available on the machine.
When you create a goroutine and it is ready to run, it is placed in the scheduler's global run queue. The scheduler then assigns goroutines from that queue to logical processors, each of which puts them into its own local run queue.
(Diagram: the scheduler moves goroutines from the global run queue into each logical processor's local run queue.)
Pass the -race flag to the compiler (go build -race) to enable the race detector; running the resulting binary (e.g. ./go_start.exe) prints a warning whenever a data race is detected.
The functions and types in the atomic and sync packages can be used to guarantee thread safety.
unbuffered := make(chan int)
buffered := make(chan string, 10)
The first is an unbuffered channel; the second is a buffered channel with a capacity of 10.
When executing tasks, three situations need to be handled:
- the operating system sends an interrupt
- the work completes (success or failure)
- the work times out
runner
The runner executes a series of tasks with built-in timeout and interrupt handling.
package runner

import (
	"errors"
	"os"
	"os/signal"
	"time"
)

// Runner runs a set of tasks within a time limit and can be
// stopped by an operating-system interrupt signal.
type Runner struct {
	interrupt chan os.Signal
	complete  chan error
	timeout   <-chan time.Time
	tasks     []func(int)
}

var ErrTimeOut = errors.New("received timeout")
var ErrInterrupt = errors.New("received interrupt")

// New returns a Runner that will time out after duration d.
func New(d time.Duration) *Runner {
	return &Runner{
		interrupt: make(chan os.Signal, 1),
		complete:  make(chan error),
		timeout:   time.After(d),
	}
}

// Add appends tasks to the Runner.
func (r *Runner) Add(tasks ...func(int)) {
	r.tasks = append(r.tasks, tasks...)
}

// Start runs the tasks and monitors for completion, timeout,
// and OS interrupts.
func (r *Runner) Start() error {
	signal.Notify(r.interrupt, os.Interrupt)

	go func() {
		r.complete <- r.run()
	}()

	select {
	case err := <-r.complete:
		return err
	case <-r.timeout:
		return ErrTimeOut
	}
}

// run executes each task in order, checking for an interrupt first.
func (r *Runner) run() error {
	for id, task := range r.tasks {
		if r.gotInterrupt() {
			return ErrInterrupt
		}
		task(id)
	}
	return nil
}

// gotInterrupt reports whether an interrupt signal has been received.
func (r *Runner) gotInterrupt() bool {
	select {
	case <-r.interrupt:
		signal.Stop(r.interrupt)
		return true
	default:
		return false
	}
}
pool
A pool manages a set of resources that can be shared safely by multiple goroutines.
package pool

import (
	"errors"
	"io"
	"log"
	"sync"
)

// Pool manages a set of resources that can be shared safely by
// multiple goroutines. A managed resource must implement io.Closer.
type Pool struct {
	m         sync.Mutex
	resources chan io.Closer
	factory   func() (io.Closer, error)
	closed    bool
}

var ErrPoolClosed = errors.New("Pool has been closed")

// New creates a pool that holds up to size resources,
// using fn to create new ones.
func New(fn func() (io.Closer, error), size uint) (*Pool, error) {
	if size == 0 {
		return nil, errors.New("size value too small")
	}

	return &Pool{
		factory:   fn,
		resources: make(chan io.Closer, size),
	}, nil
}

// Acquire retrieves a resource from the pool, or creates a new one
// via the factory if the pool is empty.
func (p *Pool) Acquire() (io.Closer, error) {
	select {
	case r, ok := <-p.resources:
		log.Println("Acquire:", "Shared Resource")
		if !ok {
			return nil, ErrPoolClosed
		}
		return r, nil
	default:
		log.Println("Acquire:", "New Resource")
		return p.factory()
	}
}

// Release puts a resource back into the pool, closing it
// if the pool is already full or closed.
func (p *Pool) Release(r io.Closer) {
	p.m.Lock()
	defer p.m.Unlock()

	if p.closed {
		r.Close()
		return
	}

	select {
	case p.resources <- r:
		log.Println("Release:", "In Queue")
	default:
		log.Println("Release:", "Closing")
		r.Close()
	}
}

// Close shuts the pool down and closes all of its resources.
func (p *Pool) Close() {
	p.m.Lock()
	defer p.m.Unlock()

	if p.closed {
		return
	}

	p.closed = true
	close(p.resources)

	for r := range p.resources {
		r.Close()
	}
}