Go Project in Practice: Building a Highly Concurrent Log Collection System (Part 2)

The overall idea: monitor the log files in each system's log folders and, every time a log line is written, pick it up in real time and push it into a Kafka queue. Kafka absorbs the writes under high concurrency and decouples the producing side from the consuming side. The data is then read back out of Kafka and, depending on actual needs, displayed on a web page, printed to a console, and so on.
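Before diving into the real code, here is a tiny conceptual sketch of that pipeline. The channel is only a stand-in for the Kafka topic, and the goroutines are stand-ins for the watcher and display programs; the point is that the two sides only share the queue, which is exactly the decoupling the design aims for.

package main

import "fmt"

func main() {
	// the channel stands in for the Kafka topic: the log watcher only
	// writes to it, the display side only reads from it
	queue := make(chan string, 100)

	// producer side: in the real system this is the tail-based log watcher
	go func() {
		for _, line := range []string{"log line 1", "log line 2", "log line 3"} {
			queue <- line
		}
		close(queue)
	}()

	// consumer side: in the real system this reads from Kafka and shows
	// the data on a web page or console
	for line := range queue {
		fmt.Println("consumed:", line)
	}
}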

Recap

In the previous section we completed the following goals:
1. Configured Kafka and started the message queue.
2. Wrote the code that writes messages into Kafka.

Goals for this section

1. Write the code that reads messages from Kafka, and verify that Kafka messages can be written and read correctly.
2. Implement file monitoring with the tail library, and test that it behaves correctly while the file is being written to and backed up.
3. Keep practicing the Go language along the way.

Reading messages from Kafka

package main

import (
	"fmt"
	"sync"

	"github.com/Shopify/sarama"
)

func main() {
	fmt.Println("consumer begin...")
	config := sarama.NewConfig()
	config.Consumer.Return.Errors = true
	wg := sync.WaitGroup{}
	// create the consumer
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
	if err != nil {
		fmt.Println("consumer create failed, error is ", err.Error())
		return
	}
	defer consumer.Close()

	// Partitions(topic) returns all partition ids of the topic
	partitionList, err := consumer.Partitions("test")
	if err != nil {
		fmt.Println("get consumer partitions failed")
		fmt.Println("error is ", err.Error())
		return
	}

	for _, partition := range partitionList {
		// ConsumePartition creates a partition consumer for the given topic,
		// partition and offset; it returns an error if this consumer is
		// already consuming the partition.
		// OffsetNewest means we start consuming from the newest data.
		pc, err := consumer.ConsumePartition("test", partition, sarama.OffsetNewest)
		if err != nil {
			panic(err)
		}
		// close asynchronously, draining buffered data before shutdown
		defer pc.AsyncClose()
		wg.Add(1)
		go func(pc sarama.PartitionConsumer) {
			defer wg.Done()
			// Messages() returns a read-only channel of messages produced
			// by the broker for this partition
			for msg := range pc.Messages() {
				fmt.Printf("%s---Partition:%d, Offset:%d, Key:%s, Value:%s\n",
					msg.Topic, msg.Partition, msg.Offset, string(msg.Key), string(msg.Value))
			}
		}(pc)
	}
	wg.Wait()
}

  

After starting ZooKeeper and Kafka, run the producer code written in the previous section to push data into Kafka, then run the consumer code above; the results look like this:
[1.jpg: console output of the consumer printing the messages read from Kafka]
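If the producer from the previous section is not at hand, a minimal sketch along the same lines can generate test messages for the consumer. It assumes the github.com/Shopify/sarama client, the broker at localhost:9092, and the "test" topic used above.

package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// wait for all in-sync replicas to ack, and report successes,
	// which the sync producer requires
	config.Producer.RequiredAcks = sarama.WaitForAll
	config.Producer.Return.Successes = true

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		fmt.Println("producer create failed, error is ", err.Error())
		return
	}
	defer producer.Close()

	// send one test message to the "test" topic
	msg := &sarama.ProducerMessage{
		Topic: "test",
		Value: sarama.StringEncoder("this is a test log"),
	}
	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		fmt.Println("send message failed, error is ", err.Error())
		return
	}
	fmt.Printf("sent to partition %d at offset %d\n", partition, offset)
}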

Implementing file monitoring

File monitoring means that while content is being written to a file, we can pick up the newly written lines in a timely manner, much like the Linux command tail -f does for a file.
Go has a third-party tail library for this (commonly github.com/hpcloud/tail); we use it to monitor the specified file. My files are organized as follows:
[4.jpg: directory layout of the project]

log.txt in the logdir folder is the log file that keeps growing.
logtailf.go in the tailf folder implements the monitoring of log.txt.
writefile.go in the writefile folder writes log entries into log.txt and implements the backup function.
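For reference, if you manage dependencies with Go modules, a minimal go.mod pulling in the two libraries used here might look like the following. The module name and the exact versions are assumptions for illustration; an older GOPATH setup would simply go get the two packages instead.

module logcollect

go 1.13

require (
	github.com/Shopify/sarama v1.24.1
	github.com/hpcloud/tail v1.0.0
)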

package main

import (
	"fmt"
	"path"
	"runtime"
	"time"

	"github.com/hpcloud/tail"
)

func main() {
	logrelative := `../logdir/log.txt`
	_, filename, _, _ := runtime.Caller(0)
	fmt.Println(filename)
	datapath := path.Join(path.Dir(filename), logrelative)
	fmt.Println(datapath)
	tailFile, err := tail.TailFile(datapath, tail.Config{
		// reopen the file when it is removed or renamed (rotated)
		ReOpen: true,
		// follow the file for new lines in real time
		Follow: true,
		// if the program restarts, resume from the saved position instead
		// of re-reading; Whence: 2 seeks relative to the end of the file
		Location: &tail.SeekInfo{Offset: 0, Whence: 2},
		// the file is allowed to not exist yet
		MustExist: false,
		Poll:      true,
	})

	if err != nil {
		fmt.Println("tail file err:", err)
		return
	}

	for {
		msg, ok := <-tailFile.Lines
		if !ok {
			fmt.Printf("tail file close reopen, filename: %s\n", tailFile.Filename)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		//fmt.Println("msg:", msg)
		// print only the text of the line
		fmt.Println("msg:", msg.Text)
	}
}

To test the monitoring, we write a line of "Hello" plus a timestamp into log.txt every 0.1s. After 20 lines have been written, log.txt is renamed as a backup, a new log.txt is created, and writing continues.
writefile.go implements this timed writing together with the backup function:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path"
	"runtime"
	"time"
)

func writeLog(datapath string) {
	// open the log file for appending, creating it if it does not exist
	filew, err := os.OpenFile(datapath, os.O_APPEND|os.O_CREATE|os.O_RDWR, 0644)
	if err != nil {
		fmt.Println("open file error ", err.Error())
		return
	}

	w := bufio.NewWriter(filew)
	// write one timestamped line every 100ms, 20 lines in total
	for i := 0; i < 20; i++ {
		timeStr := time.Now().Format("2006-01-02 15:04:05")
		fmt.Fprintln(w, "Hello current time is "+timeStr)
		time.Sleep(time.Millisecond * 100)
		w.Flush()
	}
	// back up the current log: close it and rename it to a timestamped
	// file, so the next writeLog call starts a fresh log.txt
	logBak := time.Now().Format("20060102150405") + ".txt"
	logBak = path.Join(path.Dir(datapath), logBak)
	filew.Close()
	err = os.Rename(datapath, logBak)
	if err != nil {
		fmt.Println("Rename error ", err.Error())
		return
	}
}

Then we implement the main function, which calls writeLog three times and therefore produces three backup files:

func main() {
	logrelative := `../logdir/log.txt`
	_, filename, _, _ := runtime.Caller(0)
	fmt.Println(filename)
	datapath := path.Join(path.Dir(filename), logrelative)
	for i := 0; i < 3; i++ {
		writeLog(datapath)
	}
}

 

Start the monitoring program and the writing program together (for example, go run logtailf.go and go run writefile.go in separate terminals). As shown below, while content is written to log.txt, logtailf.go picks it up dynamically, and when the file is renamed for backup, logtailf.go reports that the file was closed and reopened. In the end we get three backup files:
[2.jpg: output of logtailf.go while log.txt is written and rotated]

[3.jpg: the three backup files generated in logdir]

Summary

So far we have completed reading messages from Kafka, file monitoring, and dynamic log writing with backup. Next we will implement the project configuration and put the overall code together.
Source download
https://github.com/secondtonone1/golang-
Thanks for following my WeChat official account.
[wxgzh.jpg: WeChat official account]

 


Original post: www.cnblogs.com/secondtonone1/p/11944360.html