Say up front:
It is very difficult to get an offer now; it is hard even to get a call for an interview.
In Nien's technical community (50+ groups), many members have landed very high-quality offers on the strength of the "cloud native in one hand, big data in the other" skill combination; year-end bonuses reportedly run as high as 18 months' salary.
One case: a while back, a developer with 2 years of experience wanted to raise his salary to 18K. Nien wrote a Go-language project architecture into his resume, which made it shine; it was strong enough for 30K offers from companies like ByteDance and Tencent, an annual salary increase of roughly 200K RMB.
Another case: a developer with 6 years of experience, relying on a Java + Go dual-language cloud-native architecture, earns an annual salary of 600K RMB.
Judging from high-paying Java and senior positions, cloud native, K8S, and Go are becoming more and more important for senior engineers and architects.
Therefore, from an architect's perspective, Nien wrote the "GO Study Bible" based on Nien's "3-high" architecture knowledge universe (high concurrency, high availability, high performance); please go to the end of the article [Technical Freedom Circle] to get it.
The completed contents of "GO Study Bible" include:
Go Study Bible: 0 Basics Proficient in GO Development and High Concurrency Architecture
Go Study Bible: Queue peak shaving + batch writing ultra-high concurrency principle and practice
The ultimate goal of the GO Study Bible PDF
Our goal is not only the freedom of GO application programming, but also the freedom of GO architecture.
In addition, Nien's cloud-native series did not previously cover Go, but cloud native without Go is incomplete.
Therefore, after finishing the Go language and Go architecture, we will circle back and complete the second part of cloud native, "Istio + K8S CRD architecture and development practice", to help everyone thoroughly master cloud native.
Article Directory
- Say up front:
- Design and development of Restful API interface layer for business CRUD
- Design and development of Restful API service layer for business CRUD
- Dao layer design and development of business CRUD
- Write public components
- Paging Response Handling
- Swagger interface documentation
- Perform parameter verification for the interface
- Module Development: Commodity Management
- "Golang Bible" still has 50,000 words to be released
- References
- The realization path of technical freedom PDF:
Design and development of Restful API interface layer for business CRUD
This article walks through a simple commodity-microservice CRUD case.
Design of the Restful API interface for business CRUD
Following the design conventions for CRUD Restful API interfaces, the interface design looks like the following.
| Function | HTTP method | Path |
|---|---|---|
| Create a new article | POST | /articles |
| Delete the specified article | DELETE | /articles/:id |
| Update the specified article | PUT | /articles/:id |
| Get the specified article | GET | /articles/:id |
| Get the article list | GET | /articles |
In a RESTful CRUD API, the HTTP methods map to the following behaviors:
- GET: read/retrieve.
- POST: add/create.
- PUT: update a complete resource; required to be idempotent.
- PATCH: update part of a resource. When only a single field of the resource needs updating, use PATCH instead of PUT; it is not necessarily idempotent.
- DELETE: delete.
Restful API route management, similar to SpringMVC controllers, is placed in the internal/routers directory. Create a new router.go file there; reference code:
package routers

import (
	"github.com/gin-gonic/gin"

	// assumed import path, based on the module path used elsewhere in this project
	v1 "crazymakercircle.com/gin-rest/internal/routers/api/v1"
)

func NewRouter() *gin.Engine {
r := gin.New()
r.Use(gin.Logger())
r.Use(gin.Recovery())
article := v1.NewArticle()
apiv1 := r.Group("/api/v1")
{
apiv1.POST("/articles", article.Create)
apiv1.DELETE("/articles/:id", article.Delete)
apiv1.PUT("/articles/:id", article.Update)
apiv1.PATCH("/articles/:id/state", article.Update)
apiv1.GET("/articles/:id", article.Get)
apiv1.GET("/articles", article.List)
}
return r
}
Design and implementation of the Handler processor
The Handler processor corresponds to the Controller in SpringMVC. It is placed in the internal/routers/api/v1 folder, in a file named article.go.
The reference code is as follows:
package v1

import "github.com/gin-gonic/gin"

type Article struct{}

func NewArticle() Article {
	return Article{}
}
func (a Article) Get(c *gin.Context) {
}
func (a Article) List(c *gin.Context) {
}
func (a Article) Create(c *gin.Context) {
}
func (a Article) Update(c *gin.Context) {
}
func (a Article) Delete(c *gin.Context) {
}
Start the Gin web server
After completing the model and routing code, modify the main.go file so it becomes the project's startup file.
The modified code is as follows:
package main

import (
	"log"
	"net/http"
	"time"

	"crazymakercircle.com/gin-rest/internal/routers"
)

func main() {
	router := routers.NewRouter()
	s := &http.Server{
		Addr:           ":8080",
		Handler:        router,
		ReadTimeout:    10 * time.Second,
		WriteTimeout:   10 * time.Second,
		MaxHeaderBytes: 1 << 20,
	}
	if err := s.ListenAndServe(); err != nil {
		log.Fatal(err)
	}
}
By customizing http.Server, we set basic parameters such as the TCP endpoint to listen on, the handler, the maximum allowed read/write durations, and the maximum request-header size, and finally call the ListenAndServe method to start listening.
Design and development of Restful API service layer for business CRUD
The responsibilities and functions of the service layer are similar to those of the SpringMVC service layer.
Base class for service layer
package service
import (
"context"
"crazymakercircle.com/gin-rest/common/global"
"crazymakercircle.com/gin-rest/internal/dao"
otgorm "github.com/eddycjy/opentracing-gorm"
)
type Service struct {
ctx context.Context
dao *dao.Dao
}
func New(ctx context.Context) Service {
	svc := Service{ctx: ctx}
	// without tracing it would be: svc.dao = dao.New(global.DBEngine)
	svc.dao = dao.New(otgorm.WithContext(svc.ctx, global.DBEngine))
	return svc
}
The base class encapsulates two objects:
- context object of type context.Context
- dao layer base object of type dao.Dao
context object of type context.Context
In the Go http package's Server, each request is handled in its own goroutine.
A request handler often needs further concurrency itself, launching additional goroutines to access backend services such as databases and RPC services.
The db goroutine, the rpc goroutine, and the request goroutine all execute concurrently.
However, when the request is canceled, all goroutines working on that request should exit quickly, so that the system can promptly release the resources they occupy.
The principle of context
Go 1.7 added the context standard library, which defines the Context type.
A Context carries request-scoped data, cancellation signals, and deadlines across the multiple goroutines (and API calls) involved in a single web request.
Incoming requests to a server should create a Context, and outgoing calls from a server should accept one. The chain of function calls between them must propagate the Context, possibly replaced by a derived Context created with WithCancel, WithDeadline, WithTimeout, or WithValue.
Note: when a Context is canceled, all Contexts derived from it are also canceled.
Context interface
context.Context is an interface that defines four methods that need to be implemented. The specific signature is as follows:
type Context interface {
	Deadline() (deadline time.Time, ok bool)
	Done() <-chan struct{}
	Err() error
	Value(key interface{}) interface{}
}
Where:
- The Deadline method returns the time when the current Context will be canceled, i.e. the deadline for completing the work;
- The Done method returns a channel that is closed when the work is done or the Context is canceled; calling Done multiple times returns the same channel;
- The Err method returns the reason the current Context ended, and returns a non-nil value only after the channel returned by Done is closed;
- If the current Context was canceled, it returns a Canceled error;
- If the current Context timed out, it returns a DeadlineExceeded error;
- The Value method returns the value associated with a key in the Context. For the same Context, calling Value multiple times with the same key returns the same result. This method is only for passing request-scoped data across API boundaries and processes.
Built-in functions Background() and TODO()
Go provides two built-in constructors, Background() and TODO(), which return empty implementations of the Context interface.
At the start of our code, these two built-in contexts serve as the top-level parent contexts from which child contexts are derived.
Background() is mainly used in main, initialization, and test code, as the topmost Context of the Context tree, i.e. the root Context.
TODO() is for when you don't yet know which Context to use.
Both are essentially emptyCtx values: they cannot be canceled, have no deadline, and carry no values.
Context's With series of functions
Four With series functions are defined in the context package.
Incoming requests to the server should create a Context and outgoing calls to the server should accept a Context. The chain of function calls between them must propagate the Context, optionally replacing it with a derived Context created using WithCancel, WithDeadline, WithTimeout, or WithValue.
When a context Context is cancelled, all contexts derived from it are also cancelled.
WithCancel
The function signature of WithCancel is as follows:
func WithCancel(parent Context) (ctx Context, cancel CancelFunc)
WithCancel returns a copy of the parent with a new Done channel. The returned context's Done channel is closed when the returned cancel function is called or when the parent context's Done channel is closed, whichever happens first.
Canceling this context releases the resources associated with it, so code should call cancel as soon as operations running in this context complete.
package main

import (
	"context"
	"fmt"
)

func generate(ctx context.Context) <-chan int {
	dst := make(chan int)
	n := 1
	go func() {
		for {
			select {
			case <-ctx.Done():
				return // exit the goroutine to prevent a leak
			case dst <- n:
				n++
			}
		}
	}()
	return dst
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // call cancel once we have consumed the integers we need
for n := range generate(ctx) {
fmt.Println(n)
if n == 5 {
break
}
}
}
In the example code above, the generate function generates integers in a separate goroutine and sends them to the returned channel.
The caller of generate must cancel the context after consuming the generated integers, to avoid leaking the goroutine started inside generate.
WithDeadline
The function signature of WithDeadline is as follows:
func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc)
WithDeadline returns a copy of the parent context with its deadline adjusted to be no later than d.
If the parent's deadline is already earlier than d, WithDeadline(parent, d) is semantically equivalent to the parent.
The returned context's Done channel is closed when the deadline expires, when the returned cancel function is called, or when the parent context's Done channel is closed, whichever happens first.
Canceling this context releases the resources associated with it, so code should call cancel as soon as operations running in this context complete.
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
d := time.Now().Add(50 * time.Millisecond)
ctx, cancel := context.WithDeadline(context.Background(), d)
// Even though ctx will expire, calling its cancel function is good practice in any case.
// Failing to do so may keep the context and its parent alive longer than necessary.
defer cancel()
select {
case <-time.After(1 * time.Second):
fmt.Println("overslept")
case <-ctx.Done():
fmt.Println(ctx.Err())
}
}
In the code above, we define a deadline 50 milliseconds in the future,
call context.WithDeadline(context.Background(), d) to obtain a context (ctx) and a cancel function, and then use a select to make the main goroutine wait: either sleep for 1 second and print overslept, or exit when ctx expires.
Because ctx expires after 50 milliseconds, ctx.Done() fires first, and the code prints the cancellation reason from ctx.Err().
WithTimeout
The function signature of WithTimeout is as follows:
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc)
WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)).
Canceling this context releases the resources associated with it, so code should call cancel as soon as the operations running in this context complete. WithTimeout is typically used for timeout control of database or network connections.
Specific examples are as follows:
package main
import (
"context"
"fmt"
"sync"
"time"
)
// context.WithTimeout
var wg sync.WaitGroup
func worker(ctx context.Context) {
LOOP:
for {
fmt.Println("db connecting ...")
time.Sleep(time.Millisecond * 10) // assume a normal database connection takes 10 milliseconds
select {
case <-ctx.Done(): // closed automatically after the 50-millisecond timeout
break LOOP
default:
}
}
fmt.Println("worker done!")
wg.Done()
}
func main() {
// set a 50-millisecond timeout
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*50)
wg.Add(1)
go worker(ctx)
time.Sleep(time.Second * 5)
cancel() // notify the child goroutine to stop
wg.Wait()
fmt.Println("over")
}
WithValue
The WithValue function can establish a relationship between the data in the request scope and the Context object.
WithValue is declared as follows:
func WithValue(parent Context, key, val interface{}) Context
WithValue returns a copy of the parent in which the value associated with key is val.
The following example uses context to propagate a trace code to all goroutines handling a request.
package main
import (
"context"
"fmt"
"sync"
"time"
)
// context.WithValue
type TraceCode string
var wg sync.WaitGroup
func worker(ctx context.Context) {
key := TraceCode("TRACE_CODE")
traceCode, ok := ctx.Value(key).(string) // retrieve the trace code in the child goroutine
if !ok {
fmt.Println("invalid trace code")
}
LOOP:
for {
fmt.Printf("worker, trace code:%s\n", traceCode)
time.Sleep(time.Millisecond * 10) // assume a normal database connection takes 10 milliseconds
select {
case <-ctx.Done(): // closed automatically after the 50-millisecond timeout
break LOOP
default:
}
}
fmt.Println("worker done!")
wg.Done()
}
func main() {
// set a 50-millisecond timeout
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*50)
// set the trace code at the system entry point and pass it to later goroutines for log aggregation
ctx = context.WithValue(ctx, TraceCode("TRACE_CODE"), "12512312234")
wg.Add(1)
go worker(ctx)
time.Sleep(time.Second * 5)
cancel() // notify the child goroutine to stop
wg.Wait()
fmt.Println("over")
}
The break LOOP above breaks out of the labeled loop.
In Go, a break statement can terminate a for, switch, or select block.
A label can be attached after break to exit the block that the label marks;
the label must be defined on the corresponding for, switch, or select block.
A final note: the key used with context.WithValue must be comparable, and it should not be a string or any other built-in type, to avoid collisions between packages using the same context. The key should be a user-defined type.
Notes on using Context
- Rule 1: It is recommended to explicitly pass the Context as a function parameter
- Rule 2: Function methods that take Context as a parameter should take Context as the first parameter.
- Rule 3: When passing Context to a function method, do not pass nil. If you don’t know what to pass, use context.TODO()
- Rule 4: Context's Value-related methods should carry required request-scoped data, not optional parameters
- Rule 5: Context is thread-safe and can be safely passed among multiple goroutines
Nien's reminder: in our case, for coding simplicity, the Context is not explicitly passed as a parameter; instead it is encapsulated in the service base class, which violates rule 1.
Dao layer design and development of business CRUD
The base class of the Dao layer
package dao
import "github.com/jinzhu/gorm"
type Dao struct {
engine *gorm.DB
}
func New(engine *gorm.DB) *Dao {
return &Dao{engine: engine}
}
Note that this engine is injected from outside: it is constructed externally.
When is the gorm.DB ORM engine object created? When the service is built:
type Service struct {
ctx context.Context
dao *dao.Dao
}
func New(ctx context.Context) Service {
svc := Service{ctx: ctx}
//svc.dao = dao.New(global.DBEngine)
svc.dao = dao.New(otgorm.WithContext(svc.ctx, global.DBEngine))
return svc
}
When building a service, a dao object is built, and a global gorm data persistence component object is injected as a parameter.
Look at the definition of this global global.DBEngine object:
package global
import (
"github.com/elastic/go-elasticsearch/v8"
rdb "github.com/go-redis/redis/v8"
"github.com/gomodule/redigo/redis"
"github.com/jinzhu/gorm"
"github.com/opentracing/opentracing-go"
"github.com/sirupsen/logrus"
"go.mongodb.org/mongo-driver/mongo"
)
var (
DBEngine *gorm.DB
RedisPool *redis.Pool
Redis *rdb.Client
Logger *logrus.Logger
Tracer opentracing.Tracer
Es *elasticsearch.Client
Mongo *mongo.Client
ModelPath string
ModelReplace string
)
Initialization of the global.DBEngine object
The global global.DBEngine object is initialized in the model base class of the model module.
package model
import (
"fmt"
"crazymakercircle.com/gin-rest/common/global"
"crazymakercircle.com/gin-rest/pkg/toolkit/cast"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/mysql"
log "github.com/sirupsen/logrus"
"time"
cfg "crazymakercircle.com/gin-rest/internal/config"
)
func Init() {
var err error
// For read/write splitting, we could define two variables, one for reads
// and one for writes, initialize them separately, and use the appropriate
// connection for queries vs. writes (or keep both connections in a map).
global.DBEngine, err = NewDBEngine()
if err != nil {
panic(err)
}
}
func NewDBEngine() (*gorm.DB, error) {
dsn := cfg.AppDbDsn.Load()
driver := cfg.AppDbDriver.Load()
switch driver {
case "mysql":
log.Printf("%s:%s",
driver,
dsn)
default:
log.Fatalf("invalid db driver %v\n", cfg.AppDbDriver.Load())
}
db, err := gorm.Open("mysql", dsn)
if err != nil {
log.Fatalf("Open "+cfg.AppDbDriver.Load()+" failed. %v\n", err)
return nil, err
}
db.LogMode(true)
db.DB().SetConnMaxLifetime(cast.ToDuration(cfg.AppDbMaxLifetime.Load())) // max connection lifetime; older connections are closed
db.DB().SetMaxOpenConns(cast.ToInt(cfg.AppDbMaxOpens.Load())) // maximum number of open connections
db.DB().SetMaxIdleConns(cast.ToInt(cfg.AppDbMaxIdles.Load())) // maximum number of idle connections
// disable pluralized table names globally
db.SingularTable(true)
return db, nil
}
.....
model domain object layer
The model layer is similar to the PO layer in SpringMVC.
The following is a class of the model layer:
package model
import (
"crazymakercircle.com/gin-rest/pkg/app"
"github.com/jinzhu/gorm"
)
type ArticleSwagger struct {
List []*Article
Pager *app.Pager
}
type Article struct {
Id uint64 `json:"id"`
Title string `json:"title"`
Content string `json:"content"`
Introduction string `json:"introduction"`
Views int `json:"views"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"-"`
}
func (a Article) TableName() string {
return "article"
}
func (a Article) Count(db *gorm.DB) (int, error) {
var count int
if a.Title != "" {
db.Where("title like ?", "%"+a.Title+"%")
}
if err := db.Model(&a).Count(&count).Error; err != nil {
return 0, err
}
return count, nil
}
func (a Article) List(db *gorm.DB, pageOffset, pageSize int) ([]*Article, error) {
var list []*Article
if a.Title != "" {
db.Where("title like ?", "%"+a.Title+"%")
}
err := db.Limit(pageSize).Offset(pageOffset).Find(&list).Error
return list, err
}
Note the difference from SpringMVC here:
- SpringMVC's business model (domain object) is an anemic model, with no database operations of its own
- The model domain objects here follow the rich model
An anemic model object only ferries data between layers: it has data fields and Get/Set methods, but no logic.
The rich model is the essence of object-oriented design: an object has both state and behavior. Most business logic and persistence live in the domain objects, while the business layer only handles orchestration concerns such as transactions, permissions, and validation.
Of course, this only borrows the concept from DDD; it is not fully consistent with the DDD model.
The Dao layer here is deliberately thin: much of the ORM data-access logic is delegated to the model domain objects.
Using GORM chained operations for the ORM persistence layer
The code above uses GORM chained operations when accessing the database.
For example, the Count method uses chaining when querying Article data, as follows:
func (a Article) Count(db *gorm.DB) (int, error) {
var count int
if a.Title != "" {
db.Where("title like ?", "%"+a.Title+"%")
}
if err := db.Model(&a).Count(&count).Error; err != nil {
return 0, err
}
return count, nil
}
GORM allows chaining of operations, so you can write code like this:
db.Where("name = ?", "jinzhu").Where("age = ?", 18).First(&user)
There are three kinds of methods in GORM: chain methods, finisher methods, and new session methods.
Chain methods
Chain methods modify or add SQL clauses to the current Statement, for example: Where, Select, Omit, Joins, Scopes, Preload, etc.
db.Table("users").Select("name, email").Where("age > ?", 18).Find(&users)
The Select and Where above are chain methods.
Finisher methods
Finishers are methods that immediately execute the registered callbacks, then generate and execute the SQL, for example: Create, First, Find, Take, Save, Update, Delete, Scan, Row, Rows, etc.
db.Table("users").Select("name, email").Where("age > ?", 18).Find(&users)
The Find above is a finisher method: it generates and executes the SQL.
Here are a few finisher examples:
// Get the first record, ordered by primary key
db.First(&user)
// generated SQL: SELECT * FROM users ORDER BY id LIMIT 1;
// Get one record, with no specified order
db.Take(&user)
// generated SQL: SELECT * FROM users LIMIT 1;
// Get the last record, ordered by primary key
db.Last(&user)
// generated SQL: SELECT * FROM users ORDER BY id DESC LIMIT 1;
// Get all records
db.Find(&users)
// generated SQL: SELECT * FROM users;
// Query by primary key (only when the primary key is numeric)
db.First(&user, 10)
// generated SQL: SELECT * FROM users WHERE id = 10;
New session methods
After chain and finisher methods, GORM returns an initialized *gorm.DB instance. Note that such an instance is not safe to reuse: newly generated SQL may be polluted by previously accumulated conditions, for example:
queryDB := DB.Where("name = ?", "jinzhu")
queryDB.Where("age > ?", 10).First(&user)
// generated SQL: SELECT * FROM users WHERE name = "jinzhu" AND age > 10
queryDB.Where("age > ?", 20).First(&user2)
// generated SQL: SELECT * FROM users WHERE name = "jinzhu" AND age > 10 AND age > 20
The SQL generated by the second Where clearly should not carry the first Where's condition: the new SQL has been polluted by the earlier condition.
To reuse an initialized *gorm.DB instance, create a shareable one with the new session method, for example:
queryDB := DB.Where("name = ?", "jinzhu").Session(&gorm.Session{})
queryDB.Where("age > ?", 10).First(&user)
// SELECT * FROM users WHERE name = "jinzhu" AND age > 10
queryDB.Where("age > ?", 20).First(&user2)
// SELECT * FROM users WHERE name = "jinzhu" AND age > 20
For an introduction to GORM sessions, see the link below:
https://gorm.cn/zh_CN/docs/session.html
Write public components
Every project has a class of components often called basic components, or public components: they have no strong business attributes, yet tie the entire application together.
Public components are written and maintained centrally by architects, senior developers, or the platform team. Without centralized management, each business team writes its own set, which leads to duplicated construction and duplicated development, and makes code reuse and porting across business teams difficult.
There are many public components; here are brief introductions to a few.
Error-handling component
When an application interacts with a client, there are two kinds of responses: a result set for a successful response, and an error code plus message body for an error response, telling the client what happened to the request and why it failed.
Therefore, an important preparatory task at the start of a new project is to standardize the error-code format, so that clients "understand" our error-code rules and don't have to implement a new set every time.
Define common error codes
Create a new common_code.go file in the pkg/errcode directory of the project, to predefine some common error codes and guide consistent usage, as follows:
var (
	Success = NewError(0, "success")
	ServerError = NewError(10000000, "internal server error")
	InvalidParams = NewError(10000001, "invalid input parameters")
	NotFound = NewError(10000002, "not found")
	UnauthorizedAuthNotExist = NewError(10000003, "authentication failed: matching AppKey and AppSecret not found")
	UnauthorizedTokenError = NewError(10000004, "authentication failed: invalid token")
	UnauthorizedTokenTimeout = NewError(10000005, "authentication failed: token expired")
	UnauthorizedTokenGenerate = NewError(10000006, "authentication failed: token generation failed")
	TooManyRequests = NewError(10000007, "too many requests")
)
Define common handling methods
Create a new errcode.go file in the pkg/errcode directory of the project, and write some common error-handling methods to standardize error output, as follows:
type Error struct {
code int `json:"code"`
msg string `json:"msg"`
details []string `json:"details"`
}
var codes = map[int]string{}
func NewError(code int, msg string) *Error {
if _, ok := codes[code]; ok {
panic(fmt.Sprintf("error code %d already exists, please use another one", code))
}
codes[code] = msg
return &Error{code: code, msg: msg}
}
func (e *Error) Error() string {
return fmt.Sprintf("error code: %d, error message: %s", e.Code(), e.Msg())
}
func (e *Error) Code() int {
return e.code
}
func (e *Error) Msg() string {
return e.msg
}
func (e *Error) Msgf(args []interface{}) string {
return fmt.Sprintf(e.msg, args...)
}
func (e *Error) Details() []string {
return e.details
}
func (e *Error) WithDetails(details ...string) *Error {
newError := *e
newError.details = []string{}
for _, d := range details {
newError.details = append(newError.details, d)
}
return &newError
}
func (e *Error) StatusCode() int {
switch e.Code() {
case Success.Code():
return http.StatusOK
case ServerError.Code():
return http.StatusInternalServerError
case InvalidParams.Code():
return http.StatusBadRequest
case UnauthorizedAuthNotExist.Code():
fallthrough
case UnauthorizedTokenError.Code():
fallthrough
case UnauthorizedTokenGenerate.Code():
fallthrough
case UnauthorizedTokenTimeout.Code():
return http.StatusUnauthorized
case TooManyRequests.Code():
return http.StatusTooManyRequests
}
return http.StatusInternalServerError
}
In these error-code methods, the Error struct represents an error response, and codes serves as the global registry of error codes: it makes it easy to inspect what is currently registered, and to detect duplicates when NewError creates a new Error instance.
The StatusCode method is a special case: it maps specific internal error codes to HTTP status codes. Different internal error codes carry different meanings at the HTTP level, and distinguishing them makes identification easier for clients and for monitoring/alerting systems.
common logging component
Generally speaking, application code logs directly with the Go standard library log.
What's wrong with using the standard library for log output?
In a project, logs need to record certain common information in a standardized way, such as the call stack, the request trace ID, and common business fields. Logging directly with the standard library gives us none of this data and is not flexible enough.
Complete log information is crucial when troubleshooting and debugging, so the application should have a standard log component for unified processing and output.
Use of logrus log components
The logrus library is a structured, pluggable logging library,
fully compatible with the log module in the Go standard library.
It has two built-in log output formats, JSONFormatter and TextFormatter.
GitHub address: https://github.com/sirupsen/logrus
Here the logrus log component is used as the basic component.
Log component initialization
Create a new logger directory under pkg/ in the project, create a logrus.go file in it, and write the log-component initialization code:
package logger
import (
"crazymakercircle.com/gin-rest/common/global"
"crazymakercircle.com/gin-rest/internal/config"
"crazymakercircle.com/gin-rest/pkg/helper/files"
"github.com/evalphobia/logrus_sentry"
"github.com/sirupsen/logrus"
)
func Init() {
global.Logger = logrus.New()
if config.AppSentryDsn.Load() != "" {
hook, err := logrus_sentry.NewSentryHook(config.AppSentryDsn.Load(), []logrus.Level{
logrus.PanicLevel,
logrus.FatalLevel,
logrus.ErrorLevel,
})
if err == nil {
global.Logger.Hooks.Add(hook)
hook.Timeout = 0
hook.StacktraceConfiguration.Enable = true
}
}
// set the log format to JSON
global.Logger.SetFormatter(&logrus.JSONFormatter{})
// set file output
f, logFilePath := files.LogFile()
// log output can be any io.Writer; here we get a file handle and write logs to the file
global.Logger.SetOutput(f)
// log at debug level and above
global.Logger.SetLevel(logrus.DebugLevel)
// report the caller's file name and line number
global.Logger.SetReportCaller(true)
// add the rotatelogs log-rotation hook
global.Logger.AddHook(NewLfsHook(logFilePath))
}
package global variables
After writing the log library code, we need to define a Logger object for the application to use.
Open the common/global/global.go file in the project directory and add the following:
var (
...
Logger *logrus.Logger
)
This adds a Logger object to the package's global variables, to be initialized by the log component.
Paging Response Handling
Create a new pagination.go file in the pkg/app directory of the project, as follows:
package app
import (
"crazymakercircle.com/gin-rest/internal/config"
"crazymakercircle.com/gin-rest/pkg/toolkit/cast"
"crazymakercircle.com/gin-rest/pkg/toolkit/convert"
"github.com/gin-gonic/gin"
)
func GetPage(c *gin.Context) int {
page := convert.Str(c.Query("page")).ToInt()
if page <= 0 {
return 1
}
return page
}
func GetPageSize(c *gin.Context) int {
pageSize := convert.Str(c.Query("page_size")).ToInt()
if pageSize <= 0 {
return cast.ToInt(config.AppDefaultPageSize.Load())
}
return pageSize
}
func GetPageOffset(page, pageSize int) int {
result := 0
if page > 0 {
result = (page - 1) * pageSize
}
return result
}
response processing
Create a new app.go file in the pkg/app directory of the project, as follows:
package app
import (
"bytes"
"crazymakercircle.com/gin-rest/common/dict"
"crazymakercircle.com/gin-rest/pkg/helper/gjson"
"github.com/gin-gonic/gin"
"io/ioutil"
"net/http"
"time"
)
type Response struct {
	Code int `json:"code"`
	Msg string `json:"msg"`
	Data interface{} `json:"data"`
	Elapsed float64 `json:"elapsed"`
}

type Pager struct {
	Page int `json:"page"`
	PageSize int `json:"page_size"`
	TotalRows int `json:"total_rows"`
}
// Success returns a normal (successful) response
func Success(c *gin.Context, data interface{}) {
	if data == nil {
		data = make([]string, 0)
	}
	response := Response{Code: 0, Msg: "success", Data: data, Elapsed: GetElapsed(c)}
	c.Set("responseData", response)
	c.JSON(http.StatusOK, response)
}
// SuccessList returns a paginated response
func SuccessList(c *gin.Context, list interface{}, totalRows int) {
	data := gin.H{
		"list": list,
		"pager": Pager{
			Page:      GetPage(c),
			PageSize:  GetPageSize(c),
			TotalRows: totalRows,
		},
	}
	e := dict.Success
	response := Response{Code: e.Code(), Msg: e.Msg(), Data: data, Elapsed: GetElapsed(c)}
	c.Set("responseData", response)
	c.JSON(http.StatusOK, response)
}
// Error returns a response built from the shared error dictionary
func Error(c *gin.Context, err *dict.Error) {
	response := Response{Code: err.Code(), Msg: err.Msg(), Elapsed: GetElapsed(c)}
	details := err.Details()
	if err.Level() == "" {
		// The default error level is warn, which is not reported to sentry
		err = err.WithLevel("warn")
	}
	SetLevel(c, err.Level())
	if len(details) > 0 {
		SetDetail(c, err.Details())
		if err.Level() != dict.LevelError {
			response.Data = details
		}
	}
	c.Set("responseData", response)
	c.JSON(err.StatusCode(), response)
}
func SetLevel(c *gin.Context, level interface{}) {
	c.Set("level", level)
}
func SetDetail(c *gin.Context, detail interface{}) {
	c.Set("detail", detail)
}
func GetLevel(c *gin.Context) interface{} {
	return Get(c, "level")
}
func GetDetail(c *gin.Context) interface{} {
	return Get(c, "detail")
}
func Get(c *gin.Context, key string) interface{} {
	val, _ := c.Get(key)
	return val
}
func GetElapsed(c *gin.Context) float64 {
elapsed := 0.00
if requestTime := Get(c, "beginTime"); requestTime != nil {
elapsed = float64(time.Since(requestTime.(time.Time))) / 1e9
}
return elapsed
}
func JsonParams(c *gin.Context) map[string]interface{} {
	b, err := ioutil.ReadAll(c.Request.Body)
	if err != nil {
		panic(err)
	}
	// Put the consumed body back, otherwise ShouldBindJSON cannot bind the parameters
	c.Request.Body = ioutil.NopCloser(bytes.NewBuffer(b))
	return gjson.JsonDecode(string(b))
}
func Params(c *gin.Context) string {
	b, err := ioutil.ReadAll(c.Request.Body)
	if err != nil {
		panic(err)
	}
	// Put the consumed body back, otherwise ShouldBindJSON cannot bind the parameters
	c.Request.Body = ioutil.NopCloser(bytes.NewBuffer(b))
	return string(b)
}
Using the paginated response
We can pick one of the interface methods and call the helpers above, as follows:
func (a *Article) ArticleList(c *gin.Context) {
	param := struct {
		Title string `form:"title" binding:"max=100"`
	}{}
	valid, errs := app.BindAndCheck(c, &param)
	if !valid {
		app.Error(c, dict.InvalidParams.WithDetails(errs.Errors()...))
		return
	}
	pager := app.Pager{Page: app.GetPage(c), PageSize: app.GetPageSize(c)}
	svc := service.New(c.Request.Context())
	totalRows, err := svc.CountArticle(param.Title)
	if err != nil {
		app.Error(c, dict.ErrGetArtCountFail)
		return
	}
	articles, err := svc.GetArticleList(param.Title, &pager)
	if err != nil {
		app.Error(c, dict.ErrGetArtListFail)
		return
	}
	for _, article := range articles {
		num, err := redigo.GetNum("article" + strconv.Itoa(int(article.Id)))
		if err != nil {
			num = 1
		}
		article.Views += num
	}
	// Return the paginated result
	app.SuccessList(c, articles, totalRows)
	return
}
Swagger interface documentation
Maintaining REST API interface documentation is a problem most developers have run into: front-end, back-end, and test engineers all need to read it, and if everyone keeps their own copy, keeping the copies in sync becomes a major headache.
There are plenty of solutions on the market for this problem, and Swagger is among the best: it is comprehensive, mature, and comes with a rich surrounding ecosystem.
It is designed around the standard OpenAPI specification. As long as you write annotations that follow this specification, or generate annotations by scanning the code, you can produce uniform, standard interface documents and feed the whole family of Swagger tools.
OpenAPI & Swagger
We mentioned OpenAPI above, and you may wonder: what is the relationship between OpenAPI and Swagger?
The OpenAPI specification was donated to the Linux Foundation by the OpenAPI Initiative in 2015. Swagger, in turn, provides a large toolset built around the OpenAPI specification, which uses the specification to map and generate all associated resources and operations for viewing and calling RESTful interfaces. This is why we often say that Swagger is not just a "standard" but also a framework.
In terms of functional use, the OpenAPI specification can help us describe the basic information of an API, such as:
- A description of the API.
- Available paths (/resources).
- Available operations on each path (GET/POST, ...).
- The input/output format for each operation.
Install Swagger
Swagger's toolset generates various kinds of interface documentation according to the OpenAPI specification. The common workflow is: write annotations → run the generation library → produce a standard description file → generate/import it into the corresponding Swagger tools.
Therefore, the next step is to install Go's open-source Swagger libraries. Execute the install commands in the project root directory, as follows:
$ go get -u github.com/swaggo/swag/cmd/[email protected]
$ go get -u github.com/swaggo/[email protected]
$ go get -u github.com/swaggo/files
$ go get -u github.com/alecthomas/template
Verify whether the installation is successful, as follows:
$ swag -v
swag version v1.6.5
If the command line prompts that the swag file cannot be found, you can check whether the corresponding bin directory has been added to the environment variable PATH.
Writing annotations
After installing the Swagger libraries, we need to write annotations for the project's API interfaces so that the documents can be generated correctly later. We will use the following annotations:
| Annotation | Description |
|---|---|
| @Summary | Summary of the interface |
| @Produce | List of MIME types the API can produce; roughly the response type, e.g. json, xml, html |
| @Param | Parameter description, from left to right: parameter name, input type, data type, required, comment |
| @Success | Successful response, from left to right: status code, parameter type, data type, comment |
| @Failure | Failure response, from left to right: status code, parameter type, data type, comment |
| @Router | Route, from left to right: route path, HTTP method |
Writing swagger annotations
We switch to the project's internal/routers/api/ directory, open the article.go file, and add swagger annotations above the ArticleList method:
// ArticleList
// @Summary 获取列表
// @Produce json
// @Param name query string false "名称" maxlength(100)
// @Param page query int false "页码"
// @Param page_size query int false "每页数量"
// @Success 200 {object} model.Article "成功"
// @Failure 400 {object} dict.Error "请求错误"
// @Failure 500 {object} dict.Error "内部错误"
// @Router /api/articles [get]
func (a *Article) ArticleList(c *gin.Context) {
Here we only show how to annotate one interface; following the meaning of each annotation and this example, you can complete the annotations for the other interfaces yourself.
Generation of swagger configuration files
After completing all the annotation writing, we return to the project root directory and execute the following command:
$ swag init
After executing the command, you will find that three files have been generated in the docs folder: docs.go, swagger.json, and swagger.yaml.
Registration of swagger middleware
After the annotations are written, swag init generates all the files the Swagger API requires. So how do we access the interface documentation?
It is actually very simple: we only need to perform the default initialization and register the corresponding route. Open the router.go file in the project's internal/routers directory and add the following code:
import (
...
_ "github.com/go-programming-tour-book/blog-service/docs"
ginSwagger "github.com/swaggo/gin-swagger"
"github.com/swaggo/gin-swagger/swaggerFiles"
)
func NewRouter() *gin.Engine {
r := gin.New()
r.Use(gin.Logger())
r.Use(gin.Recovery())
r.GET("/swagger/*any", ginSwagger.WrapHandler(swaggerFiles.Handler))
...
return r
}
On the surface this does two things: initialize the docs package and register a route for Swagger. Once the docs package is initialized, its swagger.json is by default served from the /swagger/doc.json path under the domain the application runs on. If you have other requirements, you can specify the URL manually, as follows:
url := ginSwagger.URL("http://127.0.0.1:8000/swagger/doc.json")
r.GET("/swagger/*any", ginSwagger.WrapHandler(swaggerFiles.Handler, url))
Viewing the interface documentation through Swagger
http://localhost:9099/swagger/index.html
After completing the settings above, restart the server and visit the Swagger address in a browser; you will see the Swagger documentation page.
It is divided into three parts: the project's main information, the interface routing information, and the model information. Together these three parts make up the main content of the documentation.
What happens behind Swagger
You may be wondering: I only initialized a docs package and registered a Swagger-related route, so how is the Swagger document wired up, and where did the annotations I wrote on the interfaces go?
The answer lies mainly in the files generated by swag init, namely:
docs
├── docs.go
├── swagger.json
└── swagger.yaml
Initializing docs
In the first step we initialized the docs package, which corresponds to the docs.go file (the only Go source file in that directory). Its source code is as follows:
package docs
import (
"bytes"
"encoding/json"
"strings"
"github.com/alecthomas/template"
"github.com/swaggo/swag"
)
var doc = `{
"schemes": {{ marshal .Schemes }},
"swagger": "2.0",
"info": {
"description": "{{.Description}}",
"title": "{{.Title}}",
"contact": {},
"license": {},
"version": "{{.Version}}"
},
"host": "{{.Host}}",
"basePath": "{{.BasePath}}",
"paths": {
"/api/articles": {
"get": {
"produces": [
"application/json"
],
"summary": "获取列表",
"parameters": [
{
"maxLength": 100,
"type": "string",
"description": "名称",
"name": "name",
"in": "query"
},
{
"enum": [
0,
1
],
"type": "integer",
"default": 1,
"description": "状态",
"name": "state",
"in": "query"
},
{
"type": "integer",
"description": "页码",
"name": "page",
"in": "query"
},
{
"type": "integer",
"description": "每页数量",
"name": "page_size",
"in": "query"
}
],
"responses": {
"200": {
"description": "成功",
"schema": {
"$ref": "#/definitions/model.Article"
}
},
"400": {
"description": "请求错误",
"schema": {
"$ref": "#/definitions/dict.Error"
}
},
"500": {
"description": "内部错误",
"schema": {
"$ref": "#/definitions/dict.Error"
}
}
}
}
}
},
"definitions": {
"dict.Error": {
"type": "object",
"properties": {
"code": {
"type": "integer"
},
"details": {
"type": "array",
"items": {
"type": "string"
}
},
"level": {
"type": "string"
},
"msg": {
"type": "string"
}
}
},
"model.Article": {
"type": "object",
"properties": {
"content": {
"type": "string"
},
"id": {
"type": "integer"
},
"introduction": {
"type": "string"
},
"title": {
"type": "string"
}
}
}
}
}`
type swaggerInfo struct {
Version string
Host string
BasePath string
Schemes []string
Title string
Description string
}
// SwaggerInfo holds exported Swagger Info so clients can modify it
var SwaggerInfo = swaggerInfo{
	Version:     "1.0",
	Host:        "",
	BasePath:    "",
	Schemes:     []string{},
	Title:       "gin系统",
	Description: "gin开发的系统",
}
type s struct{}
func (s *s) ReadDoc() string {
	sInfo := SwaggerInfo
	sInfo.Description = strings.Replace(sInfo.Description, "\n", "\\n", -1)
	t, err := template.New("swagger_info").Funcs(template.FuncMap{
		"marshal": func(v interface{}) string {
			a, _ := json.Marshal(v)
			return string(a)
		},
	}).Parse(doc)
	if err != nil {
		return doc
	}
	var tpl bytes.Buffer
	if err := t.Execute(&tpl, sInfo); err != nil {
		return doc
	}
	return tpl.String()
}
func init() {
	swag.Register(swag.Name, &s{})
}
From reading the source we can see that when the docs package is initialized, its init function runs and registers the ReadDoc implementation with swag. The main logic is that, at generation time, swag scans the annotations in the project and renders the project information and interface routing information into the package-level variable doc according to the specification.
The ReadDoc method then performs the template mapping shown above to produce the final doc output.
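The template mechanics of ReadDoc can be reproduced with the standard library alone. The sketch below uses text/template instead of alecthomas/template and a shortened doc string, but the marshal helper works the same way:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// renderDoc shows the same idea as ReadDoc above: a "marshal" template
// helper JSON-encodes a value while the SwaggerInfo fields are spliced
// into the doc template.
func renderDoc() string {
	doc := `{"schemes": {{ marshal .Schemes }}, "title": "{{.Title}}"}`
	t := template.Must(template.New("swagger_info").Funcs(template.FuncMap{
		"marshal": func(v interface{}) string {
			b, _ := json.Marshal(v)
			return string(b)
		},
	}).Parse(doc))

	var tpl bytes.Buffer
	if err := t.Execute(&tpl, struct {
		Schemes []string
		Title   string
	}{Schemes: []string{"http"}, Title: "gin-rest"}); err != nil {
		return doc
	}
	return tpl.String()
}

func main() {
	fmt.Println(renderDoc()) // {"schemes": ["http"], "title": "gin-rest"}
}
```

The `{{ marshal .Schemes }}` placeholder in the generated docs.go is filled in exactly this way whenever doc.json is requested.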
Swagger route registration
In the previous step we saw where the generated annotation data lives, but how is it hooked up to the route? That happens in the call ginSwagger.WrapHandler(swaggerFiles.Handler):
func WrapHandler(h *webdav.Handler, confs ...func(c *Config)) gin.HandlerFunc {
	defaultConfig := &Config{URL: "doc.json"}
	...
	return CustomWrapHandler(defaultConfig, h)
}
So after calling WrapHandler, swag internally sets its default document URL to doc.json. But you may object: there is no doc.json among the files we generated, so where does it come from? Let's keep reading:
func CustomWrapHandler(config *Config, h *webdav.Handler) gin.HandlerFunc {
	...
		switch path {
		case "index.html":
			index.Execute(c.Writer, &swaggerUIBundle{
				URL: config.URL,
			})
		case "doc.json":
			doc, err := swag.ReadDoc()
			if err != nil {
				panic(err)
			}
			c.Writer.Write([]byte(doc))
			return
		default:
			h.ServeHTTP(c.Writer, c.Request)
		}
	}
}
In the CustomWrapHandler method we find a classic switch-case dispatch.
The first case handles index.html. What is that? Recall that we accessed the Swagger document through
http://localhost:9099/swagger/index.html
which corresponds exactly to this branch.
Performing parameter validation for the interface
Next we start coding in earnest. When developing each business module, the first thing to consider is how to validate the input parameters. We should settle on one component for the whole project, and even the whole team, so that a common convention forms; then we complete the input parameter validation for the module's interfaces.
Introduction to validator
In this project we use the open-source go-playground/validator as our base library. It is a validator that checks struct values and fields based on tags.
Do we need to introduce this library separately? Actually no: the gin framework we use already relies on go-playground/validator by default for its internal model binding and validation, which makes it very convenient.
Execute the command in the project root directory to install:
$ go get -u github.com/go-playground/validator/v10
Business interface verification
Next we formally start writing the validation rules for the interface input parameters, that is, we write the rules in the struct tags of the corresponding request structs. Common tags and their meanings are as follows:
| Tag | Meaning |
|---|---|
| required | required |
| gt | greater than |
| gte | greater than or equal to |
| lt | less than |
| lte | less than or equal to |
| min | minimum value |
| max | maximum value |
| oneof | value must be one of the listed set |
| len | length must equal the given value |
Article interface validation
Let's go back to the article.go file in the project's internal/service directory and add binding/validation structs for input parameter validation.
These rules are similar to those of the login module: mainly required, minimum/maximum length, and requiring a parameter value to fall within a given set.
The validation rules of the login module, for review, are as follows:
type LoginForm struct {
Username string `json:"username" binding:"required" validate:"required"`
Password string `json:"password" binding:"required" validate:"required"`
}
func Login(c *gin.Context) {
	var form LoginForm
	if err := c.ShouldBindWith(&form, binding.JSON); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	validate := validator.New()
	if err := validate.Struct(form); err != nil {
		c.JSON(http.StatusUnprocessableEntity, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{
		"msg": "login",
	})
	// TODO: handle the login logic
}
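Under the hood, tag-driven validation is plain struct-tag reflection. The toy sketch below is not the go-playground/validator implementation, just an illustration of how a `validate:"required"` tag can be read and enforced with the standard library:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type LoginForm struct {
	Username string `json:"username" validate:"required"`
	Password string `json:"password" validate:"required"`
}

// checkRequired is a toy validator: it walks the struct fields and reports
// any string field tagged `validate:"required"` that is left empty.
func checkRequired(v interface{}) []string {
	var missing []string
	t := reflect.TypeOf(v)
	val := reflect.ValueOf(v)
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("validate")
		if strings.Contains(tag, "required") && val.Field(i).String() == "" {
			missing = append(missing, t.Field(i).Name)
		}
	}
	return missing
}

func main() {
	fmt.Println(checkRequired(LoginForm{Username: "nien"})) // [Password]
}
```

The real library does far more (type coercion, cross-field rules, translations), but the core loop is the same: reflect over fields, read the tag, apply the rule.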
Module development: product management
As a reference, we now formally enter the business-logic development of a real module: product management.
First, design the REST API. The interfaces involved are as follows:
| Function | HTTP method | Path |
|---|---|---|
| Create product | POST | /seckillskus |
| Delete specified product | DELETE | /seckillskus/:id |
| Update specified product | PUT | /seckillskus/:id |
| Get product list | GET | /seckillskus |
Product model (domain layer) development
First, based on the seckill_sku table in the database, produce a simple model and place it in the project's internal/model directory as the model file seckillsku.go.
Next, add the ORM operations to the seckillsku.go file; we only add the methods related to the ORM entity. The code is as follows:
package model
import (
"github.com/jinzhu/gorm"
"time"
)
type SeckillSku struct {
SkuId int64 `json:"sku_id"` // 商品id
CostPrice float32 `json:"cost_price"` // 秒杀价格
CreateTime time.Time `json:"create_time"`
EndTime time.Time `json:"end_time"`
SkuImage string `json:"sku_image"`
SkuPrice float32 `json:"sku_price"` // 价格
StartTime time.Time `json:"start_time"`
StockCount int `json:"stock_count"` // 剩余库存
SkuTitle string `json:"sku_title"`
RawStock int `json:"raw_stock"` // 原始库存
ExposedKey string `json:"exposed_key"` // 秒杀md5
}
func (s SeckillSku) TableName() string {
return "seckill_sku"
}
func (s SeckillSku) Count(db *gorm.DB) (int, error) {
var count int
if s.SkuTitle != "" {
db = db.Where("sku_title = ?", s.SkuTitle)
}
if err := db.Model(&s).Count(&count).Error; err != nil {
return 0, err
}
return count, nil
}
func (s SeckillSku) List(db *gorm.DB, pageOffset, pageSize int) ([]*SeckillSku, error) {
var SeckillSkus []*SeckillSku
var err error
if pageOffset >= 0 && pageSize > 0 {
db = db.Offset(pageOffset).Limit(pageSize)
}
if s.SkuTitle != "" {
db = db.Where("sku_title = ?", s.SkuTitle)
}
if err = db.Find(&SeckillSkus).Error; err != nil {
return nil, err
}
return SeckillSkus, nil
}
func (s SeckillSku) Create(db *gorm.DB) error {
return db.Create(&s).Error
}
func (s SeckillSku) Update(db *gorm.DB) error {
return db.Model(&s).Where("sku_id = ? ", s.SkuId).Limit(1).Update(s).Error
}
func (s SeckillSku) Delete(db *gorm.DB) error {
return db.Where("sku_id = ?", s.SkuId).Delete(&s).Error
}
- Model : specify the model instance the DB operation runs against. By default the parsed struct name is used as the table name, converted from CamelCase to snake_case. When needed, you can also implement the struct's TableName method to return the table name explicitly.
- Where : Set filter conditions, accept map, struct or string as conditions.
- Offset : An offset that specifies the number of records to skip before starting to return records.
- Limit : Limit the number of records retrieved.
- Find : Find records that meet the filter criteria.
- Updates : Updates the selected fields.
- Delete : Delete data.
- Count : Statistical behavior, used to count the number of records of the model.
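The CamelCase-to-snake_case table naming mentioned above can be illustrated with a small conversion sketch (simplified: it ignores gorm's pluralization and initialism handling, and the TableName override used by SeckillSku):

```go
package main

import (
	"fmt"
	"unicode"
)

// toSnake mimics the CamelCase-to-snake_case conversion gorm applies to
// struct names when deriving table names (simplified sketch).
func toSnake(s string) string {
	var out []rune
	for i, r := range s {
		if unicode.IsUpper(r) {
			if i > 0 {
				out = append(out, '_')
			}
			out = append(out, unicode.ToLower(r))
		} else {
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(toSnake("SeckillSku")) // seckill_sku
}
```

Here the derived name happens to match the real table, which is why the explicit TableName method returning "seckill_sku" is mostly a safety net against pluralization surprises.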
Note that in the code above we chose to pass db *gorm.DB in as the first parameter of each function. Another common approach in the industry is to carry it in the struct. Both achieve the same purpose; readers can choose based on their situation (habits, project conventions, and so on), as each has its pros and cons.
Product dao layer development
We create a new seckillSku.go file in the project's internal/dao directory and write the following code:
package dao
import (
"crazymakercircle.com/gin-rest/internal/model"
"crazymakercircle.com/gin-rest/pkg/app"
)
func (d *Dao) CountSeckillSku(title string) (int, error) {
	tag := model.SeckillSku{SkuTitle: title}
	return tag.Count(d.engine)
}
func (d *Dao) GetSeckillSkuList(title string, page, pageSize int) ([]*model.SeckillSku, error) {
	seckillSku := model.SeckillSku{SkuTitle: title}
	pageOffset := app.GetPageOffset(page, pageSize)
	return seckillSku.List(d.engine, pageOffset, pageSize)
}
func (d *Dao) CreateSeckillSku(title string, rawStock int) error {
	tag := model.SeckillSku{
		SkuTitle: title,
		RawStock: rawStock,
	}
	return tag.Create(d.engine)
}
func (d *Dao) UpdateSeckillSku(id int64, title string) error {
	seckillSku := model.SeckillSku{
		SkuTitle: title,
		SkuId:    id,
	}
	return seckillSku.Update(d.engine)
}
func (d *Dao) DeleteSeckillSku(id int64) error {
	seckillSku := model.SeckillSku{SkuId: id}
	return seckillSku.Delete(d.engine)
}
In the code above, the dao layer encapsulates the data access object and maps the fields required by the business onto the model.
Product service layer development
We create a new service.go file in the project's internal/service directory and write the following code:
type Service struct {
ctx context.Context
dao *dao.Dao
}
func New(ctx context.Context) Service {
svc := Service{
ctx: ctx}
svc.dao = dao.New(global.DBEngine)
return svc
}
Next, create a seckillSku.go file at the same level to process the business logic of the product module, and write the following code:
package service
import (
"crazymakercircle.com/gin-rest/internal/model"
"crazymakercircle.com/gin-rest/pkg/app"
)
// Note: the fields must be exported, otherwise gin's binder cannot populate them.
type SeckillSkuListRequest struct {
	Title string `form:"title" binding:"max=100"`
}
type CreateSeckillSkuRequest struct {
	Title    string `form:"title" binding:"required,max=100"`
	RawStock int    `form:"rawStock,default=1" binding:"required"`
}
type UpdateSeckillSkuRequest struct {
	ID    int64  `form:"id" binding:"required,gte=0"`
	Title string `form:"title" binding:"required,max=100"`
}
type DeleteSeckillSkuRequest struct {
	ID int64 `form:"id" binding:"required,gte=0"`
}
func (svc *Service) CountSeckillSku(param *SeckillSkuListRequest) (int, error) {
	return svc.dao.CountSeckillSku(param.Title)
}
func (svc *Service) GetSeckillSkuList(param *SeckillSkuListRequest, pager *app.Pager) ([]*model.SeckillSku, error) {
	return svc.dao.GetSeckillSkuList(param.Title, pager.Page, pager.PageSize)
}
func (svc *Service) CreateSeckillSku(param *CreateSeckillSkuRequest) error {
	return svc.dao.CreateSeckillSku(param.Title, param.RawStock)
}
func (svc *Service) UpdateSeckillSku(param *UpdateSeckillSkuRequest) error {
	return svc.dao.UpdateSeckillSku(param.ID, param.Title)
}
func (svc *Service) DeleteSeckillSku(param *DeleteSeckillSkuRequest) error {
	return svc.dao.DeleteSeckillSku(param.ID)
}
The service layer mainly calls the methods in the dao; simple logic can of course also live in the service.
Here it is mainly for demonstration, so no extra business logic is added.
Add business error code
We write the following error codes for the commodity module in the common/dict/errcode.go file of the project:
var (
ErrorGetSeckillSkuListFail = NewError(20010001, "获取商品列表失败")
ErrorCreateSeckillSkuFail = NewError(20010002, "创建商品失败")
ErrorUpdateSeckillSkuFail = NewError(20010003, "更新商品失败")
ErrorDeleteSeckillSkuFail = NewError(20010004, "删除商品失败")
ErrorCountSeckillSkuFail = NewError(20010005, "统计商品失败")
)
Adding the product handler methods
We open the seckillsku.go file in the project's internal/api/ directory and write the following code:
func (t SeckillSku) List(c *gin.Context) {
	param := service.SeckillSkuListRequest{}
	response := app.NewCtxResponse(c)
	valid, errs := app.BindAndValid(c, &param)
	if !valid {
		global.Logger.Errorf("app.BindAndValid errs: %v", errs)
		response.ToErrorResponse(dict.InvalidParams.WithDetails(errs.Errors()...))
		return
	}
	svc := service.New(c.Request.Context())
	pager := app.Pager{Page: app.GetPage(c), PageSize: app.GetPageSize(c)}
	totalRows, err := svc.CountSeckillSku(&service.SeckillSkuListRequest{Title: param.Title})
	if err != nil {
		global.Logger.Errorf("svc.CountSeckillSku err: %v", err)
		response.ToErrorResponse(dict.ErrorCountSeckillSkuFail)
		return
	}
	tags, err := svc.GetSeckillSkuList(&param, &pager)
	if err != nil {
		global.Logger.Errorf("svc.GetSeckillSkuList err: %v", err)
		response.ToErrorResponse(dict.ErrorGetSeckillSkuListFail)
		return
	}
	response.ToCtxResponseList(tags, totalRows)
	return
}
In the code above, we have completed the handler for the product-list interface.
The method wires up four functional blocks, each with logging and error handling: input parameter validation and binding, getting the total product count, getting the product list, and serializing the result set.
We continue with the handlers for creating, updating, and deleting products, as follows:
func (t SeckillSku) Create(c *gin.Context) {
	param := service.CreateSeckillSkuRequest{}
	response := app.NewCtxResponse(c)
	valid, errs := app.BindAndValid(c, &param)
	if !valid {
		global.Logger.Errorf("app.BindAndValid errs: %v", errs)
		response.ToErrorResponse(dict.InvalidParams.WithDetails(errs.Errors()...))
		return
	}
	svc := service.New(c.Request.Context())
	if err := svc.CreateSeckillSku(&param); err != nil {
		global.Logger.Errorf("svc.CreateSeckillSku err: %v", err)
		response.ToErrorResponse(dict.ErrorCreateSeckillSkuFail)
		return
	}
	app.Success(c, map[string]string{"title": param.Title})
	return
}
func (t SeckillSku) Update(c *gin.Context) {
	response := app.NewCtxResponse(c)
	_, err := cast.ToInt64E(c.Param("id"))
	if err != nil {
		global.Logger.Errorf("svc.UpdateSeckillSku err: %v", err)
		response.ToErrorResponse(dict.ErrorUpdateSeckillSkuFail)
		return
	}
	param := service.UpdateSeckillSkuRequest{}
	valid, errs := app.BindAndValid(c, &param)
	if !valid {
		global.Logger.Errorf("app.BindAndValid errs: %v", errs)
		response.ToErrorResponse(dict.InvalidParams.WithDetails(errs.Errors()...))
		return
	}
	svc := service.New(c.Request.Context())
	if err := svc.UpdateSeckillSku(&param); err != nil {
		global.Logger.Errorf("svc.UpdateSeckillSku err: %v", err)
		response.ToErrorResponse(dict.ErrorUpdateSeckillSkuFail)
		return
	}
	app.Success(c, map[string]string{"id": c.Param("id")})
	return
}
func (t SeckillSku) Delete(c *gin.Context) {
	response := app.NewCtxResponse(c)
	_, err := cast.ToInt64E(c.Param("id"))
	if err != nil {
		global.Logger.Errorf("svc.Delete err: %v", err)
		response.ToErrorResponse(dict.ErrorDeleteSeckillSkuFail)
		return
	}
	param := service.DeleteSeckillSkuRequest{}
	valid, errs := app.BindAndValid(c, &param)
	if !valid {
		global.Logger.Errorf("app.BindAndValid errs: %v", errs)
		response.ToErrorResponse(dict.InvalidParams.WithDetails(errs.Errors()...))
		return
	}
	svc := service.New(c.Request.Context())
	if err := svc.DeleteSeckillSku(&param); err != nil {
		global.Logger.Errorf("svc.DeleteSeckillSku err: %v", err)
		response.ToErrorResponse(dict.ErrorDeleteSeckillSkuFail)
		return
	}
	app.Success(c, map[string]string{"id": c.Param("id")})
	return
}
Registering the api routes for the product module
seckillSku := api.NewSeckillsku()
apiRouterGroup.GET("/seckillskus", seckillSku.List)
apiRouterGroup.POST("/seckillskus", seckillSku.Create)
apiRouterGroup.PUT("/seckillskus/:id", seckillSku.Update)
apiRouterGroup.DELETE("/seckillskus/:id", seckillSku.Delete)
apiRouterGroup.GET("/seckillskus/:id", seckillSku.Detail)
Verifying the product interfaces
We restart the service, that is, run go run main.go again. After checking that the startup information looks normal, we verify the interfaces of the product module.
(1) Verify the product list
$ curl -X GET http://192.168.56.1:9099/api/seckillskus?page_size=2
The command returns the first page of the product list in the Response format defined earlier.
(2) Verify getting a single product
$ curl -X GET http://192.168.56.1:9099/api/seckillskus/1?title=demo
The DAO-layer and service-layer code for getting product details has not been implemented here; it is left for you to implement yourselves.
Verifying the update and delete interfaces is likewise left as an exercise.
Also note: when updating with a struct in GORM, GORM will not change fields whose value is the zero value. The fundamental reason is that when GORM sees a zero value in the struct, it cannot tell whether the field was simply never set or the caller really intended the zero value of that type, so GORM does no special handling.
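The ambiguity is a property of Go itself, not of gorm: a struct field that was never set and one explicitly set to zero are indistinguishable after construction. A minimal sketch:

```go
package main

import "fmt"

// Sku mirrors a slice of the SeckillSku model above.
type Sku struct {
	SkuTitle   string
	StockCount int
}

func main() {
	// "Never set" versus "explicitly set to zero": after construction the
	// two structs are identical, which is why gorm cannot tell whether a
	// zero value means "leave unchanged" or "update to zero".
	a := Sku{SkuTitle: "demo"}
	b := Sku{SkuTitle: "demo", StockCount: 0}
	fmt.Println(a == b) // true
}
```

The common workarounds are to update through a map[string]interface{} or to declare the affected fields as pointers, so that nil can represent "not set".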
The "Golang Bible" still has 50,000 words to be released
This article is only part 4 of the "Golang Bible". The later content is even more exciting, covering high concurrency, distributed microservice architecture, and web development architecture.
For the "Golang Bible" PDF, please go to the [Technical Freedom Circle] at the end of the article to pick it up.
Finally, if you encounter problems during the learning process, you can come to Nien's high-concurrency community of 10,000 people to communicate.
The realization path of technical freedom PDF:
Realize your architectural freedom:
" Have a thorough understanding of the 8-figure-1 template, everyone can do the architecture "
" 10Wqps review platform, how to structure it? This is what station B does! ! ! "
" Peak 21WQps, 100 million DAU, how is the small game "Sheep a Sheep" structured? "
" How to Scheduling 10 Billion-Level Orders, Come to a Big Factory's Superb Solution "
" Two Big Factory 10 Billion-Level Red Envelope Architecture Scheme "
… more architecture articles, being added
Realize your responsive freedom:
" Responsive Bible: 10W Words, Realize Spring Responsive Programming Freedom "
This is the old version of " Flux, Mono, Reactor Combat (the most complete in history) "
Realize your spring cloud freedom:
" Spring cloud Alibaba Study Bible "
" Sharding-JDBC underlying principle and core practice (the most complete in history) "
Realize your linux freedom:
" Linux Commands Encyclopedia: 2W More Words, One Time to Realize Linux Freedom "
Realize your online freedom:
" Detailed explanation of TCP protocol (the most complete in history) "
" Three Network Tables: ARP Table, MAC Table, Routing Table, Realize Your Network Freedom!" ! "
Realize your distributed lock freedom:
" Redis Distributed Lock (Illustration - Second Understanding - The Most Complete in History) "
" Zookeeper Distributed Lock - Diagram - Second Understanding "
Realize your king component freedom:
" King of the Queue: Disruptor Principles, Architecture, and Source Code Penetration "
" The King of Cache: The Use of Caffeine (The Most Complete in History) "
" Java Agent probe, bytecode enhanced ByteBuddy (the most complete in history) "
Realize your interview-question freedom:
4000 pages of "Nin's Java Interview Collection" 40 topics
Please go to the following "Technical Freedom Circle" official account to get the PDF file update of Nien's architecture notes and interview questions↓↓↓