From DevOps to Big Data

A year has gone by almost without my noticing. I still remember meeting the master at this time last year and hearing him clear up my doubts about microservice architecture, domain-driven design, test-driven development, and automated application deployment. It was genuinely eye-opening.

Before long I had heard the master's talk at the close of the symposium, and I was very excited.

Carrying that feeling and excitement, I had also come to the forum with another purpose in mind: making the transition from architecture to DevOps.

 

A year of upgrades

Over the past year we have been blending traditional architectural thinking with the ideas and practice of microservice architecture. All I can say is that real projects are far harder than the theory: we compromised on design when the server budget fell short, compromised around limited operations capacity while hiring and training ops staff, and pushed agile development, coding standards, and code review through the development team from the top down.

Over the past year we also automated deployment for the team's development and production environments. Our current projects are government (ZF) projects, and they are far harder than the typical large project at an Internet startup: the business is inherently complex, requirements differ from region to region, and in recent years policy has been driving data interoperability across districts and municipalities. Technically we keep being pushed up against the ceiling, while also carrying far more historical technical baggage than a startup would.

The client actually listed container and cluster technology as bidding parameters; fortunately we were reasonably well prepared.

Users have all kinds of big data requirements, and our data volume has reached a critical point: with NoSQL and MQ we can still just barely cope, yet the scale does not quite feel big enough to justify Hadoop. We can patch things together and get by, and Docker and Kubernetes (k8s) are probably right on that edge too.

Containers and automated deployment are used only within the development team and on the development and test servers, and there they have helped us enormously. Getting them into production, though, is limited on every front: the company's organizational structure, the capability of the operations staff, the customer's site. We simply cannot do Amazon-style "you build it, you run it."

 

An example: landing the microservice architecture

The development team uses containers and automated deployment for its microservices, and we have benefited from it; we would say it has also taken us to the technical ceiling among peers in our city.

For network reasons, we cannot push directly from the development environment to the production environment.

So we adopted a compromise: build a second set of automated deployment inside the production environment. The operations staff take the compiled package from the developers, place it in a specified directory, click "build once," and then wait for the automatic release to finish.
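For concreteness, here is a minimal sketch of what that production-side step might look like. It is an assumption-laden illustration, not our actual tooling: the drop directory, deployment directory, service name, and the use of a polling script instead of a one-click build job are all made up for the example.

```python
#!/usr/bin/env python3
"""Hypothetical stand-in for the production-side "build once" step:
poll a drop directory for the compiled package handed over by the
developers, unpack it, and restart the service."""
import shutil
import subprocess
import time
from pathlib import Path

DROP_DIR = Path("/opt/release/drop")    # where ops places the compiled package (assumed path)
DEPLOY_DIR = Path("/opt/app/current")   # where the running service lives (assumed path)

def deploy(package: Path) -> None:
    """Unpack one package over the deployment directory and restart the service."""
    shutil.unpack_archive(str(package), str(DEPLOY_DIR))
    subprocess.run(["systemctl", "restart", "my-service"], check=True)  # "my-service" is a placeholder
    package.unlink()  # consume the package so it is not deployed twice

if __name__ == "__main__":
    while True:
        for pkg in sorted(DROP_DIR.glob("*.zip")):
            deploy(pkg)
        time.sleep(30)  # check the drop directory every 30 seconds
```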

That is already a compromise, but reality's compromises can make you want to spit blood: the customer may not be able to provide all-Linux servers and hands you Windows servers instead, which breaks this deployment scheme. And it does not stop there; server budget shortfalls will also corner you, and there is nothing to do but remove the registry and cut back the load-balanced nodes. Fortunately, all of that only means editing configuration files, not touching the business code.
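To make that last point concrete, here is a rough sketch of why pulling out the registry can stay inside configuration: service addresses go through a single lookup that either asks a registry or reads a static host list from config. The config keys, service names, and addresses are invented for the illustration; they are not our project's real configuration.

```python
"""Illustrative only: switching from registry discovery to a static
host list is a config change, because callers only use resolve()."""
import json
import random

# In the real system this would be read from a configuration file.
CONFIG = json.loads("""
{
  "use_registry": false,
  "static_hosts": {"order-service": ["10.0.0.11:8080", "10.0.0.12:8080"]}
}
""")

def resolve(service_name: str) -> str:
    """Return one instance address for the named service."""
    if CONFIG["use_registry"]:
        # With a registry (Eureka, Consul, etc.) this branch would query it
        # for the live instance list; omitted in this sketch.
        raise NotImplementedError("registry lookup omitted")
    # Without a registry, pick from the statically configured instances.
    return random.choice(CONFIG["static_hosts"][service_name])

print(resolve("order-service"))  # e.g. 10.0.0.11:8080
```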

You say Docker container deployment is supposed to resolve the differences between development and production environments? The server hardware the customer provides simply does not support virtualization.

On top of that, the technology itself is in a transition from old to new: the old SOA and ESB plus ETL model has to coexist with the new microservice architecture of API gateway with secure authentication plus MQ. There is no way around it; swapping new for old is never easy, and there has to be a transition period.
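Roughly, that coexistence looks like the sketch below: a single entry point sends new-style paths to microservices behind a token check and lets everything else fall through to the old ESB. The paths, addresses, and token rule are placeholders for illustration, not our gateway's actual configuration.

```python
"""Transitional routing sketch: new API paths go to microservices behind
authentication, everything else falls back to the legacy SOA/ESB endpoint.
All names and addresses here are placeholders."""
from typing import Optional

LEGACY_ESB = "http://esb.internal:8000"        # old SOA/ESB entry point (placeholder)
NEW_SERVICES = {                               # new microservices behind the gateway (placeholders)
    "/api/orders": "http://order-service:8080",
    "/api/users": "http://user-service:8080",
}

def route(path: str, token: Optional[str]) -> str:
    """Decide which backend should receive a request for the given path."""
    for prefix, target in NEW_SERVICES.items():
        if path.startswith(prefix):
            if not token:                      # new routes require an auth token
                raise PermissionError("missing token")
            return target + path
    return LEGACY_ESB + path                   # everything else stays on the old ESB

print(route("/api/orders/42", token="abc"))    # -> http://order-service:8080/api/orders/42
print(route("/legacy/report", token=None))     # -> http://esb.internal:8000/legacy/report
```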

Standing in front of that mountain, you work through the registry, then automated deployment, then containerized deployment, only for the customer's environment to pull it all back to the way things originally looked. With the users' needs right there, their expectations right there, and the future direction of policy already laid out there, we are left using the newest technology in a way that stays compatible with the most primitive practices.

 

Big Data

At the moment big data is only a likely requirement; you could call it the government (ZF) testing the waters. But if one company or one pilot turns into a success story, then in the future this requirement will become certain, and necessary.

Big data may well be our future, but for now we have to take it step by step and lay a solid foundation in the prerequisite skills.

Prerequisite skills: NoSQL, data mining, real-time analytics, business intelligence, full-text search, containers, and clustering.

 
