In 2018, microservice architecture will explode along these 5 trends

2017 added a large number of ecosystem players to the DevOps space, so what will change in 2018? This article looks ahead to five likely microservices trends for 2018 and introduces each in detail.

2017 was a big year for DevOps, with not only a huge increase in the number of ecosystem players but also a tripling of CNCF projects. Looking ahead, we expect innovation and market change to accelerate further. Here's our take on microservices trends in 2018: service meshes, event-driven architectures, container-native security, GraphQL, and chaos engineering.

We'll be looking at these trends, and the companies that will build their businesses around them in the year ahead. What trends do you see? Leave a comment below to let us know what's been missed, or if you agree/disagree with what we've outlined.

1. Service meshes are hot

A service mesh is an infrastructure layer used to improve communication between services and is currently the most popular cloud-native category. As containers become more commonplace, service topologies become more dynamic, requiring more advanced networking capabilities. A service mesh can help manage traffic through service discovery, routing, load balancing, health checks, and observability. Service meshes try to tame the unmanageable complexity of containers.

Service meshes are becoming more popular as load balancers like HAProxy, Træfik, and NGINX reposition themselves as data planes. We haven't seen widespread deployment yet, but we do know of companies running service meshes in production. Furthermore, service meshes are not limited to microservices or Kubernetes environments; they can also be applied to VM and serverless environments. For example, the National Center for Biotechnology Information doesn't run containers, yet it uses Linkerd.

Service meshes can also be used for chaos engineering, "the discipline of conducting experiments on distributed systems to build confidence in the system's ability to withstand chaotic conditions." Instead of installing a daemon on each host, a service mesh can inject delays and failures directly into the environment.

Istio and Buoyant Linkerd are the most high-profile offerings in this space. Note that Buoyant released Conduit v0.1, an open source service mesh for Kubernetes, last December.


2. The rise of event-driven architectures

As demands for business agility increase, we're starting to see a move toward "push" or event-based architectures, in which a service sends an event and one or more observer containers asynchronously run logic in response, without needing to notify the event producer. Unlike request-response architectures, in an event-driven system the functional flow and transaction load of the initiating container do not depend on the availability and completion of remote processes in downstream containers. An added benefit is that developers can be more independent when designing their respective services.
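A minimal sketch of this pattern, using Python's asyncio as a stand-in for a real message broker (the `EventBus` class, event names, and handlers here are all hypothetical): the producer publishes an event and moves on, while subscribed handlers run asynchronously without the producer waiting on them.

```python
import asyncio

class EventBus:
    """Toy in-process event bus: producers publish events,
    observers react asynchronously, producers never wait on them."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    async def publish(self, event_type, payload):
        # Fire-and-forget: the producer does not depend on the
        # availability or completion of downstream handlers.
        for handler in self._subscribers.get(event_type, []):
            asyncio.create_task(handler(payload))

received = []

async def audit_handler(payload):
    received.append(("audit", payload))

async def email_handler(payload):
    received.append(("email", payload))

async def main():
    bus = EventBus()
    bus.subscribe("order.created", audit_handler)
    bus.subscribe("order.created", email_handler)
    await bus.publish("order.created", {"order_id": 42})
    await asyncio.sleep(0)  # yield once so the handler tasks can run

asyncio.run(main())
```

In a real deployment the bus would be Kafka, SNS/SQS, NATS, or similar, and the handlers would live in separate containers; the decoupling shown here is the same.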

While developers can build container environments into event-driven architectures themselves, Functions-as-a-Service (FaaS) embodies this capability natively. In a FaaS architecture, functions are stored as text in a database and triggered by events. When a function is called, an API controller receives the message and sends it through a load balancer to the message bus, which schedules it onto an invoking container. After execution, the result is stored in the database and returned to the user, and the function is torn down until it is triggered again.
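From the developer's side, almost all of that machinery is the platform's job; the developer typically writes only a handler in the `(event, context)` shape popularized by AWS Lambda's Python runtime. A sketch, with a hypothetical payload and field names:

```python
import json

def handler(event, context):
    """Hypothetical greeting function in the (event, context) shape
    used by Python FaaS platforms such as AWS Lambda.
    'event' carries the trigger payload; 'context' carries runtime metadata."""
    name = event.get("name", "world")
    # The platform invokes this on a trigger and tears the
    # instance down again once it goes idle.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulate a single platform invocation locally (context unused here).
result = handler({"name": "FaaS"}, None)
```

The API controller, message bus, scheduling, and teardown described above all happen outside this code, which is precisely the operational overhead FaaS removes.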

The benefits of FaaS include: 1) the time from writing code to running a service is shortened, because no operations beyond creating or pushing the source code are required; and 2) overhead is reduced when functions are managed and scaled by a FaaS platform such as AWS Lambda. However, FaaS is not without challenges. Because FaaS requires decoupling every piece of a service, functions can proliferate and become difficult to discover, manage, orchestrate, and monitor. Finally, without comprehensive visibility into dependencies, FaaS systems are hard to debug, and infinite loops can occur.

Currently, FaaS is a poor fit for processes that require long-running calls, load large amounts of data into memory, or need consistent performance. While developers today use FaaS for background jobs and ephemeral events, we believe the use cases will expand over time as storage layers get faster and platform performance improves.

In the fall of 2017, the Cloud Native Computing Foundation (CNCF) surveyed 550 people, of whom 31% use serverless technologies and 28% plan to use serverless within the next 18 months. The survey then asked which specific serverless platform was being used: of the 169 respondents using serverless technology, 77% said they used AWS Lambda. While Lambda may be the leading serverless platform, we believe there may be opportunities for platforms addressing edge requirements, since edge computing is especially relevant for IoT and AR/VR use cases.

3. Security needs to change

Applications packaged in containers are fundamentally easier to secure because of kernel access controls. In a VM environment, the only point of visibility is the virtual device driver. With applications moving into containers, operators gain visibility into OS syscalls and their semantics, a much richer signal. Previously, operators approximated this by placing an agent in each VM, but that was complex and required heavy management. Container environments provide this visibility and integration with far less effort than VM environments.

With this in mind, a 451 Research survey reported that security is the biggest barrier to container adoption, and challenges remain. Initially, vulnerabilities were the major security concern in container environments. As the number of ready-made container images in public registries multiplied, ensuring those images were free of vulnerabilities became important, and image scanning and authentication have become the norm.

Unlike virtualized environments, where the hypervisor is the point of access and control, any container with root access to the kernel can eventually reach every other container sharing that kernel. Organizations therefore must govern how containers interact with the host and which containers may perform certain actions or system calls. Hardening the host and ensuring that cgroups and namespaces are properly configured is also important for maintaining security.

Finally, traditional firewalls rely on IP address rules to gate network traffic. That technique does not scale to container environments, where dynamic orchestrators constantly reuse IP addresses.
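A toy sketch of why identity scales where IP rules do not (all names, labels, and addresses below are hypothetical): once the orchestrator reschedules a workload onto a new IP, an IP-pair rule goes stale, while a policy keyed to workload labels still matches.

```python
def allowed_by_ip(rules, src_ip, dst_ip):
    """IP-based firewall: allow only explicitly listed (src, dst) pairs."""
    return (src_ip, dst_ip) in rules

def allowed_by_label(policy, src_labels, dst_labels):
    """Label-based policy: allow traffic whenever the workloads carry the
    roles the policy names, regardless of which IPs they hold today."""
    return policy["from_role"] in src_labels and policy["to_role"] in dst_labels

ip_rules = {("10.0.0.5", "10.0.0.9")}            # web (10.0.0.5) -> db (10.0.0.9)
policy = {"from_role": "web", "to_role": "db"}   # same intent, expressed as identity

# The orchestrator reschedules the web container onto a new IP, 10.0.1.7:
stale = allowed_by_ip(ip_rules, "10.0.1.7", "10.0.0.9")  # rule no longer matches
still_ok = allowed_by_label(policy, {"web"}, {"db"})      # labels still match
```

Real container-network policies (e.g. Kubernetes NetworkPolicy) express rules in exactly this label-selector style rather than as IP pairs.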

Runtime threat detection and response are critical in production. By fingerprinting the container environment and building a detailed baseline of normal behavior, anomalous activity can be detected and attackers sandboxed. A 451 Research report noted that 52% of the companies surveyed run containers in production, which suggests runtime threat detection solutions for containers will develop quickly.

4. Shift from REST to GraphQL

GraphQL is an API specification: a query language plus a runtime for executing queries. It was created by Facebook in 2012 and open sourced in 2015. The GraphQL type system allows developers to define their own data schema. New fields can be added and types evolved at any time without affecting existing queries or forcing client applications to be refactored. GraphQL is powerful because it is not tied to a specific database or storage engine.

A GraphQL server runs as a single HTTP endpoint that expresses the full capability of the service. By defining relationships among resources in types and fields (rather than in endpoints, as REST does), GraphQL can follow references between fields, so a service can fetch data from multiple resources with a single query. A REST API, by contrast, may need to load multiple URLs for a single request, adding network hops and slowing queries. With fewer round trips, GraphQL reduces the resources required per data request; the returned data is typically formatted as JSON.
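For illustration, a single query against a hypothetical blog schema (the `post` field, `title`, and `author` names are invented for this example) follows the reference from a post to its author, where REST would typically require two endpoint calls:

```graphql
{
  post(id: "42") {
    title
    author {
      name
    }
  }
}

# The JSON response mirrors the query's shape:
# { "data": { "post": { "title": "...", "author": { "name": "..." } } } }
```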

GraphQL offers additional benefits over REST. First, client and server are decoupled, so they can be maintained separately. Unlike REST, GraphQL uses a similar language for client-server communication, which makes debugging easier. The shape of a query exactly matches the shape of the data fetched from the server, making GraphQL highly efficient and effective compared to other languages such as SQL or Gremlin. Because queries mirror the shape of their responses, deviations can be detected and fields that fail to resolve correctly can be identified. Since queries are simpler, the whole process is more stable. The spec is best known for supporting external APIs, but we also see it used for internal APIs.

Users of GraphQL include Amplitude, Credit Karma, KLM, the NY Times, Twitch, Yelp, and more. In November, Amazon validated GraphQL's popularity by launching AWS AppSync with GraphQL support. It will be interesting to watch how GraphQL evolves alongside gRPC and alternatives like Twitch's Twirp RPC framework.


5. Chaos engineering has become better known

Originally popularized by Netflix and later adopted by Amazon, Google, Microsoft, and Facebook, chaos engineering experiments on systems to increase confidence that they can withstand problems in production. Chaos engineering has evolved over the past decade: it started with Chaos Monkey, which shut down services in production, and expanded to larger-scale environment testing with Failure Injection Testing (FIT) and Chaos Kong.

On the surface, chaos engineering is just about injecting chaos. While breaking a system can be fun, it doesn't always yield useful information. Chaos engineering covers more than fault injection; it also includes other scenarios, such as traffic spikes and unusual request combinations, to surface existing problems. Beyond validating assumptions, it should reveal new properties of the system. By uncovering systemic weaknesses, teams can improve resilience and prevent poor customer experiences.
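A small illustration of the fault-injection side of such an experiment (the wrapper and service names are hypothetical, not any particular chaos tool's API): wrap a service call so a fraction of invocations fail, then verify that the caller's fallback path actually holds up.

```python
import random

def chaotic(func, failure_rate=0.2, seed=None):
    """Hypothetical fault injector: make some fraction of calls to
    'func' raise, the way a chaos tool (or a service mesh's
    fault-injection feature) perturbs traffic during an experiment."""
    rng = random.Random(seed)  # seeded so the experiment is repeatable
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure")
        return func(*args, **kwargs)
    return wrapper

def get_price(item):
    """Stand-in for a downstream pricing service."""
    return {"item": item, "price": 100}

flaky_get_price = chaotic(get_price, failure_rate=0.5, seed=1)

# The experiment: does the caller's fallback path survive injected failures?
results = []
for _ in range(10):
    try:
        results.append(flaky_get_price("book"))
    except ConnectionError:
        results.append(None)  # fallback path exercised instead of crashing
```

The point is not the breakage itself but the hypothesis test: under a stated failure rate, the caller should degrade gracefully rather than fail outright.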

For complex new technologies like neural networks and deep learning, figuring out how they work may become less important than proving their effectiveness. Chaos engineering helps address this challenge by performing holistic testing of the system to identify instabilities. This may become a more common practice as engineers work to make their increasingly complex systems more robust.

As chaos engineering becomes more mainstream, it may be implemented via existing open source projects, commercial products, or, as mentioned above, service meshes.

 
