Service Weaver is a programming framework that makes it easy to write, deploy, and manage high-performance distributed applications in Go. Service Weaver improves application latency by up to 15x and reduces cloud costs by up to 9x compared to a typical microservices-based cloud deployment.
1. Programming Model
The developer writes the application as if it were a traditional, single-process Go executable that runs on the local machine, modularized into logically distinct components. The runtime takes care of cloud configuration and integration with the cloud provider (e.g., breaking the components down into a set of connected microservices, monitoring, tracing, logging).
Benefits of this programming model
* Developers can focus solely on writing their application code (e.g., they don’t have to set up network endpoints, create network stubs, or perform service discovery);
* Developers can modularize their code without paying the performance overhead caused by over-splitting into microservices;
* Developers can change the network topology of their application easily and dynamically;
* It enables the runtime to provide optimized execution strategies and new use cases.
2. Runtime
The runtime manages the execution of an application (e.g., colocating components, assigning them to OS processes, replication, and resource management).
It provides plugins that allow the same application binary to run in any distributed environment. Out of the box, Service Weaver supports three runtime plugins:
* Local runtime, which runs the application as a set of OS processes on the local machine;
* SSH runtime, which runs the application across a set of machines using SSH;
* GKE runtime, which runs the application as Pods on GKE;
It is also easy to write new plugins for AWS, Azure, and other clouds.
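To illustrate the plugin idea, here is a hypothetical sketch of what a runtime plugin boundary might look like. `Deployer` and both implementations are invented names for illustration; Service Weaver's real plugin API is different:

```go
package main

import "fmt"

// Deployer is a hypothetical plugin interface: each runtime plugin decides
// how to place a fixed set of components, while the application binary
// stays the same. (Invented for illustration; not Service Weaver's actual
// plugin API.)
type Deployer interface {
	Name() string
	// Deploy maps each component name to a location chosen by the plugin.
	Deploy(components []string) map[string]string
}

// localDeployer places every component as a process on the local machine.
type localDeployer struct{}

func (localDeployer) Name() string { return "local" }
func (localDeployer) Deploy(components []string) map[string]string {
	placement := map[string]string{}
	for i, c := range components {
		placement[c] = fmt.Sprintf("localhost:proc-%d", i)
	}
	return placement
}

// sshDeployer spreads components round-robin across a set of machines.
type sshDeployer struct{ hosts []string }

func (sshDeployer) Name() string { return "ssh" }
func (d sshDeployer) Deploy(components []string) map[string]string {
	placement := map[string]string{}
	for i, c := range components {
		placement[c] = d.hosts[i%len(d.hosts)]
	}
	return placement
}

func main() {
	components := []string{"frontend", "cache", "search"}
	deployers := []Deployer{
		localDeployer{},
		sshDeployer{hosts: []string{"machine-1", "machine-2"}},
	}
	// The same component list runs under either plugin unchanged.
	for _, d := range deployers {
		fmt.Println(d.Name(), d.Deploy(components))
	}
}
```

The point of the sketch is that placement lives entirely behind the plugin interface, which is why the same binary can target the local machine, an SSH cluster, or GKE.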
What the runtime enables
* A bird's-eye view into the app, which leads to smarter scaling, placement, and co-location decisions;
* Because all components run at the same version, the runtime can implement highly efficient serialization and transport protocols;
* Affinity-based routing embedded in the application itself, which makes it easy to build stateful applications and route requests to different component replicas based on load information;
* The same testing, profiling, and debugging experience on the local machine as in the cloud.
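A minimal sketch of the affinity-based routing idea in the list above: requests carrying the same key always land on the same replica, so per-key state (sessions, caches) stays on one replica. The replica names and hash choice are illustrative assumptions; Service Weaver's actual router is declared and configured differently:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// routeByKey sketches affinity-based routing: a stable hash of the request
// key picks a replica, so the same key is always routed to the same
// replica as long as the replica set is unchanged. (Illustrative sketch,
// not Service Weaver's real routing API.)
func routeByKey(key string, replicas []string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return replicas[int(h.Sum32()%uint32(len(replicas)))]
}

func main() {
	replicas := []string{"replica-0", "replica-1", "replica-2"}
	for _, user := range []string{"alice", "bob", "alice"} {
		fmt.Printf("user=%s -> %s\n", user, routeByKey(user, replicas))
	}
	// Repeated keys hash to the same replica, so both "alice" requests
	// are routed consistently.
}
```

A production router would also account for replica load and membership changes (e.g., via consistent hashing), which is what the load-aware routing in the bullet above refers to.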
3. More details about the talk
We will talk about our own experience developing cloud applications, as well as key findings from conversations with various infrastructure teams at Google. The way people write cloud applications today is very cumbersome, which slows down innovation and hinders application performance. We argue that the key reason for this is the way people organize their application code around different binaries and run them independently as microservices. Instead, we believe that (1) people should focus on the application's business logic, splitting the code at logical boundaries derived from that logic; and (2) the runtime should deal with the execution challenges (e.g., how to split the application into microservices, how to connect them, and how to manage resources).
We will walk through our framework, which uses a custom transport protocol and custom serialization, through our Go code generator, and show how easy it is to run, test, manage, and debug applications both locally and in the cloud. We will also talk about how we leverage open source projects like Kubernetes, Prometheus, Perfetto, Jaeger, and OpenTelemetry to deploy and instrument applications.