Type safety in microservices using GraphQL and TypeScript
Part II: GraphQL schema combination
This is part two in a two-part series of blog posts about type safety in a microservices architecture. In part I, we looked at a way to ensure type safety between the backend and frontend. In this part, we will generalize that approach to communication between any two (micro)services and discuss a novel, code-first approach to combining GraphQL schemas into a single endpoint.
In the previous part of this blog series, we set up a frontend and backend that are able to communicate in a type-safe manner. To do this, we used two pieces of tooling:
- TypeGraphQL, used to generate a GraphQL schema from TypeScript code + decorators
- graphql-codegen, used to generate code on the client from the GraphQL schema exposed on the server
Generalizing the approach
The techniques used in part I are by no means specific to a frontend-backend relation. We can set up any type-safe client-server relation in much the same way. Let’s consider the case in which we have our frontend clients, our Gateway API, and some other microservices, and we would like everything to communicate using GraphQL for type safety.
Now, let’s say that the shipping service has a query that takes a shipment tracking code and returns the status of the shipment (pending, in transit, delivered), the weight of the package, and any other relevant details. We would like to expose this functionality to the frontend. We could start out by writing a resolver in the shipping service, use TypeGraphQL to generate a GraphQL schema, and use graphql-codegen in the GraphQL API gateway to generate types from the GraphQL schema.
However, we now end up with non-decorated code in the GraphQL API gateway. There is no way for us to use the types from the shipping service and expose them to the frontend. To tackle this problem, Cubonacci has written a plugin for graphql-codegen to generate TypeGraphQL compatible types. We can replace the @graphql-codegen/typescript package with @graphql-codegen/typescript-type-graphql, and that’s it. The types generated by graphql-codegen can now be directly fed into TypeGraphQL to generate the GraphQL API gateway schema, closing our code-schema generation cycle.
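Assuming the standard graphql-codegen configuration format, the gateway's `codegen.yml` might then look like the following; the schema URL and output path are placeholders:

```yaml
# codegen.yml in the GraphQL API gateway (URL and path are illustrative)
schema: http://shipping-service/graphql
generates:
  src/generated/shipping.ts:
    plugins:
      - typescript-type-graphql
```

Running graphql-codegen with this configuration emits classes carrying TypeGraphQL decorators, ready to be re-exposed by the gateway's own resolvers.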
Now that all the puzzle pieces required to consume GraphQL endpoints and re-expose them in another service are available, we can combine the GraphQL schemas from multiple microservices into one schema that will be exposed by the gateway API. This comes with some clear benefits:
- All communication in the application is now type-safe. Attempted calls to incompatible services are discovered at build time, saving development time and ultimately money.
- The frontend can now talk to all pieces of the application if necessary.
- The gateway API can be configured to expose any query, mutation or type from any of the microservices, but is agnostic to most of their implementation details.
- This is a code-first solution, giving developers the benefit of using their existing knowledge instead of having to dig deep into the GraphQL SDL.
In this article we have described our solution for making a microservices architecture type-safe and our method of combining schemas into one. Several alternative methods achieve the same goal; we list them below, together with the reasons why we chose not to use them.
Redefining types in gateway API
It is not necessary to re-expose types consumed from downstream services in the Gateway API in an automated way. Our first attempt was actually to generate (undecorated) types from downstream services, and then redefine them in the Gateway API in a TypeGraphQL-compatible way. You could even argue that this is safer, because there is no way for the Gateway API to expose types and fields it is not aware of.
However, this method led to a lot of code duplication, which in turn caused confusion among our developers. Since confusion leads to bugs, we decided fairly quickly to remove the duplication and use graphql-codegen to generate TypeGraphQL-compatible code instead.
Apollo Schema Stitching
One of the most well known methods of combining multiple GraphQL schemas into one is Apollo schema stitching. Apollo schema stitching has recently been succeeded by Apollo Federation, described below. Although obsolete, we list it here because it is a fairly well-known method of combining GraphQL schemas.
Apollo Federation
Apollo Federation is the successor to Apollo Schema Stitching. It is a schema-first method of combining multiple schemas into one, which is the main reason we chose not to use it. Apart from that, it was released only very recently and still seems to lack some basic features, such as hiding fields from the federated gateway schema. This is a big deal: microservices usually also communicate internally through non-public methods, and Federation currently offers no way to prevent the outside world from accessing these methods.
Additionally, this is an Apollo-specific approach using GraphQL directives that are only understood by Apollo GraphQL clients and servers. Even if other vendors were to embrace Federation as a standard, and all missing features were in place, we believe a schema-first approach will never be as powerful as a code-first approach, nor as intuitive to developers.
At Cubonacci, we are very happy with the type-safe microservices architecture we have set up. Build-time type checking on communication protocols is already saving us a tremendous amount of development time! However, our GraphQL journey does not end here.
In a proper microservices architecture, every single component should be expected to fail at some point, and a good architecture handles failure gracefully. Synchronous communication between microservices as we use it now (GraphQL over HTTP) does not meet this requirement well enough. If a component of the microservices architecture fails, the synchronous connection breaks, and the client receives an error message.
As an alternative, message brokers such as RabbitMQ or Kafka can be used. Any request to a microservice can be pushed onto a queue, and the microservice can take one item off the queue at a time and process it. If at some point the service crashes, it restarts, picks up the first unprocessed item from the queue, and continues where it left off. No client receives an error message in this scenario, and the failure is handled successfully.
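The core of this pattern is that a message is removed from the queue only after the consumer acknowledges it, so a crash mid-processing means a retry rather than a lost request. A minimal in-memory sketch of this at-least-once behavior, with purely illustrative names and no real broker API:

```typescript
// In-memory sketch of at-least-once delivery: a message is "acked"
// (removed) only after the handler succeeds; a failure leaves it on
// the queue to be retried, simulating a worker restart.
type Handler<T> = (msg: T) => void;

class AckQueue<T> {
  private messages: T[] = [];

  publish(msg: T): void {
    this.messages.push(msg);
  }

  // Deliver messages one at a time; re-deliver on failure instead of dropping.
  consume(handler: Handler<T>): void {
    while (this.messages.length > 0) {
      const msg = this.messages[0]; // peek, do not remove yet
      try {
        handler(msg);
        this.messages.shift(); // ack: remove only after success
      } catch {
        // Handler "crashed"; message stays queued and is retried.
      }
    }
  }
}

// Usage: the first processing attempt crashes, the retry succeeds,
// and the request is never lost.
const queue = new AckQueue<string>();
queue.publish("track:ABC123");

let attempts = 0;
const processed: string[] = [];
queue.consume((msg) => {
  attempts += 1;
  if (attempts === 1) throw new Error("simulated crash");
  processed.push(msg);
});
```

A real broker adds durability, redelivery timeouts, and dead-letter handling on top of this basic ack/retry loop.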
Because GraphQL does not make any specification with regards to the underlying network infrastructure, it should be possible to build a GraphQL infrastructure on top of a message broker, making communication more robust. In fact, some work has been done in this domain, such as a subscriptions library built on top of RabbitMQ. None of this work seems mature enough to use at the moment, though, so we will continue our search for a better architecture. Once we have it figured out, we will be sure to write a blog post about it.
If you have any remarks, opinions, or questions following from this article, do not hesitate to get in touch!