
GraphQL, gRPC or REST? Resolving the API Developer's Dilemma - Rob Crowley

Jun 06, 2021
First of all, I think congratulations are in order: this is your last session of the year, so well done for making it this far. This should be a fairly calm session, and I'll try to keep it light. It's going to run about 45-50 minutes, so I'll leave plenty of time for questions at the end. The main question we'll be looking to answer today is how we, as API developers, choose between GraphQL, gRPC and REST.

To get to that answer, we're going to do quite a few things. First of all, we'll walk through the API styles over time, and what we'll get from that is a sense of how thinking has evolved in our industry.
We're going to look back about 20 or 30 years at all the major technologies we've seen over time. We'll look at how they're different and, in fact, where there are a lot of similarities between them. Then we'll take a little turn and look at how we actually make architectural decisions: we'll look at the properties we want our systems to exhibit, and then at how we choose and implement constraints that will produce those properties, which is largely an outside-in view. With that context established, we'll look at some of the conversations that are happening in the API industry today, take some of the rather general black-and-white statements, lift the covers and see how they actually stack up.

There's a little bit of nuance to each of those points, and that will allow us to make a more informed decision. So if you want to choose one of these API styles, maybe at your company or maybe in your own time, you'll be able to make a better-informed choice. Finally, with that knowledge, we'll look at some sample scenarios and choose, based on their shape, which style might be the simplest or most appropriate. Before all that, a little about me: my name is Rob Crowley.
I'm the head of engineering for a company in the automated fare collection space, so very similar to the Oyster card system you might know from London, and before that I was a lead consultant for Telstra Purple in Australia. Much of my career has been focused on the design and operation of distributed systems, so APIs are very close to my heart. I'm also a co-organiser of DDD Perth, the largest IT conference in Western Australia, with around 800 attendees each year, so it's grown wonderfully. And I'm available on social media as Rob D Crowley, so if any questions arise throughout this session, or if one of the ideas resonates with you and you'd like to get in touch, please do reach out.
I'd be absolutely delighted to hear from you. I'd also like to share a little bit about why I'm giving this talk today, and I have to confess it's a somewhat selfish motivation, because my Twitter timeline is an absolute tire fire. You see, I joined Twitter and started reading around the same time my interest in REST began, and I followed people like Erik Wilde and other luminaries in the API space, and it was largely all of this REST-oriented content that I was absorbing. Everything was going very well for about four or five years, until 2015, when GraphQL was released.
I started following a lot of new people, and it turns out these groups don't agree very much. All of a sudden a tweet would come out saying you should really do this, and then 17 or 18 different rebuttals would come out saying don't touch it, and a lot of those are conversations we'll see today too. As a little example: on May 25th last year, my birthday, I saw this tweet: it's 2019, you should learn GraphQL. I thought, absolutely, why don't I share the love, so I retweeted it. Fifteen seconds later,
Aaron Powell, who's actually one of my good friends, replied: it's just fancy OData. Now, this was a bit of a fun exchange, but often the exchanges I see are quite closed-minded, and as a community I really want us to be able to learn and keep our minds open. And it's not just on Twitter; there are blog posts too. This one came out in 2017 and basically declares that SOAP is extinct, and that REST, the new SOAP, is now obsolete too, while at the same time we have other blog posts saying REST APIs are
rest in peace, and everything you build from now on should be GraphQL. Well, that's a little confronting for me, because I've spent ten years of my life building REST APIs. Is all that knowledge suddenly obsolete and no longer valuable? Probably not. But there's also this: I had a strange feeling of déjà vu reading some of these articles, because history has a strange way of repeating itself, and I actually think we've been here before. So if you're in 2017, let's turn the clock back a little further, to around 2010. Roy Fielding had published his dissertation, REST was really hot, and we were declaring that REST was going to replace everything SOAP-related. Another article from about the same era: SOAP is officially dead, hooray for REST. There are a lot of similarities here, so what can we take from this?
So if SOAP is dead and REST is the new SOAP, does that make GraphQL the new REST? Surely not. But yes, you do see articles claiming GraphQL is indeed the new REST, part one; I've never actually seen part two, but I'm sure it would be fantastic. The chances are that a lot of these articles are very pro one technology and not balanced, so whenever we take them to our companies, or make a decision ourselves, I often feel we may end up choosing the technology itself and not the properties or characteristics it provides, and how those then fit the problem at hand.
So what can we learn from that? Quite capriciously, we could just say that maybe we don't like XML and move on, but I think we'd be missing a pretty important point, because I really feel the problem goes deeper. I feel that many of our API consumers, the people who consume our APIs every day, read our documentation and call them, are dissatisfied and frustrated, and I think that's a much deeper problem. So how can we create products that truly provide the experience we want? And I repeat, our community is small, so let's not get obsessed with one technology or another; let's work together and see, from all the tools we have available, how we can choose the most effective one. That all sounds like a dream, doesn't it?
Imagine if there were no protocols, no hypermedia hills to die on, no media types. Wouldn't that be a wonderful world? We wouldn't have to argue about technology at all; we would all get along wonderfully. Unfortunately, that's not how APIs work. The reality of APIs is that we need a contract between two parties; we need protocols and specifications so that two otherwise separate parties can have a conversation and understand the semantics and syntax of the messages passed over the wire. There is no escape from this, so we will always be looking at wire formats and the semantic constructs our messages carry, and we need to find a way to understand them better and see some of the similarities between them.

So let's look at API technologies over time. We'll start in 1991 with a middleware called CORBA, which was an RPC framework. Remote procedure calls, a phrase coined by Bruce Nelson, are one of the oldest styles of distributed computing interaction, and in fact CORBA was actually considered at the time to be the future of e-commerce on the web; however, it soon fell out of favor to XML-based approaches. Then in 1993 Remote Data Access was launched, a way to query data from a database over a network interface. If you're thinking that sounds like a query interface, it absolutely is: the client defines the shape of the data it wants and sends that to the server, the server then processes the query, retrieves the data and sends it back to the client. In 1998 we got a formal specification for RPC published by Microsoft, XML-RPC, and it was based on XML.
We had a firm contract for how to build these distributed systems. However, a year later, again from Microsoft, Don Box, one of the authors of the four tenets of SOA, released SOAP. SOAP was again largely based on RPC, with WSDL as a contract-first interface, and this really formed the vast majority of the APIs built in the early 2000s. Then in 2000 Roy Fielding published his REST dissertation, which asked: imagine if we could build APIs that were in harmony with the web. So we had these two styles running practically in parallel for the next three or four years.
SOAP then starts to take a back seat and REST APIs become a bit more popular, mainly because of companies like Twitter and Facebook, who released public APIs around that time. In 2005 we see JSON starting to take over from XML, and in 2007 we see OData, billed as a better way to REST, released by Microsoft, playing the same style as Remote Data Access: instead of a canonical URI as the uniform interface that REST would define, it allows clients great flexibility in the queries they compose, letting them define whatever shape of data they want. It's an incredibly flexible way to empower the client, rather than having the server define the representation, and that's a paradigm that was adopted by GraphQL in 2015, where you have mutations for the writes, which is actually very similar to RPC, and queries for the read side, so it's really not that different from a lot of the techniques we've been using before. Then in 2016 we got the latest incarnation of an RPC framework, gRPC, with a big focus on performance and code generation, but again building on what we saw with JSON-RPC and XML-RPC before it. So those are the technologies themselves; now let's take a look at what was happening in the industry at the time.
When we were building APIs back in the early 2000s, it was very much one provider and one consumer, with very direct integrations. This changed quite a bit with Web 2.0, around 2010, when we started building APIs for many consumers. Instead of specializing for one consumer or one partner, you could have many consumers, so you would fall more into a conformist model, if you follow Eric Evans. That was really a huge change: instead of trying to optimize for one use case, we really wanted very flexible APIs, and I think we'll see this continue to evolve as we build richer, more complex systems and really embrace cloud native.
I think we'll see many more service-to-service interactions and API networks being built, and again, I'll mention a couple of names as we go, like Zdenek Nemec, who talks quite a bit about this upcoming era of APIs. Throughout all this time, and across all these different techniques, we're still grouping these technologies into buckets like this. The query APIs are where we want to empower our clients to retrieve exactly the data they want, so we're favoring flexibility. We have flat-file APIs, which were very popular in the early 2000s; EDI, or Electronic Data Interchange, is still absolutely fine, maybe if you have a flat-file extract of your financial system and you need to run a report at the end of the month.
It doesn't need to be real time; it's still an absolutely valid form of API, even if it's batch and asynchronous. Streaming APIs are absolutely real time, so think about serving video or stock price updates, but also about very fast, potentially two-way communication over the wire. We have RPC APIs, so gRPC, XML-RPC and the like, where with the RPC paradigm one component calls another component over a distributed network while almost pretending that the network isn't there; we treat it as a facade. With web APIs, on the other hand, we embrace HTTP and the web, and this is really where most of our REST or HTTP endpoint APIs would land. So I guess, if the general angle of this talk is to ask what the best API style is, I have to come clean now: it can be difficult, in the last talk on the last day, to suddenly be told that there is no best API style. There is only a better style for your problem at hand, and that is the key point I want you to take away today. Don't pick a technology and fit a problem to it; let's look at the shape of your desired system characteristics and then work backwards. So what would that process look like? We can get some guidance here again from Roy, the man behind

REST, who says that the properties of a system are induced by the set of constraints within its architecture. REST defines a set of constraints, one of which is the uniform interface constraint: any resource in a RESTful API can be accessed through a canonical URI, and what you can do with it is defined by the HTTP methods, the verbs associated with that particular resource. What we get as a property of that constraint is simplicity: you have a uniform interface, and regardless of the API, you know how to interact with it. However, there is a trade-off, because we are introducing an abstraction that could potentially cost us efficiency or performance. These constraints are not pure upside; we have to weigh them against each other. And it's not just about the architecture itself; there are many other kinds of constraints we need to take into account in the systems we are building, such as business constraints.
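Returning to the uniform interface constraint for a moment, here is a minimal sketch of the idea in Python: one canonical URI per resource, with the operation determined entirely by the HTTP verb rather than by per-API operation names. This is an illustrative toy, not any particular framework; the function and store are hypothetical.

```python
# Sketch: the uniform interface constraint. Every resource lives at one
# canonical URI; what you can do with it is defined by the HTTP verb,
# not by a custom operation name per API.
def handle(method, path, store, body=None):
    """Dispatch a (method, URI) pair against an in-memory resource store."""
    if method == "GET":
        return (200, store[path]) if path in store else (404, None)
    if method == "PUT":
        store[path] = body              # create or replace at the canonical URI
        return (200, body)
    if method == "DELETE":
        return (204, None) if store.pop(path, None) is not None else (404, None)
    return (405, None)                  # no bespoke verbs: method not allowed
```

The simplicity property falls out directly: a client that understands these verbs can interact with any resource, at the cost of the abstraction the talk mentions.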
Who are you building for? Are you building for a client who has their own release schedule that you can't align with, or over whom you have no direct influence? Maybe there are specific capability constraints: if your client is only capable of consuming flat files, good luck building a gRPC API for them. So you may have certain defined constraints, or performance constraints, or scalability constraints. We can also have technical constraints: how is this system going to operate? How is it going to be robust or resilient, and how do we build those capabilities in? You might look at cloud-native architecture in the APIs you build, and again it's very different if you're deploying into your own data centers. And finally we have the most interesting ones, all the human factors around our systems: if we're building products, how do we keep those products healthy and sustainable over time?
We have to take into account things like the knowledge and experience of our teams. Unless your team knows HTTP very well, it can be a challenge to create a truly effective REST API, because what it does is take advantage of all the semantics of the underlying protocol; you need to understand it inside and out, and it can be quite a steep learning curve. You also have to think about how you own those systems. Conway's law is the observation that the systems we build reflect the communication patterns within an organization, so how do we structure our teams to align them with the products we build? Once we take all of those constraints, whether technical, commercial or sociotechnical, we can see all of the properties that will be imbued into the system, both around the ecosystem and the operational concerns, and in the software system itself. And this is what I do every time I design a new system.
I always start here. I always ask: what are the properties my system needs to have? What will security look like? Scalability? What other characteristics matter? Once you have a clear idea of the characteristics of your system, you look at what constraints you can apply that will produce those properties. So how might we do this in a broader sense? Again, we have a bit of guidance that says there is no single architectural style that is best for every problem; we can compare architectural styles by looking at their properties, and at the constraints each particular style applies. This was actually my original premise for today's talk, so I was a little disappointed to discover, ten minutes into my research, that someone had already done it. A gentleman named Zdenek Nemec, who gave a keynote at Nordic APIs at the end of 2018, made a very good comparison between REST and GraphQL at a fine level of detail, taking all the constraints defined by REST itself plus additional operational constraints, and contrasting them with GraphQL. His snapshot of the GraphQL ecosystem is from about 18 months ago, so some things have changed, but it's still a good comparison between the two for a broad audience. So again, I'll share this
and all the subsequent slides and resources, so there's no need to take pictures as we go. Now we'll do something a little different: we'll go back and look at some of the conversations happening in the industry. We'll take these myths, or general statements, ask a few more questions, dig a little deeper, and see whether each is actually true, and to what extent. First of all, let's go back to the article I referenced in the first few slides, which stated that REST APIs are dead, long live GraphQL; clearly the author saw no further value in REST APIs. And we have another article from the same time saying GraphQL would do to REST what JSON did to XML.
Is it really that clear-cut? Let's look at the paradigms underpinning GraphQL: on the query side it is very much Remote Data Access, and on the write side it is RPC, so there's really nothing incredibly revolutionary there, and it actually carries a fair amount of HTTP and REST semantics as well. Then let's look at gRPC itself. Again it's a little different, but gRPC ties specifically to HTTP/2, whereas GraphQL declares itself protocol agnostic. gRPC always uses HTTP/2 as the transport, and it is very specific about that: it must be HTTP/2 at a minimum, and the reason is that it takes advantage of a lot of the streaming capabilities of HTTP/2 for its functionality. On the schema side it uses Protocol Buffers, or protobuf, both as the schema, serving the same role as WSDL in a SOAP API or an OpenAPI or API Blueprint contract in a RESTful API, and as the wire data exchange format. And gRPC is incredibly flexible in the types of interaction it supports between a client and a server. It supports traditional request-response, where a client request gets a single response, also known as a unary operation, but also more complex interactions, such as server streaming, where a single client request can generate multiple responses from the server.
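To make the unary and streaming shapes concrete, here is a conceptual sketch in Python. This is not real gRPC code; the four interaction styles gRPC supports are simply modeled with plain functions and generators, and all names are hypothetical.

```python
# Conceptual sketch of gRPC's four interaction styles, modeled with
# plain functions and generators (no real gRPC here, just the shapes).

def unary(request):                      # one request -> one response
    return f"echo:{request}"

def server_streaming(request):           # one request -> many responses
    for i in range(3):
        yield f"{request}:{i}"

def client_streaming(requests):          # many requests -> one response
    return sum(requests)

def bidirectional(requests):             # many requests <-> many responses
    for r in requests:
        yield r * 2
```

In real gRPC these shapes are declared in the protobuf service definition and the generated stubs expose exactly these calling conventions.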
We also have the inverse, where the client streams multiple updates to the server and then gets back only a single response, and we can have bidirectional streaming. So gRPC is wonderful for real-time interactions between APIs where data changes very quickly and you need streaming interactions or integrations between them. But if we look at REST, it's quite different. If gRPC and GraphQL are really data over HTTP, or data over the wire, REST is a state machine over HTTP, a workflow, and this is a really key point. When I talk about a state machine, what I'm really pointing at is the word hypermedia. The hypermedia part often throws people off; if you're looking at Richardson's maturity model, it sits at layer 3, but in reality it is not
so much an exotic concept as the underlying paradigm of the web; it's actually everywhere. If you look at a really basic HTML page, we have hypermedia controls right there, three of them in fact. First we have an image tag, and the semantics of an image tag say that whatever resource is referenced by its URL will, when dereferenced, have its content placed inline on the page. We have an anchor tag that says that when you dereference that link, you will be navigated to it.
We also have a form on that page which says that when we submit it, the data we've captured will be sent to the server. These are all affordances that were sent by the server and whose semantics are understood by the client. In the same way, in a RESTful response we can provide links to the client, and the client can use the link relations to decide which link to follow: where could I go next? Say you have an approval process: I can place an order, it can go to a pending state, it can be approved once I have received payment, and finally shipped, and at every step the server controls the valid state transitions and presents them to us as navigations. So anywhere there is a flow, and I want to be able to drive the transitions through it, REST is a wonderful choice. And since I've mentioned hypermedia quite a bit, I really have to say the next line: REST requires HATEOAS, hypermedia as the engine of application state. I think this is a pretty important point, and I don't mean to be confrontational, and I apologize if it comes across that way, but I feel a lot of the backlash we're seeing against REST APIs comes from APIs where we have taken some of the parts of REST, and then other parts were a little difficult, or maybe we were not able to implement them properly for various reasons, and the result offers a suboptimal experience. I think that's a lot of the tension we see in the community right there. So I really implore you: don't build REST by default. I'm firmly of the opinion that for many APIs, if the use case, the knowledge within the team or the client doesn't call for REST, you could get a much better result building in an RPC style or a GraphQL style: with less effort we obtain a better result.
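The order workflow described above, where the server advertises the valid next transitions as links and the client simply follows them, can be sketched like this. The states, link relations and URI shapes are hypothetical, purely to illustrate the HATEOAS idea.

```python
# Sketch of HATEOAS: the server embeds the valid next state transitions
# as links, so the client discovers "where can I go next?" at runtime
# instead of hard-coding the workflow. States/relations are hypothetical.

TRANSITIONS = {
    "placed":   ["approve", "cancel"],
    "approved": ["pay", "cancel"],
    "paid":     ["ship"],
    "shipped":  [],                     # terminal state: nowhere left to go
}

def order_representation(order_id, state):
    """Build the response body for an order, including its hypermedia links."""
    return {
        "id": order_id,
        "state": state,
        "_links": {rel: f"/orders/{order_id}/{rel}" for rel in TRANSITIONS[state]},
    }
```

A client that only ever follows the advertised `_links` never needs to know the workflow in advance, which is what lets the server evolve it over time.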
That's actually okay, because again, we shouldn't start by saying I want to build a gRPC API and work backwards; it's about saying, okay, with all the constraints I have, what's the best outcome I could achieve? If your API is mostly made up of actions, an example being the Slack API, which is an RPC API and heavily action based, then the tension with REST appears when we have an action-oriented interface and we're trying to force it into resourceful URLs. We end up with things like /resource/increment, those kinds of action URLs, and you spend a lot of time on URL structures for not much value. If, however, your API is mainly resource-oriented, that's a different story.
If you are manipulating related resources, or there is a workflow, then REST is a wonderful option. In fact, if there is one person whose content you should read, someone whose blog posts I have drawn on quite a lot, it is Phil Sturgeon. We can learn a lot from him; he writes a blog called APIs You Won't Hate, and he has a lot of very strong opinions with a lot of experience behind them, so I encourage you to follow Phil. And here is proof that even the creators of these technologies don't see them as perfect for all use cases.
This is Lee Byron, one of the creators of GraphQL, and he says GraphQL is a wonderful option for client-to-server interactions, but he doesn't really see its value for server-to-server communications. When you have a client, or maybe multiple clients with different data needs, it's wonderful, because we are empowering the client to select the data it needs; but for server-to-server calls, gRPC or even REST is a better option. So API styles address different problem spaces. If you are creating REST APIs, either internally within your team or for clients, consider whether you could get a better result with less effort by exploring GraphQL or RPC. I'm not saying they are better; I'm just saying that creating a REST API requires a high level of expertise and dedication, and often you don't see the reward in the short term. Its payoff is being able to evolve over decades; that is where the power of REST comes in, and that might not be the shape of your problem at all. So consider that.
gRPC is wonderful for synchronous communications between internal services, and it supports many different interaction models, so you get a lot of flexibility there. Now for another statement I've seen passed around quite a bit, and this one still shows up: GraphQL breaks caching. But what kind of caching? Surely not all caching, because if we look at caching there are actually many types. We can cache on the client, which could be your browser cache; we can cache on the server side, also known as application-side caching; or we can have intermediary caches, cache boxes that sit on the network between the client and the server. So let's take a look at that. With HTTP caching we can have these intermediary proxies, so you might think of Squid or Varnish. With GraphQL and gRPC we absolutely can still do server-side caching, so something retrieved from the database can be saved in Redis and then sent back to the client, and on the client side we can still have, in the case of GraphQL, a denormalized client cache, or we could store responses in the browser cache. But we can't use the intermediary caching proxies, for a number of reasons, not least that GraphQL generally uses POST requests, which cannot be cached. So why are intermediary proxies important, if we can still get a lot of benefit from client-side and server-side caches? It really comes down to proximity to the client. The speed of light is fast, about 300,000 kilometres per second, but it's not infinitely fast, especially if you're passing through copper or fibre, where the refractive index works against you. If you can bring your data closer to the client, they'll get it faster, which is very valuable from a performance perspective.
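The intermediary-cache machinery the talk describes rests on cacheable GET responses carrying validators. Here is a minimal sketch of ETag revalidation on the server side; the helper function is hypothetical, not taken from any framework, but the 200-vs-304 behavior mirrors HTTP conditional requests.

```python
import hashlib

def get_with_etag(resource_body, if_none_match=None):
    """Return (status, body, etag). A client (or intermediary cache) that
    already holds a matching ETag gets 304 with an empty body, so the
    full representation never travels the wire again."""
    etag = '"' + hashlib.sha256(resource_body).hexdigest()[:16] + '"'
    if if_none_match == etag:
        return (304, b"", etag)      # revalidated: cached copy still fresh
    return (200, resource_body, etag)
```

Because GraphQL typically tunnels everything through POST, this whole revalidation path, and the proxies that rely on it, is unavailable by default.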
An intermediary cache can also serve the same data to multiple clients and save them from making more expensive calls over the network. So how do we start trying to achieve support for intermediary proxy caches with GraphQL? There's been a bit of a shift in the community over the last year on this: instead of using POST and treating HTTP as a dumb pipe, Apollo, one of the champions in the Node space and one of the main providers, allows queries to be sent with an HTTP GET request, and also supports persisted queries, which are precomputed queries that exist on the server.
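One way to picture persisted queries is a registry on the server keyed by a hash of the query text; the client then sends only the short ID, small enough for a GET request that intermediaries can cache. This sketch is a hypothetical in-memory version, not Apollo's actual implementation.

```python
import hashlib

# Sketch of persisted queries: the full query text is registered on the
# server ahead of time; clients send only its hash thereafter.

REGISTRY = {}

def persist(query_text):
    """Register a query and return the ID the client will send instead."""
    qid = hashlib.sha256(query_text.encode()).hexdigest()
    REGISTRY[qid] = query_text
    return qid

def resolve(qid):
    """Look up the full query for an ID; None means the client must register it."""
    return REGISTRY.get(qid)
```

Beyond cacheability, this also stops arbitrary query documents travelling over the wire on every request.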
So instead of having to pass large query documents over the wire, the client can send an ID and the server hydrates the query from it. But this is not part of a standard; these are things that have come from various corners of the community, which is a shame, because one of the great strengths of GraphQL was that you had a specification, which removed a lot of the ambiguity and challenges people had with REST, which was an architectural style with no specification to implement. There is, however, no GraphQL-over-HTTP specification, so there is still a big gap, and many of the problems that were actually solved by the REST community years or decades ago are still being re-solved in GraphQL today. Another thing to consider is whether your API is authenticated. If it's not open, it will carry an Authorization header, a session cookie, a bearer token or some other form of identity or personalized data, and what we need to keep in mind is that intermediary caches will not cache authenticated traffic by default. So if all your content is fairly real-time and tied to a specific identity, that also changes the picture, and intermediary
caches may not be much use to you anyway. So we already have some nuance: "caching is broken" turns out to be three different types of caching, some of which apply to you and some of which may not. There's another thing to keep in mind too: the more customizable your API is, the less cacheable it generally becomes. In the case of a REST API, where we have a uniform interface, we could have /users/1, and we'll get a very good cache hit rate for that, because once we retrieve that record there's nothing else you can do to vary it; the cached copy, whether based on expiration or ETags, can generally be served to other users and other clients as well. However, the moment we start adding query parameters, maybe selecting a subset of fields, or ordering, we are creating a unique cache identity, and we reduce the probability that someone else makes the same request. Now imagine GraphQL, where each client can make any query it wants: that's a huge amount of power you're giving to your clients, but the cacheability goes down accordingly. The more customizable you make your API, the more you will lose, or at least complicate your life, on the caching side. It's all a trade-off, so you might be thinking: for the APIs I'm creating, caching isn't that important to me, so maybe I'll veer towards one option or another.
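The cache-identity point can be made concrete with a tiny cache-key function: every extra axis of customization multiplies the set of distinct keys, so the chance that two clients hit the same entry shrinks. The function and parameter names are hypothetical illustrations.

```python
from urllib.parse import urlencode

# Sketch: the more query-level customization an API allows, the more
# distinct cache keys it produces, and the lower the chance two clients
# share a cached response. Parameter names are hypothetical.

def cache_key(path, params=None):
    if not params:
        return path                  # canonical URI: one shared cache entry
    # sort params so /users?a=1&b=2 and /users?b=2&a=1 share one entry
    return path + "?" + urlencode(sorted(params.items()))
```

A plain `/users/1` yields one key for everyone; add field selection, sorting and filtering and the key space explodes, which is exactly the GraphQL situation.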
So the bottom line is that there are many different types of caching, and they apply to different scenarios. A highly customizable API, where clients get a lot of flexibility in the data they request, benefits less from intermediary caching. If network efficiency is valuable to you, a RESTful API could be a very good option, because you will be able to cache data more easily based on age and validators, whereas best practices are still emerging for GraphQL and gRPC in this space. As I mentioned before, those communities are still relatively immature compared to REST, so there is still a lot of bikeshedding in the area.
This is one of my favorites: REST APIs are inefficient. What do we do with this? How can you respond? I think what we really need to look at is what REST is for. REST was never about efficiency; that was never its purpose. We were looking to build systems that could evolve over decades, a client-server architecture where each side could evolve independently. We were looking for longevity and loose coupling, and in fact many of the architectural constraints of REST deliberately trade away short-term efficiency; it's a very deliberate choice in favour of long-term flexibility and evolution. But when we say REST APIs are inefficient, we shouldn't confuse that with them being non-performant, because there are absolutely things we can do about it, and in fact HTTP/2 has eliminated much of the pain that was associated with previous versions.
A REST API is usually served over HTTP, and different clients, whether a web client or a handheld device, can usefully display different amounts of data. If you only have one representation, you could be over-fetching, sending too much data to a client, making requests slower and perhaps exhausting the user's bandwidth or data allocation. On the other side, we could be under-fetching, which typically means retrieving a master record and then dereferencing each of its children independently, often known as the N+1 problem. But there are techniques we can apply to address both. For over-fetching we can use sparse fieldsets, letting the client specify individual fields, or maybe fields grouped together; while there is no single standard for this, there are a number of options, and it effectively lets clients say: don't give me everything, just give me a subset of the representation. To avoid a lot of round trips we can use compound documents, which are really a main aspect of some hypermedia types such as JSON:API or the Hypertext Application Language (HAL), which was also quite popular five or six years ago; these were really looking to mitigate a lot of the issues we had with HTTP/1.x APIs. And this was really one of the first things GraphQL brought to the community: whatever shape of data your client requires, you can retrieve it all in exactly one round trip to the server. This was considered a big help, because instead of all these individual requests, we can now aggregate them on the server.
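As an aside, a minimal sketch of the sparse fieldset idea mentioned above, in the spirit of JSON:API's fields query parameter. The representation and field names are made up for illustration:

```python
# Illustrative sparse fieldset filter: the client names the fields it
# wants, the server returns only those. Field names are hypothetical.

def apply_sparse_fields(representation, fields_param):
    """Return only the fields the client asked for; None means all."""
    if not fields_param:
        return dict(representation)
    wanted = [f.strip() for f in fields_param.split(",")]
    # Unknown field names are simply ignored rather than treated as errors.
    return {k: representation[k] for k in wanted if k in representation}

user = {"id": 1, "name": "Ada", "email": "ada@example.com", "bio": "..."}

print(apply_sparse_fields(user, "name,email"))
# {'name': 'Ada', 'email': 'ada@example.com'}
```

Remember the earlier point, though: every distinct fields value is a distinct cache key.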
Aggregating on the server and returning everything to the client in one response sounds brilliant, but what we've inadvertently done is tie the response to the slowest piece of data; we've coupled all those individual requests together. So if we have some data that can be retrieved very quickly, maybe just plain domain data elements, and then some more expensive computed fields, all of that data needs to be available on the server before the client sees anything. As people actually started building GraphQL APIs, and I did myself, and started putting them into production, this became apparent.
In fact, we saw that it was necessary to invest a lot of effort into fine-tuning queries, moving some of the computed elements out of larger queries and requesting them separately, so that we reduced the volatility of those requests. And again, where there's a will there's a way: the community thought, what if you could mark certain elements as deferred and have them streamed back to the client as patches, with the client then taking all those elements and finally putting them together? It adds a lot of complexity, and what we should really think about at this stage is that it shouldn't be this complicated: where is the simplicity that we really wanted?
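The deferred-fields idea described above, which in GraphQL is the proposed defer directive, can be sketched very roughly like this. This is not any real GraphQL runtime; the field names and the "expensive" computation are invented for illustration:

```python
# Illustrative sketch of deferred field delivery: cheap fields are sent
# immediately, expensive ones stream later as patches the client merges.

def execute(query_fields, deferred_fields, resolvers):
    """Yield an initial payload, then one patch per deferred field."""
    initial = {f: resolvers[f]() for f in query_fields if f not in deferred_fields}
    yield {"data": initial}
    for f in query_fields:
        if f in deferred_fields:
            yield {"patch": {f: resolvers[f]()}}

resolvers = {
    "name": lambda: "Ada",       # cheap domain data
    "riskScore": lambda: 0.97,   # stands in for an expensive computation
}

responses = list(execute(["name", "riskScore"], {"riskScore"}, resolvers))
# The first response arrives without waiting on the slow field; the
# client merges the later patch into its local copy.
merged = dict(responses[0]["data"])
for r in responses[1:]:
    merged.update(r["patch"])
print(merged)  # {'name': 'Ada', 'riskScore': 0.97}
```

Even this toy version shows where the complexity lands: the client now has to hold partial state and apply patches correctly.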
It's really worth thinking about that simple client experience. If we look at all these compound documents, they were actually a result of HTTP/1.x, where establishing a connection was very expensive: at the TCP level we had slow start and window scaling, and at the TLS level we had handshakes, so for each new connection we had to do the hello exchange again, and that takes a long time, not only to establish the connection but also to ramp up the bandwidth on that connection. Establishing a new connection for each request was very expensive, and you could also consider techniques like domain sharding, but in general we were building tricks or workarounds for limitations of the underlying protocol. Well, HTTP/2 resolves a lot of these, because we can multiplex streams over a single connection: you can have one long-lived connection and many requests going through it, avoiding all the hassle of having to create the TCP connection again and, even scarier, doing the TLS handshake again.
And that's not all: there is also another feature called server push, and what this allows the server to do is push resources or representations to the client without the client actually requesting them. So if the client asked for a user, the server could also proactively send a number of child records, and by allowing this you let the server take much more control over how it optimizes sending traffic. In fact, one of the big goals of this was for servers to optimize web page loading, but we found that preload or prefetch links were actually much more effective in the browser space. However, there are open-source libraries, for example Vulcain, which was released last year, that allow clients to send additional Preload link headers in the request; the server will then push those related resources to the client asynchronously, so they are available in the cache or on the HTTP connection when needed. It sounds wonderful, and this is something I hope becomes more popular over time, but there is still a fair amount of profiling we need to do on the server.
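To illustrate the shape of that Preload idea, here is a toy simulation. Real server push happens at the HTTP/2 layer, not in application code like this, and the resource paths and relation names are invented:

```python
# Illustrative simulation of a Vulcain-style Preload hint: the client
# names a relation it will need, and the server "pushes" the related
# representation into the client's cache alongside the main response.

RESOURCES = {
    "/users/1": {"name": "Ada", "manager": "/users/7"},
    "/users/7": {"name": "Grace", "manager": None},
}

def handle(path, preload=(), client_cache=None):
    client_cache = client_cache if client_cache is not None else {}
    body = RESOURCES[path]
    for rel in preload:
        target = body.get(rel)
        if target in RESOURCES:
            # Push the related representation before the client asks.
            client_cache[target] = RESOURCES[target]
    return body, client_cache

body, cache = handle("/users/1", preload=["manager"])
print("/users/7" in cache)  # True: the child record arrived with the parent
```

The appeal is that each pushed representation keeps its own identity, so granular cacheability is preserved, unlike a compound document.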
Also, in the links I have included a link to that library, Vulcain; it's an interesting example of how, as API developers, we can look at different ways of serving resources to clients to improve performance. With HTTP/1 the cost of a handshake was really high, and in fact there were multiple handshakes happening very frequently, so we created a lot of tricks or workarounds, typically compound documents. But when we created compound documents we ruined the cacheability of those requests, because they were only as cacheable as the most volatile piece of data inside them. Now, with HTTP/2, we can send all those individual requests without having to create new connections, so we can do it much more effectively, and because we can still do it in a granular way we still get very high cacheability, so it's actually a win-win, which is beautiful. And server push creates new possibilities again: we could potentially profile the use of our API, observe traffic patterns where clients generally have a set of interactions, and have that data ready for when they need it. In fact, gRPC uses HTTP/2 all the way, and we're going to talk a little bit about gRPC-Web, which you may have heard about in some of the Blazor talks at this conference as well, and HTTP/2 is what enables the streaming methods, where many messages originate from the server or the client, or in some cases both directions. Here's another great one: GraphQL eliminates the need for versioning, so with GraphQL we no longer need to version our APIs. It's a little strange, because we never had to version our APIs.
It was always a strategy, it was never a requirement, so why would we version our APIs? It was a strategy for handling breaking changes: you changed the contract on the server and wanted to isolate a client from that effect, so when you had to make a breaking change you created a new version, and the client could decide when it wanted to adopt that new version. It is absolutely a strategy, but there are other ways to handle evolving capabilities, and one of them is graceful evolution. This is really what has been promoted in the REST space for quite a few years: the web has evolved gracefully, and in the same way we can build our APIs to absorb continuous change. In my opinion the goal should always be graceful evolution where possible, because supporting multiple different versions of an API as a provider can get very expensive very quickly and complicates the picture, not only from a client perspective but also from a server perspective. You could make your life a lot easier if you choose graceful evolution as a strategy;
however, it requires discipline. Some of the implications are that we cannot add new required inputs to an operation, and we cannot remove outputs or make them optional, because again we could be breaking a client's dependency. We can't change the type of a field: if it was a string, we can't turn it into a number. We should also follow the robustness principle, or Postel's law: be liberal in what we accept but conservative in what we send, so that again we are well-behaved citizens. All of this requires discipline.
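The compatibility rules above can be turned into a simple mechanical check. This is only a sketch; the schema format, a dict of field name to a (type, required) pair, is invented for illustration and is far simpler than a real OpenAPI or protobuf diff:

```python
# Illustrative breaking-change checker for graceful evolution: a new
# required field, a removed field, or a changed type is breaking;
# adding an optional field is not.

def breaking_changes(old, new):
    problems = []
    for field, (ftype, required) in new.items():
        if field not in old and required:
            problems.append(f"new required field: {field}")
        elif field in old and old[field][0] != ftype:
            problems.append(f"type changed: {field}")
    for field in old:
        if field not in new:
            problems.append(f"field removed: {field}")
    return problems

v1 = {"id": ("string", True), "amount": ("number", True)}
v2 = {"id": ("string", True), "amount": ("string", True),  # type change
      "currency": ("string", True)}                        # new required field

print(breaking_changes(v1, v2))
# ['type changed: amount', 'new required field: currency']
```

A check like this belongs in the build pipeline, so the discipline the talk asks for doesn't rely on humans remembering the rules.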
But it's not just the server side that needs discipline; it can't be treated in isolation. Another massive point is that you cannot over-communicate with your users. If you are going to make a change, don't just rely on Sunset or Deprecation headers in your API responses and expect your clients to understand that changes are coming. Send a newsletter. Update them on the changes. Give them notice. A number of firms in particular, where we build partner APIs for system integrators but also for internal teams, found that having a regular cadence where we could discuss API changes reduced a lot of friction around timing them. And just as I talked about the server earlier, there are a lot of things clients can do to be well-behaved citizens, so if you're consuming an API and you only need three of those fields:
don't break if the server includes additional fields that don't interest you, and don't break if the server sends the fields in a slightly different order, which can happen if you take that response and deserialize it directly into a class. This tolerant reader pattern is described by Martin Fowler and gives a lot of guidance on how to consume APIs in a robust way; there are also a number of related patterns in the book Enterprise Integration Patterns that provide great insight into building resilient integrations between systems. But if you do make a change and want to deprecate a certain feature, how do we communicate that? I already mentioned the Deprecation and Sunset headers in the response. GraphQL actually comes with a built-in feature around this, called deprecations, where I can flag a certain field and say this will soon be phased out, and because GraphQL knows exactly what fields or what data each client requests, it can build a map of who is using this field and who I should inform if I'm going to change it. That sounds wonderful; in fact, this was something I was really excited about, because I was hoping to be able to have really specific conversations with a subset of my users and say: these fields are going to be removed, are you okay with that?
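Going back to the tolerant reader pattern mentioned above, a minimal sketch of what it looks like in practice. The payload and field names are hypothetical:

```python
# Illustrative tolerant reader: pick out only the fields this client
# needs, ignore everything else, and never depend on field order.

def read_order(payload):
    """Extract just what we use; unknown fields are silently ignored."""
    return {
        "id": payload.get("id"),
        "status": payload.get("status", "unknown"),  # default if absent
        "total": payload.get("total"),
    }

# The server later adds fields and reorders the payload; we don't break.
v1 = {"id": 42, "total": 9.99, "status": "paid"}
v2 = {"status": "paid", "discount": 1.0, "total": 9.99, "id": 42, "channel": "web"}

print(read_order(v1) == read_order(v2))  # True
```

Contrast this with deserializing straight into a strict class, where a new or reordered field can turn a compatible server change into a client outage.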
We could work through that over time, but unfortunately, once your API usage reaches a certain scale or a certain number of users, you observe Hyrum's law: everyone depends on something, so there are always usages that require at least one of the fields. There are also implicit dependencies, behaviours of the system rather than its contract, and that's not always something you can document in your API. So you have to be very careful when you think about evolving your API: you're not just thinking about contractual changes, there could also be behavioural changes that clients depend on as well, so it becomes a lot more complicated than just looking for a deprecation tag. In fact, in many cases, if you're looking to perform a migration to a new service or a new version, sometimes you have to be bug-for-bug compatible, because you can't control those clients: even if the old behaviour wasn't exactly what the documentation said, that is the expectation the client now has. If we are refactoring existing systems, Working Effectively with Legacy Code by Michael Feathers provides a lot of guidance on how to deal with these integrations and on splitting out separate components, but the most important thing we can do is invest in our interfaces.
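The per-field usage map described above, which GraphQL's query visibility makes possible, can be sketched like this. Client names and fields are made up, and note the limitation the talk points out: this only captures contractual dependencies, not the behavioural ones Hyrum's law warns about:

```python
# Illustrative deprecation usage map: record which client requested
# which fields, then ask who would be affected by removing one.

from collections import defaultdict

usage = defaultdict(set)  # field -> set of client ids

def record_query(client_id, requested_fields):
    for field in requested_fields:
        usage[field].add(client_id)

def affected_by_deprecating(field):
    return sorted(usage.get(field, set()))

record_query("mobile-app", ["id", "name", "legacyRef"])
record_query("partner-x", ["id", "name"])
record_query("web-spa", ["id", "legacyRef"])

print(affected_by_deprecating("legacyRef"))  # ['mobile-app', 'web-spa']
```

With an endpoint-based REST API you would need access logs and representation analysis to get even this much; with GraphQL the query itself tells you.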
It's about defining interfaces: making systems evolvable starts with having the right interfaces, and if we don't do a good amount of up-front design we end up passing a lot of those change requests on to clients, making breaking changes for them. The biggest thing I could advise here is to use a contract-first design approach. What would that look like? In GraphQL, that's your schema; in gRPC, it's a protocol buffer definition; and in a REST API you have a lot of options, so you have Swagger, now known as OpenAPI, as well as RAML, Blueprint and many others. Start with a common contract that can then be developed against in parallel, maybe by different teams: one building a client, another building the API implementation. Because you have a contract, it's very easy to mock that contract as well, because you have that shared interface; then, when both sides are ready, they can be integrated, and the cycle can continue. We found again that starting with the contract made it very easy to evolve.
We'd have shared design sessions between consumers and producers instead of having to write code first and only then get feedback, so we were able to create an API where it's really easy to change and update behaviour quickly, and it set us up well for the long term. So: versioning is a technique for managing breaking changes. It should be a last resort, not your first choice; only use it when absolutely necessary, and prefer graceful evolution. It is not at all a substitute for communication with your users.
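The contract-first workflow above can be sketched as follows. In practice the shared contract would be an OpenAPI document, a GraphQL schema, or a .proto file; here it is just a plain dict, and the operation name and shapes are invented:

```python
# Illustrative contract-first mock: a shared contract drives a stub the
# client team can build against before the real implementation exists.

CONTRACT = {
    "getUser": {"input": ["userId"], "output": {"id": int, "name": str}},
}

def make_mock(contract):
    """Build a stub that returns type-correct placeholder responses."""
    def call(operation, **kwargs):
        op = contract[operation]
        missing = [p for p in op["input"] if p not in kwargs]
        if missing:
            raise TypeError(f"missing inputs: {missing}")
        # Zero values per declared type stand in for real data.
        return {field: ftype() for field, ftype in op["output"].items()}
    return call

mock = make_mock(CONTRACT)
print(mock("getUser", userId=1))  # {'id': 0, 'name': ''}
```

The point is that both teams integrate against the same artifact, so the feedback loop happens at design time rather than after code is written.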
The most important thing to remember is that versioning also doesn't protect users from relying on your API's unlisted behaviour, so sometimes it's not just about interface changes. Another myth we ran into is that domain modelling is purely a REST concern: if I'm using any other style, I don't really have to worry about it. That's not true; if you believe that, you will have design problems from day one, so always start with your users in mind. Both gRPC and REST start with an outside-in approach: the server defines the shape of the operations or queries and then starts building them out. GraphQL allows you to take a slightly different approach to this. If, when the API is being built, you aren't too sure about the interaction patterns of your clients, you can actually gain a small advantage with GraphQL, because GraphQL delays the moment at which you are responsible for identifying your clients' query needs and profiling those queries. With GraphQL you don't define individual endpoints or operations to call; instead you define a data graph, and each query is then mapped to a subsection, or tree, of that graph, so we are not optimizing for an individual use case. The price you pay for that approach is a higher cost or complexity of profiling those queries later, but again, it's a good tactic for dealing with uncertainty, or if you have multiple consumers, each with very different needs. So: API design is absolutely critical regardless of style. Both REST and gRPC follow an outside-in approach where you design the interface operations up front; GraphQL is different, and that difference can allow you to deal with uncertainty in a slightly different way and really lets you see how users are actually querying the graph at run time, but there is also a cost at risk. So, if those are all the myths, let's take a quick look, in the eight or so minutes we have left, at some sample scenarios, and given what we've seen about the architectural decisions and some of the general statements we've made, let's see if we can come up with appropriate technologies; just think in terms of some of your own use cases when you get back to your offices next week. Okay, first up, we might have the simplest thing we could build: a web form application, so it could be a request to buy a ticket to an event.
I don't personally see why you would use anything other than REST for this. I see a lot of teams building GraphQL APIs for very simple things like this, but here all we're doing is taking a certain amount of data in a single operation and sending a result back to the client; unless you have wildly varying data query requirements, I think there are simpler options. In fact, I think we'll see that this is the year where a lot of discontent starts to grow with GraphQL APIs where we've built them for inappropriate use cases, where teams are getting burned trying to cache them or operate them, or by the overhead of their inherent complexity. So really ask: what is the simplest API approach you could have here? Another case that comes up quite often, particularly with customers who have built highly granular services, sometimes called microservices, is the impedance mismatch between a very granular set of APIs and what a client, say a SPA, needs. This is where teams typically introduce an aggregating API, or maybe a backend-for-frontend, where we can aggregate a lot of the calls and then send the results back to the client, and in fact this is a place where GraphQL absolutely shines, because it allows you to connect data from many different data sources: it could be a relational database, it could be a graph database, it could even be a file.
GraphQL is absolutely independent of the data source, and that allows each of the clients to query the data it needs, optimized over the wire. In many cases we consider placing GraphQL as a layer on top of an existing set of services, to optimize delivery to different digital channels, a very sweet spot for GraphQL, and GraphQL can usually be introduced later, after teams have been struggling with an endpoint-based approach where maybe those endpoints weren't profiled or defined appropriately, so clients are doing a lot of over- or under-fetching. That's actually a pattern we've followed on quite a few of our consulting engagements as well, and it also gives you an abstraction point where you can start refactoring some of the underlying implementations without affecting the client, so it decouples the change.
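The aggregation layer described above can be sketched roughly like this. This is not a real GraphQL runtime; the backing "services", query shape and fields are all invented for illustration:

```python
# Illustrative GraphQL-style aggregation: one query fans out to several
# backing services, and the client gets exactly the subtree it asked
# for in a single round trip.

def user_service(user_id):   # stands in for a REST microservice
    return {"id": user_id, "name": "Ada"}

def order_service(user_id):  # stands in for a relational database
    return [{"orderId": 1, "total": 9.99}]

def resolve(query):
    """query: {'user': {'id': ..., 'fields': [...], 'orders': [...] or None}}"""
    spec = query["user"]
    user = user_service(spec["id"])
    result = {f: user[f] for f in spec["fields"]}
    if spec.get("orders"):
        result["orders"] = [
            {f: o[f] for f in spec["orders"]} for o in order_service(spec["id"])
        ]
    return result

print(resolve({"user": {"id": 1, "fields": ["name"], "orders": ["total"]}}))
# {'name': 'Ada', 'orders': [{'total': 9.99}]}
```

Each client shapes its own response, which is exactly the impedance-mismatch problem the BFF pattern exists to solve.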
If that's web, what about mobile? Here we might also want a BFF, a backend-for-frontend, and in this architecture the API is actually owned by the client team. This is something I've been playing with quite a bit, and I actually think gRPC is a great fit for this, particularly in the native space. I'll explain why: what it allows you to do is get one protocol end-to-end in your stack, so you have one place to share all your .proto files and can drive code generation from them, and the beauty with a native mobile app is this:
you can use HTTP/2 end-to-end. There is, however, a problem with gRPC in the browser space: the browser doesn't expose an API at a sufficient level of granularity to guarantee HTTP/2 is used, so we can't use pure gRPC from a browser, whether in a web component or a Blazor application. There are a couple of techniques we can apply, which I'll come to, but particularly in the native space gRPC is really well suited, and it's also really well suited to microservices, at least for synchronous coupling, so more orchestration-style interactions rather than event-driven microservices. I see a lot of value in gRPC and its code generation, especially if you haven't standardized your runtimes: if some of your APIs are written in Java, some in C#, some in C++, or any other language, having a single .proto file from which you can generate the client and the server side and keep them in sync is a very powerful pattern. But what can we do with gRPC in a web application?
There is actually a complementary protocol for gRPC called gRPC-Web, and it's quite different, because the purpose of gRPC-Web is to let you carry gRPC-compatible traffic over HTTP/1.x. That's really the goal: in environments where HTTP/2 cannot be guaranteed, gRPC-Web can be used. What does this look like? Normally you would have a browser, or in fact it could be on the service side, like an App Service in Azure, because that doesn't support HTTP/2 for gRPC either, so you can also think of an outbound call from an App Service: you start with gRPC traffic and translate it to gRPC-Web traffic over HTTP/1, traditionally by introducing a separate gRPC proxy. But two days ago James Newton-King published a pretty interesting post saying that on the .NET side they have created an experimental package that allows you to issue gRPC-Web requests directly and have them handled by a server without an intermediary proxy. That means you could have a SPA, or a Blazor web component application, making a call to an Azure App Service over gRPC-Web traffic, and what this allows us to do is get the workflow driven by the common protocol; then, when the work to complete full gRPC support for App Services becomes available, it could be transparent to switch over. It allows you to adopt some of these techniques today.
This is still an experiment, but I'm pretty excited about it, and about the separate piece of infrastructure it removes. With gRPC-Web, though, you do lose a couple of interaction types: there's no bidirectional streaming and no client streaming. You can still do unary calls and you can still do server streaming, but you don't get all the options. So, what have we covered in today's talk? We started by looking at API technologies over time and actually saw that there are a lot of similarities between them, where many of them are either remote data access or some form of RPC, so a lot of the discussions happening in the industry are actually about what's different, not what's the same.
From there, we dug into a number of the conversations happening in the industry and looked at a little more data, so that if you're looking to pick one of these technologies you can get past the black-and-white statements and have better conversations about it. Finally, we took a very brief look at some of the main use cases where, in my experience, these technologies do or don't fit. I really see a successful talk as a sales pitch for your future work, so if I've encouraged you to research one of these technologies, learn a little more, or have an effective conversation with your teammates, that would be an absolutely wonderful outcome for me. Thank you very much for your time.
I realize there was a lot of competition from the other rooms, so I really appreciate your attendance. Thank you.
