Stefan Tilkov: Welcome, listeners, to a new episode of the CaSE Podcast, another conversation about software engineering. Today I'm very happy to have Gustaf Nilsson Kotte as my guest. Hi, Gustaf.
Gustaf N. Kotte: Hi, Stefan. Nice to be here.
Stefan Tilkov: Gustaf is an architect at IKEA, but I guess I'll leave the introduction to himself. Gustaf, why don't you tell our listeners a bit about who you are and what you do?
Gustaf N. Kotte: Sure, thanks. My name is Gustaf Nilsson Kotte, I live in Southern Sweden, in Ystad, near Malmö. I have studied at Chalmers in Gothenburg, master of science computer engineering, and in 2008 I started working with .NET on a lot of web, and it's essentially been my whole career, going back and forth between back-end, front-end, doing a lot of web-related projects. I work as a consultant at Jayway, and I've been working with IKEA on the M2 initiatives since the beginning of 2016.
Stefan Tilkov: The topic of today's conversation is micro frontends, or frontends for microservices... Obviously, we will have to start by talking about all the bad things that we want to get away from once we do microservices, so let's start there and talk a little bit about monoliths and the typical things that people want to address.
Gustaf N. Kotte: I think there's a common pattern, especially in the enterprise world, where you have this awareness that your whole enterprise can't be one service. Of course, that's why SOA exists, and even way before SOA existed, we split up the enterprise architecture into different services. But the frontend has never really been thought about as a service, so that has still been a monolith. Now we have the microservices idea, which in my mind is really a retake on SOA that keeps the good parts.
Gustaf N. Kotte: People from various places in the world have started to see that we have the same problems with a frontend monolith that we have with a backend monolith: not being able to release autonomously, growing code size, lots of bugs, lots of risk, more planning - and you get into this evil feedback cycle where things just take longer and longer.
Gustaf N. Kotte: With this micro frontend architecture, we try to break that up, so that teams are allowed to deploy autonomously and don't have to wait for each other, basically. So it's basically continuous delivery for web frontends.
Stefan Tilkov: If I understand you correctly, what you're suggesting is that each of the teams would be responsible for their own frontend in this organizational model, right?
Gustaf N. Kotte: Yes, exactly. The high-level, or default, way of splitting is that you have these verticals - self-contained systems, which I know you are quite familiar with, Stefan...
Gustaf N. Kotte: But at a large enough enterprise company, you tend to get also splits in the horizontals. There's basically a network of services, and it goes quite deep. So it really depends on the size and the context of the organization. But to not have this big piece of frontend in front of everything, and to have some form of autonomous services or teams in the frontend, that's the goal.
Gustaf N. Kotte: Then, how many services to have... You can think of the backend-for-frontend concept, where you have some kind of backend service and then a frontend that presents those things. But of course, the idea is that the backend and frontend belong to the same team and service, up to a certain point where it doesn't make sense to have that anymore. It's more of an art - there's no single truth, no "this is the way it has to be," in my mind. Again, the key part is the autonomous deployment of frontend services.
Stefan Tilkov: What's the breaking point, what's the threshold, in your view...? How many people do you need to have collaborating on a system for it to make sense to split it this way?
Gustaf N. Kotte: I'm quite inspired by the Amazon team size of two-pizza teams. If you're not familiar with it, that's two quite large American family-style pizzas - so 10-12 people, something like that. It's the people aspect: the number of relationships between people grows roughly with the square of the team size, so at some point the team gets less and less effective as it grows. I think that 10-12 people is okay, 15 is a bit of a stretch, and at 20 it really starts to hurt. That's how I think of it.
Gustaf N. Kotte: If you then think about the team having this constraint in size, which I think is a good starting point, and take into account Conway's Law - which basically says that teams and services are isomorphic in some way - I think teams should be allowed to have more than one service... But there's essentially a one-to-one relationship between teams and services; a team can maybe have a few small services as well. That puts pressure on the architecture: if you want small, autonomous teams, the architecture has to follow. You can't have a monolith with these two-pizza teams... Or you could have one, and then you would apply the inverse Conway maneuver of breaking the monolith up into smaller services.
Stefan Tilkov: Let me try to get back to the different options that you have here. Let's say you have 24 people collaborating on a larger system. That's not atypical at all - if anything, it's even somewhat small for a large project, right? It might seem huge to some of our listeners who have only worked on teams of five people, but for many enterprise developers in large-scale projects this is not at all atypical.
Stefan Tilkov: So if you have 24 people, you could split them into a frontend team with six people, and then split the remaining 18 into two different backend service teams. Then you end up with three teams: the frontend team would be responsible for maintaining a consistent, well-defined, nice frontend, and the other two teams would be responsible for their services. What you seem to be suggesting is to instead split them into two teams of 12 and have each of them responsible for the frontend parts as well as the backend parts. If that's the case, can you elaborate a bit on why you think that's a better model?
Gustaf N. Kotte: Yes, sure. I tend to have this three-tier architecture in my mind always when I think about this problem. Let's go back to your model - so the problem for the backend teams will be that if they have a story that they implement and are ready to deploy, the value will most likely not be delivered for the end user, unless there's a change in the frontend as well... And since we have two service teams with one frontend, you will most likely have an increasing queue of frontend stories to expose the thing that the backend teams are working on in order to deliver customer value in the end. And this is the same for however many layers you have.
Gustaf N. Kotte: This has a pretty bad effect: from a high-level perspective you see that you have a capacity problem in the frontend team, so you add more people - and adding more people means adding more code, so you're just growing the monolith. Then you need more planning, more middle managers, more control, more testing... So you have, again, this big bad feedback cycle.
Gustaf N. Kotte: If this is a scenario that you see happening, or that you can kind of simulate - "Hm, if we do it like this, what would happen?" - I think it would be better to split the frontend into two different services, where the backend teams have frontend competence and are able to deploy the frontend themselves.
Gustaf N. Kotte: Of course, this brings a whole other set of problems around collaboration and reuse of components, which is kind of the starting point of micro frontends. Splitting services is really not hard; it's recombining them that is the actual problem, and there are a few options there that turn your frontend into something you don't want, in different ways.
Stefan Tilkov: Let me maybe tie this back to Conway's Law... Now, a really hard problem for this particular episode is that I have so many opinions on what you're saying, and I mostly agree with what you're saying, so I'm trying to turn it a little down and just ask the kinds of questions that I think people will ask... So let me see - if you phrase it this way, it seems to make sense, or I guess it will probably make sense to most people that it is useful for a team to be able to deliver all of its story, not just parts of a story and then have to sync with another team, have a meeting and schedule something. That's one part.
Gustaf N. Kotte: That will be the typical way to approach this. I'm quite active on Twitter, and I'm searching for micro frontends and different keywords, and I tend to see that -- I think I need to back up here a bit, because... For me, microservices is really about a diverse and heterogeneous architecture; it allows you to have different technologies as long as they follow the same kind of interface.
Gustaf N. Kotte: The typical thing now seems to be: "Oh, we can solve this micro frontend thing with a single library or framework, be that React or Angular - we'll have a frame, and then we combine the components from the different teams in some kind of build system." That's what I saw a couple of years ago; now things tend to be more dynamic - dynamic loading - which is good, because you don't get the release trains and synchronization between teams. But there's still this core problem, which becomes more obvious with more and more teams... With a small set of teams - I think this 24-people example is really a good one - is it worth it to support many different frameworks or libraries, or could you just go with one?
Gustaf N. Kotte: I tend to think we undervalue this - we think it's easier than it is to replace the frontend framework. I've seen this a couple of times, where you have this large rewrite project of the frontend... Of course, you have to rewrite or do something with your enterprise software sometimes, but a two-, three-, or four-year cycle of rewrites is really not good for business, and it would be better to have support for a diverse technology stack in the frontend from the start.
Gustaf N. Kotte: I think one of the core problems is that the frontend landscape is still changing a lot. We see that React is quite stable and many teams like it, but Vue.js is coming up faster and faster, with more and more growth... For long-term use - and here is where I think the startup world maybe separates from the enterprise world - perhaps it's worth it for startups (for some reason) to go with a monolith or a single library. But if you're, for example, a bank, or an e-commerce company, or what have you, and you hope to be in business for the next ten years, and you don't want to rewrite your homepage two or three times during that period, I think you want support for a diverse set of technologies.
Gustaf N. Kotte: That brings us back to - if you can't really rely on having a single framework, for example React, as a component library, the alternative of writing everything in vanilla JS is just writing your own framework or library, which is a trap. So instead, I think some form of transclusion mechanism is, as far as I know, the only mechanism I have found that supports what we want: loosely-coupled micro frontends. I haven't found anything else in the world that does.
Stefan Tilkov: We'll obviously get into that in a lot more detail, but just to make it clear... So we're not talking about micro frontends as components within a single application that are somehow modular; we're talking about different ways... We're talking about ways to achieve the kind of modularity and independence and autonomy that people expect from backend microservices, in the frontend part as well. Is that a fair way to phrase it?
Gustaf N. Kotte: Yes, definitely.
Stefan Tilkov: Okay. You mentioned the term transclusion. Can you briefly explain what that is?
Gustaf N. Kotte: Yes, sure. It's a bit of a confusing term, because I think it's basically inclusion, but on the web. There's a Wikipedia article on transclusion... The best example I know, if you haven't heard of transclusion before, is that you use it every time you browse the web. For example, with images - you have an image tag in an HTML document, and that image tag has a source attribute which points to a URL; the browser will fetch the resource at that URL and replace the image tag with the image during rendering. So it transcludes the image document into the HTML document, making the image part of the HTML document, basically.
Gustaf N. Kotte: That would be client-side transclusion. I think iframes are also a good example of that. What is not very well known is that you can also transclude on the server side. One of the technologies that can transclude on the server side is Edge Side Includes, but also Server Side Includes, the old way of building the web.
Stefan Tilkov: Okay. So what is Edge Side Includes?
Gustaf N. Kotte: The Edge part in Edge Side Includes is for me the last layer where you have some form of control in your application or infrastructure. Typically, on the internet you have a lot of proxies and stuff between the client and server... For example, if you use a CDN, that could be the Edge for you. Or you maybe have some caching layer as the last layer of your architecture.
Stefan Tilkov: So what's a CDN?
Gustaf N. Kotte: A content delivery network. For example, Akamai, Fastly... Those actually do have ESI support. There are a lot of CDNs - I can't remember them all; you can google it. It's typically something people have used to cache resources - images, scripts, CSS... But you can also use it for site acceleration: instead of having to go to the origin server directly - maybe you're in Germany and I'm in Sweden - if the content is cacheable, your request can go to a server which has the response cached, and you can get much faster response times from the CDN. The CDN can also do more interesting things, for example Edge Side Includes, if that's supported in the CDN.
Gustaf N. Kotte: The Edge Side Includes tag is quite old. I can't really remember exactly, but I think it's 15 years old, or more than that... I think it's from 1999. Edge Side Includes originated within Akamai, and there's a name that constantly pops up there - Mark Nottingham; I think he lives in Australia. He was the editor of the Edge Side Includes standard proposal, and there were more companies involved - I think Oracle was one of them - trying to have a way to include documents on the server side, to do transclusion.
Gustaf N. Kotte: ESI is basically like the image tag, but for HTML and on the server side. So you have a source attribute, and the URL indicates that someone needs to fetch that URL from the origin and include the result in the resulting document. It's as simple as that.
Stefan Tilkov: What's a good use case for that? Do you have use cases for that in your system?
Gustaf N. Kotte: Yes. I would say it's a fundamental technology at IKEA for the micro frontends concept; it's a really core part of it. What we are doing is that we have this notion of pages and fragments - that's basically the ESI lingo, I guess. A team can have responsibility for a set of pages and/or fragments. Fragments are HTML documents that are not complete - they don't have a body tag, for example; they're basically just a div, or something like that.
Gustaf N. Kotte: Pages can then have ESI references to fragments, and those references can cross team boundaries. For example, a product page can refer to the header and footer fragments, which makes the menu, header, and footer reusable, so that the product team doesn't have to reinvent the wheel.
Gustaf N. Kotte: The same goes in the other direction: maybe the product team has some fragments for product thumbnails - I'm in the e-commerce domain now, basically - which makes it possible for other teams not to reinvent the wheel... They can refer to the product thumbnails from the product team, for example. So there's reuse across team boundaries, and that's what I said before - it's easy to split something up, but it's hard to glue the parts together again, and ESI is the core technology that we have found to be really useful for our services.
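Server-side transclusion of the kind described here can be sketched as a simple string substitution; real ESI processors (in CDNs, or in proxies such as Varnish) do much more, and the fragment URLs and contents below are made-up examples:

```javascript
// Minimal sketch of server-side ESI-style transclusion: replace each
// <esi:include src="..."/> with the fragment fetched from that URL.
function processIncludes(page, fetchFragment) {
  return page.replace(
    /<esi:include\s+src="([^"]+)"\s*\/>/g,
    (match, src) => fetchFragment(src)
  );
}

// Example: a product page referring to fragments owned by other teams.
const fragments = {
  '/fragments/header': '<div class="header">Site header</div>',
  '/fragments/footer': '<div class="footer">Site footer</div>',
};

const page = [
  '<esi:include src="/fragments/header"/>',
  '<main>Product details here</main>',
  '<esi:include src="/fragments/footer"/>',
].join('\n');

const result = processIncludes(page, (src) => fragments[src] ?? '');
```

The assembled document contains the fragment markup in place of the include tags, so the page owner never needs to know how the header team renders its HTML.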
Stefan Tilkov: One of the things that I find interesting about ESI as opposed to SSI (Server-Side Includes) - which many listeners will know, I guess - is that ESI has its origins in the caching world. Do you actually exploit that? Do you use the fact that ESI supports caching?
Gustaf N. Kotte: I'm not 100% clear on what you mean, but we do cache the resulting ESI-processed document for 15 minutes, and we also cache the fragment requests - maybe that's what you're referring to... Maybe only a fraction of the page has actually changed, so we can reuse the cache in the edge layer.
Stefan Tilkov: Actually, I was wondering - the way you explained things just now, separating a page into its parts, its fragments, it seems that some parts of the page are a lot more static than other parts of the page...
Gustaf N. Kotte: Yes.
Stefan Tilkov: What's the relation of static versus dynamic aspects of different pages?
Gustaf N. Kotte: That's a good question. I would say that there are for sure different caching profiles, and all web pages have this property - some parts move much more slowly than others. For example, the header/footer is updated much less often than, say, price information... So those things are typically nice to cache. If one thing has changed, it doesn't make sense to go to the origin for the rest of the components on the page. So instead of having the page as one monolith - where, if anything on the page has changed, you have to invalidate the whole page and fetch everything again - with caching and Edge Side Includes that happens automatically: it only fetches the things that have actually fallen out of the cache for various reasons. That's a really, really nice property.
Gustaf N. Kotte: That being said, there's a distinction: if you cache the result of ESI processing, you can't really have dynamic or personalized responses. I think of caching as a way of reusing requests that different people have made... So if you have requested a page and it's cached by the caching mechanism in the CDN, then I can also reuse it if we are closely located. But if it's your user profile page, you really don't want that to be cached, right? That's also a caching-profile thing - some things should really not be cached, for obvious reasons. I think that's a good variable to look at for the different caching profiles.
Gustaf N. Kotte: My conclusion from the last few years is this: things that you can cache can typically be cached quite heavily, as long as you can invalidate them with some kind of cache purge mechanism that pushes a cache invalidation out to the edge. Then you have personalized use cases - for example, a user profile - that maybe could even work out well as a single-page application, because you have a lot of state and you don't want to keep that state on the server side, because that's kind of clunky.
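The caching profiles described here can be sketched as a small mapping from fragment type to a Cache-Control header. The profile names and the exact times are illustrative assumptions, not IKEA's actual configuration:

```javascript
// Hypothetical caching profiles for different kinds of resources.
function cacheHeader(profile) {
  switch (profile) {
    case 'static':         // header/footer fragments: cached heavily, purged on change
      return 'public, max-age=172800';  // 48 hours
    case 'assembled-page': // result of ESI processing
      return 'public, max-age=900';     // 15 minutes
    case 'personalized':   // user profile etc.: must never be shared or stored
      return 'private, no-store';
    default:
      return 'no-cache';
  }
}
```

The key distinction is `public` (reusable across users at the edge) versus `private, no-store` (never cached), mirroring the "caching profile" idea above.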
I think that's maybe a bit of a difference between you and me, Stefan - maybe I am a bit more friendly to single-page applications, to be honest, as long as you have a foundation that has kind of a server-side lingo or concept base.
Stefan Tilkov: Okay. So for example your product pages - I seem to recall that products are something that you can actually pre-generate. Do I remember that correctly?
Gustaf N. Kotte: Yes, exactly. That's another nice relation with being able to cache something quite heavily. Typically, we're caching the results of an ESI processing for 15 minutes, but the underlying documents are cached for 48 hours right now, which is quite long. But then we have the ability to invalidate the cache when we change things.
Gustaf N. Kotte: Historically -- we didn't start here, but you can start anywhere... We started with generating static pages. What we started with was basically a static site generator, which is nice for cache invalidation because when you know that something has happened, you upload a file and then you push the cache. So things that don't change are not invalidated, and they just live on and have a long cache time. And if you know that the document has changed, you upload it and you push the cache.
Gustaf N. Kotte: The browser also caches for 15 minutes, so in total you basically have a 15-minute window for things that have changed but not been updated yet. At the IKEA M2 project - there's a lot of products at IKEA, of course, and different markets - right now I think we are uploading between 3 and 4 million files that we have in scope... Which is a lot, but we don't update them regularly; it depends on whether they change, whether we change the template, or what have you.
Gustaf N. Kotte: I really like the static site generator architecture for things that support caching... And then I really like the single-page application architecture for things that are personalized and have more client-side state. To be able to have these two very different kinds of web architectures - and also, of course, to be able to do some rendering in, say, a legacy Java Server Pages application... Having the ability to combine these three and more really diverse ways of doing web is what shows that this micro frontend thing is really valuable. You're able to lift services from legacy applications to something new and still have the same overall web architecture - or rather, the same way of integrating between the different web architectures, you might say.
Stefan Tilkov: Right. So I don't think we actually disagree. Just for the record, I don't like single-page applications if they're really single. If there's just one single application for a large system, then I typically think that's a problem. If it's multiple, then it's perfectly fine. Anyway, I think that none of the things that we're talking about - not microservices, not micro frontends, not ESI, not any of the other stuff we're going to talk about - is the right choice in every situation. It all depends. Every time somebody says "This is the only thing you can ever use" or implies it, then that's the point where I get sort of nervous... Because like you, I think there's value in different aspects.
Gustaf N. Kotte: Yes.
Gustaf N. Kotte: Yes... Basically: as a fragment producer, what can I assume about the surrounding environment where I'm going to be included?
Stefan Tilkov: Exactly.
Gustaf N. Kotte: That comes down to what we think is a good common base for pages and fragments. As a fragment producer, the base has to be the same for everyone, because you can't really control who's using your fragments. If the page including you uses React, you could of course cheat a little bit and use React in your fragments. But then if another team wants your fragments and they don't use React - maybe they don't use anything at all, or they have an Angular app - would you force them to use React as well? This kind of becomes a virus, where suddenly everyone needs to have everything. That's an extreme argument, of course... But especially on the front page, the landing page of a top domain, many teams tend to want to be present - and that page is the most performance-sensitive part of the website, so there you really don't want to have lots of libraries.
Gustaf N. Kotte: Instead, we need to take the opposite approach and say that we can have a really small and slim base - maybe a small set of CSS and some typography... Basically, things that never change, or change very seldom, and that are also versionless. Maybe some polyfills would be nice here as well. These have to be agreed upon, but it shouldn't really be hard to reach agreement. That's one approach.
Gustaf N. Kotte: The other approach - which might scale for some scenarios and not for others; I guess it really depends - is having a style guide or a style library. That could be a solution in some contexts, but for us, right now, we don't think it's a good idea. Maybe in the future.
Stefan Tilkov: I think you'll have to explain cache busting.
Gustaf N. Kotte: Yes, thanks. So let's say you have a CSS file, main.css, and you want it to be cached. You might do a thought experiment and say, "Well, I want this to be cached for one year" - and then a few days after releasing it you realize, "Hm, maybe a one-year browser cache was not a very good idea, because I want to make an update now, and now I have to change the file name..." All forms of caching have a cache key, which is often the file name or URL. Instead of this ad-hoc process of adding a suffix - a version number - to a file that you want to cache, you can use a date-time or, in our case, some form of hash... It doesn't really matter which hash, but it's a content hash, so that part of the file name reflects the content of the file. When we change the file, we get a new file name. Then, of course, all the references to that file have to change as well, which is a good reason to keep the number of references to that file very small.
Stefan Tilkov: So the ESI include that you make would be the same, but it would be replaced by the actual reference to the right file.
Gustaf N. Kotte: Yes - the ESI reference is versionless, but the fragment contains a reference to a file that is versioned, and that is controlled by the team producing that fragment or CSS file.
Stefan Tilkov: Okay. So instead of updating the version in place, you replace it with a link to the new file, so things become immutable and thus cacheable forever.
Gustaf N. Kotte: Yes. In our case, it will be like e-mailing all the teams that would have this resource... And you wouldn't even have the e-mail. You can't really know who's referring to that file, unless you do some kind of analysis on what's actually on the web. So it's better to have a decoupled strategy of using fragments as the way to export and import related resources for fragment types.
Gustaf N. Kotte: It's a bit complicated, but in the end it becomes really simple, because you don't really have to think about the dependencies of a fragment; you just include one ESI at the top for styles and one ESI at the bottom for scripts, and then you're done. So it actually becomes very simple - but underneath, of course, there's a reasoning for why this is needed.
Gustaf N. Kotte: We don't really have that many rules for fragments, from the top of my head... This is a general approach that I like as well - it's better not to be super smart and think intensively about everything that can go wrong up front. It's better, when things do go wrong, to fix the errors and try to learn from that - basically favoring a low mean time to recovery over a high mean time between failures.
Stefan Tilkov: But now I'm curious, what's a performance budget?
Gustaf N. Kotte: In the end, what you want is, of course, as good performance as possible for all your end users... But they have different networks and different CPUs, so it's hard to measure, and it basically becomes a long tail of performance profiles. So a performance budget is a proxy measurement - something that's easy to measure, basically.
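A performance budget check can be as simple as summing per-fragment sizes and comparing against a page-level limit; the numbers below are made up for illustration:

```javascript
// Toy performance-budget check: each fragment reports its transferred
// size, and the page as a whole must stay under the budget.
function withinBudget(fragmentSizesKb, budgetKb) {
  const total = fragmentSizesKb.reduce((sum, kb) => sum + kb, 0);
  return { total, ok: total <= budgetKb };
}

// Hypothetical page: header, footer, and a product teaser fragment,
// against a 150 KB budget.
const check = withinBudget([30, 12, 55], 150);
```

A check like this can run in CI for every team's fragments, so no single team can blow the budget of the pages that include them.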
Stefan Tilkov: Interesting. It reminds me - I think it's in Don Reinertsen's Product Development book where he mentions that at an airplane company every team had a budget in terms of weight, and they were measured by whether they were able to reduce the weight... Whatever they contributed to the overall plane was supposed to be as little as possible, because that influences everything... So kind of the same thing, because the weight of every fragment will influence the weight of the whole page, which will influence the happiness of the actual end user sitting in front of that thing.
Gustaf N. Kotte: Yes, that book is really good, and that example is great. It's easy for developers or teams to make local optimizations which make sense for them, but which can really hurt the system as a whole, or the page performance as a whole... So that's why there need to be some rules in place. We have no budget for fragments right now, but they should be fairly small... And then we learn from that.
Stefan Tilkov: So you would advise against implementing a little shopping cart icon with Angular, or something like that?
Gustaf N. Kotte: Yes, because that shopping cart icon would be in the header/footer, which is included by all pages and teams. That would mean all teams have a dependency on Angular, which would mean that no team could use anything other than Angular - and that would break the property of microservices where you have support for diverse and heterogeneous architectures, basically.
Stefan Tilkov: Okay, very good. So we've talked about ways to do transclusion on the server side using ESI, and many of the same things could be said for SSI as well, or for other concepts that do -- maybe homegrown things that do the same thing on the server side... What about the client side?
Gustaf N. Kotte: For me, ESI is the base for doing micro frontends. The problem is, of course, that you might not want to load the full page - for example, the part below the fold... And now I guess I should explain what "below the fold" is.
Stefan Tilkov: Yes, please.
Gustaf N. Kotte: There are of course a lot of different screen sizes, but at some point you will say, "Okay, here is the part of the page which basically no end user will see without scrolling." There you can do a few tricks - for example, lazy loading, so that you don't load that part of the page until the user has scrolled close enough, and only then do you start loading the components and fragments for that part of the page.
Gustaf N. Kotte: I'm not really talking about infinite-scroll solutions; I'm more saying that it doesn't make sense to load something that's not seen by a lot of people... It's better for the end user, who doesn't spend bandwidth on something they didn't use; it didn't bring value for the company or organization, because nobody really saw it; and it's also, I guess, better for the environment, because otherwise you're sending bytes that are not really used.
Gustaf N. Kotte: Of course, you can make the argument "Why have it in the first place?" - but maybe it's scrolled to by 5% or 10% of the users. So having client-side includes for lazy loading is a nice use case. There's also the scenario where you have, for example, a typical search application that could be written in a single-page application framework... You have a search box where you type, and end users more and more expect to get search results directly as they type.
Gustaf N. Kotte: At that point, ESI is no longer an option, because you are only on the client side, which means you have to load fragments on the client side instead of the server side. In our case, that means you will have ESI references to the styles and scripts fragments, but then load the content fragments - the instances - on the client side.
Gustaf N. Kotte: That's really simple: in a search result you get a list of, say, product IDs; you transform them into a list of URLs where the fragments for those products live, and then you make an Ajax/Fetch request for them and include them in some form of container.
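The flow just described - product IDs from a search result turned into fragment URLs and fetched on the client - might look roughly like this; the URL pattern is a made-up example:

```javascript
// Turn search-result product IDs into fragment URLs. The path scheme
// '/fragments/product-thumbnail/<id>' is hypothetical.
function fragmentUrls(productIds) {
  return productIds.map((id) => `/fragments/product-thumbnail/${id}`);
}

// In the browser, you would then fetch and insert the fragments:
// async function renderResults(productIds, container) {
//   const htmls = await Promise.all(
//     fragmentUrls(productIds).map((url) => fetch(url).then((r) => r.text()))
//   );
//   container.innerHTML = htmls.join('');
// }
```

The search team only needs to know the fragment URL scheme, not how the product team renders a thumbnail.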
Gustaf N. Kotte: There's also a more declarative approach to client-side includes, and there are a few libraries on the web that support it. Again, Mark Nottingham, one of the people behind Edge Side Includes at Akamai - I found a small library by him called hinclude, which supports this more declarative way of including.
Stefan Tilkov: So hinclude then is a custom tag...? Or can you explain a little more how would a web developer -- what would they have to produce and who would take care of transforming that into the actual intended result?
Gustaf N. Kotte: Hinclude is a custom element... So if you import the library, it will register the h-include tag as a valid HTML element type. Then the browser has some hooks where you get events when the element is created and inserted into the DOM, when attributes are updated, and when it's removed from the DOM and garbage-collected, something like that. I really like the idea of custom elements. Version zero - that was something mostly developed by Google - didn't really catch on, and version one of custom elements seems to be catching on, but there seems to be some kind of weird dependency on ECMAScript 2015 for the polyfill... So for the hinclude example I think that version zero is still a good way to use it. There's a very small polyfill …
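As a usage sketch, a page using hinclude declares the include point directly in its markup. The fragment URL here is an illustrative placeholder, and the fallback behavior assumes the element replaces its inner content once the fragment arrives:

```html
<script src="hinclude.js"></script>

<!-- The library upgrades this element and fetches the fragment from src;
     the inner content is shown until the fragment has loaded. -->
<h-include src="/fragments/products/123">Loading…</h-include>
```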
Stefan Tilkov: I think you've used the term two or three times now... Can you explain what a polyfill is?
Gustaf N. Kotte: A polyfill is basically a piece of JavaScript that implements a newer browser feature in browsers that don't support it natively. So there's a detection mechanism - sometimes you have to implement it yourself with feature detection - and then you load the polyfill library to lift all the browsers to the same level.
Stefan Tilkov: So in your example, the hinclude custom element would require the browser to support custom elements, and if the browser didn't support them, then the polyfill would add that support...?
Gustaf N. Kotte: Yes, and that's not included in hinclude, so you have to do that for yourself.
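The usual pattern can be sketched like this. The feature names and polyfill URLs are placeholders, not anything from hinclude itself; the decision logic is kept pure so it is easy to follow.

```javascript
// Given which features the browser reports and a catalog of polyfill URLs,
// return the scripts that still need to be loaded.
function polyfillsNeeded(featurePresence, catalog) {
  return Object.keys(catalog)
    .filter((feature) => !featurePresence[feature])
    .map((feature) => catalog[feature]);
}

// In a browser: feature-detect, then inject a <script> tag per missing feature.
//   const urls = polyfillsNeeded(
//     { customElements: 'customElements' in window },
//     { customElements: '/polyfills/custom-elements.js' } // placeholder URL
//   );
//   urls.forEach((src) => {
//     const s = document.createElement('script');
//     s.src = src;
//     document.head.appendChild(s);
//   });
```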
Stefan Tilkov: Okay. So essentially it seems very similar to the ESI thing, in that it's declarative; you just render HTML that says what you want there. The only difference is that it's resolved on the client, as opposed to the server side... Well, it's a very big difference, but not for the developer rendering that HTML.
Gustaf N. Kotte: Yes, exactly. There's an interesting difference here though, which is also one of the reasons why we went with fragment imports and exports for styles and scripts: if you include a fragment of HTML on the client side which contains references to scripts and CSS, the browser will not load those resources - I guess because it's a security risk... So you have to do something else with that. Basically, it becomes really complex to handle that kind of scripts in responses on the client side. It's not impossible, but it's much simpler to have the default standard browser behavior when loading scripts and CSS... Which is, again, one of the reasons why you want to separate the content from the resources.
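The complexity mentioned here stems from the fact that `<script>` tags inserted via innerHTML are never executed by the browser. A minimal sketch of the kind of extra handling you would need, separating the scripts out of a fragment so they could be re-created by hand (the regex is a deliberate simplification, not production-ready HTML parsing):

```javascript
// Split a fragment's HTML into markup and the inline scripts it contained.
// Browsers do not execute <script> tags inserted via innerHTML, so each
// extracted script would have to be re-created with document.createElement
// and appended to the document for it to actually run.
function splitScripts(html) {
  const scripts = [];
  const markup = html.replace(
    /<script\b[^>]*>([\s\S]*?)<\/script>/gi,
    (match, code) => {
      scripts.push(code);
      return ''; // strip the script tag from the markup
    }
  );
  return { markup, scripts };
}
```

Which illustrates why keeping the fragment responses free of scripts and styles, and letting the browser load those through its default behavior, is so much simpler.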
Stefan Tilkov: Okay.
Gustaf N. Kotte: Also, that being said, I know that there's a nice micro frontend library called Tailor, by Zalando, that is kind of doing this. I think they're using link rels in the HTTP headers, and they have support for loading things client-side... They seem to be really smart people. So that's another way to do micro frontends, using the Tailor library. I haven't really looked that much into it, but it seems to be nice.
Stefan Tilkov: Okay. So for those of our listeners who understand German, there's an episode of our company podcast, the InnoQ Podcast, where a colleague of mine actually talks about Tailor a bit as well, and we might link to that... And of course, to the Zalando site and the framework itself. I think it's quite similar in that it supports both server-side and client-side transclusion; it's slightly different tags, different technologies, but the overall effect is similar in that it allows for composition of frontends developed by different teams.
Stefan Tilkov: One of the things that comes up occasionally is that if you do things the wrong way, then you might hurt yourself in terms of search engine optimization. Can you talk a bit about that?
Gustaf N. Kotte: I think the teams that are able to pull that off seem to have a lot of good engineers, but I would rather spend that engineering effort on something that actually solves business problems, not on accidental complexity, basically. That's another reason to start with server-side rendering or static site generation as the base architecture, and then, for the cases where you have a high amount of personalization or what have you, do it more on the client side, because that's not sensitive for search engines anyway.
Stefan Tilkov: Okay. So is that a full range of options that we now talked about, where you consider doing things on the server-side, on the client-side possibly adding a single-page app where it makes sense, taking search engine optimization into account... Is that the full range of technologies and architectural choices that you see for teams who want to go about building micro frontends? It's fine if it is, I'm just wondering - did we miss anything, or do you use anything else?
Gustaf N. Kotte: Let me think a little bit about this... I think one interesting thought here for micro frontends is the case that some organizations seem to have -- they have a lot of client-side interactions on their pages, and maybe 3, 4, 5, 6 teams on the same page, where you basically have more of an application... I'm thinking like Photoshop on the web, or what have you, where different teams make different panels or components. That's really not the kind of thing I have worked on, so I don't have any experience in that kind of setting... But I guess that having the ESI include as a base would still make sense, because it decouples the components - or the teams - from their respective versions... So maybe - and this is really a big maybe - micro frontends make less sense if you have a really, really complicated UI. Maybe you have specialization there instead, and maybe time to market is not that valuable in that kind of setting.
Gustaf N. Kotte: This brings us back to the universal question of "Is this the right architecture for everyone?", and of course it's not... I think you have to go back and look at what the trade-offs are at the organizational level. You know that you're in a good place when you have contradictions in your requirements, like "Well, these are the pros of this approach, but there are also some cons, and we have to think a bit about what we need, and the value, and the cost."
Stefan Tilkov: Awesome. That's a great summary of the whole thing. What's a good place for people to start learning about micro frontends if they want to go into more details?
Gustaf N. Kotte: I wrote an article called Microservice Websites where I tried to collect my thoughts, so we can link to that. I also did a little manifesto-style website with basically a summary, and I try to keep that updated when I find new ideas or learn things. I also have a dev talk on YouTube - a presentation which has recently become quite popular... That's fun, we can link to that as well. There's also micro-frontends.org, which collects resources related to micro frontends. Maybe we can collect a few more links after this.
Stefan Tilkov: We can certainly do that.
Gustaf N. Kotte: Yes.
Stefan Tilkov: Excellent. Good, so I think we're at the end of our time slot. Gustaf, it's been great talking to you. Thanks for all the insights, thanks for taking the time.
Gustaf N. Kotte: Thanks, it was really nice to be here on the show.
Stefan Tilkov: Great to have you. Thanks to the listeners for listening, and until next time. Bye!
Gustaf N. Kotte: Great. Bye-bye!