Martin Splitt — Technical SEO 101 for web developers

Jun 06, 2021
Focus on the first three stages: crawling, rendering, and indexing. We're not going to talk about ranking today, so I'm not going to talk about ranking factors or the magic that happens in ranking; that's a very different topic. Now that we have discussed this, we know that we can do interesting things, and in fact we announced at Google I/O that Googlebot is now evergreen. What does that mean? We recently updated Googlebot to run Chrome 74, which is the current stable Chrome, and we also make sure that we will always be up to date within a couple of weeks after a stable Chrome release. So you no longer have to worry about what version Googlebot is running. We continually test these new versions before releasing them by putting them on small sets of URLs to ensure our indexing infrastructure works properly. If you want to learn more, check out the blog post I wrote, and there's also the I/O session video where we go into a lot more depth.
The details about these things are great. So Googlebot is evergreen, which means we can assume that if something works in our browser, it probably works in Googlebot as well. That's good news: it will run your HTML with JavaScript, render your content, and we can index that content. Great. But as you know, as developers, we make mistakes, and sometimes these are small mistakes with big consequences. I think there was a case where someone made a typo somewhere and an entire financial trading company went bankrupt. These things happen, and they can also happen with our websites: you can make a small mistake and then it has an impact that you would not like to see. If your SEO, or you yourself, is monitoring your pages using the tools we provide, like Search Console or the mobile-friendly test, you have a good chance of spotting them. But let me give you an example.
This is me flying a glider. I learned to do it, but here I make one little mistake, and boom. Of course that can happen to you, and it can also happen with Googlebot; we can't protect you from everything, mistakes happen. In this case you notice that I hold my arms like this when I should have held them like this; then I would have been flying like this instead of it flying into my face. Good job, me. So how do you spot these things and how do you handle them?

Let's say you look at your server logs and discover that Googlebot (Googlebot tells you in the user agent string that it is Googlebot) is not in fact getting to some of your pages: some of your products are not there, some of your news stories are not there, some of your images are not there, whatever, some things are missing. Or you search your site on Google and you see that some snippets and parts of your site aren't actually present. The first question I usually ask when that happens is: how are your links made? I can't believe that in 2019 I have to talk about this, but here we are, people still make this mistake, so I'll talk to you about links.

This is the anchor element, a proper link: it has an href, and that href has a URL I can visit. If this is a single-page application, it probably intercepts the navigation, prevents the page from refreshing, and uses Ajax, like XHR or fetch, to get the data from the backend. That's fine, because it still has a URL that I can access. This, on the other hand, is not so cool: Googlebot and other crawlers don't click on your stuff, we look for a URL, and this doesn't have a URL. Then you could say, oh, I'm smart, I'm going to play with you, I'll give you a URL to go to. No, that doesn't actually go anywhere, so it's not good. Oh, but this is a URL, right? Yes, but no: it still doesn't take us anywhere, so it's still not very good. The general rule of thumb is: if it leads your user to different content, it's a link; if it does something else when clicked, great, then it's a well-placed button. But again, Googlebot and other crawlers don't click on your stuff, so no, that's not a good link.
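To make that concrete, here is a rough sketch of the patterns being described; the URLs and handler names are made up for the example:

```html
<!-- Good: a real link with a URL crawlers can follow. -->
<a href="/products/teapot">Teapot</a>

<!-- Also fine in a single-page app: a real URL, even if JavaScript
     intercepts the click and fetches the content via XHR/fetch. -->
<a href="/products/teapot" onclick="loadProduct(event)">Teapot</a>

<!-- Not a link for crawlers: no URL at all. -->
<span onclick="goToProduct('teapot')">Teapot</span>

<!-- Still not good: the href doesn't lead anywhere. -->
<a href="#">Teapot</a>
<a href="javascript:void(0)" onclick="goToProduct('teapot')">Teapot</a>
```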
Don't do these things; just use the first two kinds of links and you'll be fine, the rest, mm-hmm, no. That's the point I want you to take away. I can't believe I have to talk about this, but there you have it: Google says don't do that. Okay. Also, when I say these links go to URLs, look at your URLs very carefully. Here we have two examples: one has a host and a path, and the other one has a host, a path and a fragment identifier. Say you have a very long page, like the Wikipedia article on jam.
I don't even know if it exists or how long it is, but let's say you have a very long article about jam: the history of jam, very popular flavors, the different companies that make it, the jam wars of 2015, I don't know, something like that, and you can use fragments to identify which section you are interested in. But crawlers don't care, because if I search for the history of jam I want to go to the same page as when I search for popular jam flavors, so we ignore the fragments, and that was fine until single-page apps came along.
When single-page apps were initially introduced, the only way to know that the content needed to change, without anything triggering a navigation, was a JavaScript event, the hashchange event. So what we did was have links to different hashes, which would fire an event we could listen to, and then we could change the content using Ajax. Cool, but that's a hack: it was never intended to work that way, so we exploited an implementation detail instead of a standard. Since then we have the History API, which gives us the ability to have clean URLs that just work. And when I say just work: in Internet Explorer, if you're using Opera Mini on a very low-end device, or if you're on a computer running IE 10 or IE 8, they won't be very fast, so I would argue a page refresh, actually refreshing the page, is better than waiting for this very slow computer to run JavaScript to make the change. I just want the content, and this just works. So use the History API; most frameworks do it by default, others allow you to switch to it. Switch to it, and don't use or rely on fragment URLs.
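A minimal sketch of the difference, assuming a hypothetical loadContent function:

```js
// The old hack: fragment URLs plus the hashchange event.
// Crawlers ignore everything after the #, so these all look like the same page.
window.addEventListener('hashchange', () => {
  loadContent(window.location.hash.slice(1)); // e.g. #history-of-jam
});

// The better way: the History API gives every piece of content a clean URL.
function navigateTo(path) {
  history.pushState({}, '', path); // e.g. /history-of-jam
  loadContent(path);
}
window.addEventListener('popstate', () => loadContent(window.location.pathname));
```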
Then you could use something like Search Console and look at your coverage report, which tells you which pages are in Google's search index, and you could say: yeah, it was crawled, that's great. I mean, sorry, it was discovered. So links and fragment URLs are not my problem; the pages were discovered, but they are not indexed at this point. What does that mean? It means you are in the crawl queue; we haven't crawled this page yet. That could persist for a while, or it could be gone really fast, but it could persist for a while. Why? Because of the crawl budget. Well, what is the crawl budget, Martin? The crawl budget consists of two things: the first is the crawl rate, how quickly we can crawl your pages, and the second is how often we should crawl and re-crawl your pages, which is called crawl demand. Let's look at the crawl rate for a second.
The crawl rate is about a challenge we have to master: we want to crawl your pages as fast as possible without harming your users or your servers. Imagine you have a million products and we discover a million URLs and we say, yeah, this is cool, let's grab them all. That might be fine, but what if your web server is a Raspberry Pi on someone's desk? We shouldn't necessarily do that, because we would probably take down your web server. So what we do is crawl a certain amount and look at things like response times and response codes.
If your server says everything is fine, 200, fast responses, and we're going at something like 300 requests per second, maybe we can increase it a little bit. If the server slows down, maybe this is not a good idea and we should take a step back. But let's say we push it and try even more crawling, and your server stops responding, like, what the heck? Then we're like: oh, sorry, we'll crawl a little less, sorry about that. So we look at a bunch of signals and try to figure out how much we can crawl.
You can also use Search Console to tell us not to crawl too much. You can say: oh, there are Black Friday deals, they are all available on our website, please only crawl a hundred pages per second or something like that, and then you can tell us not to go over this. That works too. But this has nothing to do with the quality of your website, don't worry: a low crawl rate doesn't hurt your ranking and doesn't hurt your indexing, it just means we're not going to update content as quickly in our index, and that's it.

Then there's crawl demand. We look at things like: is this page really popular, is it like the landing page of the internet, or is it actually something that a lot of people link to organically? If you buy links, those don't count, sorry. So we look at the organic links, we look at how often this page appears and how well it normally ranks, and then we realize that it's actually a pretty good page.
We should probably come back to it. But is that always true? We know that a Wikipedia article may not change that frequently; it may change every other day or maybe only every couple of weeks. So we also look at how up to date the page is: when it was last updated, how often it usually gets updated, and we keep track of that. Like, oh, this page hasn't been updated in five years, but now it was updated, and then updated again a week later, so maybe we should check it every two weeks, and if that doesn't happen again we'll scale back a bit.
This has nothing to do with, and says nothing about, the quality of your page or whether it is ranking well. What really matters here is that we are trying not to overload your site and trying to make the right decisions about how frequently we re-crawl your stuff. A news page with content that changes every couple of weeks we can crawl a little more often than a page about the history of jam; the history of jam is not that exciting, I think, maybe I'm wrong, I might change my mind. What you also need to understand about the crawl budget is that it's not just the pages we crawl, it's also the resources attached to them. So if you have a page with ten images and a stylesheet, we fetch your page, we fetch the stylesheet and we fetch all ten images, right?
The same goes for Ajax requests, and Ajax requests count against your crawl budget because we're still calling a server somewhere. However, we cache GET requests. So let's say you have two pages with the same CSS but different images, one image each. We go to the first page, we crawl it, we fetch the stylesheet, we fetch the image; we go to the second page, we use the cache for the stylesheet because we already have it and it hasn't expired, and then we fetch the other image. So the resource count matters, but we can only cache GET requests; if you make POST requests to your API over XHR, we won't cache them.
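As a small illustration of that caching point, fetching data with a GET (cacheable) rather than a POST (not cacheable) could look roughly like this; the endpoint is made up:

```js
// Somewhere inside the app's data-loading code.
async function loadDogs() {
  // Cacheable: a plain GET request. A crawler can reuse the cached response
  // across pages, which saves crawl budget.
  const dogs = await fetch('/api/dogs?sort=rating').then((r) => r.json());

  // Not cacheable for the crawler: a POST used for what is really a read.
  // Every page that does this costs another request against the crawl budget.
  const sameDogs = await fetch('/api/dogs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sort: 'rating' }),
  }).then((r) => r.json());

  return { dogs, sameDogs };
}
```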
We will also crawl content that you might not want in search, and we might crawl things that are actually duplicate content, or the same page reachable through multiple URLs; I'll talk about that in a second. Another thing to keep in mind: URL parameters. If you use URL parameters instead of paths, that's perfectly fine, but if you have URL parameters that appear in links and that don't change the content of the website, you want to tell us that, and I'll explain in a second why. Also soft errors: if you have an error condition, you need to tell us that in the HTTP status code, because otherwise we consider the page okay. If you want to learn more about crawl budget in general or how to optimize it, we have a fantastic blog post, and another blog post linked from there, where you can learn more.

But let's talk about duplicate content for a second. I feel like you want to take a picture of this, because I saw some phones going up, anyway, one, two, three, done. Let's say you have a dog rating website: you have pictures of dogs and people can rate them, comment on them or something. You have a top dog of the day or the week, and then you have the pages for the individual dogs. Today Laika is the top dog, so /top-dog actually shows the same page as Laika's own page, but the dog that /top-dog shows changes from day to day. What could you do to help us?
What you could do: we crawl both pages and drop one anyway, so you can tell us, you know, this is actually the same as this other page. Then if we crawl /top-dog first and see that the canonical is the other page, we say: okay, we don't crawl the other one, because we know it's the same thing. Great, thank you very much, you save us a request, which is quite nice. If you have a lot of pages like this, it gets especially interesting: you can tell us what you think the canonical URL for this content should be, but we may not always use it. Thanks for the suggestion, but sometimes we find out that's not exactly what we need or want, and then we choose a different canonical. That's not a problem, unless we choose a completely wrong canonical, in which case you can totally let us know or use the public support channels we have. Usually it's things like a retailer's German website and the same retailer's Swiss website, both written in German, with the same content, but one is on a .ch domain and the other on a .de domain. Sometimes companies tell us the .ch one is the canonical and the .de one is the canonical, but the content is the same, so we think they are the same page with different entry points. For people searching in Switzerland we can still show the .ch domain, but we'll say the canonical is the German one, for example, because the market is bigger and there are probably more links and signals that we get from it. Don't worry about that, but it's also good to give us this hint.
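For illustration, that canonical hint is a link element in the head of the duplicate page; the URLs here are made up:

```html
<!-- On /top-dog, which currently shows the same content as Laika's own page.
     Hypothetical URLs, just to illustrate the rel=canonical hint. -->
<link rel="canonical" href="https://example.com/dogs/laika">
```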
If you have content that you don't want to end up in search, let's say, like me, I have a really stupid high school photo on my website, at something like /private/martin-highschool.jpeg. I don't want it to be crawled, so I can use a robots.txt file on my server to tell crawlers not to crawl this, and that's great, I can say: don't crawl this. But if someone else links to it, and we crawl and index that other page, then we see a link saying "Martin Splitt's high school photo" pointing to this particular URL. Okay, we put that URL in the crawl queue, and then before we crawl it we're like: oh, we can't crawl this, but we have the information that this is Martin Splitt's high school photo, so we might still put it in the index, because it might still be useful. But we give you a way to keep that from happening to you: you can use an HTTP status code, sorry, not a status code, an HTTP header, the X-Robots-Tag header, to tell us not to index this, and you can also put a robots meta tag in an HTML page to tell us not to index that page. The catch with the HTML meta tag and the HTTP header is that if the page is blocked by robots.txt, we will never see them. So be careful with your robots.txt; in general, be careful with your robots.txt.
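To make the distinction concrete: robots.txt controls crawling, while noindex controls indexing, and a noindex directive only works if the URL is not blocked from crawling. A small sketch of the two noindex options (the paths are made up):

```html
<!-- Option 1: in the HTML of the page itself. -->
<meta name="robots" content="noindex">

<!-- Option 2: as an HTTP response header, useful for non-HTML files like images:

       X-Robots-Tag: noindex

     Remember: if robots.txt disallows the URL, crawlers never see either of these. -->
```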
Some people think they can use robots.txt to optimize their crawl budget, and then they get too excited. For example, this page should have cats, but if you look at it, there are no cats here, it's just an empty white page. What happened? Well, I check it and I see that it makes an API request, because it is a client-side rendered application. It makes this API request, but then it hits the robots.txt file, and the robots.txt says: no API requests for you, sir. So we can't get the data from the API. Please be careful with your robots.txt file. By the way, the tool I'm using here is called the mobile-friendly test. It's fantastic: not only does it tell you if your page is mobile friendly, it also gives you page loading information and shows you JavaScript errors.

Okay, now let's talk about URL parameters. This is a specific cat URL and we can crawl it, that's perfectly fine, not a problem. But what about these? It's the same two cats, but there's something that adds a timestamp, and then there's something that adds a session identifier, which could be, say, someone sharing the URL while they were logged in or something, and now we have found this URL and we'll crawl it.
What you can do to help us not crawl these unnecessarily is tell us this is exactly the same thing, and redirect to the page without the URL parameter. Then we'll eventually figure out that this URL parameter doesn't matter, and you'll be fine.
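A minimal sketch of that kind of redirect, assuming an Express server and made-up parameter names (nothing here is prescribed in the talk):

```js
const express = require('express');
const app = express();

// Parameters that never change the content; hypothetical names for this sketch.
const IGNORED_PARAMS = ['sessionid', 'ts'];

app.use((req, res, next) => {
  const url = new URL(req.originalUrl, `https://${req.headers.host}`);
  const hasIgnored = IGNORED_PARAMS.some((p) => url.searchParams.has(p));
  if (hasIgnored) {
    IGNORED_PARAMS.forEach((p) => url.searchParams.delete(p));
    // Permanent redirect to the clean URL so crawlers consolidate on it.
    return res.redirect(301, url.pathname + url.search);
  }
  next();
});
```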
Now let's talk about soft errors. Soft errors are tricky, they can happen with a lot of different things, and they can also lead to what we call infinite crawl spaces. Say I go to a URL that doesn't exist, but it's a single-page application, so the server just serves index.html and the JavaScript and says, yeah, whatever, here you go, and then I see an error message. Is this fine? Well, no, because the server said this page is fine: 200, bye. And that doesn't make us happy; Googlebot isn't exactly thrilled about this. Now imagine you have a page with a pagination bug, where the link to the next page just increments a counter and goes on infinitely, but your server says, yes, this is fine, and then shows an empty page. So we start with page 1, okay, we see the link to page 2, okay, we see the link to page 3, okay, and now there are no cats anymore, but we see the link to page 4 and your server says fine, we see the link to page 5 and your server says fine. No, that's not a good idea. But how do you solve this in a single-page application? Are you screwed now?
No, you are not. This kind of thing actually happens; we do have error detection for it and we try to catch it before it happens, but sometimes we don't pick it up, and then this ends up happening. One way to fix it is to redirect: if you know from your URL that the thing doesn't exist, say this route tries to look up a dog and the dog does not exist, all right, not a problem, we just redirect to a page that we know the server responds to with a 404. So the /not-found route gives us a 404, and we're like, okay, this was a redirect from that other page, so we'll crawl it less, and if after a couple of years all the links to it disappear, we'll stop crawling it.
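A rough sketch of that pattern in a client-side route handler, with made-up function and route names:

```js
// Hypothetical SPA route handler for /dogs/:name.
async function showDog(name) {
  const response = await fetch(`/api/dogs/${encodeURIComponent(name)}`);
  if (!response.ok) {
    // The dog doesn't exist: send the browser (and crawlers) to a URL
    // that the server is configured to answer with a real 404 status.
    window.location.href = '/not-found';
    return;
  }
  const dog = await response.json();
  renderDogPage(dog); // hypothetical rendering function
}
```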
You can also use a robots meta tag. So here we have a robots meta tag somewhere that says "all", like, go ahead, have fun, and once we figure out the page doesn't exist, we change it with JavaScript to say noindex. Now you might be wondering: ha, that's a cool way to do it, isn't it, so I'll just do it the other way around: start with noindex and then set it to index once I know the content exists. That could be a good solution, right? It's not, and here's why. Remember our pipeline: we take the URL and get the HTML, which now has noindex in it before rendering. We say, oh, this is noindex, so we can continue with the next URL, because this page doesn't want to be in the index. That means our JavaScript never ran, so it never had a chance to remove the noindex, and you have just removed all your pages from the index. Good job.
Other things can go wrong when you do things like this: let's say you have a landing page with the GDPR warning and the cookie policy and the privacy policy and what not, and you basically have the user click accept and then set a cookie saying the user accepted. Then when the user opens a news story on your website, or a blog post, you check if this agreement has happened, and if not, you simply redirect them to the home page, ask them to accept, set the cookie, and then they can click through again.
The problem with this is that Googlebot will stop here, because it ends up on this redirect, and we say: oh, so this page doesn't actually exist, or it doesn't exist anymore, and now there's just this home page, that's weird. That's because Googlebot doesn't keep any state besides the cache, so we're not using cookies. Well, you can write cookies, but we'll clear them before the next crawl happens. So: no cookies, no local storage, no session storage, no IndexedDB. You have all these interfaces, and within a single page load they work: if you write something to IndexedDB at the beginning of your JavaScript and read from it later, that will work. But it won't work if you expect to go to the next URL and have the crawler pick it up and run it, because the cookies and everything else will have been cleared in between. And it makes sense that this isn't treated as a bug, if you think about it.
Imagine a user searches for something, finds the news story as a search result, clicks on it, but lands on your home page, says yes to the cookie banner, and then wonders: where is this article now? So what you should do instead is show the cookie popup on every page where it hasn't been accepted yet. We'll be fine with the cookie popup if you do it right, and the user has a better experience too: they come to your page, see the cookie popup, say yes, sure, and then they can read the article without having to navigate there from your home page.
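A minimal sketch of that approach in plain browser JavaScript, with a made-up cookie name and element id:

```js
// Show the consent banner as an overlay on the current page
// instead of redirecting to the home page.
const CONSENT_COOKIE = 'consent_accepted'; // hypothetical cookie name

function hasConsent() {
  return document.cookie.split('; ').some((c) => c.startsWith(`${CONSENT_COOKIE}=1`));
}

if (!hasConsent()) {
  const banner = document.getElementById('cookie-banner'); // hypothetical element
  banner.hidden = false;
  banner.querySelector('button.accept').addEventListener('click', () => {
    document.cookie = `${CONSENT_COOKIE}=1; path=/; max-age=31536000`;
    banner.hidden = true;
  });
}
```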
Additionally, you want to use feature detection, because even though a ton of features are present, that doesn't mean they always work, or work correctly, or are actually there. You know, just because I have a nail doesn't mean I also have a hammer to put it in the wall; I need to make sure I have the hammer in my other hand before using it. But that can sometimes go a little wrong, so let me give an example. Let's check if this browser supports geolocation. Great, so we load local content: if this is a news site, it could load news stories from Russia right now, and back in
Switzerland it would load Swiss news. If my browser doesn't support geolocation, that's fine, we just load global news. However, this API triggers a permission popup that I can reject, and sometimes, on one of my older phones, the GPS stopped working, so it never got a GPS location and just timed out. In those cases I wouldn't get any content, because my browser says, yes, geolocation is supported, but then the error conditions occur and I don't actually get the content. Googlebot will decline these permission requests. What's the point of giving Googlebot access to things like the microphone or the webcam?
What do you think our data center looks like? That's not going to happen. So what you should do is always watch out for error conditions and handle them properly: in this case, if geolocation exists but fails, we also load the fallback content. Much better; everyone gets content all the time.
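A small sketch of that pattern, assuming hypothetical loadLocalNews and loadGlobalNews functions:

```js
// Feature-detect geolocation AND handle the error case, so users
// (and Googlebot, which declines the permission prompt) still get
// content when the lookup is denied, fails or times out.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (position) => loadLocalNews(position.coords), // hypothetical
    () => loadGlobalNews(),                       // denied, unavailable or timed out
    { timeout: 5000 }
  );
} else {
  loadGlobalNews(); // no geolocation support at all
}
```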
If you want to learn more about which features Googlebot supports, and where features have limited or no support, I've put together a guide that walks you through figuring out what's going on so you can find and fix your problems more easily.

Additionally, you want to make sure users see your page in the best way possible. If I'm looking for an apple pie recipe and all I get is "Barbara's baking blog" twenty times, I'm like, fine, but which one is the recipe? For this we have these snippet descriptions; they are called meta descriptions, or snippets as we sometimes call them, and you can provide them per page. Then I know: cupcakes, I don't care, brownies, I don't care, apple pie, that's the one I care about. You don't have to write these yourself, that's what your content writers and copywriters do, and sometimes SEOs are also good at optimizing them, but you need to make sure that you have a solid technical foundation to provide them. The way to do it in React, for example, is React Helmet: you install that additional package and then you can use it inside your page components to populate the title, the meta description and other meta tags. In this case we give it a useful title and a useful description, something like: Barbara's apple pie recipe, this is the apple pie recipe my grandmother used, it is very easy to make, quick and fantastic, and everyone loves it. That's it.
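For illustration, a per-page title and description with React Helmet might look roughly like this (the recipe text is just the example from the talk):

```jsx
import React from 'react';
import { Helmet } from 'react-helmet';

function ApplePieRecipe() {
  return (
    <article>
      <Helmet>
        <title>Barbara's apple pie recipe</title>
        <meta
          name="description"
          content="My grandmother's apple pie recipe: quick, easy and everyone loves it."
        />
      </Helmet>
      {/* ...the actual recipe content... */}
    </article>
  );
}

export default ApplePieRecipe;
```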
I want to click on that result; I'm hungry. You may notice that in Angular you have the built-in Title and Meta services that do the same thing as in React: you give them what you need to create a title and a meta snippet. And in Vue you use the vue-meta package to do the same thing. Great, and if you want to learn more about this stuff, we have a video series called the JavaScript SEO series on our YouTube channel that goes over these and other things: essential testing and tips, the different frameworks, and all that kind of stuff.
We also talk there about a concept that I will discuss in a moment. Great. We talked a lot about JavaScript and how Google Search renders it, but the reality is that not all crawlers do this: there are other search engines, and then there are social networks and other applications that crawl your page when you share a link but do not run JavaScript on it. So how do you handle that? Maybe consider something like server-side rendering. Server-side rendering isn't really a workaround; it's usually incredibly fast, because it outputs the HTML and the browser can parse the HTML as it comes in. With client-side rendering, the browser has to get the HTML, then download the JavaScript, parse it and run it, and only then does it generate more HTML. That will take more time; that's the reality.
If you don't want to give up your JavaScript features, there is something called hydration; most frameworks offer it in one form or another. It means your JavaScript becomes optional in a sense: you get the HTML very quickly, and then, once the JavaScript executes, it upgrades to the normal single-page, client-side app. If you don't want to do that because, you know, rendering on every request is a considerable cost, you might want to do pre-rendering instead. That makes sense if you have a page like a blog or a marketing page where you know when the content changes: on my blog, the content only changes if I write a new blog post or edit an existing one, so I know precisely when I need to re-render. For that you can use something like a headless browser, for example Puppeteer, or any service that does that for you, or you rewrite your app slightly as universal JavaScript and run the JavaScript on the server side, which works just as well.
If you don't want to make changes to your frontend, you can also use a workaround called dynamic rendering; it's in the video series as well, and we'll come back to it in a second. But first, let's see how server-side rendering would work in React. Here I'm using Next.js, which is a higher-level framework that uses React components but renders every page for me on the fly, so it works with things like live information and all that kind of cool stuff. It's a lot of fun to work with and it's actually quite successful. There was another talk at the same time about Next.js, so I'm sorry to keep you here, but definitely check out the videos once they go up. If you want to do pre-rendering in React, there is a tool called react-snap which uses a headless Chrome to crawl my different pages and generate the HTML for me, which I can then deploy to any static hosting, which is also nice. react-snap also has a hydrate option; you would have to change your app component a bit to either hydrate or render the app, depending on whether the HTML was pre-rendered. Dynamic rendering, on the other hand, is the workaround where your server looks at the user agents, and if they look like a crawler, it passes the request on to a proxy service, one you become a customer of, I guess, or your own Rendertron instance, or something you build yourself with Puppeteer or another headless browser, whatever works for you, and that generates static HTML and sends it to the crawlers. For your users, because you want to make sure everything works the way you developed it and is basically intact in the users' browsers, you just serve the normal client-side rendered app.
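A very rough sketch of the dynamic rendering idea, assuming an Express server, Node 18+ (for the global fetch) and a placeholder prerender service URL; all of these are assumptions for illustration, not something prescribed in the talk:

```js
const express = require('express');
const app = express();

// Crude bot detection for illustration only; real-world lists are longer.
const BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

app.use(async (req, res, next) => {
  if (!BOT_UA.test(req.headers['user-agent'] || '')) {
    return next(); // real users get the normal client-side rendered app
  }
  // Crawlers get static HTML from a prerendering service, e.g. your own
  // Rendertron/Puppeteer instance; the URL below is hypothetical.
  const target = `https://prerender.example.com/render/${encodeURIComponent(
    `https://${req.headers.host}${req.originalUrl}`
  )}`;
  const rendered = await fetch(target);
  res.status(rendered.status).send(await rendered.text());
});
```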
We consider dynamic rendering a workaround because it requires some effort and some maintenance that you have to deal with, and it has a lot of complications around caching and all that. But it very quickly helps you with crawlers that don't run JavaScript, and it can also help with problems you have between JavaScript and Googlebot. Still, it's a workaround, because it doesn't give your users any benefit. Server-side rendering or pre-rendering is a longer-term solution and a bigger investment, yes, but it's a longer-term solution that also helps you deliver your content faster, making your users happier and more engaged.
If you want to learn more about dynamic rendering, we wrote a blog post, a codelab and a documentation page about it; if you want to try it, follow that link. Okay, I think it's time to wrap up and enjoy the coffee break or the Q&A session. So to conclude: use tools like the mobile-friendly test or Search Console to see how your pages perform in search. These are our tools that give you the real data; they're not making things up, don't worry about that, they're free and they're fantastic.
Search Console even emails you: if we find a problem, we will tell you, provided you are registered and we know you own the domain. As for server-side rendering and pre-rendering, like I said, they are fantastic longer-term investments; you should definitely look into them if you're not using them. Once again I recommend Natalia's talk on Next.js, it's probably also very good as a resource; if you're using Angular you want to look into Angular Universal, and if you're using Vue.js you can check out Nuxt.js. Generally speaking, if you help crawlers understand your pages, you'll have a good time. In most cases it will just work if you write semantic HTML, like proper links and such.
You can add structured data to tell us more about what something really is, and basically help us, with robots.txt and meta tags, to figure out what a page is about, and then you'll have a lot of fun; it's much safer. If you want more information, we have written a ton of guides to help you. There are also getting-started guides on the developers site, something like a developer's guide to search. It's fantastic; my coworker Lizzi wrote most of it and it's really, really good. You can also watch videos if you prefer videos: the JavaScript SEO videos are online on our YouTube channel, youtube.com/GoogleWebmasters. You can also ask us questions: every other week we have online office hours, so you can jump on a Hangout with us and ask us questions, and we'll be happy to answer them.
Those Hangouts tend to be pretty generic, so it's not just technical SEO, it's regular SEO too, but you might learn a couple of things, so that's cool, and you can see who else is hanging out there; maybe someone has the same questions that you have and already got an answer. If you have specific questions about JavaScript, we have a mailing list that you can join and ask your JavaScript questions there; that one focuses solely on JavaScript sites. You can also keep up to date with us on our blog, the webmaster blog, where we keep you posted and publish announcements, or just follow us on Twitter. All that being said: thank you all, spasibo bolshoye, and have a nice day.

Spasibo, Martin. Would you like to stay for a conversation, maybe in Russian? Oh, you don't want to join me for a conversation? Well, it's not like you have a choice, you know?
This was very exciting and eye-opening, thank you very much for all the advice. I especially didn't know that Googlebot doesn't click or keep state, and that's very useful to know. I think at this point it would be useful to know whether every piece of content is equally viable for Google when it indexes it. For example, some things are generated using JavaScript and some things are already static in the HTML. I think there was an article, maybe even in the video series you mentioned, saying it could take longer to index content generated with JavaScript, so if you have a breaking news story that you want to appear in the search results immediately, it's probably better to use server-side rendering. That's true. We're working on fixing that, but it's still how the process works, like I said: we parse the HTML, and if the content is there,
we already put it in the index, so it gets into the index faster. It's not that dynamic rendering or having the content in the initial HTML means we treat it as more or less important; it says nothing about the quality or how we treat it in the index, it just means it takes longer to get into the index. How much delay are we talking about here, like hours, minutes? It can be minutes, it can be seconds, it can be hours; in theory it can be days. The problem, as some people say, is weeks and months, and the problem is that you don't see where the delay is coming from. It can be crawling delay, because like I said, we have a crawl rate and we have crawl demand, so if it's a page that's only crawled once a month and you just missed the crawl, then you update something, and part of the content is simply generated with JavaScript.
For that JavaScript-generated content we might say, yes, it took a month longer to appear than the HTML content, just because it missed the window. That's interesting. We also have a couple of questions here; one has been quite popular: can you name a few things that really have a dramatic influence on search ranking, like HTTPS or security? Yeah. The things that are really important: sorry, the content. Your content has to be good, that's the first thing, it has to be good, I know that's not a technical answer. If you look at the technical signals: if your site was hacked and you are using Search Console, we will tell you that your site was hacked, and pages also get removed if you do certain things, like if there is a virus or a Bitcoin miner on your page, we will not show that, so you can't do that, sorry.
We still don't use our infrastructure to mine bitcoins, but who knows. Basically, security issues are the big one. HTTPS is less decisive: if we have two otherwise comparable pages and one is HTTPS and the other is not, that can make a difference, but HTTPS is mostly important to your users and is a trust signal; it is not one of the most important ranking factors. Mobile friendliness is also quite important, because most people are now coming from mobile devices, but yes, I would say security issues are the biggest one. Web performance is a great thing too, because in the end you want your users to engage and get the content fast. And I know we usually can't directly influence content as developers, but if you see silly content, like "hot cakes x10" or whatever, then talk to your SEO and talk to your marketing department, because sometimes SEOs have the same problem as us developers: they say we need to change this content, and of course the answer is
"I don't believe it, I think it's fine", and the same happens to developers; it goes back and forth. So, you know, the more people pushing to fix the content, the better, and that's the important thing. Okay, we also have a question from Dennis and Dmitri: does Google look inside the shadow root when the page gets indexed? What about web components? The new evergreen Googlebot has support for Shadow DOM, native support for Shadow DOM, and if I remember correctly Googlebot will index Shadow DOM content. But remember again that there are crawlers that do not run JavaScript, and they may not see your Shadow DOM content.
Also, for composability it's a good idea to put critical content in the light DOM, so you can override it and compose it by wrapping it in more components. So I'd still stick to the statement I made a couple of months ago: put content in the light DOM and use the shadow DOM to encapsulate implementation details rather than content. It's also interesting to see things like static site generators becoming a real thing: you pre-render and serve everything you can, to have a good skeleton. Yeah, it's interesting, that's exactly what I wanted to mention; it feels like we're coming full circle at this point. And what about CDNs: if you put the content on a CDN, does that also help? It helps, simply because things are faster; they are usually faster if they are closer to the user, which is what CDNs are mainly about, and that is a fantastic way to make it faster.
Hmm, okay, I think I'm running out of questions here, but I'm actually very happy to see that we moved away from Chrome 41. What was the reason we were stuck with it? Was it a technical thing, because it didn't support grid layout? Yeah, it didn't support a lot of features. So the reason was... maybe I should also explain the setup. Let's start there: the situation before I/O was that we were running Chrome 41 in the rendering service. We have since updated it, so it's Chrome 74 now, don't worry. But you will hear SEOs, or sometimes even developers, often say...
I hear web developers say: oh, I just discovered that Google is running Chrome 41 for rendering, and I say: yes, but that has changed since then. Oh, god. So you'll hear it a few times, like, oh, but it's using Chrome 41, and you can say: that's not true anymore, there is a blog post where Google said that is no longer true. The reason for the delay was that the Chrome 41 we had in the past didn't have proper DevTools API instrumentation, so we couldn't just hook into it; we had to write a lot of custom code to get data out of the rendering we did. Then the team said, okay, we can update it, so let's go to something like Chrome 50, but then we ran into the exact same problem, because it takes us like half a year or a year,
I don't know how long, to port the custom code from one Chrome version to the next, and Chrome keeps moving ahead of us, so we had to find something better. So they decided to work together with another team, and that's the team that brought us Puppeteer and the DevTools API; the headless integration is something our rendering team helped with, and that was the strategy to get it up and running so we could use it in place of the custom code. So once that landed in Chrome, and I think that was about a year ago or something,
once the DevTools API came to Chrome, that was when they could start building, or rebuilding, the code to use these APIs to pull the data out. They still have some special APIs as well, but most of it is now in the public source code, so it's easier for us to update; it just took a couple of iterations. Okay, and there's just one more question, from Alex: when it comes to single-page apps, React, Vue and so on, what are the common things we should be aware of, the common mistakes people make all the time? Common mistakes... I still need to talk to the framework teams about their documentation: the documentation neglects the meta description and title tags. It's like, you have this HTML, and now we forget about it and do all this cool stuff over here, and no one talks about the fact that now all your pages have the same title and meta description, which isn't fantastic. So that's the low-hanging fruit you want to fix first, and then it's performance optimization and making sure that your rendering actually works properly, so you want to test that and see where the problems are; it depends a lot. Okay. Quite a bit of effort is being put into search engine optimization these days, with all the resources, tools and techniques.
I mean, we are trying to produce as much documentation as possible and get it into the hands of developers. Yes, that's great. In fact, I think, I mean, maybe I'm wrong, but it seems like within the last year or two there has been a big push to evangelize and explain the purpose and how it all works. We're still doing that, yeah. Thank you. Thanks, Martin, for being with us today.
