
Martin Splitt - Technical SEO 101 for React Developers | React Next 2019

What am I here to talk to you about today? First: lunch. Yes, I had a good one, excellent. You know, after lunch people are normally drowsy, but you have the energy. I like that, it's important. And then there are the kittens. I think the internet doesn't have enough cats. I know it's a very debatable opinion, but I think we need more cats, and actually more dogs too; dogs are alright by me as well. So I thought I would build something really revolutionary to disrupt the internet: a really cool website which I call the Kitty Club. Obviously it has super kitties, the best kitties you can find on the internet, and I put them all together on one site. You can upload your own kitty, have it rated, and if it passes the quality threshold it could be shown here along with the other awesome kitties.

The cool thing is that these days this is pretty easy to do. I mean, we have React, React has hooks, so I just built this page and boom, we've got it all done, thank you so much for coming to my TED talk. Except there's one question that keeps coming up, especially if you're working for a startup or a company bigger than you, producing a new website, or helping someone make their way onto the web: "I need SEO." And the answer is: of course you do. I mean, you know, SEO is easy, right? You make a great website and then you'll find yourself... oh. OK. That's not good.

So you might want to know a little bit more about what SEO is, about getting on the internet and being found through search. Because really, how do people find websites? Unless yours is one of the biggest websites already, owned by a very well-known brand, chances are people won't find your website, especially if you're starting out on the web as a smaller company, or have a new project or a new product that you want people to find, and you want the people who need it to find it. Search engines are a fantastic channel for making that happen: you want to be on the search engine of your choice, and you want to be found through it. So how can you facilitate that?
Well, first things first, I'd like to talk about what SEO is. Who here has worked with SEOs in the past as developers? Keep your hand up if you enjoyed the experience. Oh, what an amazing person, keep that hand up! I'll give you some Swiss chocolate that you can pass on to your SEO, because they'll want to know about this.

SEO is not necessarily well understood, and sometimes, you know, people joke about developers: "what even is an algorithm?" SEOs are sometimes similar. They're not necessarily the best people to explain what they're doing, and there's so much to it. That's not because they don't want to, or because they're terrible people. It's just that a lot goes into SEO, everyone has a slightly different definition of it, and everyone does it slightly differently based on what their clients, or the companies they work for, need. And that's perfectly fine; we develop software differently too. As a confession: I'm not much of a React person, I'm more of a vanilla, Lit or Vue guy, but I've done some React and some Angular, and I've done this for quite a long time, so I know what it does, and it's a tool in my tool belt. SEO can be similar.

But who here thinks SEO is basically slaughtering a chicken, running around the computer three times, and then waiting for the moon and stars to align? Hands up. Yes, that's more people than said they enjoyed working with an SEO.

SEO stands for search engine optimization, and in my opinion it's three things most of the time. It starts with content: SEO has to figure out, together with sales and marketing, what we have and who uses or needs it, what these people are searching for, what problem our stuff solves, and how we describe what we have to these people. That's not really technical.

So what else is there? There's also the strategy side of things: how do we do this, what do we want people to do once they're here, what do we want to tell them, what do we want to help them with, what questions do they have, what can we suggest, and how can we guide them through what is likely a very tricky process? If you've ever booked a train ticket in Germany, you know that complicated processes can take many steps to complete successfully. Again, that's not really something we developers can help with, but good SEO is these two things.
Those are things that a good SEO does for you, but where we can help is on the technical side, because SEO is also technical. It's a technical process, it deals directly with our code, and crawlers behave differently from users; sometimes crawlers even behave differently from normal browsers. So a good technical SEO can help you make the right implementation decisions, and help you make decisions while you're designing your system instead of fixing problems once they happen. They can also test the system for you, and make you feel more comfortable with what you're building by saying "yes, we're findable, everything is OK". And if something goes wrong, they probably know how to fix it too; at least, even if they won't fix it themselves, they can tell you where to find the information you need for your specific tech stack to fix that problem. So this is where we want to focus today, because it's where I feel most at home, and I think it's probably where all of you feel at home too.

OK, so the first question I keep getting, also because I showed you that my site wasn't showing up in search, is: does that mean React kills SEO? Can you never have a successful website in Google Search if you're using React? And I say no, that's not true. But the thing is, it's a technical process on both sides: we're building a technical system, and there's a technical system that interacts with our content online, and we have to make sure these play together pretty well. I think the first step to understanding what's going on, and where things can go wrong, is understanding how your page gets into Google. So how does that work?
I mean, the other search engines do similar things, but I can't really tell you much about them, because normally other search engines don't talk about this with us or with the public, so I can only guess from what we're doing what others are probably doing. What I'm telling you is definitely valid for Google Search, but it's probably valid to some extent for other search engines as well.

We start with a list of URLs that we found somewhere, or that someone sent to us, or that we just know about. We take these URLs and put them in a queue, and then we take things from this queue. We don't do that manually, we don't pay people to do it for us: we have Googlebot for that. The first thing Googlebot does is crawl the URL, which means it makes an HTTP GET request to the URL it got from the queue. And it's not just one computer doing it; obviously lots of computers do it at the same time. But they're doing a GET request, and they get some HTML (or something else) back, depending on where the GET request goes.

So now we have some content, and we need to understand what it is. If it's HTML, we probably have semantic information: this is the title, this is a link, this is an image. So we can figure out what this page is about from the text, from the images, from the semantics of the page, from the videos and whatnot. Great. We'll also find links, and once we find the links, we can put them back in the queue. Then we can take this information, "this page is about cats, this page is about dogs, this is about ice cream, this is about React", and put it in our index. Because if you go to a library, you go to someone who knows where the books are, and that's more or less what the index is. It's like: "I need a book about vegan cooking", and you go to the person, or the computer, that lets you look things up in the index, you say "vegan", and it shows you all the books that deal with vegan cooking. We do pretty much the same thing here: we have an index that we can then query.

The problem, though: if we look at our website from before, what is this website about? We made a GET request and got this back. Hmm. It's not necessarily clear what this web page is about. And yes, some crawlers don't actually even run JavaScript, so that's all they see, and we end up, again, at "thank you so much for coming to my TED talk". Don't despair, though, because Googlebot actually runs JavaScript, and it has been doing that for a couple of years, so we have a bit of experience with how to do it. That's great, because it means we can extend our infrastructure: besides crawl, process and index, we also render. The problem is that that's the web, as far as Googlebot knows, and that's a lot of pages, isn't it?
It's 130 trillion pages. Now, I don't know if you ever studied computer science and had a professor like mine; theoretical computer science people like to imagine things, so: assume you have a computer with infinite memory, or with an infinite number of cores. This doesn't actually exist. "Yes, but we have the cloud!" Yes, but the cloud is still just someone else's computers, so there's still actual hardware somewhere, owned by somebody, and we don't have infinite amounts of hardware. I know it's surprising, but that's the reality. And this number keeps growing; it's just the largest published number I could find.

So how do we do it? OK, again: a queue. We figure out that we need to render this page, we put it in a render queue, and by the time we have computing power available, we actually render it. That means we use a headless browser, in this case Chrome, to actually run the JavaScript and get the HTML after the JavaScript has run. Then, again, we process the links we found, and all the URLs go back into the crawl queue. That's how this works, and then it goes to the index.
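Just to picture the flow, here's a minimal sketch of that two-queue pipeline. None of this is Google's actual code; fetchUrl, parseLinks, extractContent, addToIndex and renderInHeadlessBrowser are hypothetical placeholders for the steps just described.

// Schematic only: crawl drains a URL queue, defers rendering to a second
// queue, and both phases feed newly found links back into the crawl queue.
const crawlQueue = ['https://kitty.club/'];
const renderQueue = [];

async function crawl() {
  while (crawlQueue.length > 0) {
    const url = crawlQueue.shift();
    const html = await fetchUrl(url);               // Googlebot's HTTP GET
    crawlQueue.push(...parseLinks(html));           // links found in the raw HTML
    addToIndex(url, extractContent(html));          // index what's there without JavaScript
    renderQueue.push(url);                          // render later, when compute is available
  }
}

async function render() {
  while (renderQueue.length > 0) {
    const url = renderQueue.shift();
    const renderedHtml = await renderInHeadlessBrowser(url); // runs the JavaScript
    crawlQueue.push(...parseLinks(renderedHtml));   // links that only appear after rendering
    addToIndex(url, extractContent(renderedHtml));
  }
}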
And then there's ranking, which I'm not really going to talk about. SEOs love talking about it, but it's not my cup of tea, and it's a whole different can of worms we'd have to open. It doesn't matter here.

This is the HTML we're looking at in Googlebot after rendering. This page is clearly about the Kitty Club, and these are the best kitties on the web, so the page now has content that we can put into the index. Once it's in the index, we can rank it - but we're not talking about ranking, sorry, you got me confused. Anyway. You'll also hear some SEOs telling you: "you know, everything is OK, Googlebot can run JavaScript, but Googlebot runs Chrome 41." Yes, that used to be true, but we recently updated Googlebot. It is now evergreen Chrome: every time the stable version of Chrome updates, Googlebot will be updated within a couple of weeks. So you no longer have to worry about a years-old version of Chrome; we fixed that.

Now that we have the page in the index, we have to rank it, which means that once someone comes and asks us for the cutest kittens, we have to look at what kind of kitten pages we have and how good they are, and we look at many factors: what is a good kitten page in this particular case, if the query is "kittens"? For each query we rank pages differently; "all my pages rank really well" doesn't mean anything on its own. Rank really well for what? If you're selling vegan cake, ranking for the "cutest kitty" query doesn't really help you. But then again, ranking is a whole different can of worms.
I'd rather not talk about it. All we're going to talk about is what you as developers can influence: crawl, render and index. Ignore ranking. Ranking is not a problem if you have good quality content; if your content is good and you're doing things right, you'll be fine.

OK, so there are a few things you need to do to make sure Googlebot can crawl and render your website correctly. One: make sure you're actually linking between your different pages, because what we do is get the HTML, render the JavaScript, get more HTML, and then fetch the URLs in the links. If you're not linking between your pages, we're not going to find the different pages. Well, there are ways around that, but generally speaking you need links between the different pages of your website. Say your home page shows a bunch of products you're selling: if I click on a product, I should probably find links to other, similar products, and then we can crawl your pages through the links.

Also, use real links; we're not clicking on stuff. "Oh, but it's a button." Good for you, but we're not clicking a button. Why would we click a button? And while we're on the topic: we're looking for URLs, so don't use the hash in your URLs. The hash hack is something we had to do back in the day, because it was our only way to change content dynamically from a JavaScript event. Now we have the History API; React Router uses it by default, so you should be OK, but I've seen a few React apps best described as legacy. Don't trust those if you have one. It's not that we're saying you can't have any hashes, but a hash doesn't describe different content; if you're using a hash to load different content, you're hacking URLs in a way that doesn't work with crawlers. (There's a small sketch of this right after this section.)

You also need to understand which pages on your website are valuable. If you have really good description pages, really good informational pages, those are the core of your website, because if I have a question and your website answers it, that's great. If you have pages of user-generated content, you might not know whether that content is good. And if you know it's not good, because many of the fields you provided as optional haven't been filled in, don't submit it to the index. It takes time for us to crawl, and it means we might get to the most interesting, meaty pages of your site later or more slowly. That's not a good thing either.
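To make the "real links" point concrete before we get to sitemaps, here's a small sketch. The CatCard component and its props are hypothetical, not the actual Kitty Club code.

import React from 'react';
import { Link } from 'react-router-dom';

// Good: renders a real <a href="/cats/abby"> that crawlers can find and follow.
function CatCard({ cat }) {
  return <Link to={`/cats/${cat.slug}`}>{cat.name}</Link>;
}

// Bad: there is no URL here at all. Googlebot doesn't click buttons,
// so whatever content this swaps in is never discovered.
function CatCardBroken({ cat, onShowCat }) {
  return <button onClick={() => onShowCat(cat.slug)}>{cat.name}</button>;
}

// Also bad: https://kitty.club/#/cats/abby - everything after the #
// is not a separate URL as far as a crawler is concerned.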
You can also, optionally, create a sitemap and tell us "all these pages are really important to us, the others not so much", and then even if they're not very well linked, we'll find them through the sitemap. A sitemap is an XML file with all the URLs you really care about, and you can use some server-side code, or Puppeteer, or something like that, to generate it. It's not a guarantee that we'll index everything in it; you're just telling us that you care about these URLs, and if we pick that up, it can be one of the signals we use to figure out which URLs to crawl, how often, and when.
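For reference, a minimal sitemap looks something like this (the URLs are just the example site's):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://kitty.club/</loc>
  </url>
  <url>
    <loc>https://kitty.club/cats/abby</loc>
  </url>
</urlset>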
You don't want to be in this territory, though, do you? If I'm looking for a recipe for apple pie, where do I click? These are not good search results, especially not if the description is the same for every one. You see that little bit of description text under each title: if it says the same thing for all the different recipes on the website, that's not a good experience. Instead, give us specific titles and descriptions. If it says "Barbara's apple pie: this apple pie is really easy to make and quite fun to bake with kids", and that's what I'm trying to make, that's fantastic information as I scroll through the search results. This is not a ranking factor - let me be very clear, this is not a ranking factor - but it helps users understand that this is a good search result for them, and you get more clicks that way.

This is easy to do: you can use a package called React Helmet to set these on your React pages. In the render function you basically specify the title, in this case using the name of the cat, "Kitty Club - Abby", and then a description that tells me what kind of cat she is and that she's doing great.
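As a sketch, assuming the react-helmet package and a cat prop shaped like the one in the talk:

import React from 'react';
import { Helmet } from 'react-helmet';

// A hypothetical cat detail page: title and description are unique per cat.
function CatPage({ cat }) {
  return (
    <div>
      <Helmet>
        <title>{`Kitty Club - ${cat.name}`}</title>
        <meta
          name="description"
          content={`${cat.name} is a ${cat.breed} and she is doing great.`}
        />
      </Helmet>
      <img src={cat.photoUrl} alt={cat.name} />
    </div>
  );
}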
But what do I do in a situation like this? You can reach our cats, specifically Abby, at a nice lowercase URL, but I want to be friendly to people who mistype, because phones especially sometimes do weird things with capital letters, so the capitalized version should be the same page. And because I didn't start from a green field, I actually have a legacy app: it also supports IDs, and it used to be a really, really old legacy app with URLs like this. I serve the same content for all of them, but how do we choose which URL to show? You probably want to make sure we show a consistent URL, and not one you don't want to see in search results.

This is called a canonical, and again, within React Helmet you can give us the canonical link relationship with the URL you'd like us to display for this specific item. It helps us deduplicate. We're actually pretty good at filtering out duplicates, so if you're in this situation and you tell us nothing, we'll pick one more or less on our own: if one version is linked to a lot, we'll probably go for that one, but maybe you actually want a different one, so just tell us. We may not always choose it; if we think your canonicals are wrong or not useful, we might ignore them. But overall, please give us a canonical. It helps us a lot.
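As a sketch with React Helmet again (the lowercase URL here is just the version you'd want shown in search):

import { Helmet } from 'react-helmet';

// Served for /cats/abby, /Cats/Abby and the old legacy URLs alike:
// every variant declares the same preferred URL.
<Helmet>
  <link rel="canonical" href="https://kitty.club/cats/abby" />
</Helmet>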
If you want something not to end up in the index, you can add a meta tag: the robots meta tag set to noindex. We will still crawl the page, but then we see "OK, this doesn't want to be in the index", and it won't show up in search results. If, however, you're fine with a page being linked, and maybe showing up in search results once in a while, but you don't want us to crawl it, because, say, your server is really fragile, then you can add a robots.txt file and tell us not to, and Googlebot won't crawl anything below /private.
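Concretely, the noindex tag looks like this (shown inside React Helmet as an assumption about how you'd wire it into a React app):

<Helmet>
  <meta name="robots" content="noindex" />
</Helmet>

And the robots.txt rule from the example:

User-agent: Googlebot
Disallow: /private/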
You can also use User-agent: * so that all well-behaved bots skip those URLs. Be careful with this, though. Here you see my Kitty Club, and there are no images down here, none, it's all blank. It says "Kitty Club", but why does no image load? Well, if I check what happens when I call the API - it's some /api/cats kind of endpoint - it says Googlebot was blocked by robots.txt. I made sure Googlebot could crawl my pages, but my API needs to be fetchable too, because my pages need it to display any content. So be very careful about what you put in your robots.txt.

We also have something we call structured data. You've probably seen results like these: here's a product that's rated pretty highly, and here's a bunch of recipes. If you have one of these things - recipes, articles, movies, videos, books, events, all sorts of things - you can get these richer search results by adding what we call structured data. You give it to us, and we have a Rich Results Test that you can use to find out if your page is eligible, plus a page outlining all the verticals we support, with lots of documentation on what you need to add to your website.
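Structured data is usually JSON-LD in a script tag. A minimal sketch for a recipe might look like this; note that this shows only a subset of fields, so check the rich results documentation for what the recipe vertical actually requires:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Barbara's apple pie",
  "description": "Really easy to make and quite fun to bake with kids.",
  "author": { "@type": "Person", "name": "Barbara" }
}
</script>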
Let's also talk about performance. If I look at my page, the first-paint timing is really bad. Luckily, there are things you can do: you should invest in server-side rendering, or hydration, or pre-rendering, depending a bit on your use case, because it makes things faster.

For React, for example, there's react-snap: react-snap uses headless Chrome to crawl your website and create static HTML pages from it. You can use hydration on top of that, so you load the static HTML first and then attach the JavaScript behaviour on top of it. Then you have a pretty regular React application running in the browser, but you've also sent real HTML to crawlers that don't run JavaScript, so you kill two birds with one stone.

Server-side rendering makes a real difference. If you look at the comparison, we have the original page rendered client-side and the same page rendered server-side. That's the only difference, the rest of the code is the same, and the gap is something like a second and a half on a very slow mobile connection. That makes a big difference. You can also use one of the frameworks: Gatsby and Next.js are pretty cool and actually give you a lot of this for free, so you don't have to deal with it yourself once you're building on them. Gatsby even has SEO documentation, which I think is fantastic and something I'd like to see more of in other frameworks.

You can also lazy-load components. That doesn't actually speed up the first paint, but it does speed up the following interactions, so it's pretty good, and it works with search too. Be careful to test this properly, because there are race conditions where things can go wrong. The code for this is pretty much: we lazily load this specific component, the CatList component, and then use React Suspense to say that the fallback for it is the loader, which was this pulsating kitty. So you can use that to lazy-load your components.
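Reconstructed as a sketch; the CatList component and the pulsating-kitty Loader are from the talk's example, but the file paths are assumptions:

import React, { Suspense, lazy } from 'react';
import Loader from './Loader'; // the pulsating kitty (assumed component)

// Load the cat list's code only when it's actually needed.
const CatList = lazy(() => import('./CatList'));

function App() {
  return (
    // While the chunk loads, React renders the fallback.
    <Suspense fallback={<Loader />}>
      <CatList />
    </Suspense>
  );
}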
There's also a workaround, which we call dynamic rendering. Dynamic rendering is a workaround because it provides no benefit to the user. Basically, what happens is this: a browser comes to your server asking "hello, can I have the thing?", and it gets the initial HTML and the JavaScript of the client-side rendered app, as usual. But if a crawler comes along, and it tells you in its user agent that it's a crawler, your server branches off and runs a headless browser to pre-render the page into static HTML, so all crawlers get the content as static HTML. The downside is that it's a workaround: it's only for the crawlers, you don't get the user benefits, you don't have a faster website afterwards. You can use things like Rendertron, Puppeteer or prerender.io; those are tools and services that do this for you, and you set them up on your server, so it's a way to do this if you don't want to touch your client-side code.
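The branching logic is roughly this. A minimal Express sketch, where isBot and renderWithHeadlessChrome are hypothetical stand-ins for what Rendertron or prerender.io would provide:

const express = require('express');
const app = express();

// Hypothetical bot check based on the User-Agent header; real setups
// use a much fuller list of known crawlers.
const isBot = (ua) => /googlebot|bingbot|baiduspider/i.test(ua || '');

app.get('*', async (req, res) => {
  if (isBot(req.headers['user-agent'])) {
    // Crawler: pre-render the page to static HTML with a headless browser.
    // renderWithHeadlessChrome is a hypothetical placeholder for the
    // work Rendertron / prerender.io do here.
    res.send(await renderWithHeadlessChrome(req.originalUrl));
  } else {
    // Regular user: serve the normal client-side rendered app.
    res.sendFile('index.html', { root: 'build' });
  }
});

app.listen(3000);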
If you want to try it out, we also have a codelab that walks you through dynamic rendering, and there are videos that explain all of this as a series. The series is still running, so stay tuned, because there are more videos to come, and the first eight are pretty helpful already.

So what can go wrong? Well, I make a little mistake with my arms here, and this happens: things can fall off sometimes. That can happen with technology too. What if I go to a URL that doesn't exist? This looks like a 404 page, except when I check the HTTP status code, it says it isn't one: the server responded 200. Normally Googlebot is good at detecting that, but it should make you nervous when it happens. Why should it make you nervous? Because you might end up with something like this: "there was an error" showing up in the search results, because you didn't tell us it's a problem. That's not the content you'd like people to see.

Here's how you fix it. Here we're asking the API for a cat. If the cat doesn't exist, we can't redirect on the server - the server has already responded 200 OK, we're done on the server side - but on the client side we can send the user to a page that the server answers with a real 404, and we'll be fine.
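That fix could look like this sketch; the /not-found route (which the server must answer with an actual HTTP 404 status) and the API shape are assumptions, not the talk's literal code:

import { useEffect } from 'react';

function CatPage({ slug }) {
  useEffect(() => {
    // If the API says this cat doesn't exist, go to a URL that the
    // server answers with a real HTTP 404 status code.
    fetch(`/api/cats/${slug}`).then((res) => {
      if (res.status === 404) {
        window.location.replace('/not-found');
      }
    });
  }, [slug]);

  // ...render the cat once it loads.
  return null;
}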
We could also check for the error and then set the robots meta tag to noindex, so that once we've rendered the page, the renderer says "oh, this doesn't want to be in the index" and takes it out. So: we ship meta robots noindex in the initial HTML, and then use JavaScript to remove it once we know the page actually exists. Who thinks this is OK? Smart people - because I did that, and it was not OK. (I have to use my own React gifs here, by the way, because I don't get copyright clearance for other people's.) What happens here is not what you think: the render stage sees the noindex and says "we don't need to render, we don't need to index, it says noindex". So the JavaScript never runs, and our pages never make it out of noindex. Not very good.

There are more things. For example, relying on cookies: say your home page shows a cookie notice, and on every other page you check whether that cookie has been set and redirect back if it hasn't. Googlebot gets stuck there, because Googlebot doesn't persist data. You can use cookies, local storage, session storage, IndexedDB - but we won't persist any of that data; we throw it away once the page has finished rendering and being indexed. So don't rely on those.

You should also be careful with feature detection. Feature detection is a good thing, but sometimes it's not enough. Here we check if the browser has geolocation: if we get the user's location, we load local content for wherever they are, and if the browser has no geolocation, we load global content. The problem: what if the browser has geolocation, but I decline the permission prompt? Then the success callback never fires, because I denied the permission request. Googlebot denies permission requests, so we'd have no content on this page at all - unless we also handle the error case and load fallback content there. Then it works in all cases: the browser has geolocation, great, we load local content if the lookup succeeds and fallback content if it doesn't, and either way we have content, so we're good.
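As a sketch using the real navigator.geolocation API (loadLocalContent and loadGlobalContent are hypothetical helpers):

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    // Success: we know where the user is.
    (position) => loadLocalContent(position.coords),
    // Error: permission denied or position unavailable. Googlebot always
    // lands here, so without this fallback the page would stay empty.
    () => loadGlobalContent()
  );
} else {
  // No geolocation support at all.
  loadGlobalContent();
}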
If you want to learn more about these details, we have a troubleshooting page. And if you want to test your site, we have two fantastic tools for that. One is the Mobile-Friendly Test: it tells you if your page is mobile-friendly - who would have guessed - but it also shows you what the page looks like to us. Is this white space the page you would expect to see here? It gives you the rendered HTML, so you can figure out what the rendered page actually contains, and it gives you JavaScript errors, including stack traces, so if something goes wrong you can figure out why.

The other is Search Console. It tells you how many of your pages are in the index and how many are not, what was included and excluded, and which pages had errors. It also tells you why something is not in the index, perhaps because we got a 404 on a page where we didn't expect one.
It also shows you analytics on how often you show up in search results, which is pretty nice, and you can test URLs live: you can put in any URL on your domain, it spins for a little bit, and then it tells you "this URL is not on Google" and why not - maybe it's been crawled but not indexed yet, so it's somewhere in the middle. Keep in mind that these tools haven't been updated to the latest version of Chrome yet, but we're working on it, so keep an eye on our blog and Twitter; we'll let you know when that happens. Thank you so much, and I hope you had a good time.
