From Zero to SEO: Beyond Keywords

03 March 2025


SEO Image

Lately, I’ve been trying to grow my online presence by contributing to open source projects, being more active on LinkedIn and Medium, and writing on my blog. Writing on LinkedIn and Medium makes it easy to reach a large audience, since those platforms already rank well in Google Search.


This got me intrigued by how SEO actually works. Is it only about adjusting metadata? Keyword research? Building backlinks?


As I dove deeper into this area, I realized that SEO requires effort across multiple functions. For example:


  1. Content production team: keyword research and creating high-quality, relevant, and engaging content.
  2. Public relations team: managing backlinks and working with strategic partners on joint content and shared audiences to boost brand visibility.
  3. Design + dev team: ensuring the website is mobile-friendly and responsive.
  4. Core dev team: managing the technical SEO aspects, such as how the Google crawler works, site structure, JSON-LD, and attaching metadata.


SEO Guide

Basic site? Basic SEO. Building something HUGE? Then, you need more than just metadata. News sites, e-commerce giants, online learning platforms, thriving forums — what do they all have in common? They understand the real power of SEO. Let’s find out how!


In this post, I’ll discuss how technical SEO works, with my website as the case study.

Crawlability and Indexability

Technical SEO's main purpose is to make our website visible to Google's crawler and to shape how it is presented in search results.


But first, we need to show up.


To do that, we need to know how search engines work. There are three steps to the process; in this example, the search engine is Google and its crawler bots:


SEO Process

First, make sure your pages are ready and that you have verified ownership of your website (the initial setup step for Google Search Console); only then can we request indexing for each page with URL Inspection in Google Search Console.


Google Search Console

For new pages, or when a URL's content changes, hit the ‘Request Indexing’ button.


When I registered each page individually with URL Inspection, it took 1–2 days for it to be ranked in Google.


But as the website's content grows, it's wiser to make a scalable move: provide sitemaps for the crawlers. That way, whenever new content is created, we can expect it to be indexed by the end of the week (typically 3–4 days with this approach).


Sitemaps for Crawler Bots

Ever find yourself in a new area, limited to just one path? If you’re like me, you’d want to explore a bit. So, what’s your go-to? Google Maps, of course. It’s fast, offers a range of choices, and keeps things simple.


The same goes for web crawlers. There are billions of websites nowadays, and when a crawler reaches ours, it can follow the available paths (internal links). But when speed and simplicity matter and there are lots of URLs to cover, web crawlers need a sitemap.


Sitemaps

The content of my blog's sitemap. It gives crawlers directions on which pages to fetch, along with the metadata needed to index them in the respective search engine.
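For illustration, a sitemap is simply an XML list of URLs with a few optional hints for the crawler. The entries below are placeholders, not my actual posts:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://ilhamadhim.my.id/blog/example-post</loc>
    <lastmod>2025-02-20</lastmod>
  </url>
  <url>
    <loc>https://ilhamadhim.my.id/blog/another-post</loc>
    <lastmod>2025-02-25</lastmod>
  </url>
</urlset>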

After creating the sitemaps, don’t forget to submit the URL Sitemaps to Google Search Console as well!

Setting Boundaries for the Crawler (Robots.txt)

We can use a robots.txt file to tell search engines which parts of our website they should or shouldn’t crawl. This is useful for blocking duplicate content, pages under development, or auth-guarded pages that contain sensitive client data.



Robots.txt

On my personal website, the usage of robots.txt is relatively simple.
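A minimal sketch of what such a file can look like (illustrative rules, not the exact contents of mine):

# Let every crawler reach the public pages, but keep them away from API routes (illustrative rule)
User-agent: *
Allow: /
Disallow: /api/

# Point crawlers to the sitemap
Sitemap: https://ilhamadhim.my.id/blog/sitemap.xml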

Enriched Search Result with JSON-LD

Ever wondered how you can shape the way Google renders your website's search results? It turns out that's still within the scope of technical SEO!


JSON-LD

First, let's look at the common attributes of a JSON-LD schema. Say I have an e-commerce site with a product detail page; its JSON-LD schema would look something like this:

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "description": "A detailed description of the product.",
  "image": "https://example.com/image.jpg",
  "offers": {
    "@type": "Offer",
    "price": "29.99",
    "priceCurrency": "USD"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "120"
  }
}

  • @context specifies that we're using schema.org vocabulary.
  • @type indicates this is a product. It can also be BlogPosting, Article, Book, Event, Place, Rating, and more!
  • The name, description, and image properties provide information about the product.
  • The offers property contains a nested object with offer details.
  • aggregateRating summarizes the reviews: ratingValue is the average score and reviewCount is how many reviews that score is based on.
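To actually attach this schema to a page, it is usually embedded in a <script type="application/ld+json"> tag. In a Next.js pages-router setup, a minimal sketch could look like this (the page and product data are hypothetical, reusing the example above):

// pages/products/example-product.tsx (hypothetical page, for illustration only)
import Head from "next/head";

const productSchema = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Product",
  description: "A detailed description of the product.",
  image: "https://example.com/image.jpg",
  offers: { "@type": "Offer", price: "29.99", priceCurrency: "USD" },
  aggregateRating: { "@type": "AggregateRating", ratingValue: "4.5", reviewCount: "120" },
};

const ProductPage = () => (
  <>
    <Head>
      {/* Serialize the schema object into the JSON-LD script tag */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(productSchema) }}
      />
    </Head>
    <main>Product page content goes here.</main>
  </>
);

export default ProductPage;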

Testing JSON-LD

Testing the JSON-LD Schema with Google Search Tools


Testing the JSON-LD code gives us a better preview of how the rich snippets will work. This is quite a game changer because we don't need to wait for the pages to be crawled and indexed, which could take days.


Site Structure

Alright, here’s where it gets a bit like choosing your adventure.


Site Structure

My blog setup with SEO in mind


If you’re using an older Next.js setup, you’ve got a choice: you can do it my way or use a library called “next-seo”. But if you’re using the shiny new Next.js (app router), they’ve got this built-in feature generateMetadata() that does the job for you.
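For reference, a minimal sketch of the App Router approach is below. getArticle() is a hypothetical helper standing in for however you load post data, and this is not my setup, since I'm still on the pages router:

// app/blog/[slug]/page.tsx (App Router sketch)
import type { Metadata } from "next";

// Hypothetical loader: in a real project this would read the article's front matter.
async function getArticle(slug: string) {
  return {
    title: `Post: ${slug}`,
    summary: "Short summary of the article.",
  };
}

export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const article = await getArticle(params.slug);
  return {
    title: article.title,
    description: article.summary,
  };
}

export default async function Page({ params }: { params: { slug: string } }) {
  const article = await getArticle(params.slug);
  return <h1>{article.title}</h1>;
}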


Metadata Reusable Component


I’m using MDX for my articles, which lets me add fancy UI elements directly into my writing. To keep this SEO-focused, we’ll dive into how my <Metadata /> component helps me manage the important SEO information for each blog post.
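As a simplified sketch (not the full component), it is essentially a thin wrapper around next/head, taking the builtTitle, description, url, and metaImage parameters mentioned later in this post:

// components/Metadata.tsx (simplified sketch of a reusable SEO component)
import Head from "next/head";

interface MetadataProps {
  builtTitle: string;
  description: string;
  url: string;
  metaImage: string;
}

const Metadata = ({ builtTitle, description, url, metaImage }: MetadataProps) => (
  <Head>
    <title>{builtTitle}</title>
    <meta name="description" content={description} />
    <link rel="canonical" href={url} />
    {/* Open Graph tags (covered later in this post) would also go here, using metaImage */}
  </Head>
);

export default Metadata;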



Each blog post is processed this way and renders properly when we access it. But how do we expose every article so it can form the sitemap?


It’s time for us to create an API function to handle that:


API Function

FILE: /api/articles.ts turns all created articles into a list of objects that the sitemap can understand.
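A sketch of what such an API route can look like; the data/articles folder and the field names are assumptions about the project layout, not the exact implementation:

// pages/api/articles.ts (sketch; assumes MDX posts live in a local /data/articles folder)
import fs from "fs";
import path from "path";
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const articlesDir = path.join(process.cwd(), "data", "articles");

  // Collect every MDX file and map it into the shape the sitemap generator needs.
  const articles = fs
    .readdirSync(articlesDir)
    .filter((file) => file.endsWith(".mdx"))
    .map((file) => {
      const slug = file.replace(/\.mdx$/, "");
      const lastModified = fs.statSync(path.join(articlesDir, file)).mtime.toISOString();
      return {
        slug,
        url: `https://ilhamadhim.my.id/blog/${slug}`,
        lastModified,
      };
    });

  res.status(200).json(articles);
}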

After creating the API, we can quickly check its response:


API Response

With the API response ready, we can now easily generate the real sitemap XML. Since my website still uses Next.js v12.1.5, I can do it with the getServerSideProps function.


Sitemap XML

This sitemap.xml.tsx will be accessible at https://ilhamadhim.my.id/blog/sitemap.xml
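A simplified sketch of that file, assuming the article list comes from the /api/articles endpoint above:

// pages/blog/sitemap.xml.tsx (simplified sketch)
import type { GetServerSideProps } from "next";

// The component never renders anything; the XML is written directly to the response.
const Sitemap = () => null;

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  // Fetch the article list produced by the API route from the previous step.
  // A global fetch is assumed (Node 18+); on older Node versions a fetch polyfill or axios would be needed.
  const articles: { url: string; lastModified: string }[] = await fetch(
    "https://ilhamadhim.my.id/api/articles"
  ).then((response) => response.json());

  const entries = articles
    .map(
      (article) =>
        `  <url>\n    <loc>${article.url}</loc>\n    <lastmod>${article.lastModified}</lastmod>\n  </url>`
    )
    .join("\n");

  const xml = `<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</urlset>`;

  res.setHeader("Content-Type", "text/xml");
  res.write(xml);
  res.end();

  return { props: {} };
};

export default Sitemap;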

Final Sitemap

The final result!


After the sitemaps are ready, request indexing in Google Search Console by running URL Inspection and clicking the “Request Indexing” button.


Request Indexing

Technical SEO Tools

To monitor all those efforts, we need to validate a few things:


  1. Check that sitemap.xml is valid with a sitemap validator.
  2. Check that robots.txt is valid by accessing it directly on your website:
    https://<your-domain>/robots.txt
  3. Check the page speed and simulate how your site visitors might experience it with https://pagespeed.web.dev
  4. After ensuring everything is up and running, give the crawler a few days to gather your data. The results will be displayed in Google Search Console.

SEO Tools

For context, this screenshot was taken on 28 Feb 2025. It took almost 2 weeks for the report to show up.

Others

  • Open Graph Meta Tags

Open Graph meta tags are crucial for controlling how your web pages appear when shared on social media platforms like Facebook, Twitter (now X), LinkedIn, and others. They tell these platforms how to display your content.


Open Graph Meta Tags

How Open Graph renders my URL preview when it is shared on other platforms (e.g. Discord).

The use case of Open Graph on my website is simple. For maximum scalability, I put builtTitle, description, url, and metaImage in parameters that can be adjusted when I call the <Metadata /> component. The output of my Open Graph section is as follows:
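Since the original screenshot is not reproduced here, a rough sketch of that section inside the component's <Head> looks like this, built from the parameters above:

{/* Open Graph section inside the <Head> of the <Metadata /> component (sketch; the values come from the component's props) */}
<meta property="og:type" content="article" />
<meta property="og:title" content={builtTitle} />
<meta property="og:description" content={description} />
<meta property="og:url" content={url} />
<meta property="og:image" content={metaImage} />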



You can check a more comprehensive Open Graph syntax reference here.


  • Page Speed and Mobile-Friendliness

A fast server load time leads to better coverage by search engines, while a mobile-friendly design simply makes it more convenient for people to access your website. The more seamless your website design, the higher the CTR (click-through rate) you can possibly yield (as long as you keep producing high-quality content that resonates with your audience).


Page Speed

Quick check for mobile responsiveness of my personal website with Lighthouse.

  • HTTPS and Security

Implementing HTTPS and maintaining strong website security are crucial for SEO. They build trust, protect user data, and contribute to a positive user experience. By prioritizing security, you can enhance your website’s visibility, improve its ranking, and protect your brand’s reputation.


Conclusion

Remember, SEO is a journey, not a destination. By prioritizing crawlability, structure, and user experience, you’re already well on your way. Technical SEO isn’t as daunting as it seems — you’ve got this 😉.


©️ Muhammad Ilham Adhim - 2025