I remember staring at my PageSpeed Insights report, seeing that angry red number for “Total Blocking Time.” My own site felt snappy enough on my fiber connection, but the data told a different story. It was one of those moments where you know something is wrong, but the fix feels like chasing ghosts.

Photo by cottonbro studio via Pexels
It’s a common problem, this idea of main thread blocking time. You see it, you know it’s bad, but the path to fixing it isn’t always clear. Most tutorials jump straight to `async` and `defer` for JavaScript, and while those are valid, they often miss the bigger picture. I’ve been there, thinking I’d solved it, only to see the numbers barely budge.
The Invisible Wall: What Happens During Main Thread Blocking Time?
Think of your browser’s main thread as a single-lane road. All the important stuff has to use it: parsing HTML, styling CSS, executing JavaScript, handling user interactions. When a task takes too long, it’s like a broken-down truck blocking that road. Nothing else can get through. The UI freezes. Clicks don’t register instantly. That’s main thread blocking time in action.
My first real encounter with this was on a personal project, a small online tool I built for converting file formats. It was simple, but when I uploaded a larger file, the entire page would just… pause. For seconds. The progress bar wouldn’t update, the cancel button was dead. My initial thought was, “My code is inefficient.” Which it was, partly.
But the deeper issue was how my browser handled that inefficiency. A single, long-running JavaScript function was hogging the main thread. It wasn’t just slow; it was *blocking* everything. This directly impacts Interaction to Next Paint (INP), a crucial Core Web Vital. A bad INP score means users feel your site is unresponsive.
Where Most Tutorials Miss the Point: Beyond Just JavaScript
Yes, JavaScript is a major culprit. Heavy scripts, unoptimized loops, complex DOM manipulations – they all contribute. But it’s not the only one. This is a nuance many guides gloss over, and it led me down a lot of rabbit holes.
I distinctly remember debugging a static portfolio site I made. Minimal JavaScript, mostly just CSS animations. Yet, Lighthouse flagged significant blocking time. I was baffled. How? Turns out, my CSS was the problem. Specifically, layout thrashing.
I had some JavaScript that, on scroll, would read an element’s computed style, then immediately modify another element’s style, forcing the browser to recalculate layout and repaint. Doing this repeatedly in a scroll event listener was incredibly expensive, even if the JS itself was minimal. It was like forcing the browser to rebuild the entire road for every single car that passed.
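That interleaved pattern, roughly sketched below. The element ids and the specific styles are hypothetical; what matters is the read, write, read sequence, where each read after a write forces a synchronous layout:

```javascript
// Anti-pattern sketch: interleaved DOM reads and writes in a scroll handler.
// Element ids are hypothetical; the structure mirrors the mistake described.
function onScrollThrashing() {
  const header = document.querySelector('#header');
  const hero = document.querySelector('#hero');
  // Read: flushes any pending layout work.
  const headerHeight = header.getBoundingClientRect().height;
  // Write: invalidates layout again...
  hero.style.paddingTop = headerHeight + 'px';
  // ...so this next read forces ANOTHER synchronous layout.
  const heroTop = hero.getBoundingClientRect().top;
  hero.style.opacity = heroTop < 0 ? '0.5' : '1';
}
// Attaching this directly to scroll fires it many times per second:
// window.addEventListener('scroll', onScrollThrashing);
```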
Another silent killer is excessive DOM size. A webpage with thousands of elements, especially nested deeply, makes styling and layout calculations inherently slower. Even small CSS changes can trigger massive re-renders, blocking the main thread. It’s not the individual task that’s long, but the sheer volume of tiny tasks adding up.
Is it always about JavaScript?
No, not always. While JavaScript execution is a primary source of main thread blocking time, other factors play a significant role. Heavy CSS, complex layouts, forced synchronous layout calculations (layout thrashing), and even large image decoding can contribute. The main thread is a shared resource, and anything that demands too much of its attention for too long will cause a blockage.
Tactical Fixes I Actually Use (and Messed Up First Time)
After many attempts, some failed, some partially successful, I’ve found a few approaches that consistently help. These aren’t magic bullets, but they make a difference.
One of the biggest lessons was breaking up long tasks. If I have a function that takes 500ms to run, I can't let it run in one uninterrupted go; the browser freezes for that entire stretch. The first time I tried to fix this, I just slapped `setTimeout` everywhere, thinking it would magically solve everything. It made the code harder to read and didn't always help, because I wasn't breaking up the *tasks* themselves, just deferring them wholesale.
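A minimal sketch of what actually breaking up a task looks like: process the work in batches and yield back to the event loop between batches, so input events and rendering can run. The `workFn` callback stands in for whatever the real per-item work is:

```javascript
// Split an array into fixed-size batches.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Run workFn over every item, yielding to the event loop between batches
// so the main thread is never blocked for the whole run.
async function processInChunks(items, workFn, chunkSize = 100) {
  const results = [];
  for (const batch of chunk(items, chunkSize)) {
    for (const item of batch) results.push(workFn(item));
    // Hand control back so pending input and rendering can happen.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

In newer Chromium browsers, `await scheduler.yield()` is a nicer way to hand control back than the `setTimeout` trick, since it prioritizes resuming your task afterwards.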
The real trick is to identify the parts of the long task that can be executed independently. For computationally heavy stuff, Web Workers are a lifesaver. I used them for that file conversion tool. Offloading the heavy processing to a background thread meant the UI stayed responsive. It was a pain to implement initially, with all the message passing, but the result was night and day.
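The shape of that worker split, as I'd sketch it today. The file name, message fields, and UI helpers (`updateProgressBar`, `showResult`, `heavyConversion`) are all hypothetical stand-ins:

```javascript
// main.js — the UI thread hands the file off and listens for progress.
function startConversion(file) {
  const worker = new Worker('convert-worker.js'); // hypothetical file name
  worker.addEventListener('message', (event) => {
    const { type, percent, output } = event.data;
    if (type === 'progress') updateProgressBar(percent); // hypothetical UI helper
    if (type === 'done') showResult(output);             // hypothetical UI helper
  });
  worker.postMessage({ type: 'convert', file });
  return worker;
}

// convert-worker.js — runs off the main thread. The guard keeps this
// sketch inert outside an actual worker context.
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  self.addEventListener('message', (event) => {
    if (event.data.type !== 'convert') return;
    const output = heavyConversion(event.data.file, (percent) => {
      self.postMessage({ type: 'progress', percent }); // report progress
    });
    self.postMessage({ type: 'done', output });
  });
}
```

The message passing is the annoying part: every progress update and the final result has to cross the thread boundary as a serializable message. But while the worker crunches, the main thread stays free.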
For rendering-related issues, like my CSS thrashing problem, the solution was to audit how I was reading and writing DOM properties. Grouping DOM reads, then grouping DOM writes, and avoiding interleaved operations. Using CSS properties like `will-change` (sparingly!) or `content-visibility` can also help. I saw a 200ms reduction in blocking time on an internal documentation site just by adding `content-visibility: auto` to some heavy sections.
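A sketch of the grouped-reads-then-writes pattern for a scroll handler, with hypothetical selectors. All reads happen first, then all writes, and the work is coalesced into one `requestAnimationFrame` callback per frame instead of running on every scroll event:

```javascript
let scrollScheduled = false;

function onScrollBatched() {
  if (scrollScheduled) return; // coalesce bursts of scroll events
  scrollScheduled = true;
  requestAnimationFrame(() => {
    scrollScheduled = false;
    const header = document.querySelector('#header');
    const hero = document.querySelector('#hero');
    // Phase 1: all reads (at most one layout flush).
    const headerHeight = header.getBoundingClientRect().height;
    const heroTop = hero.getBoundingClientRect().top;
    // Phase 2: all writes (no reads afterwards, so no forced reflow).
    hero.style.paddingTop = headerHeight + 'px';
    hero.style.opacity = heroTop < 0 ? '0.5' : '1';
  });
}
// window.addEventListener('scroll', onScrollBatched, { passive: true });
```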
And let’s not forget the DOM itself. Sometimes, the best optimization is simply less. Pruning unnecessary elements, simplifying nested structures, or lazy-loading parts of the UI that aren’t immediately visible can dramatically reduce the browser’s workload. For more on the common causes of poor interactivity, which is heavily tied to main thread blocking time, see also: Penyebab Inp Buruk (Javascript, Third Party Script).
The Uncomfortable Truth About Third-Party Scripts
This is where it gets tricky. You can optimize your own code until it sings, but then you add a third-party chat widget, an analytics script, or an ad network, and BAM! Your main thread blocking time skyrockets. I’ve seen a single marketing script add 300ms of blocking time to my personal blog. It’s frustrating because you often need these tools.
My go-to strategy here is a combination of `defer`, `async`, and aggressive lazy loading. For scripts that don’t need to run immediately, `defer` is usually better than `async` because it preserves execution order. For ads or widgets that appear below the fold, I’ll often wrap them in an Intersection Observer to only load them when they’re about to become visible. This isn’t always possible, especially with some stubborn scripts, but it’s worth the fight.
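A minimal sketch of that Intersection Observer approach, assuming a hypothetical widget container and script URL. The script tag is only injected once the container is about to scroll into view:

```javascript
// Inject a third-party script only when its container nears the viewport.
// The selector and script URL are hypothetical.
function lazyLoadScript(containerSelector, src) {
  const container = document.querySelector(containerSelector);
  if (!container) return;
  const observer = new IntersectionObserver((entries, obs) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      const script = document.createElement('script');
      script.src = src;          // injected scripts load async by default
      document.head.appendChild(script);
      obs.disconnect();          // load once, then stop watching
    }
  }, { rootMargin: '200px' });   // begin loading slightly before it's visible
  observer.observe(container);
}
// lazyLoadScript('#chat-widget', 'https://example.com/widget.js');
```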
Sometimes, I’ve even resorted to self-hosting small, critical third-party libraries if their CDN version is consistently slow or adds too much overhead. It’s more maintenance, but sometimes it’s the only way to regain control over the main thread.
Can I really control third-party scripts?
To a degree, yes, but it’s a constant battle. You can implement attributes like `defer` and `async`, use resource hints, or lazy-load them. However, some third-party scripts are just built to be aggressive and will still block the main thread. In those cases, it often comes down to evaluating if the value they provide outweighs the performance cost. Sometimes, the most practical solution is to find a lighter alternative or negotiate with the vendor for a more optimized version.
The “Why Did That Work?” Moment: Performance Monitoring Beyond PageSpeed
You can spend hours optimizing, but without proper monitoring, it’s just guesswork. PageSpeed Insights is a great starting point, but it’s lab data. It doesn’t tell you what your actual users are experiencing. Real-User Monitoring (RUM) tools are essential here, giving you field data on metrics like INP and TBT.
But for deep dives, Chrome DevTools’ Performance tab is my best friend. I once spent a day trying to figure out why a specific page on my site had intermittent jank. Lighthouse looked fine. RUM showed occasional spikes. In DevTools, I recorded a user flow and zoomed into the red blocking sections. It revealed a tiny, almost invisible `resize` event listener that was firing hundreds of times per second on certain browser widths, completely unbeknownst to me.
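Once you've found a listener like that, the usual remedy is to debounce it: only run the real work after the events have stopped firing for a moment. A generic sketch (`recalcLayout` is a hypothetical handler):

```javascript
// Return a wrapped version of fn that only runs after waitMs of quiet.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);         // each new event resets the countdown
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}
// Hundreds of resize events per second collapse into one handler call:
// window.addEventListener('resize', debounce(recalcLayout, 150));
```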
It was a piece of inherited code, buried deep. No amount of static analysis or PageSpeed reports would have found it. Only by seeing the actual execution on the timeline, frame by frame, could I pinpoint the exact function causing the main thread blocking time. It felt like finding a needle in a haystack, but the satisfaction was immense.
Reducing main thread blocking time isn’t a one-time fix. It’s an ongoing process of understanding, testing, and iterating. You fix one bottleneck, and another appears. It’s like gardening; you prune, you fertilize, and you keep an eye out for new weeds. I just closed my laptop, knowing there’s always another long task waiting to be optimized.
