
Introducing web workers to improve subito.it performance — part 2

The results and challenges: was it worth it?

By: Alberto De Agostini and Alessandro Grosselle, Senior Engineers

In this article, we continue our journey of adding web workers to our frontends at Subito.it, part of Adevinta. We discuss the results and challenges of this exercise, and answer the question: “Was it worth it?” If you haven’t read part one yet, you can find it here. This will explain what web workers are and why we chose to introduce them.

Results

We rounded off the story in part one with how we measured and compared the Total Blocking Time (TBT) with and without web workers. Now, let’s reveal the results.

The average TBT we measured was 4142.5ms for the version with web workers vs 4335.5ms for the ‘normal’ version. This means we got a 193ms (roughly 200 millisecond) decrease in total blocking time on average!

That may not seem a huge number but it does matter, especially if you consider:

  1. Every performance increase is great, no matter how small it is.
  2. The worse the device, the bigger the improvement the web workers bring. Low-end devices have slower CPUs (but usually multiple cores), and moving JavaScript work off the main thread lets those cores run in parallel. With this in mind, note that the tests were done on my machine, a MacBook Pro with a 2.6 GHz Intel Core i7. That’s a lot better than the average user’s device.
  3. All the other metrics remained pretty much the same. A few tests showed some oscillation, but the averages were stable. In other words, no other performance metric regressed, and we considered this a great result.

But there were difficulties

If you think that implementing this was issue-free, I have bad news. We faced a lot of technical challenges — a lot more than anticipated, if I’m honest.

Before you get cold feet, I want to stress that a lot of the challenges we faced were due to our architecture and how we wanted to implement the feature. If you want to use web workers on a simple project, you won’t face most of the following issues (theoretically none of them), especially if you use frameworks like Next.js or Nuxt, as the latest versions handle web workers out-of-the-box by using webpack 5.

We learn the most when we encounter problems, so it was a huge (and interesting) learning curve — and worth reading on!

Web workers need a file to be created

As you can read from the MDN docs, to create a worker you must provide a JavaScript file that includes the logic of the worker. This means it must set up a listener to receive messages/data from the main thread, perform an action based on the message and then return a message/result to the main thread.
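For illustration, a minimal worker script could look like this (the computation is just a placeholder):

```js
// worker.js — a minimal worker script
self.addEventListener("message", (event) => {
  // Do the CPU-heavy work off the main thread...
  const doubled = event.data.numbers.map((n) => n * 2);
  // ...then send the result back to the main thread.
  self.postMessage({ doubled });
});
```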

So, what’s the problem? As I said before, if you were using web workers in a Next.js project that has all the business logic inside it, you would need to create a JS file, tell Next.js to serve it and then reference it from the code when doing `new Worker(…)`. The framework will take care of serving the file at runtime on the browser.
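And on the main thread, spawning the worker and exchanging messages looks something like this (using the `new URL(…)` pattern that webpack 5 understands):

```js
// Main thread: spawn the worker and talk to it via messages.
const worker = new Worker(new URL("./worker.js", import.meta.url));

worker.addEventListener("message", (event) => {
  console.log(event.data.doubled); // [2, 4, 6]
});

worker.postMessage({ numbers: [1, 2, 3] });
```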

If you instead have a single page application (or any statically generated site), you need to find a place to host/serve that file. For example, you can upload it to an S3 bucket and serve it through a CDN like CloudFront, then point to it from your application. This is a layer of complexity you’ll need to add to your architecture if you don’t already have a CDN up and running.

But as we use Next.js for our applications, in theory we shouldn’t have a problem with this. The issue arises because we have an external library for making HTTP requests to our backends. Our Next.js apps include the library (via npm) and use some exposed methods to fire the requests. So, our worker is not implemented in our Next.js apps; instead, it lives in a separate library bundled with rollup (as a separate project).

We thought of two possible solutions:

  1. Create the script at runtime and pass it as a blob to create the worker. Basically this means creating a file (blob object) with the script as a big string, then spawning the worker by providing that blob. This is called ‘inline workers’. You can read more about this technique here. It’s a cool solution because it removes the issue of serving the file completely (there’s a minimal sketch just after this list). Obviously, it has some trade-offs and we’ll cover these shortly.
  2. Instruct rollup to create the bundled library AND a separate chunk for the worker, then make Next.js serve that file.
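Here’s roughly what the first, inline approach looks like:

```js
// Build the worker script as a string, wrap it in a Blob, and spawn
// the worker from an object URL — no file to host or serve.
const workerSource = `
  self.addEventListener("message", (event) => {
    const doubled = event.data.numbers.map((n) => n * 2);
    self.postMessage({ doubled });
  });
`;

const blobUrl = URL.createObjectURL(
  new Blob([workerSource], { type: "application/javascript" })
);
const worker = new Worker(blobUrl);
```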

The first solution — pretty easy but a few drawbacks

Implementing the first solution was pretty easy. You can leverage some plugins like https://github.com/darionco/rollup-plugin-web-worker-loader (specifying `inline: true` on the configuration) and rollup will take care of all the complexity.
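As a sketch, the relevant rollup configuration looks something like this (using the plugin’s documented `inline` option; input and output paths are illustrative):

```js
// rollup.config.js
import webWorkerLoader from "rollup-plugin-web-worker-loader";

export default {
  input: "src/index.js",
  output: { file: "dist/index.js", format: "esm" },
  plugins: [
    // inline: true embeds the worker code in the bundle as a string,
    // so no separate worker file needs to be served.
    webWorkerLoader({ inline: true }),
  ],
};
```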
We had a go but we weren’t really satisfied because this approach has some downsides that we wanted to avoid:

  • Can’t cache the worker file: because the file is created at runtime as a string, you can’t cache it anywhere. This can be a bit of a performance quibble, and the larger the file, the worse the problem.
  • Blob size: this is related to the previous point. Our HTTP library uses two external dependencies, ‘axios’ and ‘morphism’. Neither is small, and to make this approach work we must inline both in the worker script, resulting in a massive blob being created at every page load. Ouch.
  • Debugging complications: last but not least, web workers already add a layer of complexity to the application, especially when something breaks, as debugging gets harder. The inlined blob file makes this even worse: Chrome DevTools will have a harder time helping you when something doesn’t work, especially with sourcemaps.

So even if this solution was easily implemented, we decided to test the second approach to see if we could do better, both for the end user and for our developers.

💡 By the way, we think the first approach can be fine for simple cases where you have a static page and the workers’ script is not going to be long, so make sure you consider this approach case-by-case.

The second solution — a bumpy road but problems solved

The road for the second solution had a lot of twists and turns.

We used @surma’s amazing rollup-plugin-off-main-thread. By using it, we ended up with a bundle plus a separate chunk for the worker, with rollup automatically handling the URL used to import the worker file. This works well when the application itself is also bundled with rollup. But our app and library are separate, and we need Next.js, not rollup, to import and serve the worker file.

Unfortunately, at the time, the plugin didn’t accept a parameter to specify this behaviour. We needed a way to tell rollup: “yes please, bundle the library and make a separate chunk for the worker, but don’t ‘fill’ the URL to import the worker file; that will be done by someone else (Next.js) in a future build step”.

I think this is pretty advanced and we didn’t find a lot of resources about it. But, by trial and error, we were able to make this work by overriding an internal (and undocumented) property of the plugin.

This is the part of the config that makes it work:

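(The property we overrode is internal and undocumented, so treat the names below as an illustrative sketch rather than the plugin’s official API.)

```js
// rollup.config.js (illustrative sketch)
import OMT from "@surma/rollup-plugin-off-main-thread";

// Create the plugin instance, then override how it resolves file URLs
// so the emitted code references `import.meta.url` (which Next.js can
// rewrite at build time) instead of the AMD loader's `module.uri`.
const omt = OMT();
omt.resolveFileUrl = ({ fileName }) =>
  `new URL("${fileName}", import.meta.url)`;

export default {
  input: "src/index.js",
  output: { dir: "dist", format: "amd" },
  plugins: [omt],
};
```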

We tell the plugin to not ‘fill’ the URL and where to import the file. We then end up with a bundle containing this line to import the worker:

worker = new Worker(new URL("Worker-4b398188.js", import.meta.url));

`import.meta.url` is a special expression that Next.js is able to understand and replace with the application URL at build time.

Without that configuration, the same line was:

worker = new Worker(new URL("Worker-9b59f393.js", module.uri));

`module.uri` is something that the plugin provides, not something that Next.js can change.

This was a bumpy road. It took more than a few hours, but we ended up with a second solution that worked as we wanted: the file is created at build time, Next.js can serve it and cache it. So, this problem is solved, right? Of course not, not yet at least.

The code generated by rollup-plugin-off-main-thread also has another issue: it’s not server-side ready. It generates code that uses browser APIs, and when that runs on the server side of Next.js, Node.js does not have a good time!

Web workers are a browser feature (there are worker threads in Node.js, but they are not the same thing), and we only wanted the client-side web workers for performance.

To overcome this, we came up with a simple solution: in the library, we created an entry point that checks whether we are on the client. Only if we are do we dynamically import the rest of the library; otherwise, we just do nothing.

Entry point code sample:

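A minimal sketch of the idea (the function and module names are illustrative, not our actual API):

```js
// index.js — the library's entry point
export async function createWorkerHttpClient(config) {
  // Web workers are a browser feature: on the Next.js server we bail
  // out instead of touching any browser API.
  if (typeof window === "undefined") {
    return null;
  }
  // On the client, lazily import the code that actually spawns the worker.
  const { createClient } = await import("./client");
  return createClient(config);
}
```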

With this little trick we solved our first problem, which, luckily, was also the biggest one.

Don’t get us wrong, Surma’s plugin is amazing, and these problems are understandable: the first is pretty advanced and it’s likely no one had encountered it before, and the second exists because Next.js runs code both server-side and client-side, while the rollup plugin is meant just for the client. Down the road, we’re planning to open a few Pull Requests (PRs) to fix these two issues.

Storybook is not able to serve the worker file

Storybook was our next hurdle. For those who don’t know what Storybook is: we use it as documentation for our libraries (and highly recommend trying it, as it’s an amazing tool).

I guess we could say that Storybook is not as smart as Next.js: what we did in the previous section to leverage Next.js’s ability to host and serve the worker file is exactly what broke Storybook. We didn’t anticipate this problem. Storybook makes use of webpack, which should be able to handle this correctly, but apparently there is a bug. We opened an issue on the official repo and are monitoring it.

As a temporary workaround, we mocked the library that creates and uses the web worker. We don’t actually use the worker in our Storybook; we just need to show textual documentation for that package (how to use the library and similar things). So, we decided to mock the library at the Storybook level: you just need to specify how to resolve the library’s alias in the config.

This is ours, as an example:

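Something along these lines (the package name and paths are placeholders for our real ones):

```js
// .storybook/main.js
const path = require("path");

module.exports = {
  webpackFinal: async (config) => {
    // Alias our worker-based library to a local mock so Storybook
    // never tries to bundle or serve the worker file.
    config.resolve.alias = {
      ...config.resolve.alias,
      "@subito/worker-http": path.resolve(
        __dirname,
        "__mocks__/worker-http.js"
      ),
    };
    return config;
  },
};
```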

It doesn’t matter what’s inside the mock, as it’s never called in Storybook.

💡 `webpackFinal` is a hook where you can tap in and change the webpack configuration. You can read more about it here.

Messages between the main thread and workers must be serialisable

The communication between web workers and the main thread happens via ‘postMessage’, an API that copies data using the structured clone algorithm (with support for a few ‘transferable’ object types). In short, this means you can only send plain data like strings, numbers, objects and so on. You cannot pass functions, references to DOM elements or other complex stuff.
(Here are some useful links if you want to read more about transferring data and what works and what doesn’t.)
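A quick illustration of the limitation (the worker file is the hypothetical one from earlier):

```js
const worker = new Worker(new URL("./worker.js", import.meta.url));

// Fine: plain objects, strings and numbers survive the structured clone.
worker.postMessage({ url: "/api/items", params: { page: 1 } });

// Throws a DataCloneError: functions cannot be cloned.
worker.postMessage({ mapResponse: (item) => item.title });
```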

Again, this shouldn’t be a problem. You’d expect us to send the data for the HTTP request (just objects, strings, booleans and numbers) and to get back the same kind of data: the modelled response from the server. Unfortunately, because of our architecture and how we wanted to implement this, that was not the case.

As we said, we already had a networking library with functions that call Axios and map the response via Morphism. To implement our solution as a separate library, agnostic of any business logic, we expose a function that accepts the Axios config and the Morphism schema, and returns the modelled response. This way, we can use this low-level library inside our existing networking library, just by replacing the Axios and Morphism calls with the new library’s exposed method.

Unfortunately, some of our Morphism schemas have functions inside (which is specific to Morphism and how it works). We didn’t want to change the architecture, which is why we took this route, even if it meant a little more complexity. Surfing the web, we found a solution for this issue. It may not be elegant, but it does the job: writing custom `JSON.stringify` and `JSON.parse` wrappers that serialise and deserialise functions (by inlining the source of the function when stringifying and recreating the function when parsing).

This is the code, if you’re interested:

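A sketch of what those wrappers can look like (our real implementation differs in details):

```js
// Encode functions as their source when stringifying, and rebuild them
// with the Function constructor when parsing on the other side of postMessage.
const FUNC_PREFIX = "__function__";

function stringifyWithFunctions(value) {
  return JSON.stringify(value, (key, val) =>
    typeof val === "function" ? FUNC_PREFIX + val.toString() : val
  );
}

function parseWithFunctions(json) {
  return JSON.parse(json, (key, val) =>
    typeof val === "string" && val.startsWith(FUNC_PREFIX)
      ? new Function(`return (${val.slice(FUNC_PREFIX.length)})`)()
      : val
  );
}

// Example: a schema containing a function survives the round trip.
const schema = { title: "item.title", price: (item) => item.price / 100 };
const revived = parseWithFunctions(stringifyWithFunctions(schema));
console.log(revived.price({ price: 199 })); // 1.99
```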

Once we cracked this, we achieved our goal, but it took way more time than expected.

In conclusion — was it worth it?

We consider ourselves advocates for web workers. We think they are highly under-used and suggest everyone at least try them out, because they can bring good performance improvements. But even if the ecosystem has improved massively since their release (they are an old feature, available even in IE11), in some cases it still feels too hard to use them (and their API, postMessage in particular, is not great). This is probably why they’re not widely adopted.

So, should you use them? Try them! If you can adopt them with almost zero effort, go for it. Otherwise, weigh the trade-offs case-by-case, because the implementation effort and the added complexity may not always be worth it, even when performance matters a lot.

Have you tried web workers and want to get in touch to have a chat about it? We’re always glad to hear feedback and tips if you think we could have done something better.
