
I wonder if somehow the request got handled by two different edge workers that were desynchronized? I’ve seen it happen in busy areas (NYC, etc.) where a single client will hit many Workers in a session whereas when connecting from a rural area I’ve never observed that.

Regardless, I say the solution is fat index files. Is there any tangible benefit to the long-held tradition of separating the structure from the functionality from the styling? Seems to me like that's just asking for trouble.



I mostly use Vite nowadays, so my bundle is usually automatically split into a vendor.js and an index.js. The vendor bundle for dependencies is large (usually 50-200KB brotli'ed for my popular side projects) and rarely changes. The index.js containing only my code is usually smaller and changes on every build. With a fat index, everything has to be re-downloaded after every change. Most people don't care these days, but I try to make the experience nice even for people with really shitty connections.
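For what it's worth, if your Vite version doesn't do the split automatically, you can make it explicit with Rollup's manualChunks. A rough sketch (the `vendor` chunk name is arbitrary; exact defaults vary across Vite versions):

```javascript
// vite.config.js — force everything from node_modules into one
// long-cached vendor chunk, keeping app code in its own small chunk.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});
```

That way returning visitors only re-fetch the small app chunk when you deploy.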

In addition, fat index files are really bad for multi-page apps.


Splitting vendor and index makes good sense, I use esbuild which has [iffy](https://github.com/evanw/esbuild/issues/207) support for that. Still, the vendor code could be loaded independently while the application code (and styling!) is inlined into the initial index.html response.
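To be concrete, the esbuild splitting mentioned above looks roughly like this (a sketch, assuming an ESM project; `src/index.js` and `dist` are placeholder paths):

```javascript
// build.js — esbuild's code splitting puts modules shared between entry
// points (i.e. your dependencies) into separate chunk files.
import * as esbuild from 'esbuild';

await esbuild.build({
  entryPoints: ['src/index.js'],
  bundle: true,
  splitting: true, // emit shared chunks instead of one fat bundle
  format: 'esm',   // splitting currently requires ESM output
  outdir: 'dist',
  minify: true,
});
```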

> I try to make the experience nice even for people with really shitty connections

I see this sentiment a lot, though people seem to be able to use it to justify any design at all... the question IMO is what does "shitty" mean?

- In a low bandwidth case, serving only the absolute minimum data to do what the user has specifically requested makes good sense. A solution here is Server Components, which can send the client js event handlers on a per-interaction basis.

- In a high latency case, the total number of round trips should be minimized at all costs, so the Server Components approach is terrible: the client may need to wait for two or more round trips to complete an interaction (one to download the client js, another for that client js to perform the actual action). A solution here leans towards the fat index approach.

- In the case where connections drop often, all the data that the client might need should be transferred over as soon as possible, as there's a good chance they won't be able to access the server at the exact moment when the data is needed. The solution here is the fattest indices possible, with copious caching.

These are all three in conflict with each other to some extent, so the best approach is probably dependent on the specifics of your user-base.

In my experience, a "shitty connection" is one where the bandwidth is low, latency isn't typically a big deal, but the connection will drop frequently, potentially for hours or more. However, in such cases I'll at times have access to the occasional hotspot where the network is perfectly fine. Accordingly, I design my apps to transfer as much data as possible initially (ensuring that the main content can be seen and interacted with even if data for some other module is transferring in the background), and provide the option to store everything in stale-while-revalidate service worker caches so the full experience is available fully offline to the extent possible, even if you didn't fully explore the app while online. In this way, I can download the latest chunks when on the good connection and run fully offline from then on (obviously excepting actions that are legitimately impossible without a server).
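The stale-while-revalidate part can be sketched as a small helper. The names and the injected `cache`/`fetchFn` parameters are mine, for testability; in a real service worker you'd call this from a `fetch` event handler with a cache from `caches.open(...)`:

```javascript
// Serve the cached copy immediately if one exists, while refreshing the
// cache in the background; fall back to the network on a cold cache.
async function staleWhileRevalidate(cache, request, fetchFn) {
  const cached = await cache.match(request);
  // Always kick off a network fetch; update the cache when it lands.
  const network = fetchFn(request)
    .then(async (response) => {
      if (response && response.ok) await cache.put(request, response.clone());
      return response;
    })
    .catch(() => undefined); // offline: swallow the error, rely on cache
  // Stale copy now, fresh copy next time.
  return cached || network;
}
```

The key property for flaky connections is that a request only fails outright when the cache is cold *and* the network is down at that exact moment.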



