HTMX has given me so much joy. I love Django and was there near the beginning. For a while I thought I had to switch to something like FastAPI and Vue etc to make relevant web apps and sites, but with Django recently adding async, Django Ninja, and HTMX, I'd reach for Django now for almost anything besides a few specific use cases.
So many problems I've run into with newer tools feel like they were already solved years ago if you can SSR.
Not that the newer tools don't have their place and they have plenty of good ideas, but it's been fun to see Django stay relevant and for more of the included batteries to be useful again (forms, templates, etc).
At some point I hope it becomes obvious that well-engineered SSR webapps on a modern internet connection are indistinguishable from a purely client side experience. We used this exact same technology over dialup modems and it worked well enough to get us to this point.
Being able to click a button and experience 0ms navigation is not something any customer has ever brought to my attention. It also doesn't help much in the more meaningful domains of business since you can't cheat god (information theory). If the data is so frequently out of sync that every interaction results in JSON payloads being exchanged, then why not just render the whole thing on the server in one go? This is where I can easily throw the latency arguments back in the complexity merchant's face - you've simply swept the synchronization problem under a rug to be dealt with later.
Yes, a well-engineered SSR webapp could be indistinguishable from an SPA. However, it is much harder to build a well engineered SSR with the tools we have. I haven't seen someone solve errors with form submissions and the back button well at the framework level. Post-Redirect-Get was awful. Trying to solve back buttons and wizards. Trying to solve modals. Is a modal a separate page with the rest in the back? What does closing a modal mean? What does a sidebar mean? How about closing it? Pretty soon, you're in half-an-SPA already.
And since you don't want a 2000 character URL, you're either storing half of the session on the server or having to build an abstraction with local storage. And since our frameworks didn't evolve to handle that, what is the purpose?
The key insight into the SPA is that you are writing a coherent client experience. No SSR framework figured out how to do this because they thought about pages rather than experiences.
Let me be clear: I am speaking about web applications. If you're providing information and only have a small number of customer interactions, an SSR is superior. CNN should not be an SPA.
All of the SSR webapps I've built had these solved at a framework level. Dot net and PHP.
Like, the back button: there is no logic because this isn't react. It's just the browser back button. You don't have to do anything if you're using SSR. Back button problems only apply to SPAs or hybrids.
> Yes, a well-engineered SSR webapp could be indistinguishable from an SPA. However, it is much harder to build a well engineered SSR with the tools we have.
Clearly you've never used Laravel + Livewire. Modals, forms, wizards, sidebars, I have all of that in my app without writing any client-side JavaScript. And it works better than most SPAs. I actually get gushing praise for how "smooth" the app experience is.
My contention is that this may not be the traditional client side app, but you are still placing these on a single page. Just because you are replacing the HTML on the page doesn't mean it is a multi-page app. It's an interesting SPA/MPA hybrid but just because you are not writing javascript doesn't mean that the infrastructure isn't using javascript to handle the plumbing.
So, let's use this as an example. Let's say you bring out a side drawer to edit the details of one row on the table. The side drawer pops up. The user edits details and clicks submit. (To answer this question, the user scrolls to other parts of the table to look at other rows.) There is an error in the user's input based on business logic. The user corrects it, and the row is changed. The side drawer goes away.
How many times is the whole page loaded from scratch? In a traditional SPA, the page is loaded once. With a strict MPA, the page is loaded from scratch four times. With Laravel + Livewire, to my understanding, the page is loaded once and divs are replaced with HTML from the server.
Even if it is not a react app, it is still a collection of single page apps with server side intermediations using html.
> The key insight into the SPA is that you are writing a coherent client experience.
This is the best way to put it I've yet seen. HN articles keep saying things like "now that navigation transitions are solved in CSS, there's no use case left for SPAs". Is everyone just writing apps for widespread content consumption or something?
> CNN should not be an SPA.
Yes, and we need canonical "that should be an SPA"-type apps to bring up in these discussions--which can be hard, since all the best SPAs are for getting work done, not publishing content for the public to consume. Thus, as a class they tend to be department-procured B2B apps and not as generally recognizable. I propose GMail and Google Docs/Sheets/Slides for starters.
If this would have been a desktop app in the 90s, it's within this scope.
The only real use case for an SPA is something that has to continue to work offline. There are legitimate cases like this, but most apps developed as SPAs aren't it.
> No SSR framework figured out how to do this because they thought about pages rather than experiences.
Laravel, Blazor and apps designed around HTMX are all like this. "SSR framework" has literally nothing to do with "pages rather than experiences". Pages are just a medium to deliver experiences.
> Being able to click a button and experience 0ms navigation is not something any customer has ever brought to my attention
With modern CSS transitions, you can mostly fake this anyway. It's not like JavaScript apps actually achieve 0ms in practice - their main advantage is that they don't (always) cause layout/content flashes as things update.
> At some point I hope it becomes obvious that well-engineered SSR webapps on a modern internet connection are indistinguishable from a purely client side experience.
I dunno; other than the fact that there are some webapps that really are better done mostly client-side with routine JSON hydration (webmail, for example, or maps), my recent experimentation with making the backend serve only static files (html, css, etc) and dynamic data (JSON) turned out a lot better than a backend that generates HTML pages using templates.
Especially when I want to add MCP capabilities to a system, it becomes almost trivial in my framework, because the backend endpoints that serve dynamic data serve all the dynamic data as JSON. The backend really is nicer to work with than one that generates HTML.
I'm sure in a lot of cases it's the front-end frameworks that leave a bad taste in your mouth, and truth be told, I don't really have an answer for that other than looking into creating a front-end framework to replace the spaghetti pattern that most front-ends have.
I'm not even sure if it is possible to have non-spaghetti logic in the front-end anymore - surely a framework that did that would have been adopted en-masse by now?
Have you heard of these things called smartphones? I hear they're getting quite popular.
I read HN all the time on my phone, and I love that it loads reliably even on 1 bar of 4G. Meanwhile, Reddit no longer works reliably even with 3 bars of 5G.
The former is HTML with a light sprinkling of JavaScript, the latter is an SPA.
https://htmx.org/examples/active-search/
I actually started my own PHP based on C#, called CHP, for fun.
It runs atop whatever the current dotnet hosting service is (Kestrel?). It takes everything inside the "<? ?>" code blocks and inlines it into one big Main method, exposing a handful of shared public convenience methods (mostly around database access and easy cookie-based authentication), as well as the request and response objects.
Each request is JITed, then the assembly is cached in memory for future requests to the same path, and it will recompile sources that are newer than the cached assembly.
There is no routing other than dropping the .chp extension if you pass "-ne" into the arguments launching the server.
It's not very far along, and is completely pointless other than for the sake of building my own web language thingy for the first time since 2003.
This is how I've been building my .NET web apps for the last ~3 years. @+$ = PHP in C# as far as I'm concerned.
Have you looked into the string interpolation & verbatim operators as a templating alternative? These can be combined to create complex, nested strings.
There are a lot of ways to manage this problem. My preferred path is to wrap interpolated fields with HttpUtility.UrlEncode() when I know a user can touch it and there are plausible reasons for allowing 'illegal' characters at form submit time.
In terms of performance, it is definitely faster. The amount of time it takes to render these partials is negligible. You'd have to switch up your tooling to measure things in microseconds instead of milliseconds if you wanted any meaningful signal.
As a long time PHP developer, it never fails to amuse (amaze?) me the lengths people go to in order to get the things the browser will give you for free.
The browser gives you a full-blown programming language with a rich API, but it seems a lot of people avoid that in favor of smushing together a static view on the server side with little more than string interpolation.
HTML templating isn't just string interpolation; it's a whole templating engine. It's not like JSX, which is fake templating. Server-side frameworks have real templating.
Plus, you have to convert data to HTML somewhere. If you're using react you do this typically on the front end. You traverse and read JSON and convert it to HTML... Just like you would in PHP. Just, on the front end.
That's just bad coding no matter the framework. Too many devs pull in too many packages, saving themselves minutes at the expense of all the users waiting longer. So much compute and CO2 is wasted on it.
The most "special" code that I regularly come across is when a developer takes a JPG in blob storage -- already a public HTTPS URL -- then serves that in a "Web API" that converts it to base-64 encoded bytes inside JSON, sends it to client JavaScript, decodes it, and feeds it to an image in code.
Invariably, it's done with full buffering of the blob bytes in memory on both server and client, no streaming.
Bonus points are awarded for the use of TypeScript, compression (of already compressed JPGs, of course), and extensive unit and integration tests to try and iron out the bugs.
It has been wild to realize I've now seen one full technology cycle of thin client to thick client to thin client again. Maybe PHP this time around will be able to be more robust with the lessons learned.
It's a chance to start all over yet again! Come on- we're all up for that, we do it every few months!
Next.js is kind of bearable, as it uses the same approach, going back to the roots of web development; it is almost like doing JSPs all over again.
https://blog.platformatic.dev/laravel-nodejs-php-in-watt-run...
Major differences that I can think of between the two are (with regard to warts and ease of use):
PHP 8 uses exceptions with a unified Error hierarchy: type errors, division by zero, certain parse errors, and so on.
PHP 8 has strong support for static typing now, thank goodness.
PHP 8 introduces union types (int|float|null).
PHP 8.1 introduces intersection types (A&B).
PHP 8.1 added the "never" return type.
PHP 8 has less repetitive boilerplate.
PHP 8 has consistent function signatures now.
PHP 8 has consistent object/array syntax now (to be honest, some asymmetries remain).
PHP 8 has named arguments for clarity and flexibility.
PHP 8 has the nullsafe operator which simplifies deeply nested null checks.
PHP 8 has arrow functions, which make closures concise and easier to use.
PHP 8 has attributes, e.g. "#[Route("/users")]".
PHP 8 has "match" expressions which is a more predictable, type-safe, and expression-oriented alternative to "switch".
PHP 8 has many more tools for testing and debugging (incl. static analyzers).
PHP 8 has many new functions (incl. utility functions).
PHP 8.1 introduces native enums.
PHP 8.1 has "readonly" properties for enforcing immutability.
PHP 8.1 has cleaner syntax for referencing callables.
PHP 8.1 has "fibers" which enables cooperative multitasking and is a foundational building block for upcoming async/await features.
Global namespace pollution has been pretty much resolved (Composer autoloading[1]).
There are other ecosystem-level improvements such as PSR standards[1], better async story, etc.
This list is non-exhaustive. These are just the improvements that come to mind off the top of my head so I probably missed a lot of other major improvements. PHP 8+ is definitely much easier to use and they greatly reduced PHP's warts. There may be some inconsistencies left here and there, but they are not a deal-breaker IMO, if you even run into them.
[1] https://www.phptutorial.net/php-oop/php-composer-autoload/ (I do not use "dump-autoload"), https://github.com/php-fig/fig-standards/blob/master/accepte..., https://www.php-fig.org/psr/psr-4/ (https://www.php-fig.org/psr/)
I strongly recommend taking a fresh look at PHP 8+. It is very different from the PHP you once knew. It is "modern" now. There are lots of deprecations and removals of old warts. I did not like PHP much ages ago, but PHP 8+ has been a pleasure to use.
If you are looking to (re)learn PHP, the book “PHP & MySQL: Novice to Ninja” is a good starting point[1]. There are many other high-quality books and resources as well.
[1] Available on libgen. The source code examples from the book are available on GitHub: https://github.com/spbooks/phpmysql7.
If you have any specific warts or whatnot, or if you want more resources, please do feel free to let me know.
---
I wrote this comment on my phone, so it is not as detailed and it is not structured as well, but I hope that it will still provide some insight into the differences between legacy PHP and modern PHP.
Happy to answer any questions!
Given you can run Doom on your fridge these days, it should be absolutely no surprise that you can already run PHP both in the browser and in Node [0].
[0] https://github.com/asmblah/uniter
The example URL here, though, is still not (helpfully) bookmarkable because the contents of page 2 will change as new items are added. To get truly bookmarkable list URLs, the best approach I've seen is ‘page starting from item X’, where X is an effectively-unique ID for the item (e.g. a primary key, or a timestamp to avoid exposing IDs).
Yeah, solving this edge case properly can add a lot of complexity (your solution has the same problem, no? deletes would mess it up as would updates, technically). I've seen people using long-lived "idempotency tokens" point to an event log for this but it's a bit nuts. Definitely worth considering not solving it, which might be a more intuitive UX anyway (e.g. for leaderboards).
He’s being downvoted because suggesting cursor pagination in an example describing sorting by price (descending) is plainly wrong. While neither is bookmarkable, cursor pagination is much worse.
The UX went from “show me _almost_ the most expensive items” to “show me everything less expensive than the last item on the page I was on previously — which may be stocked out, more expensive, or heavily discounted today”. The latter isn’t something you’d bookmark.
If you believe that the user wants to see everything around a particular price point, e.g. because they've ordered their search results by price, then the correct cursor token is the price point of the top item (or the price point of the last item on the previous page, as an open bound, or even something fancier like the median price of the items in the page).
There's a choice to be made about semantics, and you have plenty of information given to you by the user in a search scenario, but ‘page 2’ is not the right choice because it has no useful semantics. If the user is hoping to bookmark the page it's because they want to preserve some property of the data for later, even in the face of data changes. I can almost guarantee that property isn't ‘items that happen to be on page 2 today’.
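To make the semantics concrete, here is a minimal TypeScript sketch of a price-based cursor link for a listing sorted by price; the parameter names (sort, max_price, after_id) and the endpoint are made up for illustration:

    // Hypothetical sketch: the "next page" link's cursor is the price of the
    // last item shown, with its id as a tie-breaker. Parameter names are
    // illustrative, not from any real API.
    interface Item {
      id: string;
      price: number; // e.g. in cents
    }

    function nextPageUrl(base: string, lastItemOnPage: Item): string {
      const params = new URLSearchParams({
        sort: "price_desc",
        max_price: String(lastItemOnPage.price), // open vs. closed bound is a semantic choice
        after_id: lastItemOnPage.id,             // tie-breaker for equal prices
      });
      return `${base}?${params.toString()}`;
    }

    // The bookmarked URL now means "items cheaper than X", which survives
    // inserts and deletes, unlike "whatever happens to be on page 2 today".
    console.log(nextPageUrl("/search", { id: "sku-123", price: 4999 }));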
I cannot think of any other way to bookmark anything static unless I convert it into a PDF/screenshot before sharing. Are there better ways to bookmark a list page which guarantee the same list forever?
This depends on use case and who or what is actually consuming the pages. Most of the time, humans don't actually want the same list for all time (though what follows would work for them).
The only way to have a static list is to have an identifier for the state of the list at a certain time, or a field that allows you to reconstruct the list (e.g. a timestamp). This also means you need to store your items' data so the list of items can be reconstructed. Concretely, this might mean a query parameter for the list at a certain time (time=Xyz). When you paginate, either a cursor-based approach, an offset approach, or a page number approach would all work.
This is not what most human users want: they would see deleted items, wouldn't see added items, and changes to fields you might sort on wouldn't be reflected in the list ordering. But it's ideal for many automated use cases.
ETA: If you're willing to restrict users to a list of undeletable items that is always sorted, ascending, by time of item creation, you can also get by with any strategy for pagination. The last page might get new items appended, and you might get new pages, but any existing pages besides the last page won't change.
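A small TypeScript sketch of what such a frozen-list URL could look like, assuming the backend is willing to filter on a creation timestamp; the as_of/offset/limit parameter names are invented:

    // Sketch of a frozen-list URL: pin the list to a point in time, then page
    // through it with a plain offset. "as_of", "offset" and "limit" are
    // hypothetical parameters; the server is assumed to apply something like
    // `WHERE created_at <= :as_of ORDER BY created_at` when it sees them.
    function frozenListUrl(base: string, asOf: Date, offset = 0, limit = 50): string {
      const params = new URLSearchParams({
        as_of: asOf.toISOString(),
        offset: String(offset),
        limit: String(limit),
      });
      return `${base}?${params.toString()}`;
    }

    // Every page of this list is reproducible later, at the cost of never
    // showing items added (or hiding items deleted) after the snapshot.
    console.log(frozenListUrl("/api/items", new Date(), 100));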
Someone said he is being downvoted for suggesting cursor-based pagination, yet one of your suggestions was the cursor-based approach, and thus I do not understand why he is being downvoted if it is a legitimate approach, which I believe it is.
I guess we would have to hear nebezb's solutions.
If you are already sorting by price and you bookmark the second page (which might now be the 3rd), what would you do? I personally do not care about the items in a sorted list enough to expect a bookmarked URL to start from there, or I cannot remember when I did and why. Any ideas why one would want this? If I bookmark the second page, I know that the items on page 2 may not always be on page 2. Why would anyone expect different? If you want to bookmark an item, just go to the product itself and bookmark that. I do not think I ever bookmarked a specific page expecting it to never change.
There are always trade-offs for architectural decisions.
Well, this really depends on the intention: are you looking for the cheapest items, excluding the first 20, or are you linking to a content list?
I use Occam's razor to decide this, and conceptually it is simpler to think that you are linking to a content list - so that is likely the right answer.
Why is the content changing between refreshes not "(helpfully) bookmarkable"?
The HN front page (i.e. "page 1") does that but it's a very useful bookmark.
If you're bookmarking a directory, a list of things (e.g. the HN frontpage), you expect the content to change when opening the bookmark.
You bookmark a link to the directory so you don't forget the directory's entry URL.
The use case the author is talking about is a different one: You are configuring a complex item in a shop, and want to bookmark the URL so you can save it, recall it later, share this configuration with someone, or compare it with a different URL.
In this case, you also would expect little details to change (pricing, descriptions, photos) but the structure of the state should stay the same.
It's very frustrating when you share a link to a product detail page, only to discover that all your filters and configurations have been lost.
The data in a bookmark may change, but it should preserve some property of interest — otherwise why bookmark it?
Page 1 (a.k.a. the top few results with no pagination) has the property of being the selected top of HN, which is an interesting property in its own right, and what we're bookmarking. Page 2 doesn't have that property.
He probably wants to freeze the state of the page. Maybe he should consider saving it via Ctrl-S.
So until about 2013? 2014? URL-driven state was just the way everything worked.
One of the major complaints of `cgi-bin` was that you had to manually add back to the URL to manage state (and of that time, there were a good number of cgi-bin applications that just didn't bother -- which unsurprisingly is how the SPAs worked at first until "URL Routing" took over).
But, all of this is literally just reinventing the wheel that's been there since the web began. The entire purpose of the web was to be able to link to a specific resource, action, or state without having to do anything other than share a URL.
What's wild is there are whole generations of programmers that started programming after the SPA world debuted and are now re-learning things that "were just the way things were" before 2013.
tbh I always found it interesting that CGI was dropped as a well-supported technology from languages like Python. It was incredibly simple to implement and reason about (provided you actually understand HTTP; maybe that's the issue), and scaled well beyond what most internal enterprise apps I was working on at the time needed.
The JS world leaves me more and more perplexed. There's a similar rant about forms, but why is this so hard? Huge amounts of dev time are spent on being able to execute asynchronous functions against the backend seamlessly, yet pretty much every major framework just has you rawdog the URL string and deal with the URLSearchParams object yourself.
Tanstack router[1] provides first-class support not only for parsing params but also for giving you a typed URL helper; this should be the goal for the big meta frameworks, but even tools like SvelteKit that advertise themselves on simplicity and web standards have next to zero support.
I've seen even non-JS frameworks with, like, fifteen lines of documentation for half-baked support of search params.
The industry would probably be better off if even a tenth of the effort that goes into doing literally anything to avoid learning the platform was spent making this (and post-redirect-get for forms) the path of least resistance for the 90% of the time search params are perfectly adequate.
I don't use HTMX but I do love how it and its community are pushing the rediscovery of how much simpler things can be.
[1] https://tanstack.com/router/latest/docs/framework/react/guid...
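For the sake of illustration, here is a hand-rolled TypeScript sketch of the kind of typed, validated search-param helper the comment wishes frameworks shipped; it is not any particular library's API:

    // Hand-rolled sketch of typed search params with defaults and validation,
    // the kind of thing the comment wishes frameworks shipped. Not any
    // particular library's API.
    type ListParams = {
      status: "active" | "archived";
      sortField: "price" | "name";
      sortDir: "asc" | "desc";
      page: number;
    };

    const defaults: ListParams = { status: "active", sortField: "name", sortDir: "asc", page: 1 };

    function parseListParams(search: string): ListParams {
      const q = new URLSearchParams(search);
      const page = Number(q.get("page"));
      return {
        status: q.get("status") === "archived" ? "archived" : defaults.status,
        sortField: q.get("sortField") === "price" ? "price" : defaults.sortField,
        sortDir: q.get("sortDir") === "desc" ? "desc" : defaults.sortDir,
        page: Number.isInteger(page) && page > 0 ? page : defaults.page,
      };
    }

    function serializeListParams(p: ListParams): string {
      return new URLSearchParams({
        status: p.status,
        sortField: p.sortField,
        sortDir: p.sortDir,
        page: String(p.page),
      }).toString();
    }

    // Round-trips cleanly: the URL stays the single place this state lives.
    const state = parseListParams("?status=active&sortField=price&sortDir=desc&page=2");
    console.log(serializeListParams(state));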
Nuqs[0] does a very good job at parsing and managing search params. It's a complex issue that involves serialization and deserialization, as well as throttling URL updates. It's a wonderful library. I agree, though, that it would be nice to see more native framework support for this.
Forms are also hard because they involve many different data-types, client-side state, (client?) and server validation, crossing the network boundary, contextual UI, and so on. These are not simple issues, no matter how much the average developer would love them to be. It's time we accept the problem domain as complex.
I will say that React Server Components are a huge step towards giving power back to the URL, while also allowing developers to access the full power of both the client and the server–but the community at large has deemed the mental model too complex. Notably, it enables you to build nuanced forms that work with or without javascript enabled, and handle crossing the boundary rather gracefully. After working with RSCs for several years now, I can't imagine going back. I've written several blog posts about them[1][2] and feel the community should invest more time into understanding their ideas.
I have a post in my drafts about how taking advantage of URL params properly (with or without RSCs) gives our UIs object permanence, and how we as web developers should be relying on them more and using them to reflect "client-side" state. Not always, but more often. But it's a hard post to finish, as communicating and crystallizing these ideas is difficult. One day I'll get it out.
[0] https://nuqs.47ng.com
[1] https://saewitz.com/server-components-give-you-optionality
[2] https://saewitz.com/the-mental-model-of-server-components
Don’t get me wrong, I never meant it was easy to solve, just that things could be better if search parameters didn’t somehow become this niche legacy thing with minimal appetite to fix.
Thanks for the point on RSC, probably the first argument I’ve heard that helps me contextualise why this extreme paradigm shift and tooling complexity is being pushed as the default.
> Tanstack router[1] provides first class support for not only parsing params but giving you a typed URL helper, this should be the goal for the big meta frameworks
Let's not pretend that the Tanstack solution would be good. For example, what if my form changes and a new field is added, but someone still runs the old HTML/JS and sends their form from the old code? Does Tanstack have any support to 1.) detect that situation, 2.) analyze/monitor/log it (for easy debugging), 3.) automatically resolve it (if possible), and 4.) allow custom handling where automatic resolution isn't possible?
It doesn't look like it from the documentation.
Sorry, frustration is causing me to rant here, but it's a classic thing in the frontend world and it causes so much frustration. In the backend world, many (maybe even most) libraries/frameworks/protocols have built-in support for that. See GraphQL with its default values and deprecation at least; see Avro or Protobuf with their support for versions, schema history, and even automatic migration.
When will I not have to deal with that by hand in my frontend-code anymore?
The same thing should happen that happens with Rails/Django and friends: nothing. Most frameworks only parse URL params, they don't check to see if the params are valid given your app logic.
That's your job. Frankly, anything more would be overkill. Why should my URL param manager handle new or removed form fields?
> The same thing should happen that happens with Rails/Django and friends: nothing
So you can never make any breaking change to your api whatsoever? Or, in practice, you don't care and let users deal with app crashes and invalid state? Yep, welcome to the frontend-world.
> why do you think that this must happen on the front-end?
It must happen on both frontend and backend, because it's about their communication with each other. So that includes frontend.
> a) serve an error page, leaving that to the backend (at some point the backend must validate anyway)
> b) serve the regular front-end and react to the invalid state with error messages. there are libraries like zod that should make your job easier.
I mean, that is exactly what makes me so frustrated! Sorry, you are just giving a good example of what I'm complaining about. Both of those solutions are sub-optimal.
Here is how I'd implement that by hand if I would write something like nextjs/django:
1.) The frontend always sends a version in each of its requests (in the payload, the header, wherever)
2.) The backend compares that version against what it expects and is compatible with. If it detects an outdated/incompatible version, there are two options. It may be possible to fix it automatically, for example because the protocol has a mechanism for this and the developer uses it (such as default values for missing fields), or because the developer has manually provided a migration for old versions.
3.) If it can be fixed, all good. If not, the backend sends a message back, saying that the request cannot be processed because the version is too old. The user can then decide to e.g. save their state elsewhere and reload, or just reload. Or do something completely different.
4.) Bonus: while we are at it, why not have the frontend send regular requests to the backend asking whether its version is still up to date (especially for the kind of user that keeps their browser tab open forever)? That would help prevent data loss or other problems before they even occur, because the user gets an early notice and can refresh.
Why should I implement all of that myself over and over again? Are you guys really thinking that this should not be handled, or at least made very easy, by typical web libs/frameworks such as Next.js or Django?
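A rough TypeScript sketch of the handshake described above; the header name, the /api/version endpoint, and the use of HTTP 426 are arbitrary choices for illustration, not a standard:

    // Sketch of the handshake described above. The header name, the
    // /api/version endpoint and the use of HTTP 426 are arbitrary choices.
    const CLIENT_SCHEMA_VERSION = "2024-06-01"; // baked in at build time (hypothetical)

    async function apiFetch(input: string, init: RequestInit = {}): Promise<Response> {
      const headers = new Headers(init.headers);
      headers.set("X-Schema-Version", CLIENT_SCHEMA_VERSION); // step 1: always send a version
      const res = await fetch(input, { ...init, headers });

      // Step 3: the backend answers "too old" when it cannot migrate the payload.
      if (res.status === 426) {
        // Let the UI decide what to do: stash unsaved state, prompt a reload, etc.
        throw new Error("Client is out of date; please reload the page.");
      }
      return res;
    }

    // Step 4 (bonus): poll occasionally so long-lived tabs learn about new
    // deployments before the user loses work.
    setInterval(async () => {
      const res = await fetch("/api/version"); // hypothetical endpoint
      const { version } = await res.json();
      if (version !== CLIENT_SCHEMA_VERSION) {
        console.warn("A newer version is available; consider saving your work and reloading.");
      }
    }, 5 * 60_000);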
> treating URL parameters as your single source of truth... a URL like /?status=active&sortField=price&sortDir=desc&page=2 tells you everything about the current view
Hard disagree that there can be a single source of truth. There are (at least) 3 levels of state for parameter control, and I don't like when libraries think they can gloss over the differences or remove this nuance from the developer:
- The "in-progress" state of the UI widgets that someone is editing (from radio buttons to characters typed in a search box)
- The "committed" state that indicates the snapshot of those parameters that is actively desired to be loaded from the server; this may be debounced, or triggered by a Search button
- The "loaded" state that indicates what was most recently loaded from the server, and which (most likely) drives the data visualized in the non-parameter-controlling parts of the UI
What if someone types in a search bar but then hits "next page" - do we forget what they typed? What happens if you've just committed changes to your parameters, but data subsequently loaded from a prior commit? Do changes fire in sequence? Should they cancel prior requests or ignore their results? What happens if someone clicks a back button while requests are inflight, or while someone's typed uncommitted values into a pre-committed search bar? How do you visualize the loaded parameters as distinct from the in-progress parameters? What if some queries take orders of magnitude longer than others, and you want to provide guidance about this?
All of those questions and more will vary between applications. One size does not fit all.
If this comment resonates with you, choose and advocate for tooling that gives you the expressivity you feel in your gut that you'll need. Especially in a world of LLMs, terse syntax and implicit state management may not be worth losing that expressivity.
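One way to keep that expressivity is simply to model the three levels as distinct values. A TypeScript sketch, with illustrative names:

    // One way to make the three levels of parameter state explicit. The types
    // and names are illustrative; the point is that these are distinct values,
    // not one blob a framework can gloss over.
    type Filters = { query: string; page: number };

    interface ParamState {
      draft: Filters;          // "in-progress": whatever the widgets currently show
      committed: Filters;      // the snapshot the user asked to load (Search click, debounce)
      loaded: Filters | null;  // what the data on screen actually corresponds to
      requestId: number;       // lets responses from stale commits be ignored
    }

    function commit(state: ParamState): ParamState {
      // Copy draft -> committed and bump the request id; a fetch keyed on
      // requestId can then drop responses that arrive out of order.
      return { ...state, committed: { ...state.draft }, requestId: state.requestId + 1 };
    }

    function applyResponse(state: ParamState, forRequestId: number): ParamState {
      if (forRequestId !== state.requestId) return state; // stale response: ignore it
      return { ...state, loaded: { ...state.committed } };
    }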
> All of those questions and more will vary between applications. One size does not fit all.
All of those come from the fundamental "requirement" set out earlier to have no in-page state, but still require the webpage to behave as though it did.
If you remove this requirement, then it will be like how it was with 2000s-era web pages! And the URL does indeed contain the single source of truth - there are no in-flight requests that are not also full page reloads.
Until you press enter, this progress is understood to be ephemeral. It is only recently that users have been 'conditioned' to expect form inputs to be retained when they click a link, and it's because the app is trying to retain the state of ephemeral progress.
So you cannot have both a webpage that is not an app, but maintain an app-like behaviour. Trying to do so is a cursed problem, and it might succeed with high effort, but ultimately not worth it.
Yes the simple solution is obviously not perfect in edge cases. It's a tradeoff between simplicity and edge-case-perfectness.
In my opinion the higher-priority task is to optimize the query in the backend so that it can refresh quickly. If loading is quick enough then that edge case will be less likely to happen.
to be absolutely nerve-wracking. Not hard to do but it's just batshit crazy and breaks the whole idea of how web crawlers are supposed to work. On the other hand, we had trouble with people (who we know want to crawl us specifically) crawling a site where you visit
http://example.com/item/448828
and it loads an SPA which in turn fetches well-structured JSON documents like
with no cache, so it downloads megabytes of HTML, JavaScript, images and who knows what -- and if they want to deal with the content in a structured way and put it in a database, it's already in the exact format they want. But I guess it's easier to stand up a Rube Goldberg machine and write parsers when you could look at our site in the developer tools and figure out how it works in five minutes... and just load those JSON documents into a document database and be querying right out of the gate.
What I would want is to GET http://example.com/item/448828 with an Accept header of ‘application/s-expression,application/json;q=0.1’ instead of retrieving the HTML representation of the resource. HTTP is the API.
It felt like this was an opportunity for the AI craze to build on top of the existing standards; instead they all invented their own stuff with llm.txt and MCP *sigh*
I had a similar strategy when building early web apps with jQuery and ExtJS (but using the URL hash before the History API was available). Just read from location.hash during page load and write to it when the form state changes.
For more complex state (like table layout), I used to save it as a JSON object, then compress and base64 encode it, and stick it in the URL hash. Gave my users a bookmarklet that would create a shortened URL (like https://example.com/url/1afe9) from my server if they needed to share it.
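A minimal TypeScript sketch of the hash-state trick (compression left out, and the shape of the stored object is just an example):

    // Minimal sketch of the hash-state trick: serialize to JSON, base64 it
    // into location.hash, restore on load. Compression is omitted here; the
    // original setup also ran the JSON through a compressor before encoding.
    type TableLayout = { columns: string[]; sort: string };

    function saveToHash(layout: TableLayout): void {
      // encodeURIComponent first so btoa only ever sees ASCII
      location.hash = btoa(encodeURIComponent(JSON.stringify(layout)));
    }

    function loadFromHash(): TableLayout | null {
      if (!location.hash) return null;
      try {
        return JSON.parse(decodeURIComponent(atob(location.hash.slice(1))));
      } catch {
        return null; // malformed or hand-edited hash: fall back to defaults
      }
    }

    // On page load:
    const layout = loadFromHash() ?? { columns: ["name", "price"], sort: "price" };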
I'm not sure that it really counts as ironic when HTMX was conceived specifically to try and get people to stop writing megabytes of JS and to go back to the old ways.
This is a great pattern to follow, and I highly recommend understanding it even to those working on projects that are full client-side SPAs.
It's too easy to jump right into React, Next.js, etc. Learning why URLs work the way they do is still extremely useful. Eventually you will want to support deep linking, and when you do, the URL suddenly becomes really important.
Remix has also been really interesting on this front. They leaned heavily into web standards, including URLs as a way of managing state.
I like the simplicity. I've been building some web apps with Alpine.js recently, another lightweight React alternative. It's pretty powerful and capable for building reactive SPAs, only ~16kb.
The bookmarkability is secondary to the filter parameters' meaning. Once I know the parameters and their meaning I don't need the bookmark. In fact, I'd probably need to title the bookmark as something close to the URL anyway to know what it was actually referring to.
I remember that before cookies were widely implemented in web browsers, the Spinner web server (effectively an early web server and development framework in one process) implemented what it called ”prestate”; a parenthesised portion of the URL, part of the path, but before the actual path. Like this: http://example.com/(tables,images)/developers/
Our dealership listings page is largely run with this pattern, as well as most of our plugins. Nothing new and very dependable. Forgo HTMX for one less dependency.
> SEO is built in since search engines can crawl every state combination.
This isn't always a plus - bots can find a very large number of pages to crawl and swamp your server with traffic. Maybe they would get stuck on all the combinations of listing page filters and miss the important pages.
Not saying the conclusion is wrong - just something to consider.
A public-facing page at $DAYJOB has 44 boolean filters for a bunch of different hardware components (think screws and fasteners rather than boards and drives), meaning 2^44 different combinations ~= 17 trillion different pages to crawl (from the disrespectful AI crawler's perspective).
That said, it wasn't really an issue except we had a misconfiguration in the in-memory caching database we were using so it didn't delete old stuff from memory and started writing to disk, which meant that we ran out of our (millions of) file inodes on the production server. Just saying to point out that keeping the state in the URL wasn't the problem in this case, and it is generally not a problem; it can be a problem but, when it is, it will be obvious why.
I didn't down-vote you, but perhaps you mean tech which holds nearly all client state on the server, like JSF or WebForms, and I think that may not be so clear to some :)
For front-end frameworks, not storing the state in the URL usually means storing it in memory or sessionStorage, and the server is usually not involved.
On a related note, I've found combining htmx with Parsley[0] to be very powerful for adding client-side validation to declarative server-rendered HTML form definitions. All that is needed is a simple htmx extension[1] and applicable data attribute use.
It's just how the web works – storing data in URL params to restore the same state later. With React or whatever library you do exactly the same thing. In this case HTMX doesn't particularly stand out or enable anything new here.
This is a generalization from personal exposure, availability bias. It just points out that some people implement things poorly — ignoring that many well-designed SPAs do use URL state effectively. React itself does not prevent or discourage URL-based state; it's just the developer's choice whether to use routing or not.
I didn't see if you were doing this, but there is an additional use case that I had when using hot swapping like HTMX: updating other links on the page to reflect the URL state, when those links are outside of the swapped content.
While the server can use the URLs to update the links in the new HTML, if other links outside that content should also reflect the changes params, you need to manually update them.
You don't need to do anything different to the other URLs on the page; by default all parameters are passed along in every request, so you just need to retrieve any expected URL parameters in the server code.
Maybe, I'm not an HTMX user, but looking at hx-swap-oob I think that solves another issue. My need was when other links can exist in any place, and they need to match the URL after its clicked. I didn't want to have the performance hit or remember to add extra swaps just to get links up to date. The feature basically is "when a param is marked to be synced, ensure all links on the page are updated to match the changed param"
I’ve been building a Golang web platform for my own web apps and I wired up toaster notifications using hx-swap-oob. I just populate a ‘notifications’ slice in my view model and hx-swap-oob makes sure my toaster messages get loaded irrespective of what content is actually being swapped.
I have something similar setup for toast notifications in Django (Python). I have a notifications "partial" defined, which gets returned as part of an out-of-band swap by any view function that desires to use it. This includes other partials as well. It's how I chain fragment replacements together.
As an aside, I love that we can have this conversation - people in entirely different stacks can talk a similar language, through the glue of HTMX. This is why htmx is good for web development
I'll eventually make my repo public on Github, but I'm hesitant because it's still pretty half-baked. In the meantime, I'll do my best to capture the essentials. It's been a minute since I implemented it, so I apologize if I miss some details.
I use the same ViewModel structure for all renders, a struct called Content. It has members to help with rendering and whatnot, and the data being rendered in my HTML template is stored in a Data field for the content that will be displayed:
Data any
So whether I'm rendering complex data, a form, or just a snippet of text, that's where it lives. That allows for all sorts of patterns for rendering data and re-using templates if that's what you want to do. Or you can just keep things simple.
Hat-tip to the Pagoda framework, which was used sometimes as an inspiration and other times as a guideline for this (and other) patterns that I used. You can find it here: https://github.com/mikestefanello/pagoda
I also have, as part of my ViewModel, a `Notifications` slice that specifies messages to send to the user and their type:
Notifications []messages.Notification
I have an HTML layout for my full page that includes a section that CSS uses to pick up notifications and display them via a toaster message:
<div id="user-notifications" class="toaster-container">
{{ range .Notifications }}
<div class="toaster{{ if .IsSuccess }} success{{ else if .IsError }} error{{ end }}">
{{ .Message }}
</div>
{{ end }}
</div>
I managed to get it working without any need for JavaScript (other than what HTMX needs to work).
My partial renders use a layout that includes a section for the partially rendered code and my oob Notifications:
{{ block "Content" . }}
Loading content.
{{ end }}
<div id="user-notifications" class="toaster-container" hx-swap-oob="true">
{{ range .Notifications }}
<div class="toaster{{ if .IsSuccess }} success{{ else if .IsError }} error{{ end }}">
{{ .Message }}
</div>
{{ end }}
</div>
So the hx-swap-oob results in my user-notifications <div> being replaced with new notifications content, if there is any. I have a base renderer that handles injecting the layouts and injecting data into `Notifications`. As a result, my handlers can be generally oblivious that this is all happening underneath.
This model can work for updating links, updating breadcrumbs, writing messages to the console, or whatever.
I'm still scratching the surface with HTMX, but I'm convinced HTMX is perfectly appropriate and a much simpler alternative for 95% of the web dev being done today.
This pattern - saving the query to the URL with the history API - is fantastic UX but never gets implemented because there’s never time. Luckily an LLM can build this quickly as it’s straightforward and mostly boilerplate.
Still the boilerplate makes me wonder if it belongs in a library, eg. a React hook that’s a drop in replacement for `useState`. Backend logic would still need to be implemented. Does something like this exist?
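Something like this does exist (see the replies below about useSearchParams and nuqs); as a rough illustration, a hand-rolled hook might look like the TypeScript sketch below, which glosses over SSR, back/forward navigation, and non-string values:

    // Rough sketch of a useState-like hook backed by a query parameter.
    // Libraries such as nuqs do this properly; this hand-rolled version
    // ignores SSR, back/forward events, and anything that isn't a string.
    import { useCallback, useState } from "react";

    export function useUrlState(key: string, initial: string): [string, (v: string) => void] {
      const [value, setValue] = useState<string>(() => {
        const params = new URLSearchParams(window.location.search);
        return params.get(key) ?? initial;
      });

      const update = useCallback(
        (next: string) => {
          setValue(next);
          const params = new URLSearchParams(window.location.search);
          params.set(key, next);
          // replaceState keeps history tidy; use pushState if every change
          // should be a back-button stop.
          window.history.replaceState(null, "", `?${params.toString()}`);
        },
        [key],
      );

      return [value, update];
    }

    // Usage: const [status, setStatus] = useUrlState("status", "active");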
> is fantastic UX but never gets implemented because there’s never time
Wouldn't the change take something like an hour the first time you implement it and then 10s of seconds for calling the centralized function henceforth?
I don't think the problem is "there's never time"; and if that is the problem, I don't think an LLM will "solve" that, especially since studies have shown developers are slower when they use LLMs to code for them.
> never gets implemented because there’s never time
In my experience that time is saved and more when you find you no longer need to manage Zustand/Redux stores to track application state. This pattern works beautifully when incorporating the query parameters as query keys with TanStack Query too.
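A sketch of that combination in TypeScript, assuming TanStack Query's object-form useQuery; the endpoint and parameter shape are made up:

    // Sketch of using the parsed search params as the query key, so a URL
    // change is the only thing that triggers a refetch. Assumes TanStack
    // Query's object-form useQuery; the endpoint is made up.
    import { useQuery } from "@tanstack/react-query";

    type ListingParams = { status: string; sortField: string; sortDir: string; page: number };

    export function useListings(params: ListingParams) {
      return useQuery({
        // The key mirrors the URL, so bookmarks, back/forward navigation and
        // refetches all agree on what the "current view" means.
        queryKey: ["listings", params],
        queryFn: async () => {
          const qs = new URLSearchParams({
            status: params.status,
            sortField: params.sortField,
            sortDir: params.sortDir,
            page: String(params.page),
          });
          const res = await fetch(`/api/listings?${qs}`);
          if (!res.ok) throw new Error(`Request failed: ${res.status}`);
          return res.json();
        },
      });
    }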
Yep, `useSearchParams()`. At work I built a wrapper to incorporate zod schemas for typesafe search param state. Nuqs is the best for this if your application meets its prerequisites: https://nuqs.47ng.com/
Note that you can store longer state (at least 64K; more not tested) in the fragment (`location.hash`); obviously only the client gets to see this, but it's better than nothing (and JS can send it to the server if really needed).
For parameters the server does need to see, remember that params need not be &-separated kv pairs, it can be arbitrary text. Keys can usually be eliminated and just hard-coded in the web page; this may make short params longer but likely makes long ones shorter.
You absolutely should not restore state based on LocalStorage; that breaks the whole advantage of doing this properly! If I wanted previous state, I would've navigated my history thereto. I hope this isn't as bad as sites that break with multiple open tabs at least ...
I've seen a largish company everyone here knows of try this and have it fail, because of various weird client things, and also because it eventually ran out of space in the hash. It's a neat hack but I wouldn't rely on it.
So many problems I've run into with newer tools feel like they were already solved years ago if you can SSR.
Not that the newer tools don't have their place and they have plenty of good ideas, but it's been fun to see Django stay relevant and for more of the included batteries to be useful again (forms, templates, etc).
Being able to click a button and experience 0ms navigation is not something any customer has ever brought to my attention. It also doesn't help much in the more meaningful domains of business since you can't cheat god (information theory). If the data is so frequently out of sync that every interaction results in JSON payloads being exchanged, then why not just render the whole thing on the server in one go? This is where I can easily throw the latency arguments back in the complexity merchant's face - you've simply swept the synchronization problem under a rug to be dealt with later.
And since you don't want a 2000 character URL, you're either storing half of the session on the server or having to build an abstraction with local storage. And since our frameworks didn't evolve to handle that, what is the purpose?
The key insight into the SPA is that you are writing a coherent client experience. No SSR framework figured out how to do this because they thought about pages rather than experiences.
Let me be clear: I am speaking about web applications. If you're providing information and only have a small number of customer interactions, an SSR is superior. CNN should not be an SPA.
Like, the back button: there is no logic because this isn't react. It's just the browser back button. You don't have to do anything if you're using SSR. Back button problems only apply to SPAs or hybrids.
Clearly you've never used Laravel + Livewire. Modals, forms, wizards, sidebars, I have all of that in my app without writing any client-side JavaScript. And it works better than most SPAs. I actually get gushing praise for how "smooth" the app experience is.
So, let's use this as an example. Let's say you bring out a side drawer to edit the details of one row on the table. The side drawer pops up. The user edits details and clicks submit. (To answer this question, the user scrolls to other parts of the table to look at other rows.) There is an error in the user's input based on business logic. The user corrects it, and the row is changed. The side drawer goes away.
How many times is the whole page loaded from scratch? In a traditional SPA, the page is loaded once. With a strict MPA, the page is loaded from scratch four times. With Laravel + Livewire, to my understanding, the page is loaded once and divs are replaced with HTML from the server.
Even if it is not a react app, it is still a collection of single page apps with server side intermediations using html.
This is the best way to put it I've yet seen. HN articles keep saying things like "now that navigation transitions are solved in CSS, there's no use case left for SPAs". Is everyone just writing apps for widespread content consumption or something?
> CNN should not be an SPA.
Yes, and we need canonical "that should be an SPA"-type apps to bring up in these discussions--which can be hard, since all the best SPAs are for getting work done, not publishing content for the public to consume. Thus, as a class they tend to be department-procured B2B apps and not as generally recognizable. I propose GMail and Google Docs/Sheets/Slides for starters.
If this would have been a desktop app in the 90s, it's within this scope.
> No SSR framework figured out how to do this because they thought about pages rather than experiences.
Laravel, Blazor and apps designed around HTMX are all like this. "SSR framework" has literally nothing to do with "pages rather than experiences". Pages are just a medium to deliver experiences.
With modern CSS transitions, you can mostly fake this anyway. It's not like javascript apps actually achieve 0ms in practice - their main advantage is that they don't (always) cause layout/content flashes as things update
I dunno; other than the fact that there are some webapps that really are better done mostly client-side with routine JSON hydration (webmail, for example, or maps), my recent experimentation with making the backend serve only static files (html, css, etc) and dynamic data (JSON) turned out a lot better than a backend that generates HTML pages using templates.
Especially when I want to add MCP capabilities to a system, it becomes almost trivial in my framework, because the backend endpoints that serve dynamic data serve all the dynamic data as JSON. The backend really is nicer to work with than one that generates HTML.
I'm sure in a lot of cases, it's the f/end frameworks that leave a bad taste in your mouth, and truth be told, I don't really have an answer for that other than looking into creating a framework for front-end to replace the spaghetti-pattern that most front-ends have.
I'm not even sure if it is possible to have non-spaghetti logic in the front-end anymore - surely a framework that did that would have been adopted en-masse by now?
Have you heard of these things called smartphones? I hear they're getting quite popular.
The former is HTML with a light sprinkling of JavaScript, the latter is a SPA app.
https://htmx.org/examples/active-search/
It runs atop whatever the current dotnet hosting service is (Kestrel?). It takes everything inside the "<? ?>" code blocks and inlines it into one big Main method, exposing a handful of shared public convenience methods (mostly around database access and easy cookie-based authentication), as well as the request and response objects.
Each request is JITed, then the assembly is cached in memory for future requests to the same path, and it will recompile sources that are newer than the cached assembly.
There is no routing other than dropping the .chp extension if you pass "-ne" into the arguments launching the server.
It's not very far along, and is completely pointless other than for the sake of building my own web language thingy for the first time since 2003.
This is how I've been building my .NET web apps for the last ~3 years. @+$ = PHP in C# as far as I'm concerned.
In terms of performance, it is definitely faster. The amount of time it takes to render these partials is negligible. You'd have to switch up your tooling to measure things in microseconds instead of milliseconds if you wanted any meaningful signal.
Plus, you have to convert data to HTML somewhere. If you're using react you do this typically on the front end. You traverse and read JSON and convert it to HTML... Just like you would in PHP. Just, on the front end.
Invariably, it's done with full buffering of the blob bytes in memory on both server and client, no streaming.
Bonus points are awarded for the use of TypeScript, compression (of already compressed JPGs, of course), and extensive unit and integration tests to try and iron out the bugs.
It’s a chance to start all over yet again! Come on- we’re all up for that, we do it every few months!
Next.js is kind of bareable, as it uses the same approach, going back to the roots of web development, it is almost as doing JSPs all over again.
https://blog.platformatic.dev/laravel-nodejs-php-in-watt-run...
PHP 8 uses exceptions with a unified Error hierarchy, there are type errors, division by zero, certain parse errors and so on.
PHP 8 has strong support for static typing now, thank goodness.
PHP 8 introduces union types (int|float|null).
PHP 8.1 introduces intersection types (A&B).
PHP 8.1 added the "never" return type.
PHP 8 has less repetitive boilerplate.
PHP 8 has consistent function signatures now.
PHP 8 has consistent object/array syntax now (to be honest, some asymmetries remain).
PHP 8 has named arguments for clarity and flexibility.
PHP 8 has the nullsafe operator which simplifies deeply nested null checks.
PHP 8 has arrow functions which makes closures concise and easier to use.
PHP 8 has attributes, e.g. "#[Route("/users")]".
PHP 8 has "match" expressions which is a more predictable, type-safe, and expression-oriented alternative to "switch".
PHP 8 has many more tools for testing and debugging (incl. static analyzers).
PHP 8 has many new functions (incl. utility functions).
PHP 8.1 introduces native enums.
PHP 8.1 has "readonly" properties for enforcing immutability.
PHP 8.1 has cleaner syntax for referencing callables.
PHP 8.1 has "fibers" which enables cooperative multitasking and is a foundational building block for upcoming async/await features.
Global namespace pollution has been pretty much resolved (Composer autoloading[1]).
There are other ecosystem-level improvements such as PSR standards[1], better async story, etc.
This list is non-exhaustive. These are just the improvements that come to mind off the top of my head so I probably missed a lot of other major improvements. PHP 8+ is definitely much easier to use and they greatly reduced PHP's warts. There may be some inconsistencies left here and there, but they are not a deal-breaker IMO, if you even run into them.
[1] https://www.phptutorial.net/php-oop/php-composer-autoload/ (I do not use "dump-autoload"), https://github.com/php-fig/fig-standards/blob/master/accepte..., https://www.php-fig.org/psr/psr-4/ (https://www.php-fig.org/psr/)
---
I strongly recommend taking a fresh look at PHP 8+. It is very different from the PHP you have once known. It is "modern" now. There are lots of deprecations and removal of old warts. I did not like PHP as much ages ago, but it was a pleasure to use PHP 8+.
If you are looking to (re)learn PHP, the book “PHP & MySQL: Novice to Ninja” is a good starting point[1]. There are many other, high-quality books and resources as well.
[1] Available on libgen. The source code examples from the book are available on GitHub: https://github.com/spbooks/phpmysql7.
---
If you have any specific warts or whatnot, or if you want more resources, please do feel free to let me know.
---
I wrote this comment on my phone, so it is not as detailed and it is not structured as well, but I hope that it will still provide some insight into the differences between legacy PHP and modern PHP.
Happy to answer any questions!
[0] https://github.com/asmblah/uniter
There's a choice to be made about semantics, and you have plenty of information given to you by the user in a search scenario, but ‘page 2’ is not the right choice because it has no useful semantics. If the user is hoping to bookmark the page it's because they want to preserve some property of the data for later, even in the face of data changes. I can almost guarantee that property isn't ‘items that happen to be on page 2 today’.
The only way to have a static list is to have an identifier for the state of the list at a certain time, or a field that allows you to reconstruct the list (e.g. a timestamp). This also means you need to store your items' data so the list of items can be reconstructed. Concretely, this might mean a query parameter for the list at a certain time (time=Xyz). When you paginate, either a cursor-based approach, an offset approach, or a page number approach would all work.
This is not what most human users want: they would see deleted items, wouldn't see added items, and changes to fields you might sort on wouldn't be reflected in the list ordering. But it's ideal for many automated use cases.
ETA: If you're willing to restrict users to a list of undeletable items that is always sorted, ascending, by time of item creation, you can also get by with any strategy for pagination. The last page might get new items appended, and you might get new pages, but any existing pages besides the last page won't change.
I guess we would have to hear nebezb's solutions.
If you are already sorting by price and you bookmark at the second page (which now would be in the 3rd), what would you do? I personally do not care about the item in a sorted list enough to expect a bookmarked URL to start from there, or I cannot remember when I did and why. Any ideas why would one want this? If I bookmark second page, I know that the items on page 2 may not always be on page 2. Why would anyone expect different? If you want to bookmark an item, just go to the product itself and bookmark that. I do not think I ever bookmarked a specific page expecting that to never change.
There are always trade-offs for architectural decisions.
I use Occams razor to decide this, and conceptually it is simpler to think that you are linking to a content list - so that is likely the right answer.
Why is the content changing between refreshes not "(helpfully) bookmarkable"?
The HN front page (ie. "page 1") does that but it's a very useful bookmark.
You bookmark a link to the directory so you don't forget the directory's entry URL.
The use case the author is talking about is a different one: You are configuring a complex item in a shop, and want to bookmark the URL so you can save it, recall it later, share this configuration with someone, or compare it with a different URL.
In this case, you also would expect little details to change (pricing, descriptions, photos) but the structure of the state should stay the same.
It's very frustrating when you share a link to a product detail page, only to discover that all your filters and configurations have been lost.
Page 1 (a.k.a. the top few results with no pagination) has the property of being the selected top of HN, which is an interesting property in its own right, and what we're bookmarking. Page 2 doesn't have that property.
He probably wants to freeze the state of the page. Maybe he should consider saving it via ctrl s
One of the major complaints of `cgi-bin` was that you had to manually add back to the URL to manage state (and of that time, there were a good number of cgi-bin applications that just didn't bother -- which unsurprisingly is how the SPAs worked at first until "URL Routing" took over).
But, all of this is literally just reinventing the wheel that's been there since the web began. The entire purpose of the web was to be able to link to a specific resource, action, or state without having to to anything other than share a URL.
What's wild is there are whole generations of programmers that started programming after the SPA world debuted and are now re-learning things that "were just the way things were" before 2013.
TanStack Router[1] provides first-class support not only for parsing params but for giving you a typed URL helper. This should be the goal for the big meta-frameworks, yet even tools like SvelteKit, which advertise themselves on simplicity and web standards, have next to zero support.
I've seen even non-JS frameworks with maybe fifteen lines of documentation for half-baked support of search params.
The industry would probably be better off if even a tenth of the effort that goes into doing literally anything to avoid learning the platform were spent making this (and post-redirect-get for forms) the path of least resistance for the 90% of the time that search params are perfectly adequate.
I don't use HTMX, but I do love how it and its community are pushing the rediscovery of how much simpler things can be.
[1] https://tanstack.com/router/latest/docs/framework/react/guid...
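For what it's worth, here's a rough idea of what a "typed URL helper" buys you. This is not TanStack Router's actual API, just a generic TypeScript sketch of compile-time-checked search params:

```ts
// Hypothetical typed link builder: the SearchSchema type is invented for illustration.
type SearchSchema = { q: string; page: number; sort: "newest" | "price" };

function linkTo(path: string, search: Partial<SearchSchema>): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(search)) {
    if (value !== undefined) params.set(key, String(value));
  }
  return `${path}?${params.toString()}`;
}

linkTo("/products", { q: "ssd", page: 2 });   // OK
// linkTo("/products", { page: "two" });      // type error at build time, not a runtime 404
```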
Forms are also hard because they involve many different data types, client-side state, (client?) and server validation, crossing the network boundary, contextual UI, and so on. These are not simple issues, no matter how much the average developer would love them to be. It's time we accept the problem domain as complex.
I will say that React Server Components are a huge step towards giving power back to the URL, while also allowing developers to access the full power of both the client and the server, but the community at large has deemed the mental model too complex. Notably, they enable you to build nuanced forms that work with or without JavaScript enabled, and handle crossing the boundary rather gracefully. After working with RSCs for several years now, I can't imagine going back. I've written several blog posts about them[1][2] and feel the community should invest more time into understanding their ideas.
I have a post in my drafts about how taking proper advantage of URL params (with or without RSCs) gives our UIs object permanence, and how we as web developers should be relying on them more and using them to reflect "client-side" state. Not always, but more often. But it's a hard post to finish, because communicating and crystallizing these ideas is difficult. One day I'll get it out.
[0] https://nuqs.47ng.com
[1] https://saewitz.com/server-components-give-you-optionality
[2] https://saewitz.com/the-mental-model-of-server-components
Thanks for the point on RSC, probably the first argument I’ve heard that helps me contextualise why this extreme paradigm shift and tooling complexity is being pushed as the default.
Let's not pretend that the TanStack solution would be good. For example, what if my form changes and a new field is added, but someone is still running the old HTML/JS and sends their form from the old code? Does TanStack have any support to 1.) detect that situation, 2.) analyze / monitor / log it (for easy debugging), 3.) automatically resolve it (if possible), and 4.) allow custom handling where automatic resolution isn't possible?
It doesn't look like it from the documentation.
Sorry, frustration is causing me to rant here, but it's a classic problem in the frontend world and it causes so much frustration. In the backend world, many (maybe even most) libraries/frameworks/protocols have built-in support for this. See GraphQL with its default values and deprecation at the very least, or Avro and Protobuf with their support for versions, schema history, and even automatic migration.
When will I not have to deal with that by hand in my frontend-code anymore?
That's your job. Frankly, anything more would be overkill. Why should my URL param manager handle new or removed form fields?
So you can never make any breaking change to your API whatsoever? Or, in practice, you don't care and let users deal with app crashes and invalid state? Yep, welcome to the frontend world.
a) serve an error page, leaving that to the backend (at some point the backend must validate anyway)
b) serve the regular front-end and react to the invalid state with error messages. There are libraries like zod that should make your job easier (a rough sketch follows below).
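As a minimal illustration of option (b), one could validate incoming params with zod and surface errors instead of crashing; the schema and param names below are made up:

```ts
import { z } from "zod";

// Invented schema: coerce and validate whatever arrived in the URL.
const searchSchema = z.object({
  q: z.string().default(""),
  page: z.coerce.number().int().min(1).default(1),
});

const result = searchSchema.safeParse(
  Object.fromEntries(new URLSearchParams(location.search))
);

if (result.success) {
  console.log("valid params", result.data);              // render with result.data
} else {
  console.error("invalid params", result.error.issues);  // show error messages instead of crashing
}
```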
It must happen on both frontend and backend, because it's about their communication with each other. So that includes frontend.
> a) serve an error page, leaving that to the backend (at some point the backend must validate anyway)
> b) serve the regular front-end and react to the invalid state with error messages. there are libraries like zod that should make your job easier.
I mean, that is exactly what makes me so frustrated! Sorry, you are just giving a good example of what I'm complaining about. Both of those solutions are sub-optimal.
Here is how I'd implement that by hand if I were writing something like Next.js/Django (a rough sketch follows the list below):
1.) The frontend always sends a version in each of its requests (in the payload, the header, wherever).
2.) The backend compares that version against what it expects and is compatible with. If it detects an outdated/incompatible version, there are two options. The first: it is somehow possible to fix it automatically, for example because the protocol in use has a mechanism for it and the developer uses that (such as default values for missing fields), or because the developer has manually provided a migration for old versions.
3.) If it can be fixed, all good. If not, the backend sends a message back saying that the request cannot be processed because the version is too old. The user can then decide to, e.g., save their state elsewhere and reload, just reload, or do something completely different.
4.) Bonus: while we are at it, why not have the frontend send regular requests to the backend asking whether its version is still up to date (especially for the kind of user who keeps a browser tab open forever)? That would help prevent data loss and other problems before they even occur, because the user gets a notice and can refresh early.
Why should I implement all of that myself over and over again? Do you really think this should not be handled, or at least made very easy, by typical web libraries/frameworks such as Next.js or Django?
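Here's a rough TypeScript sketch of steps 1-3 (plus the bonus endpoint from step 4) on the backend side, assuming an Express-style server; the header name, status code, and migration map are all invented for illustration:

```ts
import express from "express";

const app = express();
app.use(express.json());

const CURRENT_VERSION = 3;
const MIN_SUPPORTED_VERSION = 2;

// Optional per-version migrations that upgrade old payloads automatically (step 2).
const migrations: Record<number, (body: any) => any> = {
  2: (body) => ({ ...body, newsletter: body.newsletter ?? false }), // field added in v3
};

app.use((req, res, next) => {
  const clientVersion = Number(req.header("x-client-version")); // step 1: client sends its version
  if (!Number.isFinite(clientVersion) || clientVersion < MIN_SUPPORTED_VERSION) {
    // Step 3: refuse and tell the client its version is too old to process.
    return res.status(409).json({ error: "client_version_outdated", current: CURRENT_VERSION });
  }
  // Step 2: run migrations from the client's version up to the current one.
  for (let v = clientVersion; v < CURRENT_VERSION; v++) {
    const migrate = migrations[v];
    if (migrate) req.body = migrate(req.body);
  }
  next();
});

// Bonus (step 4): a cheap endpoint the frontend can poll to learn about new versions early.
app.get("/api/version", (_req, res) => {
  res.json({ current: CURRENT_VERSION });
});

app.listen(3000);
```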
https://www.lexo.ch/blog/2025/01/highlight-text-on-page-and-...
In any case, yeah, what was suggested in the submission is nothing esoteric, but I guess everything can be new to someone.
Hard disagree that there can be a single source of truth. There are (at least) 3 levels of state for parameter control, and I don't like when libraries think they can gloss over the differences or remove this nuance from the developer:
- The "in-progress" state of the UI widgets that someone is editing (from radio buttons to characters typed in a search box)
- The "committed" state that indicates the snapshot of those parameters that is actively desired to be loaded from the server; this may be debounced, or triggered by a Search button
- The "loaded" state that indicates what was most recently loaded from the server, and which (most likely) drives the data visualized in the non-parameter-controlling parts of the UI
What if someone types in a search bar but then hits "next page" - do we forget what they typed? What happens if you've just committed changes to your parameters, but data subsequently loaded from a prior commit? Do changes fire in sequence? Should they cancel prior requests or ignore their results? What happens if someone clicks a back button while requests are inflight, or while someone's typed uncommitted values into a pre-committed search bar? How do you visualize the loaded parameters as distinct from the in-progress parameters? What if some queries take orders of magnitude longer than others, and you want to provide guidance about this?
All of those questions and more will vary between applications. One size does not fit all.
If this comment resonates with you, choose and advocate for tooling that gives you the expressivity you feel in your gut that you'll need. Especially in a world of LLMs, terse syntax and implicit state management may not be worth losing that expressivity.
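To make the three levels concrete, here's a hypothetical TypeScript shape; the names (draft/committed/loaded) and the requestId trick are mine, not from any particular library:

```ts
interface SearchParams {
  query: string;
  page: number;
  sort: "relevance" | "price";
}

interface ParamState {
  draft: SearchParams;      // "in-progress": whatever the widgets currently show
  committed: SearchParams;  // "committed": what we actually asked the server for (often mirrored in the URL)
  loaded: {
    params: SearchParams;   // "loaded": what the visible data really corresponds to
    requestId: number;      // lets you ignore or cancel stale responses that arrive out of order
  } | null;
}
```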
All of those come from the fundamental "requirement" set out earlier: to have no in-page state, but still require the webpage to behave as though it did.
If you remove this requirement, then it will be like the web pages of the 2000s era! And the URL does indeed contain the single source of truth - there are no in-flight requests that are not also full page reloads.
So you cannot have a webpage that is not an app yet still maintains app-like behaviour. Trying to do so is a cursed problem; it might succeed with high effort, but it's ultimately not worth it.
In my opinion, the higher-priority task is to optimize the query on the backend so that it can refresh quickly. If loading is fast enough, that edge case becomes much less likely to matter.
I also want http://example.com/application/with/path?and=parameters and http://example.com/application to return Link headers with rel=canonical appropriately.
I’d also like world peace.
For more complex state (like table layout), I used to save it as a JSON object, then compress and base64 encode it, and stick it in the URL hash. Gave my users a bookmarklet that would create a shortened URL (like https://example.com/url/1afe9) from my server if they needed to share it.
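A rough sketch of that approach, assuming the lz-string package for the compression step (the original comment doesn't say which compressor was used):

```ts
import LZString from "lz-string";

interface TableLayout {
  columns: string[];
  sort: { column: string; dir: "asc" | "desc" };
}

// Serialize -> compress -> store in the hash, so it never hits the server or its logs.
function writeStateToHash(state: TableLayout): void {
  location.hash = LZString.compressToEncodedURIComponent(JSON.stringify(state));
}

function readStateFromHash(): TableLayout | null {
  const raw = LZString.decompressFromEncodedURIComponent(location.hash.slice(1));
  return raw ? (JSON.parse(raw) as TableLayout) : null;
}
```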
It's too easy to jump right into React, Next.js, etc. Learning why URLs work the way they do is still extremely useful. Eventually you will want to support deep linking, and when you do, the URL suddenly becomes really important.
Remix has also been really interesting on this front. They leaned heavily into web standards, including URLs as a way of managing state.
https://data-star.dev/guide/reactive_signals
https://alpinejs.dev/
https://github.com/alpinejs/alpine
https://alpine-ajax.js.org/
Exactly, this approach doesn't scale well without trickery involved. You have to have some sort of weird encoding in place to compact it down.
Terrible for browser navigation/refresh though, because pretty much everything was a form POST. Thus no URL state sharing, either.
https://darkatlas.io/blog/critical-sharepoint-vulnerability-...
This isn't always a plus - bots can find a very large number of pages to crawl and swamp your server with traffic. Maybe they would get stuck on all the combinations of listing page filters and miss the important pages.
Not saying the conclusion is wrong - just something to consider.
A public-facing page at $DAYJOB has 44 boolean filters for a bunch of different hardware components (think screws and fasteners rather than boards and drives), meaning 2^44 different combinations ~= 17 trillion different pages to crawl (from the disrespectful AI crawler's perspective).
That said, it wasn't really an issue, except that we had a misconfiguration in the in-memory caching database we were using: it didn't evict old entries from memory and started writing to disk, which meant we ran out of our (millions of) file inodes on the production server. I mention this to point out that keeping the state in the URL wasn't the problem in this case, and it is generally not a problem; it can be one, but when it is, it will be obvious why.
EDIT: Hmm. Is this comment controversial? Obviously some people disagree strongly. Mind sharing why?
For front-end frameworks, not storing the state in the URL usually means storing it in memory or sessionStorage; the server is usually not involved.
There are many React SPAs where the address bar URL rarely changes, and I have to find some "share" button on the page itself to get the page's URL.
https://github.com/Nanonid/rison
It's so refreshing!
It can't be used for everything, though. E.g. not dark mode!
While the server can use the URLs to update the links in the new HTML, if other links outside that content should also reflect the changed params, you need to update them manually.
In my progressive enhancement library I call this 'sync-params' https://github.com/roryl/zsx?tab=readme-ov-file#synchronize-...
https://htmx.org/attributes/hx-params/
It sounds like a similar use case to yours.
As an aside, I love that we can have this conversation - people on entirely different stacks can talk a similar language, through the glue of HTMX. This is why htmx is good for web development.
I use the same ViewModel structure for all renders: a struct called Content. It has members to help with rendering and whatnot, and the data being rendered in my HTML template is stored in a Data field for the content that will be displayed.
So whether I'm rendering complex data, a form, or just a snippet of text, that's where it lives. That allows for all sorts of patterns for rendering data and reusing templates if that's what you want to do. Or you can just keep things simple. Hat-tip to the Pagoda framework, which I used sometimes as an inspiration and other times as a guideline for this pattern (and others). You can find it here: https://github.com/mikestefanello/pagoda
I also have, as part of my ViewModel, a `Notifications` slice that specifies messages to send to the user and their type.
I have an HTML layout for my full page that includes a section that CSS uses to pick up notifications and display them as a toast message. I managed to get it working without any need for JavaScript (other than what HTMX needs to work). My partial renders use a layout that includes a section for the partially rendered content and my out-of-band Notifications.
So the hx-swap-oob results in my user-notifications <div> being replaced with new notifications content, if there is any. I have a base renderer that handles injecting the layouts and injecting data into `Notifications`. As a result, my handlers can be generally oblivious that this is all happening underneath. This model can work for updating links, updating breadcrumbs, writing messages to the console, or whatever.
I'm still scratching the surface with HTMX, but I'm convinced HTMX is perfectly appropriate and a much simpler alternative for 95% of the web dev being done today.
Managing the same state will have the same complexity on the server as it does on the client. HTMX's smugness is a huge turnoff.
https://htmx.org/essays/a-real-world-react-to-htmx-port/
While I certainly try to be funny online, I hope I'm reasonably even-handed about the tradeoffs associated with the hypermedia approach:
https://htmx.org/essays/when-to-use-hypermedia/
https://htmx.org/essays/#on-the-other-hand
Still, the boilerplate makes me wonder if it belongs in a library, e.g. a React hook that's a drop-in replacement for `useState`. Backend logic would still need to be implemented. Does something like this exist?
Wouldn't the change take something like an hour the first time you implement it, and then tens of seconds for calling the centralized function thereafter?
I don't think the problem is "there's never time"; and if that is the problem, I don't think an LLM will "solve" that, especially since studies have shown developers are slower when they use LLMs to code for them.
In my experience that time is saved, and then some, when you find you no longer need to manage Zustand/Redux stores to track application state. This pattern works beautifully when incorporating the query parameters as query keys with TanStack Query, too.
That’s exactly what `nuqs` does (disclaimer: I’m the author).
> Backend logic would still need to be implemented
Assuming your backend is written in TypeScript, you can use nuqs loaders to reuse the same validation logic on both sides.
https://nuqs.47ng.com
https://stackblitz.com/edit/github-8ssor8-rqkyew8w?file=src%...
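A minimal example of that useState-like shape (see the docs above for the exact, current API; this sketch may not match the latest version precisely):

```tsx
import { useQueryState, parseAsInteger } from "nuqs";

export function ProductFilters() {
  // State lives in ?q=...&page=... rather than in component memory,
  // so it survives reloads and can be shared as a link.
  const [query, setQuery] = useQueryState("q", { defaultValue: "" });
  const [page, setPage] = useQueryState("page", parseAsInteger.withDefault(1));

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <button onClick={() => setPage(page + 1)}>Next page</button>
    </div>
  );
}
```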
For parameters the server does need to see, remember that params need not be &-separated kv pairs; they can be arbitrary text. Keys can usually be eliminated and just hard-coded in the web page; this may make short params longer but likely makes long ones shorter.
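A toy example of the keys-implied-by-position idea (purely illustrative, not a standard or library convention):

```ts
// e.g. /search?laptops/price-asc/2 instead of /search?q=laptops&sort=price-asc&page=2
function parsePositionalQuery(search: string): { q: string; sort: string; page: number } {
  const [q = "", sort = "relevance", page = "1"] = search
    .replace(/^\?/, "")
    .split("/")
    .map(decodeURIComponent);
  return { q, sort, page: Number(page) };
}

parsePositionalQuery("?laptops/price-asc/2"); // { q: "laptops", sort: "price-asc", page: 2 }
```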
You absolutely should not restore state based on LocalStorage; that breaks the whole advantage of doing this properly! If I wanted previous state, I would've navigated my history thereto. I hope this isn't as bad as sites that break with multiple open tabs at least ...
The first one that comes to mind was twitter...