If you are wondering how it works: you get a link to LinkedIn, from an email or a post someone shared. You click on it, the URL loads, and you read the post. When you click the back button, you aren't taken back to wherever you came from. Instead, your LinkedIn feed loads.
How does it happen? When you land on the link, the URL is first replaced with the homepage (location.replace(...) doesn't add a new entry to the browser history). Then a history state for the original link is pushed on top. To the browser, it looks as if you landed on the homepage first and then clicked through to the post. So when you click the back button, you are taken to the homepage, where your feed entices you to stay longer on LinkedIn.
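The mechanism being described can be sketched in a couple of History API calls. This is a hedged reconstruction, not LinkedIn's actual code; the feed URL and function name are assumptions:

```javascript
// Illustrative reconstruction of the hijack described above -- NOT
// LinkedIn's actual code, and "/feed/" is an assumed URL.
// replaceState() overwrites the history entry for the link the user
// actually opened, and pushState() stacks the post URL back on top,
// so Back now "returns" to a feed page the user never visited.
function hijackBackButton(feedUrl, postUrl) {
  history.replaceState(null, "", feedUrl); // rewrite the landing entry
  history.pushState(null, "", postUrl);    // re-add the post on top of it
}

// In a page this would run on load, e.g.:
// hijackBackButton("/feed/", location.pathname);
```

A popstate listener would then render the feed when Back fires, completing the illusion.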
Also www.reddit.com is/was doing the same back button hijacking.
Visiting a post from google.com, then clicking back, you would find yourself on Reddit's general feed instead of back on Google.
I'm pretty sure what you're describing is this long-standing bug[1] I've experienced only when using Mobile Safari on Reddit - affecting both old.reddit.com and the (horrible) modern Reddit. It just doesn't happen in other browsers/engines, except on iOS (where they all share the WebKit engine). It's especially annoying on an iPad, where I tend to use back/forward instead of the open-in-new-tab-then-close I use on iPhone.
I would just like to point out that this was one of the things that the AMP straitjacket prevented. The whole online news industry has conclusively demonstrated that it can't be trusted with javascript and must be hospitalized, but they refuse to acknowledge their own illness.
Sounds like maybe some prevention against this is already implemented in either particular Android browsers, or ad blockers, maybe even for specific sites?
Just speculating, I can't imagine a reason why they'd implement this especially for Safari.
Other than A/B-testing or trash code that coincidentally doesn't work in all mobile browsers.
Maybe they use the same AI that generates their fictitious relationship stories to add these dark patterns to their code base :D
My understanding is that Apple keeps Safari fairly broken and doesn't care to implement the Googleverse and leaves a lot of things E_WONTFIX. I have read speculation that broken Safari encourages apps in the App Store.
> You get a link from LinkedIn [or such]. You click on it, the URL loads, and you read the post. When you click the back button, you aren't taken back to wherever you came from. Instead, […]
I've taken to opening anything in a new tab. Closing the tab is my new back button. In an ideal world I shouldn't have to, of course, but we live in a world full of sites implementing dark patterns, so not an ideal one. Opening in a new tab also helps me apply a “do I really care enough to give this reading time?” filter, as my browsers are set to not give new tabs focus - if I've not actually looked at that tab after a little while, it gets closed without me giving it any attention at all.
Specifically regarding LinkedIn and their family of dark patterns, I possibly should log in and update my status after the recent buy-out. I've not been there since updating my profile after the last change of corporate overlords ~9 years ago. Or I might just log in and close my profile entirely…
When I intentionally want to read something, that is what I do. However, once in a while I'm scrolling, selecting a window, or doing some other activity, and I happen to click on a link: instead of whatever action I intended, I end up on a new page I didn't want to read (maybe I will want to read it, but I haven't gotten far enough cognitively to realize that). That is when I want my back button to work - a "get out of here, back to where I was".
Bad design on their part, another reason not to revisit! If a site breaks my workflow I generally stop using the site, rather than changing my workflow.
Though I'm guessing it would work in the cases being discussed in this article & thread: when you are navigating into a site (such as linkedin) from another, rather than following internal links.
In Safari, if you open a new tab, don't navigate anywhere, and click back, the tab closes and takes you back to the originating page. I've gotten so used to it, I now miss it in any other browser.
Would this actually fall afoul of their new policy, though?
Assume the way universal links work is that the site's main page is loaded, and some hash is supplied indicating the page to navigate to from there. That's annoying, but perfectly valid, and may be necessary for sites that establish some kind of context baseline from their landing page.
It's not valid. You went to a page. They said "no, you're actually on the feed," and then immediately navigate you to the page you'd actually intended to visit. This is what they're doing today, and it's terrible. If I go to a URL, I'm NOT going to your homepage feed. I never wanted to go there.
LinkedIn is malware and it's frankly embarrassing that we seem to be stuck with it. It's like a mechanic being stuck with a wrench that doesn't just punch you in the face while you're using it - it opens your toolbox just to come out and punch you randomly.
The fix is to hold down the back button so the local history shows up, and pick the right page to go back to. Unfortunately, some versions of Chrome and/or Android seem to break this but that's a completely self-inflicted problem.
That's a different kind of dysfunction, though. You can address it by copying the link and pasting it in a new tab, or if that's not possible, copying the current page to a new tab and clicking on the link there.
It's also not a very effective workaround, because some of the websites in question end up spamming multiple instances of their home page in the history stack.
You can usually address this by going back as far as possible, then holding the button again so more of the history shows up. And IME, it's only really broken sites that have this problem in the first place.
The problem is, there are two conceptions of the back button, and the browser only implements one.
One conception is "take me back to the previous screen I was on", one is "take me one level up the hierarchy." They're often but not always the same.
Mac Finder is a perfect example of a program correctly implementing the two. If you're deep in some folder and then press cmd+opt+L to go to ~/Downloads, cmd+up will get you to ~/, but cmd+[ will get you back to where you were before, even if that was deep in some network drive, nowhere near ~.
I feel like mobile OSes lean towards "one level up" as the default behavior, while traditional desktop OSes lean more towards tracking your exact path and letting you go back.
Desktop had this solved, on Windows there was and remains a distinction between "back" (history) and "up" (navigation).
Browsers actually used to have hierarchical navigation support, with buttons and all, back in the age of dinosaurs - all one had to do was set up some meta tags in the HTML head section to tell which URL is "prev"/"next"/"up". Alas, this proved too difficult for web developers, who eventually even forgot the web was meant for documents at all, and at some point browsers just hid/removed those buttons since no one was using them anyway.
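For reference, those hints were plain link elements in the document head, roughly like this (URLs illustrative):

```html
<!-- Hierarchical navigation hints that some older browsers exposed as
     dedicated toolbar buttons. The URLs here are made up for illustration. -->
<head>
  <link rel="up" href="/docs/">
  <link rel="prev" href="/docs/chapter-1.html">
  <link rel="next" href="/docs/chapter-3.html">
</head>
```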
The "Back" remains, and as 'Arainach wrote, it's only one concept and it's not, and never has been "up one level in the hierarchy".
EDIT:
The accepted/expected standard way for "take me up one level in hierarchy" on the web is for the page itself to display the hierarchy e.g. as breadcrumbs. The standard way to go to top level of the page is through a clickable logo of the page/brand. Neither of those need, or should, involve changing behavior of browser controls.
> one is "take me one level up the hierarchy." They're often but not always the same.
Who expects this behavior? It doesn't make sense. You just want to go back where you were.
Most file browsers I've used that want to implement going up a level in the hierarchy have an arrow pointing up.
If you reached point B from point A - and you tell someone "I would like to go back", then you are expecting to go back to A. Not some intermediate, arbitrarily chosen point C.
Is there any click-bait news site that DOESN'T do this? You hit back and land on a list of their click-bait articles and ad links instead of the page you expect.
While I agree, the current JS security model really doesn't allow for distinguishing the origin of JS code. Should that ever change, advertisers will just require that you compile their library into the first party js code, negating any benefit from such a security model.
> advertisers will just require that you compile their library into the first party js code, negating any benefit from such a security model.
It will become harder for advertisers to deny responsibility for ads that violate their stated policies if they have to submit the ads ahead of time. Also site operators will need a certain level of technical competence to do this.
More likely, advertisers will need you to insert a "bootloader" that fetches their code and passes it to eval().
Alternatively, they might require you to set up a subdomain with a CNAME alias pointing to them (or a common CDN), negating any security benefits of such a practice.
> More likely, advertisers will need you to insert a “bootloader” that fetches their code and passes it to eval().
Sounds like legal precedent waiting to be set. “Run our code so that it looks like your code, acts like your code, and has all the same access as your code” seems like it should be a slam dunk if said code ends up doing a Very Bad Thing to your visitors.
But of course that’s assuming common sense, and the law’s relationship with that isn’t always particularly apparent.
There is already plenty of precedent for real-time-served ads which are annoying, or malicious, or install malware; or outright exploit vulnerabilities in the browser.
The advantage would be that I know beforehand, and have the opportunity to test and, possibly, reject, what the advertiser wants me to send to someone’s browser.
If browsers started to warn their users about third-party JS doing back button history stuff, I have a hunch that many frontendies would just shrug and tell their visitors: "Oh, but for our site it is OK! Just make an exception when your browser asks!" - just like we get all kinds of other web BS shoved down our throats. And when the next hyped frontend framework does some such third-party integration for "better history functionality", it will become common, leading to skeptics being ridiculed for not trusting sites to handle history.
> I feel like anything loaded from a third party domain
Unfortunately this would break some libraries for SPA management that people sometimes load from CDNs (external, or under their control but not obviously & unambiguously 1st-party by hostname) instead of the main app/page location. You could argue that this is bad design, and I'd agree, but it is common design, so enforcing such a limit would cause enough uproar to not be worth any browser's hassle.
I do like that they follow up this warning with “We encourage site owners to thoroughly review …” - too many site/app owners moan that they don't have control over what their dependencies do as if loading someone else's code absolves them from responsibility for what it does. Making it clear from the outset that this is the site's problem, not the user's, or something that the UA is doing wrong, or the indexer is judging unfairly, is worth the extra wordage.
The History API is pretty useful. It creates a lot of UX improvement opportunities when you're not polluting the stack with unnecessary state changes. It's also a great way to store state so that a user may bookmark or link something directly. It's straight up necessary for SPAs to behave how they should behave, where navigating back takes you back to the previous page.
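A minimal sketch of that legitimate use, with illustrative names (`navigateTo`, `render` are assumptions, not from any particular framework):

```javascript
// Legitimate History API use in an SPA (illustrative names): push one
// entry per real screen so Back retraces the user's path, and keep the
// current view in the URL so it can be bookmarked or shared.
function render(view) {
  // Placeholder: a real app would swap the visible DOM content here.
  console.log("rendering", view);
}

function navigateTo(view) {
  history.pushState({ view }, "", "/" + view); // one entry per real screen
  render(view);
}

function installRouter() {
  // Back/Forward fires popstate; re-render the view saved in that entry.
  window.addEventListener("popstate", (e) => {
    render(e.state ? e.state.view : "home");
  });
}
```

Used this way, Back does exactly what the user expects: it retraces their path through the app.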
Yeah but all of this is a symptom of a broader problem rather than reasons why the history API is useful.
SPAs, for example, require so many hacks to work correctly that I often wonder to myself if they’re not really just a colossal mistake that the industry is too blinded to accept.
As a user, I really don't care about the supposed purity or correctness of a website's tech stack. When I click "back" I want to go back to what I think the previous page was.
State management, URL fragment management, reimplementing basic controls...
One that I hate the most is that they first reimplement tabular display with a soup of divs, then because this is slow as a dog, they implement virtualized display, which means they now need to reimplement scrolling, and because this obviously breaks CTRL+F, they end up piling endless hacks to fix that - assuming they bother at all.
The result is a page that struggles to display 100 rows of data. Contrast that with regular HTML, where you can shove 10 000 rows into a table, fully styled, without a noticeable performance drop. A "classical" webpage can show a couple of megabytes' worth of data and still be faster and more responsive than a typical SPA.
Sounds like you're referring to some specific examples of poorly implemented apps rather than the concept of SPAs as a whole.
For your example, the point of that div soup is that it enables behaviours like row/column drag&drop reordering, inline data editing, realtime data syncing and streaming updates, etc. - there is no way to implement that kind of user experience with plain HTML tables.
There's also huge benefit to being able to depend on clientside state. Especially if you want your apps to scale while keeping infra costs minimal.
I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.
And to be clear, I'm not saying that people building SPAs when all they needed was a page showing 10,000 rows of static data isn't a problem. It's just a people problem, not an SPA problem.
>> I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.
Any other way? Just build a web app with emscripten. You can do anything.
For a while GTK had an HTML5 backend so you could build whole GUI apps for web, but I think it got dropped because nobody used it.
> all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.
Except they had been solved in other ways and the problem was people insisted on using web technologies to emulate those other technologies even when web technologies didn’t support the same primitives. And they chose that path because it was cheaper than using the correct technologies from the outset. And thus a thousand hacks were invented because it’s cheaper than doing things properly.
Then along comes Electron, React Native and so on and so forth. And our hacks continue to proliferate, memory usage be damned.
> And they chose that path because it was cheaper than using the correct technologies from the outset
No, otherwise they would not need all those hacks. The web stack makes it cheap (fast and easy) to build an MVP, but since the very primitives required to fully implement the requirements are not even there, they end up implementing tons of ugly hacks held together with duct tape. All because they thought they could iterate fast and cheap.
It's the same story with teams picking any highly dynamic language for an MVP and then implementing half-baked typing on top of it when the project gets out of MVP stage. Otherwise the bug reproduction rate outpaces fixing rate.
I’ve done both too. And I honestly don’t like the box model.
But I will admit I’ve focused more on desktop than mobile app development. And the thing about sizing stuff is it’s a much easier problem for desktop than mobile apps, which are full screen and you have a multitude of screen sizes and orientations.
This is the whole concept of the SPA - make a page behave like multiple pages. The premise itself requires breaking absolutely everything assuming that content is static.
> There's also huge benefit to being able to depend on clientside state. Especially if you want your apps to scale while keeping infra costs minimal.
Um... I'm old enough to remember the initial release of node, where the value proposition was that since you cannot trust client data anyway and have to implement thorough checking both client and server side, why not implement that once.
> I get the frustrations you're talking about, but almost all of them are side effects of solutions to very real UX problems that couldn't be solved in any other way.
Let me introduce you to our lord and savior native app
Probably referring to using pushState (part of the History API) to update the URL to a bookmarkable fragment URL, or even to a regular path leading to a created document.
> The new history entry's URL. Note that the browser won't attempt to load this URL after a call to pushState(), but it may attempt to load the URL later, for instance, after the user restarts the browser.
It should be opt-in per website, per feature, because IMO it can be quite useful in some cases. Like clicking back on a slide-show bringing you to the overview page, instead of only going back one slide
> clicking back on a slide-show bringing you to the overview page
That behavior is expected in exactly one case (assuming slides, not the whole presentation, are modeled as a page each): If I navigated to that specific slide from the overview.
In any other scenario, this behavior amounts to breaking my back button, and I'll probably never visit the site again if I have that choice.
Opt in features are a great way to increase user frustration and confusion. See the whole new geolocation API they had to make for browsers since people would perma-deny it reflexively and then complain that geolocation features weren't working.
That's a good point, though I'm not familiar with the (changes to the) geolocation API you mention. Do you have any recommendations for reading up on that development?
facebook.com does this as a first-party site. Shit sites trying to squeeze eyeball time from visitors should be put on Google's malware sites list, but apparently those are the best sites nowadays... :/
There are valid use cases however the issue is rooted in lacking browser APIs.
For instance,
- if you want to do statistics tracking (how many hits your site gets and user journeys)
- You have a widget/iframe system that needs to teardown when the SPA page is navigated away
- etc
The browser does not have a:
globalThis.history.addEventListener('navigate')
So you must monkey-patch the History API. It's impractical from a distribution standpoint to embed this code in the page bundle, as it's often managed externally and has its own release schedule.
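The monkey patch usually looks something like this sketch; `historyLike` would be `window.history` in a real page, and `onNavigate` is an assumed callback name:

```javascript
// Sketch: the History API exposes no "history changed" event, so analytics
// and widget-teardown code commonly wraps pushState/replaceState to observe
// SPA navigations. Pass window.history as historyLike in a real page.
function instrumentHistory(historyLike, onNavigate) {
  for (const method of ["pushState", "replaceState"]) {
    const original = historyLike[method].bind(historyLike);
    historyLike[method] = (state, title, url) => {
      original(state, title, url); // preserve the native behavior
      onNavigate({ method, url }); // then notify the observer
    };
  }
}
```

(Newer Chromium-based browsers have started shipping a dedicated Navigation API with a real `navigate` event, but it isn't universally available yet.)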
> - if you want to do statistics tracking (how many hits your site gets and user journeys)
You can do all of that server-side and much more reliably at that. The only reason to do any of this tracking client-side is advertisers trusting fake number go up more than sales numbers.
This misses the point. Websites are allowed to replace default keyboard shortcuts for a reason. There are only a few exceptions to this, like Ctrl+W. In other words, you can design your website however you want, except to make it more difficult to leave. This is an implementation of the same philosophy.
It used to be a de facto standard in many programs. Since almost no mouse had a scroll wheel, you'd use the space bar or the cursor keys. Spacebar was usually faster. I guess some people still use it.
Still doing that, also in Thunderbird, to scroll through emails and go to the next one when reaching the end (or pressing "n" for next, "p" for previous). I even use shift + space to go up again. I thought it was very common. Another alternative, maybe a bit more intuitive, is using the Page Up and Page Down keys.
I love it. My Mac doesn't have the home row (don't know if that's what that row of buttons is called), so I use spacebar and shift+spacebar as PgDn and PgUp when I am reading.
They're called the navigation keys. Fn + Up/Down (arrow keys) is PgUp/PgDn, and Fn + Left/Right is Home/End. But of course, those keys are on completely opposite sides of the keyboard, so Space is more convenient.
This is my biggest gripe with modern browsers. Stop fucking with my keyboard. I want my keyboard to control my user agent, not some script. No key seems to be safe. The quick-search key (/) is often overridden by "clever" web devs, but not even in a consistent way. Ctrl-K to go to the browser search box is gone. I use emacs keybindings in text boxes, but those can be randomly overridden by scripts (e.g. Ctrl-B might be overridden to make stuff "bold" etc.).
I want to be able to say "Don't let any script have access to these keyboard keys". But apparently that can't be done even with extensions. I've strongly considered forking Firefox to do this, but I know how much effort that would be to maintain.
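For illustration, capturing a key takes only a page-level keydown handler that calls preventDefault() before the browser's own binding fires; `doc` and `focusSearch` here are stand-in names, not real site code:

```javascript
// Sketch of how a page swallows the '/' quick-search key the comment
// mentions. `doc` stands in for `document`; `focusSearch` is an assumed
// site-specific function that moves focus to the site's own search box.
function captureSlash(doc, focusSearch) {
  doc.addEventListener("keydown", (e) => {
    if (e.key === "/") {
      e.preventDefault(); // the browser's quick-find never sees the key
      focusSearch();
    }
  });
}
```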
How hard would it be to write scripts that expose an interface that the user can bind to keys themselves, if they wish to?
It really comes down to JavaScript. The web was fine when sites were static HTML, images, and forms with server-side rendering (allowing for forums and blogs).
Did you use the web back in 1995? It was fun, but it also sucked compared to what we have now. Nothing is ever perfect, but I wouldn’t want to go back.
Geminispace is a very chill place. It’s definitely not a replacement for the web, but if you can handle the compromises, it feels like both the past and the future.
I read epubs, and HTML pages derived from texinfo and mandoc. When I see websites that just break down when you disable JS (I do it with uBlock), I always feel a pang of sadness. Unless you’re Figma, Google Docs, or OpenStreetMap…, which rely heavily on local state, JS should only be required for small islands of interaction.
You're not wrong but we've never really tried the combination of modern CSS with no JS. It could produce elegant designs that load really fast... or ad-filled slop but declarative.
I published my first website in 1995 (and while it wasn’t even a little popular, eventually a spammy gay porn site popped up with the exact same joke name, leading to a pretty odd early “what if you search for your own site” experience).
If you put 2026 media players (with modern bandwidth), on the manually curated small-editorial web of ‘95 it’d be amazing.
We used to have desktop apps, these SPA JS monstrosities are the result of MS missing the web then MS missing mobile. Instead of a desktop monopoly where ActiveX could pop up (providing better app experiences in many cases than one would think), we have cross-platform electron monstrosities and fat react apps that suck, are slow, and omfgbbq do they break. And suck. And eat up resources. Copy and paste breaks, scrolling breaks, nav gets hijacked, dark mode overridden.
Netflix, Spotify, MS have apps I see breaking on the regular on prime mainstream hardware. My modern gaming windows laptop, extra juicy GPU for all the LLM and local kubernetes admin, chokes on windows rendering. Windows isn’t just regressing, their entire stack is actively rotting, and all behind fancy web buttons.
Old man yelling at cloud, but: geeeez boys, I want to go back.
There are still BBSes you can access via telnet (and actual dial-up if you really want), but after the fifth one asks you for your full name, street address and phone number, it gets a little old.
If you wanted to accomplish anything more substantial than reading static content (like an email client that beeps when you get an important email, or a chat app that shows you new messages as they come in), you needed to install a desktop app. That required you to be on the same OS that the app developer supported (goodbye Linux on the desktop), as well as to trust the dev a lot more.
We seem to have collectively forgotten the trauma of freeware. Operating an installer in the mid 2000s was much like walking through a minefield; one wrong move, and your computer was infected with crapware that kept changing your home page and search engine. It wasn't just shady apps, mainstream software (I definitely remember uTorrent and Skype doing this) was also guilty. Even updates weren't safe.
JavaScript didn’t kill Flash and Java. The web becoming cross-platform did.
People started browsing on a plethora of devices, from the Dreamcast to PDAs. And then Steve Jobs came along and doubled down on the shift in accessibility. His stance on Flash was probably the only thing I agreed with him on, too.
Oh, social media back then was much, much better. People were much more open, tracking didn't exist. All the idiots still thought computers were only a thing for nerds and kids.
This is the price we pay for openness and decentralization.
On one side, we have Apple giving us great APIs but telling us how to use them. On the other, we have W3C being extremely conservative with what they expose, exactly because of things like this.
This is the price we pay for stuffing browsers with tons of imperative APIs that the browser has no choice but to implement to the letter, since analysing how they are actually used runs afoul of Rice's theorem.
I feel like we need a complete black box layer or something, where a website can send requests to the browser to do something, but never gets any kind of reply, as to whether anything actually happened. But that would limit usefulness of it quickly, I guess.
As usual, it's a good first step but doesn't go far enough. I don't want my back-button hijacked by _anything_.
My issue with back-button hijacking isn't even spam/ads (I use an ad-blocker so I don't see those), but sites that do a "are you sure you want to leave? You haven't even subscribed to our newsletter yet?!"
There's a place for it within SPAs - you want the browser back button to retrace your path through screens in the application, not exit it, unless you are already on the first page. The same would be true for multi-page apps using HTMX or Turbo or something - if you change pages without doing a full page load, you need to push your new URL. The guiding principle is that the browser back button should work as the user expects - you should only mess with the browser history stack to fix any nonsense you did to it in the first place.
On the other hand, "are you sure you want to exit without saving" is a good use-case. But I'd prefer that to be a setting I can allow for specific site.
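For reference, the sanctioned way to get that warning is the beforeunload event, which can only ask the browser to show its own generic prompt; the `dirty` flag in this sketch is an assumed app-level variable:

```javascript
// Sketch of the "unsaved changes" prompt done the sanctioned way: the page
// can only request the browser's generic confirmation dialog, and should do
// so only while there really are unsaved edits. `dirty` is an assumed flag
// the app flips when the user edits or saves.
let dirty = false;

function onBeforeUnload(event) {
  if (!dirty) return;     // nothing unsaved: let the user leave silently
  event.preventDefault(); // modern browsers: request the generic prompt
  event.returnValue = ""; // legacy browsers require this assignment
}

// In a real page: window.addEventListener("beforeunload", onBeforeUnload);
```

Note the page never gets to customize the dialog text or block navigation outright; that restriction is exactly the kind of guardrail being asked for.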
That API has quite a few heuristics that protect the user:
(At least on the Chromium browsers that I've tested it with)
1: It fails silently if the user hasn't interacted with the page. (I.e., the user needs to "do something" other than scroll, like click or type.) This generally stops most spam.
2: The browser detects loops / repeated prompting and has a checkbox to get out of the loop.
---
It was a little jarring the first time I used that API and tested my code with it; but I appreciate the protections. I've come across far too many "salesman putting their foot in the door" usage of it.
Better yet, just save. Storage is cheap and fast these days. The “do you want to save?” idiom is a leftover from the days when a moderately sized document would take a noticeable amount of time to save and eat up a decent chunk of your floppy disk.
But what if you are leaving the page because you changed your mind, and don't wish to save the changes after all? This, for me, is the common case, so I would not want the browser to suddenly commit an unfinished draft.
If you’re worried about losing the old version, it should keep a history. If you want to erase the new version, there should be an explicit action to do that.
An interesting variant of a web phishing attack is to combine back button hijacking with information from the HTTP referer header. The referer header discloses which website the user is coming from; when the user clicks the back button, the malicious site can take them to a site that looks identical (except for the URL) but is attacker-controlled.
I don't understand how Google's indexing works anymore. I've had some websites, very well indexed for years and years, suddenly disappear from the index with no explanation, even in Search Console ("visited, not indexed"). Simple blog entries, lightweight pages, no JavaScript, no ads, no bad practices, HTTPS enabled, informative content that is linked from elsewhere, including well-indexed websites (some entries even performed well on Reddit).
At the same time, for the past few years I've found Google search to be a less and less reliable tool because the results are less often what I need.
Anyway, let's hope this new policy can improve things a little.
This relates to Chrome, not to search. In regard to search, they have taken a new direction that I don't think is going to change any time soon. Some time in the last 2 years, they started removing anything that doesn't get significant natural traffic (e.g., have a 30-year-old user manual for something odd that people only search for once in a while? -> removed). In the last few months, I've noticed that they will not index anything that seems broad (i.e., if similar content exists, they won't index it, regardless of your page authority).
Basically, they are turning search into TikTok. If you try a search, you'll notice that they now give precedence to AI overviews, YouTube, news stories, Maps, products, etc. Anything but content.
> Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results.
Good point. Chrome has a “feature” where if your website is google-flagged, it’ll display a danger alert when visiting it. For some reason I confused that with this.
If you're referring to the Google Safe Browsing lists, all major browsers check against the same list. I've managed to get mine listed there and immediately banned on all major browsers.
Not only that, but I think Google listens to "cyber security" companies' lists and feeds from them. My website got into some of these lists (https://www.virustotal.com/gui/url/a4c9f166d2468f5bbb503ec79...) and I had to go through like 6-7 of them to whitelist my domain again. Something about code and input triggered these lists' filters into flagging my website as hacking-related.
Wait, how does one website (google.com) know what happens inside my browsing session on another website (bad-blog.com) after I click over? Hmmmmm
This sort of announcement just emphasizes the extent to which Google observes ALL your web browsing behavior, thanks primarily to their eyes inside Chrome browser.
You know those warnings when you install a browser extension, about all the things that extension will be able to see and do? Well so can Chrome itself…
I use Chrome on my Android and Mac. For a while I've appreciated the seemingly built-in anti-hijacking measure that always does what I expect on the second Back press. (The first Back may pop up a subscription box for example, but the second will always return me to where I came from).
I actually felt that this was a solved problem, so I'm surprised to see so many people still suffer getting stuck in redirect loops.
IIRC the Azure “portal” does this. Also likes to not record things as navigation events that really feel like they should be. Hitting back on that thing is like hitting the back button on Android, it’s the “I feel lucky” button. Anything could happen.
I think that is because some "pages" are really full screen modals. So the back button does take you back to the previous page, but it looks like you went back two pages (closes modal + goes back). I don't spend too much time in the Azure portal but this behavior is rampant in the Entra admin center.
Thanks. I never imagined this was a thing; it's a useful addition to my mental model of software components, to explain why the back button on the web behaves in weird ways for some apps.
But it sure does sound like a dumb pattern on the web.
While we’re making sure that modals are recorded in history so that you can close them with the back button on mobile (e.g. https://svelte.dev/docs/kit/shallow-routing), MSFT can’t be bothered. But when it comes to abusing the very same history API to grab the user’s attention for a bit longer...
Are they? This seems about deceptive or malicious content (i.e., redirecting to ads) rather than “something in my history triggers a JS redirect”. I’ve definitely experienced the latter with MS, but never the former.
It seems like Google's policy is unconcerned with the intent of the practice. If a website JS redirect ruins the user experience by breaking the back button, it will be demoted in search results. It doesn't matter whether or not the redirect was meant to be deceptive or malicious, websites shouldn't be ruining the user experience.
> It seems like Google's policy is unconcerned with the intent of the practice.
I'm reading the opposite: "If you're currently using any script or technique that inserts or replaces deceptive or manipulative pages into a user's browser history that [...]"
This is Google. Most likely they will deploy an automatic scanner bot that is "supposed to" handle all the edge cases. When it doesn't work, you will be blamed for not writing your JS in a way the bot can understand.
I think most checkouts do that, to prevent duplicate payments. Dunno about epic, but I often encounter that mitigated by a dedicated ‘go back to store’ button post-checkout
Happened to me yesterday through a link off here. I was already expecting it given the domain, but usually mashing back fast enough does the trick eventually. Not this time. Had to kill the tab.
It's amazing how often highly-polished web infrastructure gets put into the trash in pursuit of narrow objectives like avoiding a full page load. Very few applications actually benefit from being a single page. You tend to lose a lot more than you gain in terms of UX.
TIL that this (or rather, the lack of this) is why some pages show that annoying "do you want to resubmit your post" notification, but not others, and the name for it. Thank you!
It's about time.
Google is doing so much to keep the web usable. They're the only ones with the teeth to back up standards for mobile web load time, max sender spam rates, leaving browser history alone, etc.
Ironically, the only place I encounter this is using Google News, where news sites seem to detect you're in Google News (I don't think these same sites do it when I'm just browsing normally?), and try to upsell you their other stories before you go back to the main page.
It seems like a lot of the APIs that make a website act like an application need to be disabled by default; and some kind of friction needs to exist to enable them.
Edit: I'm not sure what kind of friction is needed: either an expensive review process (that most application developers would complain about while everyone else rolls their eyes) or a reputation system. Maybe someone else can think of a better approach than me?
Bold coming from the company who gives me the most confusing “Open in app” prompts that are designed to confuse you and get you to use their app rather than the web
A browser feature I wasn't aware of for too long: long press the back button, to get a list of recent URLs, allowing you to skip anything trying to hijack the back button.
Surely the browser could enforce a limit on a domain, and make sure that the real page you came from (typically the search engine) is prominently displayed.
> When a user clicks the "back" button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation.
It seems pretty stupid. Instead of expanding the SEO policy bureaucracy to address a situation where a spammer hijacks the back button, the browser should have been designed in the first place to never allow that hijacking to happen. Second best approach is modify it now. While they're at it, they should also make it impossible to hijack the mode one.... oh yes, Google itself does that.
Please explain the legitimate uses. Not once have I ever encountered a website that does something useful by modifying the behavior of my browsing history.
Youtube doesn't implement a back function. A real back function would take you back to the same page you came from. If you click a video from the Youtube home page, then click the back button, Youtube will regenerate a different home page with different recommendations, losing the potentially interesting set of recommendations you saw before. You are forced to open every link in a new tab if you want true back functionality.
Well, if I wanted to return to the parent screen in a single page application, I'd click on the back button in the app itself. No need to prevent me from back tracking in the exact order of my browsing should I need it.
I especially hate YouTube's implementation, I can never know the true state on my older PC during whatever it's trying to accomplish, often playing audio from a previous video when I backspace out. I resort to opening every link in a new tab.
The spec kind of goes into it, but aside from the whole SPAs needing to behave like individual static documents, the big thing is that it's a place to store state. Some of this can be preserved through form actions and anchor tags but some cannot.
Let's say you are on an ecommerce website. It has a page for a shirt you're interested in. That shirt has different variations - color, size, sleeve length, etc.
If you use input elements and a form action, you can save the state that way, and the server redirects the user to the same page but with additional form attributes in the url. You now have a link to that specific variation for you to copy and send to your friend.
Would anyone really ever do that? Probably not. More than likely there'd just be an add-to-cart button. This is serviceable, but it's not necessarily great UX.
With the History API you can replace the url with one that will embed the state of the shirt so that when you link it to your friend it is exactly the one you want. Or if you bookmark it to come back to later you can. Or you can bookmark multiple variations without having to interact with the server at all.
Similarly, on that web page you have an image gallery for the shirt. Without the History API, maybe you click on a thumbnail and it opens a preview, which is a round trip to the server and a hard reload. Then you click next. Same thing, new image. Then again, and again, and each time you add a new item to the history stack. That might be fine or even preferred, but not always! If I want to get back to my shirt, I now have to navigate back several pages because each image has been added to the stack.
If you use the History API, you can add a new URL to the stack when you open the image viewer. Then, as you navigate, it updates to point to the specific image, which gives the user the ability to link to that specific image in the gallery. When you want to go back, you only have to press back once, because we weren't polluting the stack with a new history entry for each image change.
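The two flows above (push once to open the viewer, then replace the entry as you page through images) can be sketched with a toy stand-in for the browser's history stack, so the mechanics are checkable outside a browser. The real calls would be `window.history.pushState` and `window.history.replaceState`; the `HistoryStack` class here is purely illustrative:

```javascript
// Toy stand-in for the browser's session history; mimics how
// pushState and replaceState affect what the Back button does.
class HistoryStack {
  constructor(url) { this.entries = [url]; this.index = 0; }
  pushState(url) {
    // Pushing discards any "forward" entries, then appends a new one.
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(url);
    this.index += 1;
  }
  replaceState(url) { this.entries[this.index] = url; }
  back() {
    if (this.index > 0) this.index -= 1;
    return this.entries[this.index];
  }
  get current() { return this.entries[this.index]; }
}

const h = new HistoryStack('/shirt');
h.pushState('/shirt/gallery?img=1');    // opening the viewer: one new entry
h.replaceState('/shirt/gallery?img=2'); // next image: replace, don't push
h.replaceState('/shirt/gallery?img=3'); // ...and again
console.log(h.current); // '/shirt/gallery?img=3': still a linkable URL
console.log(h.back());  // '/shirt': a single Back returns to the shirt page
```

The key point is that paging through ten images still costs the user only one Back press.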
Thanks for the detailed and thoughtful reply! I agree that in both of the scenarios you mentioned, this API does provide better usability.
I guess what feels wrong to me is the implicitness of this feature, I'm not sure whether clicking on something is going to add to history or not (until the back button breaks, then I really know).
Click on any YouTube video from any webpage on Android. If you press anything that is not the back button immediately, you will lose the option to go back.
So this coming from google... it's funny. Welcome, but funny.
I understand this is vague on purpose but wish there was more detail. E.g., if I am running a game in a webgl canvas and "back button" has meaning within the game UI which I implement via history states, is my page now going to be demoted? This article doesn't answer that at all.
If it automatically adds something to the history when you visit the page, then yes. If it only adds to the history when the user clicks something, then I would assume it would be fine. Hopefully.
I would like to mention that Google's own SPA framework, Angular, has redirect routes which effectively do back button hijacking if used, because they add the URL you're redirecting from to the history.
One of the worst is TikTok. Even as a developer, when someone sends me a TikTok link and I have to visit it, I get stuck in the browser (same with the app, but I uninstalled it), and the way they trap you in feels almost device-breaking.
If the navigation simulates what would happen if we follow links to SPA#pos1, SPA#pos2, etc so that if I do two clicks within the SPA, and then hit Back three times I'm back to whatever link I followed to get to the SPA, I guess it's OK and follows user expectations. But if it is used as an excuse to trap the user in the SPA unless they kill the tab, not OK.
> From the browser's perspective those are the same thing though.
If the browser only allows adding at most one history item per click, I should be able to go back to where I entered a given site with at most that many back button clicks.
At a first glance, this doesn't seem crazy hard to implement? I'm probably missing some edge cases, though.
Some browser APIs (such as playing video) are locked behind a user interaction. Do the same for the history API: make it so you can't add any items to history until the user clicks a link, and then you can only add one.
That's not perfect, and it could still be abused, but it might prevent the most common abuses.
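As a sketch, that gating proposal could look like the toy model below. This is hypothetical: no browser implements it, and the `GatedHistory` class and its one-token-per-click rule are inventions here purely to illustrate the idea:

```javascript
// Toy model of gating pushState behind user activation: each click
// grants one "token", and pushState spends it. Hypothetical behavior,
// analogous to how autoplay is gated behind a user gesture.
class GatedHistory {
  constructor(url) {
    this.entries = [url];
    this.tokens = 0; // activation tokens earned by user clicks
  }
  userClick() { this.tokens = 1; } // a gesture grants exactly one push
  pushState(url) {
    if (this.tokens === 0) return false; // refuse silently
    this.tokens = 0;
    this.entries.push(url);
    return true;
  }
}

const h = new GatedHistory('/landing');
const blocked = h.pushState('/spam-1'); // no gesture yet: refused
h.userClick();
const allowed = h.pushState('/page-2'); // one entry per click: allowed
const second  = h.pushState('/spam-2'); // token already spent: refused
console.log(blocked, allowed, second);  // false true false
console.log(h.entries.length);          // 2
```

As the parent notes, a cookie banner click would still earn a token, so this only caps the rate of abuse rather than eliminating it.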
I never understood why browsers ever allowed this in the first place. It's obviously bad. Yeah, yeah there are "reasons" but it's still obviously a bad solution to whatever "problem" they were trying to solve.
Took long enough. Maybe I missed it, but I didn’t see them say how invested they are in tackling this. Promoting a rule is one thing, but everything SEO related becomes a cat and mouse game. I don’t have high confidence that this will work.
Seems invested enough to me. Adding this to the anti-spam policy means they will rank sites using this lower, or not list them at all, when detected. And they use automated and manual detection for such things. Not much more they can do? And it should be effective: whoever employs scam tactics like this is also interested in having visitors.
I've lost count of the number of times I've looked for something on my phone, gone through to a product page on Amazon, and then had to back out multiple times to get back to the search listing. Sometimes it's previously viewed products, sometimes it's "just" the Amazon home page. It should be one-and-done.
Amazing change; fighting with the back button is my least favorite part of the ad web and a blind spot for uBlock. I wonder how Google is going to track this, and whether SPA-style React Router sites would be downranked because of their custom back button behavior. I doubt it due to their popularity, but I'm curious how they're going to determine what qualifies as spam.
Why not fix this at the browser level? E.g. long or double click on back button = go to previous non-javascript-affected page (I mean by that: last page navigated to in the classical sense, ignoring dynamic histories altered by js and dynamic content)
Double clicking is not a fix because it doubles latency, and more than doubles latency if you don't want to issue page loads that are immediately aborted. Long clicking is such a bizarre anti-feature that I never considered it might exist until I read about it in this HN discussion. Putting touchscreen-specific workarounds for lack of mouse buttons and modifier keys in a traditional GUI app is insanity.
That wouldn't work because this technique messes with your history. Long press on the button will just show you a list of the previous pages you visited, and all of them will have the same link as the one you're on, with just the actual URL you came from at the bottom. But that's so much friction UX-wise.
How does this work? How can a site inject a totally different site into the history? I thought, e.g., the History API only lets you add to the stack and pop, not modify history?
There's also a replace() method, and trying to limit that to only same origin or already visited URLs seems futile, as the pages hosted there can themselves detect that the user is navigating back and can just forward you in a number of ways.
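For concreteness, the replace-then-push trick described earlier in the thread (replace the entry you arrived on with the site's homepage, then push the article back on top) can be modeled with a plain array. In a real page this would be `history.replaceState` followed by `history.pushState`; the array here just makes the effect checkable outside a browser:

```javascript
// The browser-side sketch is only two calls, run on page load:
//   history.replaceState({}, '', '/feed');        // rewrite the entry you arrived on
//   history.pushState({}, '', location.pathname); // re-add the article on top
// Modeled with a plain array:
let stack = ['https://google.com/search', '/article'];
let index = 1;                  // user landed on /article from a search

stack[index] = '/feed';         // replaceState: the article entry becomes /feed
stack = [...stack, '/article']; // pushState: the article is re-added on top
index = 2;

const afterBack = stack[index - 1];
console.log(afterBack); // '/feed': Back no longer leaves the site
```

Note that the referring search page is still two Back presses away, which is why mashing the back button sometimes escapes the trap.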
Google should probably talk to Microsoft about this because for me they are the biggest offenders with this back button hijacking in their support forums.
Phew. For a moment there I thought they would start blocking alternate uses of the back button in apps (like when it means "go back" versus when it means "close everything").
Scroll on Reddit on mobile and click on a link. The comments open in a new tab. Close the tab and the previous tab is also at the link you’ve just closed.
Makes it impossible to browse around and long click to open on a new tab doesn’t solve the issue either.
It's not clear what constitutes a hijacking and how they are going to detect it. It may be OK to override the button as long as it's used in the intended way which is to go back. In a single-page application it may not trigger a navigation event.
In an "application" model rather than a "document" one, like MS Word online or draw.io or similar, there's no clear semantics for "back" but there is a risk of the user losing data if they can navigate away without saving.
This is a consequence of sites being allowed to hijack back in the first place. They can still fix it.
For your use case all you need is the page to get notified so it can save. Remember that on Android your onSaveInstanceState gets called and you have to save your state or lose it.
This would break so many websites. There are valid uses for the history API, I often do modals/popups as shareable URLs, and using the back button closes it.
I wonder if this includes sites that do auto-redirect: A -> B (auto-redirect) -> C.
If I'm on page C and go back, page B will take me to page C again. I think this is more about technical incompetence than malicious intent, but still annoying.
Cool, now maybe let's do something about all the shit I have to clear out of my face before I can read a simple web page. For example, on this very article I had to click "No thanks" for cookies and then "No thanks" for a survey or something. And then there was an ad at the top for some app that I also closed.
It's like walking into some room and having to swat away a bunch of cobwebs before doing whatever it is you want to do (read some text, basically).
Haha, we had a solution for that, called pop-up blockers. Then when they became very usable, everyone switched to overlays injected with javascript, so they became unblockable.
But thinking of this at this moment, this could be a good use for a locally run LLM, to get rid of all this crap dynamically. I wonder why Firefox didn't use this as a use case when they bolted AI on top of Firefox. Maybe it is time for me to check what APIs FF has for this.
I'm waiting for someone to develop an augmented-reality system that detects branded ads or products, compares them against a corporate-ownership database, applies policies chosen by the user, and then adds warning-stripes or censor-bars over things the user has selected against.
It would finally put some teeth behind the myth of the informed consumer, and there would be gloriously absurd court-battles from corporations. ("This is our freedom of speech and commerce, it's essential, if people don't like what we're doing they can vote with their wallets... NOT LIKE THAT STOP USING SPEECH AND COMMERCE!")
Great! So they'll fix the back button bugs on YouTube, and return me to the previous set of video recommendations when I use it on the homepage, right? Right? And let me return to the actual site when it detects that I lost the web connection for 0.01 seconds and hides all the content, and I then press the back button?
I’ll believe that when YouTube gives me the ability to block certain channels versus “not interested” and “don’t recommend channel” buttons that do absolutely nothing close to what I want.
Or a thousand other things, but that one in particular has been top of mind recently.
Because clicking on a navigation button in a web app is a good reason to window.history.pushState a state that will return the user to the place where they were when they clicked the button.
Clicking the dismiss button on the cookie banner is not a reason to push a state that will show the user a screen full of ads when they try to leave. (Mentioning the cookie banner because AFAIK Chrome requires a "user gesture" before pushState works normally, https://groups.google.com/a/chromium.org/g/blink-dev/c/T8d4_...)
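The legitimate version of this pattern, one pushState per gesture with Back simply closing the modal, can be modeled like so. In a browser you would call `history.pushState` inside the click handler and close the modal from a `popstate` listener; the `app` object here is a toy stand-in so the behavior is testable:

```javascript
// Toy model of modal-as-history-entry: a user click pushes exactly one
// state; Back pops it and closes the modal instead of leaving the page.
const app = {
  stack: ['/article'],
  modalOpen: false,
  openModal() {                 // would run in response to a user click
    this.stack.push('/article#modal');
    this.modalOpen = true;
  },
  back() {                      // what the popstate handler would do
    this.stack.pop();
    if (this.modalOpen) this.modalOpen = false; // close modal, stay on page
    return this.stack[this.stack.length - 1];
  },
};

app.openModal();
console.log(app.back());    // '/article': Back closed the modal, nothing more
console.log(app.modalOpen); // false
```

The difference from the hijacking case is that the pushed state corresponds to something the user did and wants to undo, so Back behaves exactly as expected.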
Will Google really punish sites for doing this? And if so, how do I report a site? I guess I could email the site with the Google link and suggest they fix it first.
> We believe that the user experience comes first.
If by "user" you mean advertisers, sure you do. Everyone else is an asset to extract as much value from as possible. You actively corrupt their experience.
The fact these companies control the web and its major platforms is one of the greatest tragedies of the modern era.
> Notably, some instances of back button hijacking may originate from the site's included libraries or advertising platform. We encourage site owners to thoroughly review their technical implementation...
Hah. In my time working with marketing teams this is highly unlikely to happen. They're allergic to code and they far outnumber everyone else in this space. Their best practices become the standard for everyone else that's uninitiated.
What they will probably do is change that vanity URL showing up on the SERP to point to a landing page that meets the requirements (only if the referer is google). This page will have the link the user wants. It will be dressed up to be as irresistible as possible. This will become the new best practice in the docs for all SEO-related tools. Hell, even google themselves might eventually put that in their docs.
In other words, the user must now click twice to find the page with the back button hijacking. Even sweeter is that the unfettered back button wouldn't have left their domain anyway.
This just sounds like another layer of yet more frustration. Contrary to popular belief, the user will put up with a lot of additional friction if they think they're going somewhere good. This is just an extra click. Most users probably won't even notice the change. If anything there will be propaganda aimed at aspiring web devs and power users telling them to get mad at google for "requiring" landing pages getting in the way of the content (like what happened to amp pages).
We tried a few times. We got as far as gating the ability to push into the "real history stack" [1] behind a user activation (e.g. click). But, it's easy to get the user to click somewhere: just throw up a cookie banner or an "expand to see full article" or similar.
We weren't really able to figure out any technical solution beyond this. It would rely on some sort of classification of clicks as leading to "real" same-document navigations or not.
This can be done reasonably well as long as you're in a cooperative relationship with the website. For example, if you're trying to classify whether a click should emit single-page navigation performance entries for web performance measurement. (See [2].) In such a case, if the browser can get to (say) 99% accuracy by default with good heuristics and provide site owners with guidance on how to annotate or tweak their code for the remaining 1%, you're in good shape.
But if you're in an adversarial relationship with the website, i.e. it's some malicious spammer trying to hijack the back button, then the malicious site will just always go down the 1% path that slips through the browser's heuristics. And you can try playing whack-a-mole with certain code patterns, but it just never ends, and isn't a great use of engineering resources, and is likely to start degrading the experience of well-behaved sites by accident.
So, policy-based solutions make sense to me here.
[1]: "real history stack": by this I mean the user-visible one that is traversed by the browser's back button UI. This is distinct from the programmer-visible one in `navigation.entries()`, traversed by `navigation.back()` or `history.back()`. The browser's back button is explicitly allowed to skip over programmer-visible entries. https://html.spec.whatwg.org/multipage/speculative-loading.h...
Classify history API, canvas etc etc as "webapp" APIs, and have them show a similar dialog to the webcam dialog.
Then I can just click no, and the scripts on the page can't mess around.
Yes Google Maps is great. No, my favorite news site doesn't need that level of access to my browser or machine, it just needs to show some images and text.
The back button itself feels overloaded. There's "go to previous state" and then there's "go to previous origin." In an ideal world when I doubleclick on the back button what I mean is: "get me off of this site, now."
We need to go back to an independent and competent research group designing standards. Right now Google pwns and controls the whole stack (well, not really ALL of it 1:1, but it has a huge influence on everything via the de-facto Chrome monopoly).
Remember how Google took out uBlock Origin. They also lied about this, aka "not safe standards"; in reality they don't WANT people to block ads.
Power is taken but also given. It's a dynamic, and I agree it's gotten way, way out of hand. It may eventually suppress progress and become a real parasitic presence, but we've not reached that point yet (in net terms). Google has been relatively responsible with the power, but cracks have been starting to show. It will get a whole lot worse before it gets better. That is why I embrace vertical integration despite the tremendous cost. Call it the cockroach approach; it allows being partially decoupled from outside fluctuations.
Addition: People underestimate Google's influence. It's easy to forget they de-facto control Firefox, leaving only Apple and Google in control of the Web. Scary, but looking away won't help either. The Americans have been consistently competent with technology since the advent of the transistor right after WW2. They're reaping the benefits of that still to this day. I say that as a European.
Ideally, when I create valuable content I am paid and when I consume valuable content I don't pay. Advertising does this but I hate it so I don't want that. So ideally, there is no way to extract value from me but I am able to extract value from others. I think I would support someone who finds a way to enforce this.
But I am also willing to pay for valuable content an exorbitant amount if it is valuable enough. For instance, for absolutely critical information I might pay 0.79€ a month.
Yeah, no thanks. I want to use my browser’s standard keyboard shortcut to navigate back. And also forward again. And I want to be able to inspect the history listing before I go back or forward.
Let the browser do the browsery things. Don’t make SPAs suck even more than they already do.