Google's shortened goo.gl links will stop working next month

(theverge.com)

224 points | by mobilio 20 hours ago

38 comments

  • edent 20 hours ago
    About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

    Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

    And for what? The cost of keeping a few TB online and a little bit of CPU power?

    An absolute act of cultural vandalism.

    • toomuchtodo 19 hours ago
      https://wiki.archiveteam.org/index.php/Goo.gl

      https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

      How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

      (edit: i see jaydenmilne commented about this further down thread, mea culpa)

      • progbits 13 hours ago
        They appear to be doing ~37k items per minute; with 1.6B remaining, that is roughly 30 days of work. So that's just barely enough to finish in time.
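
        Rough check of that arithmetic, assuming the ~37k/min rate holds (just a back-of-the-envelope sketch):

          remaining = 1.66e9        # work items left on the tracker
          rate_per_minute = 37_000  # observed processing rate
          minutes_left = remaining / rate_per_minute
          print(minutes_left / (60 * 24))  # ~31 days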

        Going to run the warrior over the weekend to help out a bit.

      • pentagrama 17 hours ago
        Thank you for that information!

        I wanted to help and did that using VMware.

        For curious people, here is what the UI looks like: there is a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab which shows the project's activity.

        Project list: https://imgur.com/a/peTVzyw

        Current project: https://imgur.com/a/QVuWWIj

    • jlarocco 16 hours ago
      IMO it's less Google's fault and more a crappy tech education problem.

      It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

      And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?

      And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.

      • justin66 15 hours ago
        > It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

        It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

        Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.

        • dingnuts 12 hours ago
          Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.
          • nly 11 hours ago
            CANs usually have complex hashy URLs, so you still have the compactness problem
      • gmerc 16 hours ago
        Ahh classic free market cop out.
        • FallCheeta7373 16 hours ago
          if the smartest among us publishing for academia cannot figure this out, then who will?
          • hammyhavoc 5 hours ago
            Not infrequently, someone being smart in one field doesn't necessarily mean they can solve problems in another.

            I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.

        • kazinator 15 hours ago
          Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

          The authors just had their heads too far up their academic asses to have heard of this.

    • epolanski 19 hours ago
      Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can be changed or disappear).

      Even worse if your reference is a shortened link from some other service: you've just added yet another layer of unreliable indirection.

      • whatevaa 19 hours ago
        Citations are citations, if it's a link, you link to it. But using shorteners for that is silly.
        • ceejayoz 18 hours ago
          It's not silly if the link is a couple hundred characters long.
          • IanCal 18 hours ago
            Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a pdf.
            • ceejayoz 17 hours ago
              It's not the bytes.

              It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.

              • SR2Z 17 hours ago
                I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

                This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

                This kind of luddite behavior sometimes makes using this site exhausting.

                • jtuple 16 hours ago
                  Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb though.

                  Reading on paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

                  Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

                  Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day tops, which is small enough to just do on a computer screen (and I have less need to annotate, etc.). Quite different than the 50-100 papers/week + deep analysis expected in academia.

                  • Incipient 8 hours ago
                    >Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb though.

                    I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!

                • ceejayoz 16 hours ago
                  > I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

                  This is by no means a universal experience.

                  People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.

                  • SR2Z 16 hours ago
                    And how many of those people then proceed to type those links into their web browsers, shortened or not?

                    Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.

                    • ceejayoz 16 hours ago
                      > And how many of those people then proceed to type those links into their web browsers, shortened or not?

                      That probably depends on the link's purpose.

                      "The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.

                      • epolanski 12 hours ago
                        So he has a computer and can click.

                        In any case a paper should not rely on an ephemeral resource like internet links.

                        Have you ever tried to navigate to the errata of a computer science book? It's one single book, with one single link, and it's dead anyway.

                        • JumpCrisscross 12 hours ago
                          I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.

                          There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs between why one untrustworthy provider is more untrustworthy than another is silly.

                          • ycombinatrix 10 hours ago
                            The Google shortened link just redirects you to the PDF hosted god knows where...
                • andrepd 16 hours ago
                  I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.
                  • SR2Z 16 hours ago
                    > People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

                    Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.

                    • andrepd 11 hours ago
                      Very much an xkcd.com/2501 situation
                • reaperducer 13 hours ago
                  > This kind of luddite behavior sometimes makes using this site exhausting.

                  We have many paper documents from over 1,000 years ago.

                  The vast majority of what was on the internet 25 years ago is gone forever.

                  • eviks 4 hours ago
                    What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?
                  • epolanski 12 hours ago
                    25?

                    Try going back 6 or 7 years on this very website; half the links are dead.

              • leumon 17 hours ago
                which makes url shorteners even more attractive for printed media, because you don't have to type many characters manually
          • epolanski 18 hours ago
            Fix that at the presentation layer (PDFs and Word files etc support links) not the data one.
            • ceejayoz 17 hours ago
              Let me know when you figure out how to make a printed scientific journal clickable.
              • epolanski 12 hours ago
                Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the URL is.

                Just buy any scientific book and try to navigate to the errata it links in the book. It's always dead.

              • diatone 17 hours ago
                Take a photo on your phone, OS recognises the link in the image, makes it clickable, done. Or, use a QR code instead
    • zffr 19 hours ago
      For people wanting to include URL references in things like books, what’s the right approach to take today?

      I'm genuinely asking. It seems like it's hard to trust that any service will remain running for decades.

      • toomuchtodo 19 hours ago
        https://perma.cc/

        It is built for the task, and assuming the worst-case scenario of sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

        (https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)

        • ruined 19 hours ago
          perma.cc is an interesting project, thanks for sharing.

          other readers may be specifically interested in their contingency plan

          https://perma.cc/contingency-plan

        • Hyperlisk 19 hours ago
          perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/
        • whoahwio 18 hours ago
          While Perma is a solution specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here.
          • toomuchtodo 18 hours ago
            If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

            This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.

            • whoahwio 18 hours ago
              This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.
      • edent 19 hours ago
        The full URL to the original page.

        You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

        A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.

        • firefax 19 hours ago
          >The full URL to the original page.

          I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.

          Short links were usually in addition to full URLs, and more in conference presentations than in the papers themselves.

        • grapesodaaaaa 16 hours ago
          I think this is the only real answer. Shorteners might work for things like old Twitter where characters were a premium, but I would rather see the whole URL.

          We’ve learned over the years that they can be unreliable, security risks, etc.

          I just don’t see a major use-case for them anymore.

      • danelski 19 hours ago
        Real URL and save the website in the Internet Archive as it was on the date of access?
    • kazinator 19 hours ago
      The act of vandalism occurs when someone creates a shortened URL, not when they stop working.
    • djfivyvusn 19 hours ago
      The vandalism was relying on Google.
      • toomuchtodo 19 hours ago
        You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.
      • api 19 hours ago
        The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

        The simplicity of the web is one of its virtues but also leaves a lot on the table.

    • QuantumGood 14 hours ago
      When they began offering this, their rep for ending services was already so bad that I refused to consider goo.gl. It's amazing how many years they have now spent introducing and then ending services with large user bases. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.
    • eviks 4 hours ago
      > And for what? The cost of keeping a few TB online and a little bit of CPU power?

      For the immeasurable benefits of educating the public.

    • justinmayer 13 hours ago
      In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:

      Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33

      Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...

    • crossroadsguy 18 hours ago
      I have always struggled with this. If I buy a book I don’t want an online/URL reference in it. Put the book/author/isbn/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.
      • BobaFloutist 18 hours ago
        I mean preferably do both, right? The URL is better for however long it works.
        • SoftTalker 18 hours ago
          We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.
    • SirMaster 17 hours ago
      Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put up a list somewhere that everyone can go look up if they need to?
    • lubujackson 5 hours ago
      Truly, the most Googly of sunsets.
    • jeffbee 19 hours ago
      While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.
    • asdll 8 hours ago
      > An absolute act of cultural vandalism.

      It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.

    • nikanj 18 hours ago
      The cost of dealing with and supporting an old codebase instead of burning it all and releasing a written-from-scratch replacement next year.
    • garyHL 17 hours ago
      [dead]
    • bugsMarathon88 19 hours ago
      [flagged]
      • edent 19 hours ago
        Gosh! It is a pity Google doesn't hire any smart people who know how to build a throttling system.

        Still, they're a tiny and cash-starved company so we can't expect too much of them.

        • acheron 14 hours ago
          Must not be any questions about that in Leetcode.
        • lyu07282 18 hours ago
          It's almost as if once a company becomes this big, burning them to the ground would be better for society or something. That would be the liberal position on monopolies if they actually believed in anything.
        • bugsMarathon88 16 hours ago
          It is a business, not a charity. Adjust your expectations accordingly, or expect disappointment.
      • quesera 19 hours ago
        Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?

        I don't know if GCP has a free tier like AWS does, but 10kQPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
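
        Even without nginx, a minimal sketch (Python standard library only; the TSV dump and filenames are hypothetical) shows how little is involved in serving a frozen redirect map:

          # serve 301 redirects from a static short-code -> URL map (illustrative only)
          from http.server import HTTPServer, BaseHTTPRequestHandler

          REDIRECTS = {}
          with open("goo_gl_map.tsv") as f:   # hypothetical "code<TAB>target" dump
              for line in f:
                  code, target = line.rstrip("\n").split("\t", 1)
                  REDIRECTS[code] = target

          class Redirector(BaseHTTPRequestHandler):
              def do_GET(self):
                  target = REDIRECTS.get(self.path.lstrip("/"))
                  if target:
                      self.send_response(301)
                      self.send_header("Location", target)
                  else:
                      self.send_response(404)
                  self.end_headers()

          HTTPServer(("", 8080), Redirector).serve_forever()

        (At goo.gl's actual scale the map wouldn't fit in an in-memory dict on a 1 GB box; you'd point the lookup at an on-disk key-value store instead, but the serving logic stays about this small.)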

        • bbarnett 18 hours ago
          You could deprecate the service, and archive the links as static HTML. 200 bytes of text for an HTML redirect (not JS).
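
          For a sense of scale, a rough sketch of generating those static pages (the exact markup and paths are my assumption, not something Google has described):

            # write one tiny meta-refresh redirect page per short code (illustrative sketch)
            import html, os

            PAGE = ('<!doctype html><meta charset="utf-8">'
                    '<meta http-equiv="refresh" content="0; url={url}">'
                    '<link rel="canonical" href="{url}">'
                    '<title>Redirecting</title>')

            def write_redirect(code, target, outdir="static"):
                os.makedirs(outdir, exist_ok=True)
                with open(os.path.join(outdir, code + ".html"), "w", encoding="utf-8") as f:
                    f.write(PAGE.format(url=html.escape(target, quote=True)))

            write_redirect("abc123", "https://example.org/some/long/path")  # ~200 bytes on disk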

          You can serve immense volumes of traffic from static html. One hardware server alone could so easily do the job.

          Your attack surface is also tiny without a back end interpreter.

          People will chime in with redundancy, but the point is Google could stop maintaining the ingress, and still not be douches about existing urls.

          But... you know, it's Google.

          • quesera 14 hours ago
            Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.

            Why break this??

            Sure, deprecate the service. Add no new entries. This is a good idea anyway, link shorteners are bad for the internet.

            But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.

            You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.

            Google is acting like they are a one-person startup here.

            Since they are not a one-person startup, I do wonder if we're missing the real issue. Like legal exposure, or implication in some kind of activity that they don't want to be a part of, and it's safer/simpler to just delete everything instead of trying to detect and remove all of the exposure-creating entries.

            Or maybe that's what they're telling themselves, even if it's not real.

            • bugsMarathon88 8 hours ago
              > Why break this??

              We already told you: people are likely brute-forcing URLs.

              • quesera 7 hours ago
                I'm not sure why that is a problem.
      • nomel 19 hours ago
        Those numbers make it seem fairly trivial. You have a dozen bytes referencing a few hundred bytes, for a service that is not latency sensitive.

        This sounds like a good project for an intern, with server costs that might even exceed a hundred dollars per month!

    • oyveybro 19 hours ago
      [flagged]
  • mrcslws 19 hours ago
    From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...

    This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".

    • bayindirh 19 hours ago
      > The right question is "how much total value do all of the links provide", not "what percent are used".

      Yes, but it doesn't bring the sweet promotion home, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).

      This beancounting really makes me sad.

      • quesera 19 hours ago
        Configuring a static set of redirects would take a couple of hours and require literally zero maintenance forever.

        Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.

        • bayindirh 19 hours ago
          This is what I mean, actually.

          If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.

      • socalgal2 17 hours ago
        If they wanted the sweet promotion they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.
      • ahstilde 19 hours ago
        > just for fun (but, of course, pay them for their work).

        Doing things for fun isn't in Google's remit

        • kevindamm 19 hours ago
          Alas, it was, once upon a time.
        • morkalork 19 hours ago
          Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google in all its 2-ton ADHD gorilla glory will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down, leaving behind a desolate crater of ruined businesses and angry, abandoned users.
          • jsperson 16 hours ago
            I'm still sore about Reader. The gap has never been filled for me.
        • ceejayoz 19 hours ago
          It used to be. AdSense came from 20% time!
      • kmeisthax 19 hours ago
        [dead]
    • HPsquared 19 hours ago
      Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.
    • sltkr 18 hours ago
      I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.
      • chneu 38 minutes ago
        Where'd all my porn go!?
    • fizx 19 hours ago
      Don't be confused! That's not how they made the decision; it's how they're selling it.
      • esafak 19 hours ago
        So how did they decide?
        • chneu 36 minutes ago
          A new person got hired after the old person left. The new person says "we can save x% by shutting down these links, 99% aren't used" and the new boss that's only been there for 6 months says "yeah, sure".

          Why does Google kill any project? The people who made it moved on, and the new people don't care because it doesn't make their resume look any better.

          Basically, nobody wants to own this service, and it requires upkeep to maintain it alongside other Google services.

          Google's history shows a clear choice to reward new projects, not old ones.

          https://killedbygoogle.com/

        • nemomarx 19 hours ago
          I expect cost on a budget sheet, then an analysis was done about the impact of shutting it down
          • sltkr 18 hours ago
            You can't get promoted at Google for not changing anything.
    • firefax 19 hours ago
      > "more than 99% of them had no activity in the last month"

      Better to have a short URL and not need it, than need a short URL and not have it IMO.

    • SoftTalker 18 hours ago
      From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.
    • esafak 19 hours ago
      What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!
      • nomel 19 hours ago
        YouTube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.

        Many videos I uploaded in 4k are now only available in 480p, after about a decade.

    • handsclean 19 hours ago
      I don’t think they’re actually that dumb. I think the dirty secret behind “data driven decision making” is managers don’t want data to tell them what to do, they want “data” to make even the idea of disagreeing with them look objectively wrong and stupid.
      • HPsquared 19 hours ago
        It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).

        It's less "data-driven decisions", more "how to lie with statistics".

    • FredPret 18 hours ago
      "Data-driven decision making"
  • JimDabell 19 hours ago
    Cloudflare offered to keep it running and were turned away:

    https://x.com/elithrar/status/1948451254780526609

    Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.

    • fourseventy 19 hours ago
      Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.
      • nomel 19 hours ago
        I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is actually to be a honeypot of some sort, maybe for spam call detection.
        • joshstrange 18 hours ago
          Because IIRC it’s essentially completely run by another company (I want to say Bandwidth?) and, again my memories might be fuzzy, originally came from an acquisition of a company called Grand Central.

          My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.

        • hnfong 17 hours ago
          Another shocking story to share.

          I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.

          It's still running. I have no idea why.

          • coryrc 17 hours ago
            It's the most enterprise-y and legacy thing Google sells.
        • throwyawayyyy 18 hours ago
          Pretty sure you can thank the FCC for that :)
        • mrj 18 hours ago
          Shhh don't remind them
        • kevin_thibedeau 18 hours ago
          Mass surveillance pipeline to the successor of room 641A.
    • thebruce87m 16 hours ago
      > Remember this next time you are thinking of depending upon a Google service.

      Next time? I guess there's a wave of new people that haven't learned that lesson yet.

  • jaydenmilne 19 hours ago
    ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.

    https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

    • pimlottc 19 hours ago
      Looks like they have saved 8000+ volumes of data to the Internet Archive so far [0]. The project page for this effort is here [1].

      0: https://archive.org/details/archiveteam_googl

      1: https://wiki.archiveteam.org/index.php/Goo.gl

    • localtoast 19 hours ago
      Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.
      • chneu 33 minutes ago
        I'm about to go set up my spare N100 just for this project. If all it uses is a little bandwidth then that's perfect for my 10 Gbps fiber and N100.
      • wobfan 19 hours ago
        Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?
    • ojo-rojo 19 hours ago
      Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period susceptible to looking like a digital dark age to archaeologists studying history a few thousand years from now.

      Us preserving digital archives is a good step. I guess making hard copies would be the next step.

    • hadrien01 13 hours ago
      After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication of what I should do on the ArchiveTeam wiki.
    • AstroBen 19 hours ago
      Just started, super easy to set up
  • cpeterso 19 hours ago
    Google's own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google's original announcement doesn't say so explicitly, but it is carefully worded to specify that short URLs of the "https://goo.gl/* format" will be shut down.

    Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.

    • growthwtf 18 hours ago
      This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.
  • jedberg 19 hours ago
    I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
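
    A sketch of what that could look like (the dump format and table layout here are my assumption, just to make the idea concrete):

      # load a hypothetical "short_code<TAB>target_url" dump into a queryable SQLite file
      import sqlite3

      con = sqlite3.connect("goo_gl_map.db")
      con.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, target TEXT NOT NULL)")

      with open("goo_gl_map.tsv") as f:
          rows = (line.rstrip("\n").split("\t", 1) for line in f)
          con.executemany("INSERT OR IGNORE INTO links VALUES (?, ?)", rows)
      con.commit()

      # resolve a known short code offline
      print(con.execute("SELECT target FROM links WHERE code = ?", ("abc123",)).fetchone())
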
    • DominikPeters 19 hours ago
      It will include many URLs that are semi-private, like Google Docs that are shared via link.
      • chneu 30 minutes ago
        That's not any better than what archiveteam is doing. They're brute forcing the URLs to capture all of them. So privacy won't really matter here.
      • ryandrake 19 hours ago
        If some URL is accessible via the open web, without authentication, then it is not really private.
        • bo1024 19 hours ago
          What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.
          • prophesi 18 hours ago
            Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than that of a 64-byte alphanumeric code.
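
            For scale, a rough comparison (the ~6-character code length is my assumption about typical goo.gl codes, not something stated in the thread):

              # rough keyspace comparison; the 6-character assumption is illustrative
              short_code_space = 62 ** 6          # ~5.7e10 possible six-char base-62 codes
              random_token_space = 2 ** (8 * 64)  # a 64-byte random token: 2^512 possibilities
              print(short_code_space, random_token_space)
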
            • hombre_fatal 18 hours ago
              Sure, but you have to make executive decisions on the behalf of people who aren't experts.

              Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.

              People also upload unlisted Youtube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.

            • bo1024 18 hours ago
              I'm not seeing why there's a clear line where GET cannot be authentication but POST can.
              • prophesi 18 hours ago
                Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.
            • wobfan 13 hours ago
              I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64 byte code is even more secure than this, IF it's handled correctly.
      • charcircuit 18 hours ago
        Then use something like argon2 on the keys, so that brute-forcing them all takes a long time, similar to how it is today.
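
        A minimal sketch of that idea, using scrypt from the Python standard library as a stand-in for argon2 (the salt, parameters, and example mapping are all made up):

          # publish hash(code) -> URL instead of code -> URL, so a lookup still
          # requires already knowing the short code
          import hashlib

          SALT = b"goo.gl-dump-v1"  # fixed, public salt so everyone derives the same keys

          def key_for(code):
              return hashlib.scrypt(code.encode(), salt=SALT, n=2**14, r=8, p=1).hex()

          # building the published map (contents hypothetical)
          published = {key_for("abc123"): "https://example.org/some/long/path"}

          # anyone holding a real short code can still resolve it
          print(published.get(key_for("abc123")))
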
      • high_na_euv 19 hours ago
        So exclude them
        • ceejayoz 19 hours ago
          How?

          How will they know a short link to a random PDF on S3 is potentially sensitive info?

    • Nifty3929 19 hours ago
      I'd rather see it as a searchable database, which I would think is super cheap and maintenance-free for Google, and avoids these privacy issues. You can input a known goo.gl link and get its real URL, but can't just list everything out.
      • growt 19 hours ago
        And then output the search results as a 302 redirect and it would just be continuing the service.
    • devrandoom 19 hours ago
      Are they all public? Where can I see them?
      • jedberg 19 hours ago
        You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.
      • Alifatisk 19 hours ago
        I don't think so, but you can find the indexed URLs here: https://www.google.com/search?q=site%3A"goo.gl" - it's about 9.6 million links. And those are just what got indexed; there should be way more out there.
        • chneu 29 minutes ago
          ArchiveTeam has the list at over 2 billion URLs, with over a billion left to archive.
        • sltkr 18 hours ago
          I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.
  • ElijahLynn 19 hours ago
    OMFG - Google should keep these up forever. What a hit to trust. Trust in Google was already bad because of everything they have killed; this is another dagger.
    • phyzix5761 18 hours ago
      People still trust Google?
  • spankalee 16 hours ago
    As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.

    No one wants to own this product.

    - The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.

    - Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs, so they manage the service themselves.

    So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.

    This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).

    This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.

    I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.

    Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.

    • gsnedders 8 hours ago
      To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.

      While maintenance and ownership are clearly still a major problem, one could easily imagine that deploying something similar — especially read-only — on GCP's Cloud Run and Bigtable products would be less work to maintain, as you're not chasing anywhere near such a moving target.

    • rs186 15 hours ago
      Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'll be stuck at this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?
      • spankalee 10 hours ago
        Definitely a valid question!

        Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.

        But some people are motivated to work on internet infrastructure, and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could of course quit): you're supposed to be with a team a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.

        I think the harder thing is getting management buy-in, even from the front-line managers.

  • romaniv 16 hours ago
    URL shorteners were always a bad idea. At the rate things are going, I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning, or even client-side replication means that everything you see on the Web right now has an overwhelming probability of permanently disappearing in the near future. This is an astounding engineering oversight for something that is basically the most popular communication system and medium in the world and in history.

    Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".

  • davidczech 19 hours ago
    I don't really get it, it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.
    • nikanj 18 hours ago
      There are two things that are real torture to Google dev teams: 1) being told a product is complete and needs no new features or changes, and 2) being made to work on legacy code.
  • hinkley 19 hours ago
    What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.
  • cyp0633 19 hours ago
    The runner of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:

    Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)

    https://news.ycombinator.com/item?id=44117722

  • krunck 19 hours ago
    Stop MITMing your content. Don't use shorteners. And use reasonable URL patterns on your sites.
    • Cyan488 18 hours ago
      I have been using a shortening service with my own domain name - it's really handy, and I figure that if they go down I could always manually configure my own DNS or spin up some self-hosted solution.
  • musicale 19 hours ago
    • hinkley 18 hours ago
      That needs a chart.
  • bunbun69 2 hours ago
    Isn’t this a good thing? It forces people to think now before making decisions
  • pentestercrab 19 hours ago
    There seems to have been a recent uptick in phishers using goo.gl URLs. Yes, even without new URLs being accepted: they register expired domains that old short links still reference.
  • ccgreg 12 hours ago
    Common Crawl's count of unique goo.gl links is approximately 10 million. That's in our permanent archive, so you'll be able to consult them in the future.

    No search engine or crawler person will ever recommend using a shortener for any reason.

  • pluc 19 hours ago
    Someone should tell Google Maps
  • david422 18 hours ago
    Somewhat related - I wanted to add short URLs to a project of mine. I was looking around at a bunch of URL shorteners and then realized it would be pretty simple to create my own. It's my content pointed at my own service, so I don't have to worry about 3rd-party content or other services going down.
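
    A minimal sketch of that approach (Flask is just one convenient choice; the names, routes, and in-memory store are illustrative):

      # tiny self-hosted shortener: you control the domain, the data, and the lifetime
      import secrets
      from flask import Flask, abort, redirect, request

      app = Flask(__name__)
      links = {}  # for real use, persist this (SQLite, etc.)

      @app.post("/shorten")
      def shorten():
          target = request.form["url"]
          code = secrets.token_urlsafe(5)  # short random code, ~7 URL-safe characters
          links[code] = target
          return request.host_url + code + "\n"

      @app.get("/<code>")
      def follow(code):
          target = links.get(code)
          if target is None:
              abort(404)
          return redirect(target, code=301)

      if __name__ == "__main__":
          app.run(port=8000)
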
  • Brajeshwar 19 hours ago
    What will it really cost for Google (each year) to host whatever was created, as static files, for as long as possible?
    • chneu 26 minutes ago
      It's not the cost of hosting/sharing it. It's the cost of employing people to maintain this alongside other Google products.

      So, at minimum, assuming there are 2 people maintaining this at Google, that probably means it would cost them $250k/yr in payroll alone to keep this going. That's probably a very lowball estimate of the people involved, but it still shows how expensive these old products can be.

    • malfist 19 hours ago
      It'd probably cost a couple tens of dollars, and Google is simply too poor to afford that these days. They've spent all their money on AI and have nothing left
  • rsync 14 hours ago
    A reminder that the "Oh By"[1] everything-shortener not only exists but can be used as a plain old URL shortener[2].

    Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.

    [1] https://0x.co

    [2] https://0x.co/hnfaq.html

  • xutopia 18 hours ago
    Google is making it harder and harder to depend on their software.
    • christophilus 17 hours ago
      That’s a good thing from my perspective. I wish they’d crush YouTube next. That’s the only Google IP I haven’t been able to avoid.
      • chneu 25 minutes ago
        The alternatives just aren't there, either. Nebula is okay but not great. Floatplane is too exclusive. Vimeo... okay.

        But maybe a YouTube disruption would be good for video on the internet. Or it might be bad, idk.

  • andrii9 18 hours ago
    Ugh, I used to use https://fuck.it for short links too. Still legendary domain though.
  • pkilgore 19 hours ago
    Google probably spends more money a month than what it would take to preserve this service on coffee creamer for a single conference room.
  • throwaway81523 10 hours ago
    Cartoon villains. That's what they are.
  • gedy 19 hours ago
    At least they didn't release 2 new competing shorteners (d.uo or re.ad, etc.) and expect you to migrate.
  • micromacrofoot 19 hours ago
    This is just being a poor citizen of the web, no excuses. Google is a 2 trillion dollar company, keeping these links working indefinitely would probably cost less than what they spend on homepage doodles.
  • charlesabarnes 19 hours ago
    Now I'm wondering why Chrome changed its behavior to use share.google links if this will be the inevitable outcome.
  • mymacbook 12 hours ago
    Why is everyone jumping on the blame the victims bandwagon?! This is not the fault of users whether they were scientists publishing papers or the fault of the general public sharing links. This is absolutely 100% on Alphabet/Google.

    When you blame your customer, you have failed.

    • eviks 3 hours ago
      They weren't customers since they didn't buy anything, and yes, as sweet as "free" is, it is the fault of users to expect free to last forever
  • ChrisArchitect 18 hours ago
    Discussion on the source from 2024: https://news.ycombinator.com/item?id=40998549
  • ChrisArchitect 18 hours ago
    Noticed recently on some google properties where there are Share buttons that it's generating share.google links now instead of goo.gl.

    Is that the same shortening platform running it?

  • ourmandave 19 hours ago
    A comment said they stopped making new links and announced back in 2018 it would be going away.

    I'm not a Google fanboi and the Google graveyard is a well-known thing, but this has been 6+ years coming.

    • chneu 21 minutes ago
      I just went through the old thread and its comments. It appears Google didn't specifically state they were going to end the service. They hinted that links would continue working, but new ones would not be able to be created. It was left a bit open-ended, and that likely made people think the links would work indefinitely.

      This seems to be echoed by ArchiveTeam scrambling to get this archived. I figure they would have backed these up years ago if it had been better known.

    • goku12 19 hours ago
      For one, not enough people seem to be aware of it. They don't seem to have given that announcement the importance and effort it deserved. Secondly, I can't say that they have a good migration plan when shutting down their services. People scrambling like this to backup the data is rather common these days. And finally, this isn't a service that can be so easily replaced. Even if people knew that it was going away, there would be short-links that they don't remember, but are important nevertheless. Somebody gave an example above - citations in research papers. There isn't much thought given to the consequences when decisions like this are taken.

      Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.

  • pfdietz 19 hours ago
    Once again we are informed that Google cannot be trusted with data in the long term.
  • fnord77 19 hours ago
    • quesera 7 hours ago
      From the 2018 announcement:

      > URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links

      Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).

      • chneu 19 minutes ago
        Thanks for pointing this out. That's hilarious.
  • insane_dreamer 19 hours ago
    the lesson? never trust industry
  • Bluestein 19 hours ago
    Another one for the Google [G]raveyard.-
  • lrvick 18 hours ago
    Yet another reminder to never trust corpotech to be around long term.