The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.
As an example, my open-source app MBCompass hit this issue. I downgraded to AGP 8.11.1 with Gradle 8.13 to make it build, but even then, F-Droid failed due to a baseline profile reproducibility bug in AGP. The only workaround was disabling baseline profiles and pushing yet another release.
This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
References:
- F-Droid admin issue: https://gitlab.com/fdroid/admin/-/issues/593
- Catima example: https://github.com/CatimaLoyalty/Android/issues/2608
- MBCompass case: https://github.com/CompassMB/MBCompass/issues/88
https://developers.redhat.com/blog/2021/01/05/building-red-h...
Think of how much faster their servers would be with one of those EPYC consumer CPUs.
I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I'm curious why they haven't spent $2-3k on one of those Zen 4 or Zen 5 mATX consumer EPYC servers, since they come in at around $2k, well under budget. If they have a fleet of these old servers, I imagine a Zen 5 one could replace at least a few of them and consume far less power and space.
https://opencollective.com/f-droid#category-BUDGET
Not sure if this includes their Liberapay donations either:
https://liberapay.com/F-Droid-Data/donate
This is not always a given. On our virtualization platform, we recently upgraded a vendor-supplied VM, and while it booted, some of the services on it failed to start despite us exposing an x86_64-v2 + AES CPU to the VM. The minimum requirements cited "Pentium and Celeron", so that was more than enough.
It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU type and things returned to normal.
So, their servers might be capable but misconfigured, or the binary might require more than it states, or something else.
On the other hand, I didn't dig very deep into the ticket history just now, but it sounds like this could have been expected: it broke once already, four years ago (2021), so planning an upgrade for when it happened again would have been good foresight. Then again, volunteers... It's not like I picked up the work as an F-Droid user either.
It says that for servers, 13-21 years is the break-even point for emissions from production vs. consumption.
The 25 year number is for consumer devices like phones and laptops.
I would also argue that average load on the servers comes into play.
I'd still ask folks to donate. £80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.
From what I recall, they do want to modernize their build infrastructure, but it's about as big an investment as they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.
It isn't like they don't have any other things to fix or address.
$3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).
All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.
Not that they're bad or wouldn't be way better than what F-Droid has; I just thought the parent was quite the optimist with his Zen 4/Zen 5 pricing.
Then there's also the overhead of setting up and maintaining the hardware in their location. It's not just a "solve this problem for ~$2,000 and be done with it".
I don't know the actual specs or requirements. Maybe one build server is sufficient, but from what I know there are nearly 4,000 apps on F-Droid. One server might be swamped trying to handle that much load in a timely manner.
Space in your basement or a colo rack in a datacenter, along with power, data, and cooling, is an expense on top. But whatever old servers they have are going to take up more space and use more power and cooling. Upgrading servers that are 5+ years old frequently pays for itself through reduced operating costs (unless you opt for more processing power at equal operating cost instead).
See https://lkml.org/lkml/2025/4/25/409
RHEL 8 is still supported, and Ubuntu still targets baseline x86_64, I believe, as far as commercial distros go. Not sure about SUSE.
Deprecated for Debian
https://www.debian.org/releases/stable/release-notes/issues....
> Deprecated for Debian
> https://www.debian.org/releases/stable/release-notes/issues....
32 bit Linux is still supported by the kernel... and... 'Debian, Arch, and Fedora still supports baseline x86_64'.
Please do not take things out of context.
I would also like to know this.
Although I'm a little surprised to learn that the binary itself doesn't have enough information in its header to declare that it needs SSSE3 to be executed; that feels like something that should be statically analyzed and cached to avoid a lot of debugging headaches.
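For what it's worth, a fail-fast guard is easy to bolt on even without header metadata. A minimal sketch (nothing aapt2 or F-Droid actually ships, just an illustration) using GCC/Clang's __builtin_cpu_supports, with feature names as spelled in the GCC docs:

    /* isa_guard.c: refuse to proceed if the host CPU lacks the
       features the real tool was compiled for. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        __builtin_cpu_init();  /* populate feature bits before querying them */
        if (!__builtin_cpu_supports("ssse3") || !__builtin_cpu_supports("sse4.1")) {
            fprintf(stderr, "error: this tool requires SSSE3 and SSE4.1\n");
            return EXIT_FAILURE;
        }
        puts("CPU features OK");
        return EXIT_SUCCESS;
    }

Failing fast with a readable message beats dying with SIGILL halfway through a resource-compilation step.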
hobbyist dev? sure
Google? nope
Googlers aren't gods. It's a 100,000-person company; they're as vulnerable to "We didn't really think of that one way or the other" as anyone else.
ETA: It's actually not even Google code that changed (directly); Gradle apparently began requiring SSSE3 (https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153) and Google's toolchain just consumed the new constraint from its upstream.
Here, I'm not surprised at all; Google is not the kind of firm that keeps a test-lab of older hardware for every application they ship, so (particularly for their dev tooling) "It worked on my machine" is probably ship-worthy. I bet they don't even have an explicit architecture target for the Android build toolchain beyond the company's default (which is generally "The two most recent versions" of whatever we're talking about).
Does anyone know of plans to resolve this? Will F-Droid update their servers? Is Google looking into rolling back the requirement? (That last one sounds unlikely.)
To me, that's the worrying part.
Not that it's run by volunteers, but that all that's left between a full-on "tech monopoly" or hegemony and a free internet is small bands of underfunded volunteers.
Opposition to market dominance and monopolies by multibillion-dollar multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defeatism.)
Aside from that: "it's a volunteer-run community" shouldn't be put forward as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature. Something that makes it more resilient/better attuned/easier/earlier adopting/etc.
The EU is already home to many open-source contributors and companies. I like the Red Hat approach, where you are profitable but with open-source solutions. It's great for governments because you get support, but it's much easier to compete, which reduces prices.
Smaller companies also give more of their money to open source. Bigger companies can always fork it and develop it internally and can therefore pressure devs to do work for less. Smaller companies have to rely on the projects to keep going and doing it all in house would be way too expensive for most.
The Red Hat that was bought by IBM?
I agree with your goals, but the devil is in the methods. If we want governments to support open source, the appropriate method is probably a legislative requirement for an open source license + a requirement to fund the developer.
Always has been.
hogwash
It's just I think that FDroid is an important project, and hope this doesn't block their progress.
Definitely. An SSE4.1-capable CPU, just for building apps in 2025? No way!!
Apologies if I came across like that; here's what I'm trying to convey:
- Fdroid is important
- This sounds like a problem, not necessarily one that's any fault of fdroid
- Does anyone know of a plan to fix the issue?
For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?
Server hardware at the minimum v2 functionality can be found for a few hundred dollars.
A competent administrator with physical access could solve this quickly.
Take a ReaR image, then restore it on the new platform.
Where are the physical servers?
The minimum is now eight cores on a die for both AMD and Intel, so running a quad-core system means staying on 14nm. You may loudly criticize holding back with a quad-core system, but then you aren't paying $47,500 per core to license Oracle's enterprise database.
The eight core minimum is a huge detriment for commercial software that is licensed by core.
This, and this alone, shatters your argument. Any other questions?
Here's also a recent Xeon quad core [1]
Besides that, could you please show me where the F-Droid build server uses an Oracle database?
[0] https://www.amd.com/en/products/processors/server/epyc/4004-...
[1] https://www.intel.de/content/www/de/de/products/sku/236193/i...
For any software licensed by core count, modern systems are usually at a disadvantage.
Next question please.
Not even sure it's in the top 10
Low quality software tends to be popular among the general public because they're very bad at evaluating software quality.
Edit: searching online found this if anyone else is interested https://www.androidauthority.com/best-app-stores-936652/
And Oppo and Vivo too?
In both instances one company owns the other - why have competing app stores?
That's apparently what they did last time. From the ticket:
"Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. \n The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. \n This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"
Samsung Galaxy Store is much much bigger.
> Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3.[0]
I.e. the problem exists because F-Droid has older CPUs; newer ones would be able to build. I only mentioned it in terms of what the plans to fix might be. I have zero idea if upgrading servers is the best way to go.
[0] https://issuetracker.google.com/issues/438515318?pli=1
https://android.googlesource.com/platform/frameworks/base/+/...
Binaries everywhere. I tried to rebuild some of them from the available sources and noped the f out, because that breaks the build so badly it's ridiculous.
So much for "Open Source"
Also, you don't need to compile all of AOSP just to get the toolchain binaries.
If the code was written reasonably you can usually find enough clues to figure out where to start decoding, and thus get a reasonable assembly output, but even then you often need to restart the decoding several times, because the decoder can get confused at function boundaries depending on what other data gets embedded and where. Be glad self-modifying code went out of style in the 1980s and is mostly a memory today, as that will kill any disassembly attempt. All the other tricks that Mel used (https://en.wikipedia.org/wiki/The_Story_of_Mel) also make your attempts at lifting machine code to assembly impossible.
https://youtu.be/eunYrrcxXfw
Even my last, crazy long-in-the-tooth desktop supported this, and it lived to almost 10 years old before being replaced.
However at the same time, not even offering a fallback path in non-assembly?
There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)
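And when you do control the compiler, the fallback path can be generated for you. A minimal sketch (hypothetical function, not anything from aapt2) of GCC's target_clones attribute:

    #include <stddef.h>

    /* GCC emits one clone of this function per listed target, plus an
       ifunc resolver that picks the best clone at program load time. */
    __attribute__((target_clones("default", "sse4.1", "avx2")))
    void scale_bytes(unsigned char *p, size_t n, unsigned int factor) {
        for (size_t i = 0; i < n; i++)
            p[i] = (unsigned char)((p[i] * factor) >> 8);
    }

The source stays plain C; only the binary grows a few extra copies of the hot function.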
[0] https://en.wikipedia.org/wiki/AMD_10h
You could buy a newer one but I guess they have other stuff they have to pay for.
Wow, I just got into NewPipe/F-Droid. It's neat to think even a donation the size of mine can be almost individually meaningful :)
https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153
Not sure how long it will take to get resolved but that thread seems reassuring even if there isn't a direct source that it was fixed.
In the thread you linked to, people are mistaking a typo correction ("mas fixed" => "was fixed") for a claim that this new issue has been fixed.
The one that was fixed is this similar old issue from years ago: https://issuetracker.google.com/issues/172048751
If you want to build Buildroot or OpenWrt, the first thing it will do is compile its own toolchain (rather than reuse the one from your distro) so that it can produce predictable results. I would apply the same rationale to F-Droid: why not compile the whole toolchain from source rather than use a binary gradle/aapt2 that relies on unsupported instructions?
Does anyone know the numbers of build servers and the specs?
However the AMD CPUs did not implement it until Bulldozer, in mid 2011.
While they lacked the many additional instructions provided by Bulldozer, also including AVX and FMA, for many applications the older Opteron CPUs were significantly faster than the Bulldozer-based CPUs, so there were few incentives for upgrading them, before the launch of AMD Epyc in mid 2017.
SSE 4.1 is a cut point in supporting old CPUs for many software packages, because older CPUs have a very high overhead for divergent computations (e.g. with if ... else ...) inside loops that are parallelized with SIMD instructions.
There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt-hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity (500 W × 8,760 h = 4,380 kWh; 4,380 kWh × $0.1253/kWh ≈ $549). Even if power consumption dropped by half, that's only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible, as is the risk).
And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.
1: https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...
I was hit by this scenario in the 2000s with an old desktop PC, also in the 10-year range, that I was using just for boring stuff and random browsing; it was old but perfectly adequate for the purpose. With time, programs got rebuilt with some version of SSE it didn't support. When even Firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop PC, as it had become useless for the purpose.
It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.
> But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.
Everyone else then tries to work around him and, through a mixture of emotional appeals, downplaying the importance of certain patches, and doing everything in very tiny steps, tries to improve things. It's an extremely mentally draining process that's prone to burnout on the part of the contributors, which eventually boils over and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere, because the people you'd want to have it with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but this is an argument often given without putting forward a proper replacement or considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer; most can't.) Rinse and repeat that cycle every five years or so.
F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long running FOSS project that has significant infrastructure behind it will at some point have this issue and most haven't had a great history at handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy that knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the warning that they stop being 0.5 of a bus factor and become 0 if they do that while the maintainer is still around.)
[0]: Basically the inverse of https://xkcd.com/1172/
Then again who is to say that I would be a better custodian than this guy?
Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.
https://forum.f-droid.org/t/call-for-help-making-free-softwa...
When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”
I don't know why they enabled modern CPU flags for a simple intermediary tool that compiles the APK resource files; it was so unnecessary.
Welp, there go my plans of salvaging an old laptop to build my Android apps.
https://android.googlesource.com/platform/frameworks/base/+/...
If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.
No one would write Android apps on a Chromebook, and making it harder to do so would only reduce the incentive for companies to develop Android apps.
How could Google benefit from pushing a newer instruction set standard on Windows and macOS?
If they required a Google-specific Linux distro to build this thing or if they went the Apple route and added closed-source components to the build system, this could be seen as a move to mess with the competition, but this is simply a developer assuming that most people compiling apps have a CPU that was produced less than 15 years ago (and that the rest can just recompile the toolchain themselves if they like running old hardware).
With Red Hat and Oracle moving to SSE4.1 by default, the F-Droid people will run into more and more issues if they don't upgrade their old hardware.
This happened because nobody gives a shit about F-Droid, not because it's somehow a "threat" with unmaintained apps.
It seems you're suggesting a very specific, targeted attack.
Yes, just like it happened with Firefox: https://news.ycombinator.com/item?id=38926156
Several years ago I glumly opined internally that Firefox had two grim choices: abandon Gecko for Chromium, or give up any hope of being a meaningful player in the market. I am well aware that many folks (especially here) would consider the first of those choices worse than the second. It's moot now, because they chose the second, and Firefox has indeed ceased to be meaningful in the market. They may cease to exist entirely in the next five years.
I am genuinely unhappy about this. I was hired at Google specifically to work on Firefox. I was always, and still remain, a fan of Firefox. But all things pass. Chrome too will cease to exist some day.
> suspicions were plausible but incorrect
The suspicions were not about the evil will of the engineers. It's the will of Google itself (or managers, if you want), which plays the main role here. This is exactly what causes the following:
> engineering teams continued to be resource-constrained
It reminds me a bit of Boeing: https://news.ycombinator.com/item?id=19914838
Despite its size, Google does shoestring engineering of most things, which is why so much is deprecated over time -- there's never budget for maintenance.
So I mean in some sense yes, there's valid criticism of Google's "will" here, but that will was largely unaware of Firefox, and the consequences burned Google products and customers just as much or more in the long run. Nightingale looked past individual instances to see a pattern, but didn't continue to scale the pattern up to first-party products as well.
The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow
And neither will anyone else who has time to complain about planned obsolescence, and that includes myself.
If you want "very different" then look at the record-based filesystems used in mainframes.
Aapt2 is an x86_64 standalone binary used to build Android APKs for various CPU targets.
Previous versions of it used a simpler instruction set, but the new version requires extra SIMD instructions (SSE4.1). A lot of CPUs after 2008 support this, but not F-Droid's current server farm?
So a bit of both older hardware and not-matched-with-consumer-featureset hardware. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most, too.
Using AMD hardware that's "only" 13 years old can also cause this problem, though.
I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.
Also, for developers... please include old-fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.
https://github.com/cygnusx-1-org/Discoverium/
Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.
Edit: or perhaps you mean that isn’t the only way to provide such guarantees, which is the implication I got reading your other replies.
Hardly any different from what was in the genesis of .NET.
Nowadays they support up to a Java 17 LTS subset, as usual, mostly because Android was being left behind in accessing the Java ecosystem on Maven Central.
And even though ART is now updatable via the Play Store, all the way down to Android 12, they see no need to move beyond the Java 17 subset until, most likely, they start missing out on key libraries that adopt newer features.
Also, don't count on stuff like Panama, Loom, Vector, or Valhalla (if ever) being supported on ART.
At least they managed to push into the mainstream the closest thing to OSes like Oberon, Inferno, JavaOS and co., where, regardless of what one thinks about the superiority of UNIX clones, they have to content themselves with a managed userspace, something that Microsoft failed at with Longhorn, Singularity and Midori due to internal politics.
Is Google buying JetBrains?
Also, the Kotlin Foundation is mostly JetBrains and Google employees.
Modern Android has virtual machines on devices with supported hardware+bootloader+kernels: https://source.android.com/docs/core/virtualization
And yes, you could get that cost down easily.
(A server that old might not have any SSDs, which would be insane for a software build server unless it was doing everything in RAM.)
I still maintain old servers, and even my Amiga server has an SSD.
We don't know for sure the servers don't have SSDs, but we do know that back in the days of server hardware that didn't support SSE4.1, SSDs had not yet displaced hard drives for mainstream storage, so it's likely that servers that old didn't originally ship with SSDs. It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
A server at that age is also going to be harder to repair when something dies, and it's due for something to die. If they lose a PSU it might be cheaper to replace the whole system with something a bit less old. Other components they'd have to rely on replacing with something used, from a different manufacturer than the original, or use a newer generation component and hope it's backwards compatible. Hence why I said using hardware that old would imply their infrastructure is fragile.
But all of this is still just speculation because nobody involved with F-Droid has actually explained what specific hardware they're using, or why. So I'm still not convinced that the possibility of a misconfigured hypervisor has been ruled out.
You lost me there. One thing has nothing to do with the other.
People have reasons for running the hardware they run. Do you know their reasons? If you do, please share. If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.
Is my AlphaServer DS25 unmaintained? It's very old server hardware.
Is my 1981 Chevette unmaintained? It's REALLY old. Can you infer from the fact that I have a car from 1981 that it's unmaintained? I'd say that reasonable people can infer that it's definitely maintained, since it would most likely not still be running if it weren't.
> It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
I don't know where you learned about servers, but no, it's not a weird choice to use newer storage in older servers. Not at all. Not even a little bit. Maybe you've worked somewhere that bought Dell servers with storage and trashed the servers when storage needing upgrading, but that's definitely not normal.
See, this is just you being unreasonable.
Yes, we can all imagine why people might keep old hardware around. But your AlphaServer is at best your hobby, not production infrastructure that lots of people and other projects rely on. Nobody's noticing whether or not it crashes. Likewise for your Chevette: nobody cares until it stalls out in traffic, then everyone around you will make the reasonable assumption that it's behind on maintenance.
If F-Droid is indeed using ancient hardware, and repeatedly experiencing software failures as a result, then the most likely explanation is that their infrastructure is inadequately maintained. Sure, it's not a guarantee, it's not the only possibility, but it's a reasonable assumption to work with until such time as someone from F-Droid explains what the hell is going on over there. And if there's nobody available to explain what their infrastructure is and why it is showing symptoms of being old and unmaintained, that's more evidence for this hypothesis.
[0] https://www.ebay.com/str/evolutionecycling
the point of my post still stands
Intel ME is not a feature for the user; it is intended to control any modern CPU except the ones going to the US Army/Navy. It is needed to make Stuxnet-class attacks. The latest chip where the ME can provably be disabled is the 3rd gen.
Many ARM vendors sell powerful ARM computers without any ME analog on board.
> It is needed to make Stuxnet-class attacks.
I have issues with the presence of the ME and I think we agree on a lot of things, but this statement is lunacy lol
Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.
Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.
Genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of it.
I'm sure there's someone out there who believes their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.
At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?
"A tiny group is holding back everyone" is another silly strawman argument - all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling just another binary and putting it into a package. Nobody is being hold back by anyone, you just can't make a more silly argument than that...
Your suggestion falls flat on its face when you look at software where performance REALLY matters: ffmpeg. Guess what? It'll use SIMD, but it can compile and run just fine without it (see the sketch below this comment).
I don't understand people who make things up when it comes to telling others why something shouldn't be done. What's it to you?
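For anyone curious what that runtime selection looks like: a minimal sketch of the general pattern (not ffmpeg's actual code; the function names are made up), choosing an implementation once at startup:

    #include <stdio.h>

    static void sum_scalar(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++) out[i] = a[i] + b[i];
    }

    /* Same C body, but the compiler may vectorize it with AVX2. Only
       ever called after we've confirmed the CPU supports AVX2. */
    __attribute__((target("avx2")))
    static void sum_avx2(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++) out[i] = a[i] + b[i];
    }

    static void (*sum_impl)(const float *, const float *, float *, int) = sum_scalar;

    int main(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            sum_impl = sum_avx2;
        float a[] = {1, 2, 3, 4}, b[] = {4, 3, 2, 1}, out[4];
        sum_impl(a, b, out, 4);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }

One baseline build, and the fast path lights up only on hardware that can run it.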
https://wiki.debian.org/InstructionSelection
There are some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal, the lowest set of features includes SSE4.1, which basically covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we'd only need to distribute one set of binaries (there's a probe sketch after the links below). This would of course be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.
[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...
[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
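As a sketch of how you'd probe those levels on a given host: if I remember correctly, GCC 11+ accepts the microarchitecture-level names directly in __builtin_cpu_supports:

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        /* each level implies all of the previous ones */
        printf("x86-64    : %s\n", __builtin_cpu_supports("x86-64")    ? "yes" : "no");
        printf("x86-64-v2 : %s\n", __builtin_cpu_supports("x86-64-v2") ? "yes" : "no");
        printf("x86-64-v3 : %s\n", __builtin_cpu_supports("x86-64-v3") ? "yes" : "no");
        printf("x86-64-v4 : %s\n", __builtin_cpu_supports("x86-64-v4") ? "yes" : "no");
        return 0;
    }

F-Droid's build boxes would presumably print "no" from v2 onward, since SSSE3 and SSE4.1 are part of the v2 level; that's exactly the problem.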
This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.
FYI, there are plenty of methods of selecting code at run time, too.
If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?
That's testing it on the new CPU, not the old one.
> You can run that software inside of qemu.
Sure you can. Go ahead. Why should the maintainer be expected to do that?
> A bit ridiculous, don't you think?
Not at all. It's ridiculous to expect a software developer to give any significance to compatibility with obsolete platforms. I'm not saying we shouldn't try. x86 has good backward compatibility. If it still works, that's good.
But if I implement an algorithm in AVX2, should I also be expected to implement a slower version of the same algorithm using SSE3 so that a 20 year old machine can run my software?
You can always run an old version of the software, and you can always do the work yourself to backport it. It's not my job as a software developer to be concerned about ancient hardware unless someone pays me specifically for that.
Would you expect Microsoft to ship Windows 12 with baseline compatibility? I don't know if it does, but I'm pretty certain that if you tried running it on a 2005 CPU it would be pretty much non-functional, as performance would be dire. I doubt it's baseline anyway, due to UEFI requirements that wouldn't be met on a machine running such a CPU.
It's not that hard to use Gentoo.
It's fine to ship binaries with hard-coded CPU flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.
/s (should be obvious but probably not for this audience)
https://wiki.debian.org/InstructionSelection
Guess what the company behind Android wants to do...
For example, if they published their exact setup for building Android apps so others could replicate it.
How many Android users compile the apps they use themselves?
Perhaps increasing that number would be a goal worth pursuing.
Although they do have build servers for the purpose of confirming upstream APKs match the source code using reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).
IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.
https://wiki.debian.org/InstructionSelection
That's a problem that people are trying to solve by not using an ancient CPU baseline. Do you have a reasonable proposal for how else we should enable widespread use of hardware functionality that's less than two decades old?
Seems like he is talking about the developer being responsible for that also!
Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"
This means their build infrastructure burns excessive amounts of power, being run by volunteers in basements/homelabs on vintage, museum-grade hardware (15-year-old Opterons/Phenoms).
Gamers have been there before, with 'No Man's Sky' being the first big game requiring SSE 4.1 for no particular reason.
On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD and a 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.
What are we missing here, besides that the build farm was left to decay?
In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.
> they are just "it used to work perfectly" guys.
To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.
The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.
There are newer Coreboot boards than the Opteron ones, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned this is permanent and irreversible.
F-Droid likely has upgrade options even in the all-open scenario.
Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s
Are there any x86 tablets running Android?
You need a middleman in place in case the app developer goes bad.
https://apps.obtainium.imranr.dev/
They put the disclaimer on top that this list is not meant as an app store or catalog. It's meant for apps with somewhat complex requirements for adding to Obtainium. But it serves well as a catalog since most of the major open source apps are listed.
Our Dutch news (and I think most EU news) is pretty much presenting us with the view that Israel has lost it (stories about young men searching for food being shot in the genitals for fun and such [0]), so I'm very curious how their government presents things to its civilians.
[0] https://nos.nl/nieuwsuur/artikel/2575933-beschietingen-bij-z...
There are also fully English sources like the Times of Israel, though it has sort of an international audience, not only Israelis.
For example, whether or not you're aware that the banking system is collecting interest on all the money in the world, every second of the day, and it created it all out of thin air.
Believe what? A fact that is being actively documented in Gaza by NGOs and corroborated by numerous news agencies internationally?
This is all coming across as dishonest (especially when looking at your own homepage).
But in any case, this is a false dichotomy, and likely an exaggerated one to begin with.
The tools in question in the OP should be easy to build from source and shouldn't rely on the host's architecture, so as to be usable on platforms like ARM and RISC-V. It's clear that in the Android ecosystem people don't care, so F-Droid can't do miracles (the Java/Gradle ecosystem is just really bad at this), but this would not happen if the build tools had proper build recipes themselves.
Yup, same here! The story is as old as time, and the examples are plentiful. First Slashdot, then Reddit, and now GitHub: all became far, far, far slower and less usable once they'd been "improved" by the folk engaging in resume-driven development:
Why is GitHub UI getting slower? - https://news.ycombinator.com/item?id=44799861 - Aug 2025 (115 comments)
I am, too, as a user, quite pleased that F-Droid is keeping it cool and reliable for the actual users.
What an entitled conclusion.