Isn't ATProto just a compromised version of ActivityPub, basically designed as an excuse to force all users into a data-mining firehose structure like Twitter used to have, only with no privacy features and no federation for moderation controls?
I am afraid that gatekeeping is partially essential and somewhat desirable: as an academic you don't have time to read everything, and some sort of quick signal, albeit very flawed, can be useful to stop wasting time reading crappy science. If you don't gatekeep, you will get a lot of crappy papers, or papers that all say the same thing, and it will waste more time for people who want a quick sense of the state of a topic or field from quality work. An open voting system would be easily abused, so it will end up requiring trust in a select service of peer reviewers or agencies, especially when a paper includes a lot of experiments and figures that can be complicated or overwhelming. What do you think?
I'm inclined to agree, and yet the past decade of ML on arXiv moving at a breakneck pace seems to be a counterexample. In that case I observe citation "bubbles", where I can follow one good paper up and down the citation graph to find others.
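To make the "bubble" idea concrete, here is a minimal sketch in Python; the paper IDs and edges are invented for illustration (real ones would come from arXiv metadata or similar). It just follows citations in both directions from one good seed paper and collects everything within a couple of hops:

    from collections import deque

    # Hypothetical toy citation graph: paper -> papers it cites.
    CITES = {
        "good-paper": ["method-A", "method-B"],
        "method-A": ["foundation"],
        "method-B": ["foundation"],
        "follow-up": ["good-paper"],
        "survey": ["good-paper", "method-A"],
    }

    # Reverse edges: paper -> papers that cite it ("up" the graph).
    CITED_BY = {}
    for src, dsts in CITES.items():
        for dst in dsts:
            CITED_BY.setdefault(dst, []).append(src)

    def citation_bubble(seed, max_hops=2):
        """Papers reachable within max_hops by following citations
        in both directions from one good seed paper."""
        seen = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            paper, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for nbr in CITES.get(paper, []) + CITED_BY.get(paper, []):
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, hops + 1))
        return seen

    print(citation_bubble("good-paper"))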
I think for smaller software papers or ML on arXiv this might work. For larger papers in biomedicine or hard tech, I think it is much less likely. I struggle to keep up with bioRxiv.org as a medical professional; many articles would require 2+ hours to review confidently, and I would never trust a "public" review algorithm or network to necessarily do a great job of that. If you get weekly updates on your topic area, you might see 100 papers a week, of which 90 are likely poor quality, but who is going to review these? Definitely not me; I cannot judge 100 papers a week. Granted, probably only 1 or 2 are directly relevant to your work, but even then the time sink is annoying. It is nice if a publisher has done some initial quality check and made sure the written text is succinct, direct, validated, and backed up by well-presented data, figures, and methods. Even if a totally open social network for upvoting and reviewing papers exists, I am afraid the need for these publishers will still be there; they will just exist regardless, and academics will still prefer them.
Three to five experts specifically asked to review a paper in a controlled environment, versus thousands of random scientists or members of the public (who might be motivated by financial, malicious, or other reasons), is probably still the better option. Larger, technically impressive multi-disciplinary papers with 20+ authors are basically impossible to review as individuals; you want a few experts on the main methods to review it together, in harmony, with oversight from a reputable vendor or publisher. Such papers are also increasingly common in any biotech or hard-tech field.
And it is infamously insecure, full of spam, and struggles with attachments beyond 10 MB.
So thank you for bringing it up; it showcases well that a distributed system is not automatically a good distributed system, and why you want encryption, cryptographic fingerprints, and cryptographic provenance tracking.
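To make "fingerprints and provenance tracking" concrete, here is a rough Python sketch (the record fields are made up for illustration, not any real protocol): each revision of a document commits to the hash of the previous one, so tampering anywhere upstream breaks every later fingerprint:

    import hashlib
    import json

    def fingerprint(record):
        """Deterministic SHA-256 fingerprint of a record."""
        blob = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def append_revision(chain, content, author):
        """Append a revision that commits to the fingerprint of the
        previous one, so altering any earlier record invalidates
        every fingerprint after it."""
        prev = fingerprint(chain[-1]) if chain else None
        chain.append({"content": content, "author": author, "prev": prev})

    chain = []
    append_revision(chain, "v1 of the paper", "alice")
    append_revision(chain, "v2: fixed figure 3", "alice")

    # Verify provenance: recompute each link in order; a mismatch
    # means the recorded history was tampered with.
    for i in range(1, len(chain)):
        assert chain[i]["prev"] == fingerprint(chain[i - 1])
    print("provenance intact up to", fingerprint(chain[-1])[:16])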
And yet, it is a constantly used decentralized system that does not require content addressing, as you mentioned. You should elaborate on why we need content addressing for a decentralized system instead of saying "10MiB limit + spam lol email fell off". Contemporary usage of the technologies you've mentioned doesn't seem to do much to reduce spam (see IPFS, which has hard content addressing). Please, share more.
Has a bit of a leg up in that if it's only academics commenting, it would probably be way more usable than typical social media, maybe even outright good.
Calling it peer review suggests gatekeeping. I suggest no gatekeeping: just let any academic post a review, maybe with upvotes/downvotes, and let crowdsourcing handle the rest.
While I appreciate no gatekeeping, the other side of the coin is gatekeeping via bots (vote manipulation).
Something like Rotten Tomatoes could be useful: have a list of "verified" users (critic score) in a separate voting column from anonymous users (audience score).
This would often prove useful in highly controversial situations for parsing common narratives.
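A rough sketch of what those split tallies could look like (Python; the class and method names are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class SplitScores:
        """Rotten-Tomatoes-style tallies: verified reviewers (critic
        score) are counted separately from anonymous voters (audience
        score), so the two columns can be compared when they diverge."""
        verified_up: int = 0
        verified_down: int = 0
        anon_up: int = 0
        anon_down: int = 0

        def vote(self, up, verified):
            if verified:
                if up:
                    self.verified_up += 1
                else:
                    self.verified_down += 1
            elif up:
                self.anon_up += 1
            else:
                self.anon_down += 1

        @staticmethod
        def _pct(up, down):
            return 100.0 * up / (up + down) if up + down else 0.0

        def critic_score(self):
            return self._pct(self.verified_up, self.verified_down)

        def audience_score(self):
            return self._pct(self.anon_up, self.anon_down)

    s = SplitScores()
    s.vote(up=True, verified=True)
    s.vote(up=False, verified=False)
    print(s.critic_score(), s.audience_score())  # 100.0 0.0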
Yes, publishing is broken, but academics are the last people to jump onto platforms... they never left email. If you want to change the publishing game, turn publishing into email.
If not, same handle over there, I can get you in touch with them. Or hit up Boris, he knows everyone and is happy to make connections
There's also a full day at the upcoming conference on ATProto and science-related things. I think they're on Discourse more (?)
That'll get us connected off HN
I think Cosmik is the group I was thinking of that has also put out an initial PoC like yours
https://discourse.atprotocol.community/t/about-the-atproto-s...
You need content addressing and cryptographic signatures for that.
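A minimal sketch of both, assuming Python and the pyca/cryptography library (the paper bytes are a stand-in; this is an illustration, not a real publishing protocol): the hash gives a stable content address, and an Ed25519 signature binds those exact bytes to an author key:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    paper = b"full text of the preprint"  # stand-in content

    # Content address: the document's identity is the hash of its
    # bytes, so a link can never silently point at altered content.
    address = hashlib.sha256(paper).hexdigest()

    # Signature: binds those exact bytes to the author's key, which
    # plain email does not give you by default.
    key = Ed25519PrivateKey.generate()
    signature = key.sign(paper)

    # Anyone holding the public key can verify; verify() raises
    # InvalidSignature if either the bytes or signature changed.
    key.public_key().verify(signature, paper)
    print("address:", address[:16], "... verified")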
Theirs? (Personally, I think not.)