So instead of scraping IA once, the AI companies will use residential proxies and each scrape the site themselves, costing the news sites even more money. The only real loser is the common man who doesn't have the resources to scrape the entire web himself.
I've sometimes dreamed of a web where every resource is tied to a hash, which can be rehosted by third parties, making archival transparent. This would also make it trivial to stand up a small website without worrying about it getting hug-of-deathed, since others would rehost your content for you. Shame IPFS never went anywhere.
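The core idea is small enough to sketch. Here's a toy Python illustration of content addressing, where the "address" is just the SHA-256 of the bytes, so any untrusted mirror can serve a copy and the client can verify it wasn't tampered with; real systems like IPFS layer CIDs and a DHT on top of this, and the mirror interface below is purely made up:

    import hashlib

    def address_of(content: bytes) -> str:
        # The "URL" is derived from the content itself, not from whoever hosts it.
        return hashlib.sha256(content).hexdigest()

    def verified_fetch(address: str, mirrors) -> bytes:
        # Try untrusted mirrors in turn; accept the first copy whose hash matches.
        for fetch in mirrors:                  # each mirror is modeled as a callable here
            content = fetch(address)
            if content is not None and address_of(content) == address:
                return content
        raise LookupError("no mirror returned a copy matching the hash")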
Weird, considering IA has most of its content in a form that could be rehosted. I don't know why nobody's hosting an IA carbon copy that AI companies can hit endlessly, cutting IA a nice little check in the process, but I guess some of the wealthiest AI startups are very frugal about training data?
This also goes back to something I said long ago: AI companies are relearning software engineering, poorly. I can think of so many ways to speed up AI crawlers; I'm surprised someone being paid 5x my salary cannot.
That already exists, it's called Common Crawl[1], and it's a huge reason why none of this happened prior to LLMs coming on the scene, back when people were crawling data for specialized search engines or academic research purposes.
The problem is that AI companies have decided they want instant access to all data on Earth the moment it becomes available somewhere, and they have the infrastructure to actually try to make that happen. So they're ignoring signals like robots.txt, and not even checking whether the data is actually useful to them (they're not getting anything helpful out of recrawling the same search-results pagination in every possible permutation, but that won't stop them from trying, and knocking everyone's web servers offline in the process), courtesies that even the most aggressive search engine crawlers observed. They just bombard every single publicly reachable server with requests on the off chance that some new data fragment becomes available and they can ingest it first.
This is also, coincidentally, why Anubis is working so well. Anubis kind of sucks, and in a sane world where these companies had real engineers working on the problem, they could bypass it on every website in just a few hours by precomputing tokens.[2] But...they're not. Anubis is actually working quite well at protecting the sites it's deployed on despite its relative simplicity.
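(For context: Anubis gates page loads behind a small SHA-256 proof-of-work challenge. A minimal sketch of that kind of scheme, with the challenge string and difficulty as placeholders rather than Anubis's actual parameters, and exactly the sort of work [2] points out could be precomputed at scale:)

    import hashlib

    def solve_challenge(challenge: str, difficulty: int = 4) -> int:
        # Find a nonce so that sha256(challenge + nonce) starts with `difficulty` hex zeros.
        target = "0" * difficulty
        nonce = 0
        while not hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith(target):
            nonce += 1
        return nonce

    # Cheap, stateless, and embarrassingly parallel, which is why precomputation is feasible.
    print(solve_challenge("placeholder-challenge", difficulty=4))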
It really does seem to indicate that LLM companies want to just throw endless hardware at literally any problem they encounter and brute force their way past it. They really aren't dedicating real engineering resources towards any of this stuff, because if they were, they'd be coming up with way better solutions. (Another classic example is Claude Code apparently using React to render a terminal interface. That's like using the space shuttle for a grocery run: utterly unnecessary, and completely solvable.) That's why DeepSeek was treated like an existential threat when it first dropped: they actually got some engineers working on these problems, and made serious headway with very little capital expenditure compared to the big firms. Of course the big firms started freaking out: their whole business model is based on the idea that burning comical amounts of money on hardware is the only way we can actually make this stuff work!
The whole business model backing LLMs right now seems to be "if we burn insane amounts of money now, we can replace all labor everywhere with robots in like a decade", but if either of those things turns out not to be true (the tech can be improved without burning hundreds of billions of dollars, or the tech ends up being unable to replace the vast majority of workers), all of this is going to fall apart.
Their approach to crawling is just a microcosm of the whole industry right now.
[1]: https://en.wikipedia.org/wiki/Common_Crawl
[2]: https://fxgn.dev/blog/anubis/ and related HN discussion https://news.ycombinator.com/item?id=45787775
Thanks for the mention of Common Crawl. We do respect robots.txt and we publish an opt-out list, due to the large number of publishers asking to opt out recently.
yeah, they should really have a think about how their behavior is harming their future prospects here.
Just because you have infinite money to spend on training doesn't mean you should saturate the internet with bots looking for content with no constraints, even if that is a rounding error in your costs.
We just put heavy constraints on our public sites blocking AI access. Not because we mind AI having access - but because we can't accept the abusive way they execute that access.
The main issue is that a well-behaved AI company won't be singled out for continued access; they will all be hit when public sites block AI access. So there is no benefit to them behaving.
Something I’ve noticed about technology companies, and it’s bled into just about every facet of the US these days, is the consideration of whether an action *can* be taken versus whether it *should* be.
It’s very unfortunate and a short-sighted way to operate.
> The AI companies won't just scrape IA once; they keep coming back to the same pages and scraping them over and over, even if nothing has changed.
Why, though? Especially if the pages are new; aren't they concerned about ingesting AI-generated content?
Possibly because a lot of “AI-company scraping” isn't traditional scraping (e.g., to build a dataset of the state at a particular point in time); it's referencing the current content of the page as grounding for the response to a user request.
> The AI companies won't just scrape IA once; they keep coming back to the same pages and scraping them over and over, even if nothing has changed.
Maybe they vibecoded the crawlers. I wish I were joking.
It's been several years, but in my experiments it felt plenty fast if I prefetched links at page load time so that they're already local by the time the user actually tries to follow them (sometimes I'd do this out to two hops).
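Roughly the idea, as a plain-HTTP Python sketch rather than anything IPFS-specific (the regex link extraction, the fan-out cap, and the hop count are just illustrative):

    import re
    from urllib.parse import urljoin
    from urllib.request import urlopen

    cache: dict[str, bytes] = {}

    def fetch(url: str) -> bytes:
        if url not in cache:
            cache[url] = urlopen(url, timeout=10).read()
        return cache[url]

    def prefetch(url: str, hops: int = 2) -> None:
        # Fetch the page itself, then warm the cache for everything it links to,
        # recursing out to `hops` levels so following a link later is a local hit.
        html = fetch(url).decode("utf-8", errors="replace")
        if hops == 0:
            return
        for link in re.findall(r'href="([^"#]+)"', html)[:20]:   # cap fan-out per page
            try:
                prefetch(urljoin(url, link), hops - 1)
            except OSError:
                pass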
I think it "failed" because people expected it to be a replacement transport layer for the existing web, minus all of the problems the existing web had, and what they got was a radically different kind of web that would have to be built more or less from scratch.
I always figured it was a matter of the existing web getting bad enough, and then we'd see adoption improve. Maybe that time is near.
They already are. I've been dealing with Vietnamese and Korean residential proxies destroying my systems for weeks, and I'm growing tired. I cannot survive 3500 RPS 24/7.
> I've sometimes dreamed of a web where every resource is tied to a hash, which can be rehosted by third parties, making archival transparent. This would also make it trivial to stand up a small website without worrying about it getting hug-of-deathed, since others would rehost your content for you. Shame IPFS never went anywhere.
You've just described Nostr: content that is tied to a hash (so its origin and authenticity can be verified) and hosted by third parties (or by yourself, if you want).
> So instead of scraping IA once, the AI companies will use residential proxies and each scrape the site themselves, costing the news sites even more money.
News websites aren’t like those labyrinthine cgit-hosted websites that get crushed under scrapers. If 1,000 different AI scrapers hit a news website every hour, that’s well under one request per second combined; it wouldn’t even make a blip on the traffic logs.
Also, AI companies are already scraping these websites directly in their own architecture. It’s how they try to stay relevant and fresh.
I don’t believe residential proxies ("resips") will be with us for long, at least not to the extent they are now. There is pressure and there are strong commercial interests against the whole thing. I think the problem will solve itself in some part.
Also, I always wonder about Common Crawl:
Is there something wrong with it? Is it badly designed? What is it that all the trainers cannot find there, so that they need to crawl our sites over and over again for the exact same stuff, each on their own?
Many AI projects in academia or research get all of their web data from Common Crawl -- in addition to the many non-AI uses of our dataset.
The folks who crawl more appear to mostly be folks who are doing grounding or RAG, and also AI companies who think that they can build a better foundational model by going big. We recommend that all of these folks respect robots.txt and rate limits.
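For anyone wondering what "respect robots.txt and rate limits" amounts to in practice, a bare-bones polite fetcher is only a few lines of standard-library Python; the user agent string and delay below are placeholder choices, not Common Crawl's actual settings:

    import time
    import urllib.robotparser
    from urllib.parse import urlparse
    from urllib.request import Request, urlopen

    USER_AGENT = "example-research-bot/0.1"
    _last_hit: dict[str, float] = {}

    def polite_fetch(url: str, min_delay: float = 5.0) -> bytes | None:
        host = urlparse(url).netloc
        robots = urllib.robotparser.RobotFileParser(f"https://{host}/robots.txt")
        robots.read()
        if not robots.can_fetch(USER_AGENT, url):
            return None                        # the site said no; stop here
        # At most one request per host every `min_delay` seconds.
        wait = min_delay - (time.monotonic() - _last_hit.get(host, 0.0))
        if wait > 0:
            time.sleep(wait)
        _last_hit[host] = time.monotonic()
        return urlopen(Request(url, headers={"User-Agent": USER_AGENT}), timeout=30).read()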
AI companies are _already_ funding and using residential proxies. Guess how many of those proxies are acquired through compromised devices or by tricking people into installing apps?
We don’t lack the technology to limit scrapers; sure, it’s an arms race with AI companies that have more money than most. Why can’t this be a legal block through the TOS?
I maintain an open-source project called Linkwarden, and this exact discussion is one of the reasons why it exists: teams needed a way to preserve referenced URLs reliably without having to depend on external services.
It stores webpages in multiple formats (HTML snapshot, screenshot, PDF snapshot, and a fully dedicated reader view) so you’re not relying on a single fragile archive method.
There’s both a hosted cloud plan [1] which directly supports the project, and a fully self-hosted option [2], depending on how much control you need over storage and retention.
Linkwarden is awesome, and with the SingleFile extension it's pretty easy to store things you can see but that the scraper gets blocked on.
One question: what's your stance on adding a way to mark articles as read or "archive" them, like other apps that are branded a bit more as read-it-later tools? You can technically do something similar with tags, but it's a bit of a clunky UX.
Thanks! At the moment we’re focused on archiving rather than read-later workflows, but this is great feedback. I’ve already added it to the feature requests list.
> with the SingleFile extension it's pretty easy to store things you can see but that the scraper gets blocked on
FWIW, at least on iOS, it's possible to inject JavaScript into the website currently displayed by Safari as a side effect of sharing a web link to an app via the share sheet.
Several "read it later" style apps use this successfully to get around paywalls (assuming you've paid yourself) and other robot blockers. Any plans for Linkwarden to do this (or does it already)?
Does it just POST the URL to them for them to fetch? Or is there any integration/trust mechanism to store what you already fetched on the client directly in their archives?
It affects science too (and there you'd want solid archiving as much as possible). Increasingly, meta-data is full of errors and general purpose search engines for science are breaking down, including even things like Google Scholar. I suppose some big science publishers are blocking AI bots too.
Did Google ruin it, or did adversarial activity between Google's algorithm and SEO ruin it? The latter seems more likely, because the incentives make sense, and it was probably inevitable.
It was. Advertising is incompatible with accurate data retrieval/routing. We've also implemented "obligation to deindex". So providing an unbiased index of the web as she is is essentially (in the U.S.) verboten.
> I suppose some big science publishers are blocking AI bots too.
That's a travesty, considering that a huge chunk of science is public-funded; the public is being denied the benefits of what they're paying for, essentially.
So the solution is to allow the AI scraping and hide the content, with significantly reduced fidelity and accuracy and not in the original representation, in some language model?
If it's publicly funded, why shouldn't AI crawlers have access to that data? Presumably those creating the AI crawlers paid taxes that paid for the science.
> If I build a business based off of consumption of publicly funded data, and that’s okay, why isn’t it okay for AI?
Because when you build it you aren't, presumably, polling their servers every fifteen minutes for the entire corpus. AI scrapers are currently incredibly impolite.
Publishers like The Guardian and NYT are blocking the IA/Wayback Machine. 20% of news websites are blocking both IA and Common Crawl. As an example, https://www.realtor.com/news/celebrity-real-estate/james-van... is unarchivable, with IA being 429ed while the site is accessible otherwise.
And whilst the IA will honour requests not to archive/index, more aggressive scrapers won't, and will disguise their traffic as normal human browser traffic.
So we've basically decided we only want bad actors to be able to scrape, archive, and index.
> we've basically decided we only want bad actors to be able to scrape, archive, and index
AI training will be hard to police. But a lot of these sites inject ads in exchange for paywall circumvention. Just scanning Reddit for the newest archive.is or whatever should cut off most of the traffic.
I'm part of that small but (hopefully) growing percentage, because Common Crawl is a deeply dishonest front for AI data scraping. Quoting Wikipedia:
"""
In November 2025, an investigation by technology journalist Alex Reisner for The Atlantic revealed that Common Crawl lied when it claimed it respected paywalls in its scraping and requests from publishers to have their content removed from its databases. It included misleading results in the public search function on its website that showed no entries for websites that had requested their archives be removed, when in fact those sites were still included in its scrapes used by AI companies.
"""
My site is CC-BY-NC-SA, i.e. non-commercial and with attribution, and Common Crawl took a dubious position on whether fair use makes that irrelevant. They can burn.
Hopefully my site is no longer part of Common Crawl. I'm not interested in participating in your project, block CCBot in robots.txt, and have requested deletion of my data via your form.
Did you see our reply? Edit: by which I mean, we sent you an email that explains what we did and how to verify it. Did you not receive an email reply? If not, please contact us again.
Also, if your site has CC-BY-NC-SA markings, we have preserved them.
Presumably someone has already built this and I'm just unaware of it, but I've long thought some sort of crowd sourced archival effort via browser extension should exist. I'm not sure how such an extension would avoid archiving privileged data though.
In particular, habeas petitions against DHS, and SSA appeals aren’t available online for public inspection: you have to go to a clerk’s office and pay for physical copies. (I think this may have been reasonable given the circumstances in past decades… not so now.)
I feel like a government funded search engine would resolve a lot of the issues with the monetized web.
The purpose of a search engine is to display links to web pages, not the entire content. As such, it can be argued it falls under fair use. It provides value to the people searching for content and those providing it.
However, we left such a crucially important public utility in the hands of private companies, which changed their algorithms many times in order to maximize their profits rather than the public good.
I think there needs to be real competition, and I am increasingly becoming certain that the government should be part of that competition.
Both "private" companies and "public" governement are biased, but are biased in different ways, and I think there is real value to be created in this clash. It makes it easier for individuals to pick and choose the best option for themselves, and for third independent options to be developed.
The current cycle of knowledge generation is academia doing foundational research -> private companies expanding this research and monetizing it -> nothing. If the last step were expanded to the government providing a barebones but usable service to commoditize it, years after private companies have been able to reap immense profits, then the capabilities of the entire society are increased. If the last step is prevented, then the ruling companies turn to rent-seeking and sitting on their laurels, and turn from innovating to extracting.
> However, we left such a crucially important public utility in the hands of private companies, which changed their algorithms many times in order to maximize their profits rather than the public good.
No one "left" a crucially important public utility in the hands of private companies. Private companies developed the search engine themselves in the late 90s in the course of doing for-profit business; and because some of them ended up being successful (most notably Google), most people using the internet today take the availability of search engines for granted.
We can start by forcing sites to treat crawlers equally. Google's main moat is less physical infrastructure or the algorithms, and more that sites allow only Google to scrape and index them.
They can charge money for access or disallow all scrapers, but it should not be allowed to selectively allow only Google.
It's not like only allowing Google actually means that only Google is allowed forever. Crawlers are free to make agreements with sites to let themselves crawl more easily, or to pretend they are regular users to bypass whatever block is in place.
The government having the power to curate access to information seems bad. You could try to separate it as an independent agency, but as the current US administration is showing, that’s not really a thing.
And in a world where running a Google-like search engine is just one of the many jobs the US federal government has, why shouldn't how the government runs that search engine be a national-level political question decided by elections, just like the management of all the other things the US federal government does is? Regardless of how the government curated access to information, a huge chunk of the US electorate would be mad about how they were doing it, reflecting very real polarization among the population.
The idea is that the government is biased towards hiding certain information and private companies are biased towards hiding a different set.
While unlikely, the ideal would be for the government to provide a foundational open search infrastructure that would allow people to build on it and expand it to fit their needs, in a way that is hard to do when a private company eschews competition and hides its techniques.
Perhaps it would be better for there to be a sanctioned crawler funded by the government, that then sells the unfiltered information to third parties like google. This would ensure IP rights are protected while ensuring open access to information.
I'm feeling it. Addressing the other reply: zero moderation or curation, and zero shielding from the crawler, if what you've posted is on a public network. Yes, users will be able to access anything they can think of. And the government will know. I think you don't have to worry about them censoring content; they'll be perfectly happy to know who's searching for CSAM or bomb-making materials. And if people have an issue with what the government does with this information (for example, charging people who search for things the Tangerine-in-Chief doesn't want you to see), you stop it at the point of prosecution, not data access. (This does only work in a society with a functioning democracy... but free information access is also what enables that. As Americans, with our red-hot American blood, do we dare?)
I wonder if these publishers would be more amenable to a private archiver that only serves registered academic / journalistic research projects (the way most physical private archives do), with a specific provision to never provide data to companies that would resell it or use it for training of generative models.
They already have archives with online and printed articles which they license to libraries, because the libraries take care of rate limiting and limiting abuse.
They probably have internal archives if they're smart; but that isn't accessible to the public. I think the issue isn't whether the data is archived, but whether that information is available to the public for the foreseeable future.
Time for a crowd-sourced plugin that relays copies of what individuals view right from the browser.
Users control what sites they want to allow it to record so no privacy worries, especially assuming the plugin is open source.
No automated crawling. The plugin does not drive the user's browser to fetch things. Just whatever a user happens to actually view on their own; some percentage of those views from the activated domains gets submitted up to some archive.
Not every view, just maybe 100 people each submitting 1% of their views. Maybe it's a random selection, or maybe it's weighted by some feedback mechanism where the archive destination can say, "Hey, if the user views this particular URL, I still don't have that one yet, so definitely send it if you see it rather than just applying the normal random chance."
Not sure how to protect the archive itself or its operators.
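A toy sketch of the sampling decision described above; the sample rate, the wanted-list, and the upload call are all made up for illustration:

    import random

    SAMPLE_RATE = 0.01                          # submit roughly 1% of views by default
    wanted = {"https://example.com/story"}      # URLs the archive has said it is still missing

    def should_submit(url: str) -> bool:
        if url in wanted:                       # the archive explicitly asked for this one
            return True
        return random.random() < SAMPLE_RATE

    def on_page_view(url: str, html: str) -> None:
        if should_submit(url):
            submit_to_archive(url, html)

    def submit_to_archive(url: str, html: str) -> None:
        # Stand-in for the hypothetical upload endpoint.
        print(f"would upload {len(html)} bytes for {url}")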
This is harder than you might expect. Publishing these files is always risky because sites can serve you fingerprinting data, like some hidden HTML tag containing your IP and other identifiers.
As does Tranquility Reader, if you're interested only in the primary content of the page ... and, usually, in a much smaller footprint ... with a PDF option.
For a historical archive, the issue with this is that it could be difficult to ensure that the data being sent from users' devices wasn't modified in some way, leading to an inaccurate archival copy.
Cross-reference. When a site is archived by one client (who visited it directly), request a couple other clients to archive it (who didn’t visit it directly, instead chosen at random, to ensure the same user isn’t controlling all clients).
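Roughly like this, as a sketch; the quorum threshold is arbitrary, and in practice you'd normalize out per-visitor dynamic bits (ads, timestamps) before hashing:

    import hashlib
    from collections import Counter

    def digest(capture: bytes) -> str:
        return hashlib.sha256(capture).hexdigest()

    def accept_snapshot(captures: list[bytes], quorum: int = 2) -> bytes | None:
        # Accept a snapshot only when enough independent clients submitted
        # byte-identical captures; otherwise ask for more samples or review it.
        if not captures:
            return None
        counts = Counter(digest(c) for c in captures)
        best_hash, votes = counts.most_common(1)[0]
        if votes >= quorum:
            return next(c for c in captures if digest(c) == best_hash)
        return None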
Isn't the real problem here the unscrupulous AI scrapers? These sites want to be paid for their content to be used for AI training, if this same content is scraped by the Internet Archive the AI companies can get the content for free.
It's unfortunate that this undermines the usefulness of the Internet Archive, but I don't see an alternative. IMHO, we'll soon see these AI scrapers cease to advertise themselves, leading to sites like the NY Times trying to blacklist IP ranges as this battle continues. Fun times ahead!
The internet can't simultaneously be a place for weirdos and enthusiasts, and a vital part of the economy that everyone uses for a huge number of disparate things in daily life. Parts of the internet can be places for weirdos and enthusiasts, but spaces that cater to weirdos and enthusiasts are by necessity not popular or viral spaces.
Agreed. It’s mostly just disposable clickbait masquerading as journalism at this point. Outside of feeding people's FOMO, there's little content worth preserving for history.
As a website owner I hate the fact that more than 90% of my traffic is now bots, fake bots, bots masquerading as real visitors, and real visitors who try to use my platform to spam others.
Now that AI companies are using residential proxies to get around the obvious countermeasures, I have resorted to blocking all countries that are not my target audience.
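If you do the blocking in the application layer, it isn't much code; a rough sketch using the geoip2 package with a GeoLite2 database, where the database path and allow-list are placeholders:

    import geoip2.database
    import geoip2.errors

    ALLOWED = {"NL", "BE", "DE"}                # hypothetical target-audience countries

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

    def is_allowed(ip: str) -> bool:
        try:
            country = reader.country(ip).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            return False                        # unknown origin: treat as blocked
        return country in ALLOWED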
It's obviously not that, or they would have done this years ago. It very clearly is AI scraping concerns. Their content has new value because it's high quality text that AI scrapers want, and they don't want to give it away for free via the internet archive.
They will announce official paid AI access plans soon. Bookmark my words.
Too little, too late.
AI scrapers are better and better at acting human.
AI scrapers already have a massive corpus; the marginal value of today’s need is low and will remain so long after access is cut off.
When they manage to block archive.is too then I will believe they are at least a little serious.
Brewster’s concerns about the historical record are real and will eventually affect news orgs: their journalism may as well be ephemeral now without separate archiving. If a Wikipedia contributor, for example, has to jump through extra hoops to get a stable link to a Times article, why wouldn’t they end up choosing an equally reliable WaPo article instead?
Even sites with that option already (like Wikipedia) still report being hammered by scrapers. It's the fully funded aligned with the incompetent at work here.
There's a mundane version of this that hits small businesses every day. Platform terms of service pages, API documentation, pricing policies, even the terms you agreed to when you signed up for a SaaS product - these all live at URLs that change or vanish.
I've been building tools that integrate with accounting platforms and the number of times a platform's API docs or published rate limits have simply disappeared between when I built something and when a user reports it broken is genuinely frustrating. You can't file a support ticket saying "your docs said X" when the docs no longer say anything because they've been restructured.
For compliance specifically - HMRC guidance in the UK changes constantly, and the old versions are often just gone. If you made a business decision based on published guidance that later changes, good luck proving what the guidance actually said at the time. The Wayback Machine has saved me more than once trying to verify what a platform's published API behaviour was supposed to be versus what it actually does.
The SOC 2 / audit trail point upthread is spot on. I'd add that for smaller businesses, it's not just formal compliance frameworks - it's basic record keeping. When your payment processor's fee schedule was a webpage instead of a PDF and that webpage no longer exists, you can't reconcile why your fees changed.
> The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive
But then it was not really open content anyway.
> When asked about The Guardian’s decision, Internet Archive founder Brewster Kahle said that “if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”
Well - we need something like Wikipedia for news content. Perhaps not 100% Wikipedia; instead, Wikipedia to store the hard facts, with tons of verification, and a news editorial outfit that focuses on free content but in a newspaper style, e.g. with professional (or good) writers. I don't know how the model could work, but IF we could come up with this, then newspapers that paywall information would become less relevant automatically. That way we win long-term, as the paywalled content isn't really part of the open web anyway.
Wikipedia relies on the institutional structure of journalism, with newsroom independence, journalistic standards, educational system and probably a ton of other dependencies.
Journalism as an institution is under attack because the traditional source of funding - reader subscriptions to papers - no longer works.
To replicate the Wikipedia model, you would need to replicate the structure of journalism for it to be reliable. Where would the funding for that come from? It's a tough situation.
> Well - we need something like wikipedia for news content.
The Wikipedia folks had their own Wikinews project which is essentially on hold today because maintenance in a wiki format is just too hard for that kind of uber-ephemeral content. Instead, major news with true long-term relevance just get Wikipedia articles, and the ephemera are ignored.
Which is a valuable perspective. But it's not a substitute for a seasoned war journalist who can draw on global experience. (And relating that perspective to a particular home market.)
> I'm sure some of them would fly in to collect data if you paid them for it
Sure. That isn't "a news editorial that focuses on free content but in a newspaper-style, e. g. with professional (or good) writers."
One part of the population imagines journalists as writers. They're fine on free, ad-supported content. The other part understands that investigation is not only resource intensive, but also requires rare talent and courage. That part generally pays for its news.
Between the two, a Wikipedia-style journalistic resource is not entertaining enough for the former and not informative enough for the latter. (Importantly, compiling an encyclopedia is principally the work of research and writing. You can be a fine Wikipedia–or scientific journal or newspaper–editor without leaving your room.)
- crowdsourced data, eg, photos of airplane crashes
- people who live in an area start vlogs
- independent correspondents travel there to interview, eg Ukraine or Israel
We see that our best war reporting comes from analyst groups who ingest that data from the “firehose” of social media. Sometimes at a few levels, eg, in Ukraine the best coverage is people who compare the work of multiple groups mapping social media reports of combat. You have on top of that punditry about what various movements mean for the war.
So we no longer have a single “journalist” role:
- we have raw data (eg, photos)
- we have first hand accounts, self-reported
- we have interviewers (of a few kinds)
- we have analysts who compile the above into meaningful intelligence
- we have anchors and pundits who report on the above to tell us narratives
The fundamental change is that what used to be several roles within a new agency are now independent contractors online. But that was always the case in secret — eg, many interviewers were contracted talent. We’re just seeing the pieces explicitly and without centralized editorial control.
So I tend not to catastrophize as much, because this to me is what the internet always does:
- route information flows around censorship
- disintermediate consumers from producers when the middle layer provides a net negative
As always in business, evolve or die. And traditional media has the same problem you outline:
- not entertaining enough for the celebrity gossip crowd
- too slow and compromised by institutional biases for the analyst crowd, eg, compare WillyOAM coverage of Ukraine to NYT coverage
Framing this as some anti-AI thing is wild. The simpler, more obvious, and more evidenced reason for this is that these sites want to make money with ads and paywalls that an archived copy tends to omit by design. Scapegoating AI lets them pretend that they're not the greedy bad guys here — just like how the agricultural sector is hell-bent on scapegoating AI (and lawns, and golf courses, and long showers, and free water at restaurants) for excess water consumption when even the worst-offending datacenters consume infinitesimally-tiny fractions of the water farms in their areas consume.
Yeah I assume what the news publishers actually care about is the thing where, when someone posts a paywalled news article on hacker news, one of the first comments is invariably a link to an archive site that bypasses the paywall so people can read it without paying for it.
> just like how the agricultural sector is hell-bent on scapegoating AI (and lawns, and golf courses, and long showers, and free water at restaurants) for excess water consumption when even the worst-offending datacenters consume infinitesimally-tiny fractions of the water farms in their areas consume.
When I learned about how much water agriculture and industry uses in the state of California where I live, I basically entirely stopped caring about household water conservation in my daily life (I might not go this far if I had a yard or garden that I watered, but I don't where I currently live). If water is so scarce in an urban area that an individual human taking a long shower or running the dishwasher a lot is at all meaningful, then either the municipal water supply has been badly mismanaged, or that area is too dry to support human settlement; and in either case it would be wise to live somewhere else.
Seems more like an easy excuse to shut down a means for people to bypass their paywalls. It would be trivial for AI companies to continue getting this data without using the Internet Archive.
I’m coming at this from a founder/product angle, not a technical one, so excuse the naive framing.
What worries me isn’t scraping itself, but the second-order effects. If large parts of the web become intentionally unarchivable, we’re slowly losing a shared memory layer. Short-term protection makes sense, but long-term it feels like knowledge erosion.
Genuinely curious how people here think about preserving public knowledge without turning everything into open season for mass scraping.
This partially feels like an intentional pendulum swing from Twitter/Facebook cancel culture and other forms of policing.
I'm thinking in particular about the rise of platforms like Discord where being opaque to search/archiving is seen as a feature. Being gatekept and ephemeral makes people more comfortable sharing things that might get a takedown notice on other platforms, and it's hard for people who don't like you in the future to try to find jokes/quotes they don't like to damage your future reputation.
Clearly very different than news articles going offline, but I do think there's been a vibe shift around the internet. People feel overly surveilled in daily life, and take respite in places that make surveillance harder.
As someone who has been dealing with SOC 2, HIPAA, ISO 9001, etc., for years, I have always maintained copies of the third-party agreements for all of our downstream providers for compliance purposes. This documentation is collected at the time of certification, and our policies always include a provision for its retrieval on schedule. The problem is when you certify that their policy said X and that you were in compliance, they quietly change it without sending proper notification downstream to us, and then captain lawsuit comes by: we have to be able to prove that they did claim they were in compliance at the time we certified. We don't want to rely on their ability to produce that documentation; we can't prove that it wasn't tampered with, or that there is a chain of custody for their documentation and policies. If a vendor wouldn't provide that information, then I didn't use them. Welcome to the world of highly regulated industries.
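The capture-it-yourself habit is cheap to automate. A minimal sketch of that kind of evidence store, with paths and layout as illustrative assumptions rather than any framework's requirement:

    import hashlib
    import json
    import time
    from pathlib import Path

    def archive_policy(vendor: str, document: bytes, outdir: str = "compliance-evidence") -> Path:
        # Keep the vendor document plus a digest and capture time, so you can later show
        # exactly what you relied on at certification. For real tamper evidence you would
        # also anchor the digest externally (trusted timestamping, WORM storage, etc.).
        ts = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
        folder = Path(outdir) / vendor
        folder.mkdir(parents=True, exist_ok=True)
        doc_path = folder / f"{ts}-policy.pdf"
        doc_path.write_bytes(document)
        manifest = {
            "vendor": vendor,
            "captured_at": ts,
            "sha256": hashlib.sha256(document).hexdigest(),
        }
        (folder / f"{ts}-manifest.json").write_text(json.dumps(manifest, indent=2))
        return doc_path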
The issue of digital decay and publishers blocking archiving efforts is indeed concerning. It's especially striking given that news publishers, perhaps more than any other entity, have profoundly benefited from the vast accumulation of human language and cultural heritage throughout history. Their very existence and influence are built upon this foundation. To then, in an age where information preservation is more critical than ever (and their content is frequently used for AI training), actively resist archiving or demand compensation for their contributions to the collective digital record feels disingenuous, if not outright shameless. This stance ultimately harms the public good and undermines the long-term accessibility of our shared knowledge and historical narrative.
The internet isn't so simple anymore. I think it's important to separate commercial websites from non-commercial ones. Commercial sites shouldn't be expected to be archivable to begin with, unless it's part of their business model. A lot of sites (like Reddit) started off as ad-supported sites, but now they're commercial (not just post-IPO, but they accept payments and sell things to/from consumers). Even for ad-supported sites, there is a difference between ad-supported non-profits and sites that exist to generate revenue from ads. As in, the primary purpose of the site is to generate ad revenue; the content is just a means to that end.
I've said it before, and I'll say it again: the main issue is not design patterns, but the lack of acceptable payment systems. The EU, with its dismantling of Visa and Mastercard, now has the perfect opportunity to solve this, but I doubt it will. They'll probably just create a European WeChat.
I mean, why wouldn’t they? All their IP was scraped for AI training, at their own hosting cost. It further pulls away from their own business models as people ask the AI models the questions instead of reading primary sources. Plus it doesn’t seem likely they’ll ever be compensated for that loss, given the economy is all in on AI. At least search engines would link back.
Those countermeasures don't really have an effect in terms of scraping. Anyone skilled can overcome any protection within a week or two. Once IA is officially blocked, it can't archive those websites in a legal way, while all the major AI companies use copyrighted content without permission anyway.
For sure. There are many billions of dollars and brilliant engineers propping up AI, so they will win any cat-and-mouse game of blocking. It would be ideal if sites gave their data to IA and IA protected it from exactly what you say. But as someone who intentionally uses AI tools almost daily (mainly OpenEvidence), IMO blame the abuser, not the victim, for it having come to this.
I'm not blaming the victim, but don't play the 'look what you made me do' game. Making content accessible to anyone (even behind a paywall) is a risk they need to take nevertheless. It's impossible to know upfront whether the content is used for consumption or to create derived products (e.g. to write an article in NYT style, etc.). If this were a newspaper, it would be equivalent to scanning the paper and then training AI on it. You can't prevent scanning, as the process is based on exactly the same phenomenon that makes your eyes see, i.e. information being sent and received. The game was lost before it even started.
That is a good question. However, copyright exists (for a limited time) to allow for them to be compensated. AI doesn't change that. It feels like blocking AI-use is a ploy to extract additional revenue. If their content is regurgitated within copyright terms, yes, they should be compensated.
The problem is that producing a mix of personalized content that doesn't appear (at least on its face) to violate copyright still completely destroys their business model. So either copyright law needs to be updated or their business model does.
Either way I'm fairly certain that blocking AI agent access isn't a viable long term solution.
> Either way I'm fairly certain that blocking AI agent access isn't a viable long term solution.
Great point. If my personal AI assistant cannot find your product/website/content, it effectively may no longer exist! For me. Ain't nobody got the time to go searching that stuff up and sifting through the AI slop. The pendulum may even swing the other way and the publishers may need to start paying me (or whoever my gatekeeper is) for access to my space...
Let’s be honest, one of the most-common uses of these archive sites has been paywall circumvention. An academics-only archive might make sense, or one that is mutually-owned and charges a fee for lookup. But a public archive for content that costs money to make obviously doesn’t work.
If that’s the real motive, why don’t they allow scraping of content after some period, when the news is not as relevant? For example, after 6 months.
> why don’t they allow scraping of content after some period, when the news is not as relevant? For example, after 6 months
I believe many publications used to do this. The novel threat is AI training. It doesn't make sense to make your back catalog de facto public for free like that. There used to be an element of goodwill in permitting your content to be archived. But if the main uses are circumventing compensation and circumventing licensing requirements, that goodwill isn't worth much.
Enabling research is a business model for many publications. Libraries pay money for access to the publishers’ historical archives. They don’t want to cannibalize any more revenue streams; they’re already barely still operating as it is.
The end of traditional news sites is coming, at least for the newspaper websites. Future MCP-like systems will generate news sites on the fly in your desired style and with your desired content. Journalists will have some kind of pay-per-view model provided by these GPT-like platforms, which will of course take too big a chunk. I can't imagine a WSJ being able to survive.
We need something like SETI@home/Folding@home but for crawling and archiving the web or maybe something as simple as a browser extension that can (with permission) archive pages you view.
This exists, although not in the traditional BOINC space: it's ArchiveTeam^1. I run two of their Warrior^2 instances in my home k3s cluster via the Docker images. One of them is set to the "Team's choice" project, where it spends most of its time downloading Telegram chats. However, when they need the firepower for sites at imminent risk of closure, it will switch itself to those. The other one is set to their URL shortener project, "Terror of Tiny Town"^3.
Their big requirement is you need to not be doing any DNS filtering or blocking of access to what it wants, so I've got the pod DNS pointed to the unfiltered quad9 endpoint and rules in my router to allow the machine it's running on to bypass my PiHole enforcement+outside DNS blocks.
In the US at least, there is no expectation of privacy in public. Why should these websites that are public-facing get an exemption from that? Serving up content to the public should imply archivability.
Sometimes it feels like AI-use concerns are a guise to diminish the public record, while on the other hand services like Ring or Flock are archiving the public forever.
But wait, I thought AI was so great for all industries? Publishers can have AI-generated articles, instantly fix grammar problems, translate seamlessly into every language, and even use AI-generated images where appropriate to enrich the article. It was going to make us all so productive? What happened? Why would you want to _block_ AI from ingesting the material?
I fear that these news publishers will come after RSS next, as I see hundreds of AI companies violating news publishers' RSS feed terms with mass scraping for profit.
They do not care, and we will all be worse off for it if these AI companies keep bombarding news publishers' RSS feeds.
It is a shame that the open web as we know it is closing down because of these AI companies.
Dear news publications - if you aren't willing to accept an independent record of what you published, I can't accept your news. It's a critical piece of the framework that keeps you honest. I don't care if you allow AI scraping either way, but you have to facilitate archival of your content - independently, not under your own control.
How is the publisher supposed to fund their operations, let alone make a profit? How about a one-year lock on the archived pages? There are many ways of keeping that record without siphoning off views and undermining the business model.
> How is the publisher supposed to fund their operations, let alone make a profit?
There used to be plenty of newspapers sponsored by wealthy industrialists; the latter would cover the former's gap between costs and sales, and the former would regularly push the latter's political agenda.
"Objective journalism" is really quite a late invention, IIRC around the time of WW2.
"To give the news impartially, without fear or favor." — Adolph Ochs, 1858-1935
Objectivity is the default state of honest storytelling. If I ask what happened, and somebody only tells me the parts that suit an agenda, they have not informed me. The partisan press exists because someone has a motive to deviate from the natural expectation of fair storytelling and recounting.
> Objectivity is the default state of honest storytelling. If I ask what happened, and somebody only tells me the parts that suit an agenda, they have not informed me.
Already at the level of what stories are covered you have made choices about what's important or not.
Your newspaper not covering your neighbor's lawsuit against the city over some issue because they find it "not important" is already a viewpoint-based choice.
A newspaper presenting both sides on an issue (already simplifying on the "there are two sides to an issue" thing) is one thing. Do you also have to present expert commentary that says that one side is actually just entirely in bad faith? Do you write a story and then conclude "actually this doesn't matter" when that is the case?
There are plenty of descriptions that some people would describe as fair story telling and others would describe as a hit piece. Probably for any article on any controversial topic written in good faith you are likely able to find some people who would claim it's not.
I think it's important to acknowledge that even good faith journalism is filled with subjectivity. That doesn't mean one gives up, you just have to take into account the position of the people presenting information and roll with that.
You make it sound like bias is completely relative and undecidable. But there is a clear line journalists can cross - if they're intentionally misleading their reader, that's bias. It's qualitatively different from neglecting to cover a story or not finding a suitable expert or whatever. It's intentional deception because they want the readers to have wrong knowledge. And they do it all the time.
If an independent press is critical to open societies, perhaps some sort of citizen directed funding is needed to maintain independence from both capital and government?
It's a great question, but they didn't seem to have a problem with this before AI, so I have to assume that the presence of a free available copy wasn't really impacting their revenue.
Maybe it would be better if these news operations had to find better ways to sustain themselves than the current paradigms. Also, the Internet Archive is not the only archive, and there will be more. This isn't something they can really stop.
Personally, I think people harp on news bias too much.
I think the real problem is that they often don't put events in context, which leads people to misunderstand them. They report the what, not the why, but most events don't just happen one day; they are shaped by years or even decades of historical context. If you just understand the literal event without the background context, I don't think you are really informed.
I consider almost all news to be entertainment unless I need its perspective to make a decision (which is almost never). It is a lot safer to remain uninformed on a subject as it settles than to constantly attempt to be informed.
Information bias is unfortunately one of the sicknesses of our age, and it is one of the cultural ills that flows from tech outward. Information is only pertinent in its capacity to inform action, otherwise it is noise. To adapt a Beck-ism: You aren't gonna need it.
What I'm talking about is that the news tries to tell you what to think. You can read headlines on Google News about the same story, and see the bias of the publication in the headline pretty often.
Instead of reporting just the facts, they include opinions, inflammatory language, etc.
Reuters writes in a relatively neutral tone, as an example. Fox News doesn't, and CNN doesn't, as examples of the opposite.
If you don't notice, I doubt you're reading the news. It's part of the offering. Fox does it on purpose, not accidentally.
Newspapers in my country were always blogs, even before the internet existed. It's why they are still around and doing quite well: they don't just bring news.
I'm not sure what microfilm has got to do with this. Plenty of national libraries have extensive digital collections of various artifacts - books and even websites. Check out the National Library of Australia as an example: https://www.library.gov.au/discover/what-we-collect/archived...
As a news publisher (RedBankGreen.com) I’ll tell you that pretty much nobody is in it for the money anymore, at least at the local level.
It’s passion and love of the community, despite the many struggles and drawbacks.
AI bots scrape our content and that drastically reduces the number of people who make it to our site.
That impacts our ability to bring on subscribers and especially advertisers - Google and Meta own local advertising and AI kills the relatively tiny audience we have.
I dread the day that it happens in realtime - hear sirens? Ask AI who already scraped us.
I think the question of whether a business is allowed to offer something free only to humans (presumably with advertising) does not have a clear best answer; politicians can decide.
News has a business model: do actual journalism. I don't see much reason to fund the people who are giving me the same story as everyone else who received the same press release, with no additional details: I might as well subscribe to the press releases.
The second thing that came to mind was paywall evasion. Any time a news article behind a paywall gets posted here, someone in the comments has the archive link ready to go, because of course they do.
The incentives for online news are really wacky just to begin with. A coin at the convenience store for the whole dang paper used to be the simplest thing in the world.
> Limit internet archive for articles that are less than a week old.
I mean this as a side note rather than a counterargument (because people learn to take screenshots, and because what can you do about particularly bad faith news orgs?): Immediate archival can capture silent changes (and misleadingly announced changes). A headline might change to better fit the article body. An editor's note might admit a mistakenly attributed quote.
Or a news org might pull a Fox News [1][2] by rewriting both the headline and article body to cover up a mistake that unravels the original article's reason for existing: The original headline was "SNAP beneficiaries threaten to ransack stores over government shutdown". The headline was changed to "AI videos of SNAP beneficiaries complaining about cuts go viral". An editor's note was added [3][4]: "This article previously reported on some videos that appear to have been generated by AI without noting that. This has been corrected." I think Fox News deleted the article.
I don't see the connection to adding the delay. I think the suggestion was to have a snapshot at time of publication but wait a week to make it public.
I have seen zero evidence that independent archives “keep news media honest”. In fact, I have on several occasions noticed news media directly contradicting their own stance from just a few years prior, with no mention of the previously published account at all. This is true even for highly respected newspapers of record.
I can indeed find clear records of that in the archives. But what do I do with them? How do I use that evidence to hold news media to account? This is meaningless moral posturing.
Contact the journalist of the new article with the contradicting article? Letters to the editor? Submit an opinion article?
I've contacted multiple journalists over the years about errors in their articles and I've generally found them responsive and thankful.
Sometimes it's not even their fault. One time a journalist told me the incorrect information was unknowingly added by an editor.
I get that it's popular on HN and the internet to bash news media, and that there are a lot of legitimate issues with the media, but my personal experience is that journalists do actually want to do a good job and respond accordingly when you engage them (in a non-antagonistic manner).
The incidents I’m referring to aren’t “errors” though.
If a major article claims that certain groups don’t exist, while the same newspaper published a detailed report about those exact groups and how dangerous they are just two years earlier, it’s not because the journalist wasn’t able to do a 10-second Google search where their own paper’s article would have been among the top results.
> But what do I do with them? How do I use that evidence to hold news media to account?
Contact their rivals with the story, have them write a hit piece. "Other newspaper is telling porkies: here's the proof!" is an excellent story: not one I'd expect a journalist to have time to discover, but certainly one I'd expect them to be able to follow up on, once they've received a tip.
That’s not how publishing works. News outlets (especially those of roughly similar political leaning) very rarely call out each other’s misconduct. In fact, they often seem to operate as a quasi-conglomerate rather than competitors.
If most of the Internet is AI-generated slop (as is already the case), is there really any value in expensing so much bandwidth and storage to preserve it? And on the flip side, I'd imagine the value of a pre-2022 (ChatGPT launch) Internet snapshot on physical media will probably increase astronomically.
Perhaps the AI slop isn't worth preserving, but the unarchivability of news and other useful content has implications for future public discourse, historians, legal matters and who knows what else.
In the past libraries used to preserve copies of various newspapers, including on microfiche, so it was not quite feasible to make history vanish. With print no longer out there, the modern historical record becomes spotty if websites cannot be archived.
Perhaps there needs to be a fair-use exception or even a (god forbid!) legal requirement to allow archivability? If a website is open to the public, shouldn't it be archivable?
Erm, there is still a newspaper stand in the supermarket I go to (Walmart, for the Americans). Not sure if the British Library keeps a copy of the print news I see, but they should!
> I am sad about link rot and old content disappearing, but it's better than everything be saved for all time, to be used against folks in the future.
I don't understand this line of thinking. I see it a lot on HN these days, and every time I do I think to myself "Can't you realize that if things kept on being erased we'd learn nothing from anything, ever?"
I've started archiving every site I have bookmarked in case of such an eventuality when they go down. The majority of websites don't have anything to be used against the "folks" who made them. (I don't think there's anything particularly scandalous about caring for doves or building model planes)
Consider the impact, though, on our ability to learn and benefit from history. If the records of people’s activities cannot be preserved, are we doomed to live in ignorance?
I don't think so. Most of my original creations were before the archiving started, and those things are lost. But they weren't the kind of history you learn and benefit from--nor is most of the internet.
The truly important stuff exists in many forms, not just online/digital. Or will be archived with increased effort, because it's worth it.
Like it or not, the Internet is today’s store of record for a significant proportion—if not the majority—of the world’s activities.
If you don’t want your bad behavior preserved for the historical record, perhaps a better answer is to not engage in bad behavior instead of relying on some sort of historical eraser.
That's a risk we all take. Not that long ago, homophobia was the norm. Being on the wrong side of history can be uncomfortable, but people do forgive when given the right context.
Kind of the "think of the children" argument: most things that are worth archiving have nothing to do with content that can be used against someone in the future. But the raw volume is making it impossible to filter out the worthwhile stuff from the slop (all forms of, not just AI), even with automation (again, not AI, we've been doing NLP using regular old ML for decades now).
The AI companies won't just scrape IA once; they keep coming back to the same pages and scraping them over and over, even if nothing has changed.
This is from my experience having a personal website. AI companies keep coming back even if everything is the same.
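The polite alternative has existed for decades: a conditional request costs the server a 304 and no body when nothing has changed. A standard-library sketch, assuming the caller keeps the ETag/Last-Modified values from the previous fetch:

    import urllib.error
    import urllib.request

    def fetch_if_changed(url: str, etag: str | None = None, last_modified: str | None = None):
        # Returns (body, etag, last_modified); body is None when the page is unchanged.
        req = urllib.request.Request(url)
        if etag:
            req.add_header("If-None-Match", etag)
        if last_modified:
            req.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
        except urllib.error.HTTPError as err:
            if err.code == 304:                 # not modified: reuse the cached copy
                return None, etag, last_modified
            raise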
Unless regulated, there is no incentive for the giants to fund anything.
That already exists, it's called Common Crawl[1], and it's a huge reason why none of this happened prior to LLMs coming on the scene, back when people were crawling data for specialized search engines or academic research purposes.
The problem is that AI companies have decided that they want instant access to all data on Earth the moment that it becomes available somewhere, and have the infrastructure behind them to actually try and make that happen. So they're ignoring signals like robots.txt or even checking whether the data is actually useful to them (they're not getting anything helpful out of recrawling the same search results pagination in every possible permutation, but that won't stop them from trying, and knocking everyone's web servers offline in the process) like even the most aggressive search engine crawlers did, and are just bombarding every single publicly reachable server with requests on the off chance that some new data fragment becomes available and they can ingest it first.
This is also, coincidentally, why Anubis is working so well. Anubis kind of sucks, and in a sane world where these companies had real engineers working on the problem, they could bypass it on every website in just a few hours by precomputing tokens.[2] But...they're not. Anubis is actually working quite well at protecting the sites it's deployed on despite its relative simplicity.
It really does seem to indicate that LLM companies want to just throw endless hardware at literally any problem they encounter and brute force their way past it. They really aren't dedicating real engineering resources towards any of this stuff, because if they were, they'd be coming up with way better solutions. (Another classic example is Claude Code apparently using React to render a terminal interface. That's like using the space shuttle for a grocery run: utterly unnecessary, and completely solvable.) That's why DeepSeek was treated like an existential threat when it first dropped: they actually got some engineers working on these problems, and made serious headway with very little capital expenditure compared to the big firms. Of course they started freaking out, their whole business model is based on the idea that burning comical amounts of money on hardware is the only way we can actually make this stuff work!
The whole business model backing LLMs right now seems to be "if we burn insane amounts of money now, we can replace all labor everywhere with robots in like a decade", but if it turns out that either of those things aren't true (either the tech can be improved without burning hundreds of billions of dollars, or the tech ends up being unable to replace the vast majority of workers) all of this is going to fall apart.
Their approach to crawling is just a microcosm of the whole industry right now.
[1]: https://en.wikipedia.org/wiki/Common_Crawl
[2]: https://fxgn.dev/blog/anubis/ and related HN discussion https://news.ycombinator.com/item?id=45787775
Thanks for the mention of Common Crawl. We do respect robots.txt and we publish an opt-out list, due to the large number of publishers asking to opt out recently.
There's a bit of discussion of Common Crawl in Jeff Jarvis's testimony before Congress: https://www.youtube.com/watch?v=tX26ijBQs2k
yeah, they should really have a think about how their behavior is harming their future prospects here.
Just because you have infinite money to spend on training doesn't mean you should saturate the internet with bots looking for content with no constraints - even if that is a rounding error of your cost.
We just put heavy constraints on our public sites blocking AI access. Not because we mind AI having access - but because we can't accept the abusive way they execute that access.
The main issue is that a well-behaved AI company won't be singled out for continued access; they will all be hit by public sites blocking AI access. So there is no benefit to them behaving.
Something I’ve noticed about technology companies, and it’s bled into just about every facet of the US these days, is that they consider whether an action *can* be taken rather than whether it *should* be.
It’s very unfortunate and a short sighted way to operate.
> The AI companies won't just scrape IA once, they keep coming back to the same pages and scraping them over and over. Even if nothing has changed.
Why, though? Especially if the pages are new; aren't they concerned about ingesting AI-generated content?
Possibly because a lot of “AI-company scraping” isn't traditional scraping (e.g., to build a dataset of the state at a particular point in time); it's referencing the current content of the page as grounding for the response to a user request.
> The AI companies won't just scrape IA once, they keep coming back to the same pages and scraping them over and over. Even if nothing has changed.
Maybe they vibecoded the crawlers. I wish I were joking.
Isn't this just how crawlers work? How do you know if a page has changed if you don't keep visiting it?
IPFS was an attempt at this: https://en.wikipedia.org/wiki/InterPlanetary_File_System
Coincidentally, most of the funding towards IPFS development dried up because the VC money moved on to the very technology enabling these problems...
Is there a good post-mortem of IPFS out there?
What do you mean? It is alive and "well". Just extremely slow now that interest waned.
It's been several years, but in my experiments it felt plenty fast if I prefetched links at page load time so that they're already local by the time the user actually tries to follow them (sometimes I'd do this out to two hops).
I think it "failed" because people expected it to be a replacement transport layer for the existing web, minus all of the problems the existing web had, and what they got was a radically different kind of web that would have to be built more or less from scratch.
I always figured it was a matter of the existing web getting bad enough, and then we'd see adoption improve. Maybe that time is near.
What's IPFS 's killer app?
They already are. I've been dealing with Vietnamese and Korean residential proxies destroying my systems for weeks, and I'm growing tired. I cannot survive 3,500 RPS 24/7.
> I've sometimes dreamed of a web where every resource is tied to a hash, which can be rehosted by third parties, making archival transparent. This would also make it trivial to stand up a small website without worrying about it get hug-of-deathed, since others would rehost your content for you. Shame IPFS never went anywhere.
You've just described Nostr: content that is tied to a hash (so its origin and authenticity can be verified) and hosted by third parties (or by yourself, if you want).
> So instead of scraping IA once, the AI companies will use residential proxies and each scrape the site themselves, costing the news sites even more money.
News websites aren’t like those labyrinthine cgit-hosted websites that get crushed under scrapers. If 1,000 different AI scrapers hit a news website every hour, it wouldn’t even make a blip on the traffic logs.
Also, AI companies are already scraping these websites directly with their own infrastructure. It’s how they try to stay relevant and fresh.
I don’t believe residential proxies will be with us for long, at least not to the extent they are now. There is pressure, and there are strong commercial interests against the whole thing. I think the problem will partly solve itself.
Also, I always wonder about Common Crawl:
Is there something wrong with it? Is it badly designed? What is it that all the trainers cannot find there, so that they need to crawl our sites over and over again for the exact same stuff, each on their own?
Many AI projects in academia or research get all of their web data from Common Crawl -- in addition to the many non-AI uses of our dataset.
The folks who crawl more appear to mostly be folks who are doing grounding or RAG, and also AI companies who think that they can build a better foundational model by going big. We recommend that all of these folks respect robots.txt and rate limits.
AI companies are _already_ funding and using residential proxies. Guess how many of those proxies are acquired through compromised devices or by tricking people into installing apps?
Does anyone know if Teslas do this? I noticed Tesla cars want to have access to local WiFi and eat up oodles of bandwidth …
Even if the site is archived on IA, AI companies will still do the same.
But don't you have to sign a license agreement that prohibits scraping in order to purchase a subscription that allows you to bypass the paywall?
It's almost as if this isn't about scraping and more about shutting down a "free article sharing" channel that gets abused all the time.
AI browsers will be the scrapers, shipping content back to the mothership for processing and storage as users co-browse with the agentic browser.
But hey, paywalled sites might be getting 2-3 additional subscriptions out of it!
We don’t lack the technology to limit scrapers; sure, it’s an arms race against AI companies with more money than most. Why can’t this be a legal block through TOS?
I maintain an open-source project called Linkwarden and this exact discussion is one of the reasons why it exists, teams needed a way to preserve referenced URLs reliably without having to depend on external services.
It stores webpages in multiple formats (HTML snapshot, screenshot, PDF snapshot, and a fully dedicated reader view) so you’re not relying on a single fragile archive method.
There’s both a hosted cloud plan [1] which directly supports the project, and a fully self-hosted option [2], depending on how much control you need over storage and retention.
[1]: https://linkwarden.app
[2]: https://github.com/linkwarden/linkwarden
Linkwarden is awesome, and with the SingleFile extension it's pretty easy to store things you can see but the scraper gets blocked on.
One question: what's your stance on adding a way to mark articles as read or "archive" them, like other apps that are branded a bit more around storing things to read later? You can technically do something similar with tags, but it's a bit clunky of a UX.
Thanks! At the moment we’re focused on archiving rather than read-later workflows, but this is great feedback. I’ve already added it to the feature requests list.
> with the SingleFile extension it's pretty easy to store things you can see but the scraper gets blocked on
FWIW, at least on iOS, it's possible to inject JavaScript into the website currently displayed by Safari as a side effect of sharing a web link to an app via the share sheet.
Several "read it later" style apps use this successfully to get around paywalls (assuming you've paid yourself) and other robot blockers. Any plans for Linkwarden to do this (or does it already)?
Neat. How does the archive.org integration work?
Does it just POST the url to them for them to fetch? Or is there any integration/trust to store what you already fetched on the client directly on their archives?
> Does it just POST the url to them for them to fetch?
Correct.
It affects science too (and there you'd want solid archiving as much as possible). Increasingly, metadata is full of errors, and general-purpose search engines for science are breaking down, including even things like Google Scholar. I suppose some big science publishers are blocking AI bots too.
Google ruined its own search engine on top of that as well though.
We are increasingly becoming blind. To me it looks as if this is done on purpose actually.
Did Google ruin it, or did adversarial activity between Google's algorithm and SEO ruin it? The latter seems more likely, because the incentives make sense, and it was also inevitable.
It was. Advertising is incompatible with accurate data retrieval/routing. We've also implemented "obligation to deindex". So providing an unbiased index of the web as she is is essentially (in the U.S.) verboten.
> I suppose some big science publishers are blocking AI bots too.
That's a travesty, considering that a huge chunk of science is public-funded; the public is being denied the benefits of what they're paying for, essentially.
The public can still access the sites themselves.
> The public can still access the sites themselves.
Indefinitely? Probably not.
What about when a regime wants to make the science disappear?
So the solution is to allow the AI scraping and hide the content, with significantly reduced fidelity and accuracy and not in the original representation, in some language model?
Don't forget the onslaught of ads that will distort the actual publications even more going forward.
What has that got to do with blocking AI crawlers?
If it's publicly funded, why shouldn't AI crawlers have access to that data? Presumably those creating the AI crawlers paid taxes that paid for the science.
> If it's publicly funded, why shouldn't AI crawlers have access to that data?
Because it costs money to serve them the content.
Crawlers accessing public data could be required to provide searchable access to the public data they collect. Value-for-value.
If I build a business based off of consumption of publicly funded data, and that’s okay, why isn’t it okay for AI?
Is the answer "regulate AI"? Yes.
> If I build a business based off of consumption of publicly funded data, and that’s okay, why isn’t it okay for AI?
Because when you build it you aren't, presumably, polling their servers every fifteen minutes for the entire corpus. AI scrapers are currently incredibly impolite.
Thank god for pubmed and deterministic search operators.
Publishers like The Guardian and NYT are blocking the IA/Wayback Machine. 20% of news websites are blocking both IA and Common Crawl. As an example, https://www.realtor.com/news/celebrity-real-estate/james-van... is unarchivable, with IA being 429ed while the site is accessible otherwise.
And whilst the IA will honour requests not to archive/index, more aggressive scrapers won't, and will disguise their traffic as normal human browser traffic.
So we've basically decided we only want bad actors to be able to scrape, archive, and index.
> we've basically decided we only want bad actors to be able to scrape, archive, and index
AI training will be hard to police. But a lot of these sites inject ads in exchange for paywall circumvention. Just scanning Reddit for the newest archive.is or whatever should cut off most of the traffic.
That 20% number is for a limited list of relatively large news websites. If you include the long tail of news, the % of blocking is much smaller.
I'm part of that small but (hopefully) growing percentage, because Common Crawl is a deeply dishonest front for AI data scraping. Quoting Wikipedia:
""" In November 2025, an investigation by technology journalist Alex Reisner for The Atlantic revealed that Common Crawl lied when it claimed it respected paywalls in its scraping and requests from publishers to have their content removed from its databases. It included misleading results in the public search function on its website that showed no entries for websites that had requested their archives be removed, when in fact those sites were still included in its scrapes used by AI companies. """
My site is CC-BY-NC-SA, i.e. non-commercial and with attribution, and Common Crawl took a dubious position on whether fair use makes that irrelevant. They can burn.
Did you see our reply? https://commoncrawl.org/blog/setting-the-record-straight-com...
Also, if your site has CC-BY-NC-SA markings, we have preserved them.
Hopefully my site is no longer part of Common Crawl. I'm not interested in participating in your project, block CCBot in robots.txt, and have requested deletion of my data via your form.
Did you see our reply? Edit: by which I mean, we sent you an email that explains what we did and how to verify it. Did you not receive an email reply? If not, please contact us again.
Also, if your site has CC-BY-NC-SA markings, we have preserved them.
I don't care. Is blocking your bot and requesting removal sufficient? If not, what is?
Please read our email reply. I have no idea if we received your request; your HN username doesn't match any request we have received.
Oh, and thanks for letting me know that I need to add our reply to Wikipedia.
Can you give a reference for The Guardian blocking IA? I just checked with an article from today - already archived, and a manual re-archive worked.
Presumably someone has already built this and I'm just unaware of it, but I've long thought some sort of crowd-sourced archival effort via browser extension should exist. I'm not sure how such an extension would avoid archiving privileged data though.
That exists for court documents (RECAP) but I think they didn't have to solve the issue of privilege as PACER publishes unprivileged docs.
In particular, habeas petitions against DHS, and SSA appeals aren’t available online for public inspection: you have to go to a clerk’s office and pay for physical copies. (I think this may have been reasonable given the circumstances in past decades… not so now.)
I feel like a government funded search engine would resolve a lot of the issues with the monetized web.
The purpose of a search engine is to display links to web pages, not the entire content. As such, it can be argued it falls under fair use. It provides value to the people searching for content and those providing it.
However, we left such a crucially important public utility in the hands of private companies, which changed their algorithms many times in order to maximize their profits and not the public good.
I think there needs to be real competition, and I am increasingly becoming certain that the government should be part of that competition. Both "private" companies and "public" government are biased, but are biased in different ways, and I think there is real value to be created in this clash. It makes it easier for individuals to pick and choose the best option for themselves, and for third, independent options to be developed.
The current cycle of knowledge generation is academia doing foundational research -> private companies expanding this research and monetizing it -> nothing. If the last step were expanded to the government providing a barebones but usable service to commoditize it, years after private companies have been able to reap immense profits, then the capabilities of the entire society are increased. If the last step is prevented, then the ruling companies turn to rent-seeking and sitting on their laurels, and turn from innovating to extracting.
> However, we left such a crucially important public utility in the hands of private companies, which changed their algorithms many times in order to maximize their profits and not the public good.
No one "left" a crucially important public utility in the hands of private companies. Private companies developed the search engine themselves in the late 90s in the course of doing for-profit business; and because some of them ended up being successful (most notably Google), most people using the internet today take the availability of search engines for granted.
We can start by forcing sites to treat crawlers equally. Google's main moat is less physical infrastructure or the algorithms, and more that sites allow only Google to scrape and index them.
They can charge money for access or disallow all scrapers, but it should not be allowed to selectively allow only Google.
It's not like only allowing Google actually means that only Google is allowed forever. Crawlers are free to make agreements with sites so they can crawl more easily, or to pretend they are a regular user to bypass whatever block is in place.
The government having the power to curate access to information seems bad. You could try to separate it as an independent agency, but as the current US administration is showing, that’s not really a thing.
And in a world where running a Google-like search engine is just one of the many jobs the US federal government has, why shouldn't how the government runs that search engine be a national-level political question decided by elections, just like the management of all the other things the US federal government does is? Regardless of how the government curated access to information, a huge chunk of the US electorate would be mad about how they were doing it, reflecting very real polarization among the population.
The idea is that the government is biased towards hiding certain information and private companies are biased towards hiding a different set.
While unlikely, the ideal would be for the government to provide a foundational open search infrastructure that would allow people to build on it and expand it to fit their needs, in a way that is hard to do when a private company eschews competition and hides its techniques.
Perhaps it would be better for there to be a sanctioned crawler funded by the government, that then sells the unfiltered information to third parties like google. This would ensure IP rights are protected while ensuring open access to information.
I'm feeling it. Addressing the other reply: zero moderation or curation, and zero shielding from the crawler, if what you've posted is on a public network. Yes, users will be able to access anything they can think of. And the government will know. I think you don't have to worry about them censoring content; they'll be perfectly happy to know who's searching for CSAM or bomb-making materials. And if people have an issue with what the government does with this information (for example, charging people who search for things the Tangerine-in-Chief doesn't want you to see), you stop it at the point of prosecution, not data access. (This does only work in a society with a functioning democracy... but free information access is also what enables that. As Americans, with our red-hot American blood, do we dare?)
I wonder if these publishers would be more amenable to a private archiver that only serves registered academic / journalistic research projects (the way most physical private archives do), with a specific provision to never provide data to companies that would resell it or use it for training of generative models.
They already have archives with online and printed articles which they license to libraries, because the libraries take care of rate limiting and limiting abuse.
They probably have internal archives if they're smart; but that isn't accessible to the public. I think the issue isn't whether the data is archived, but whether that information is available to the public for the foreseeable future.
They sure have archives of the newspapers, but they're much less likely to have archives of what they publish online.
And a local archive is one fire, business decision, or poor technical choice away from getting permanently lost.
Yes. Most publishers already do syndication deals. This is a fine idea.
The problem with the LLMs is they capture the value chain and give back nothing. It didn’t have to be this way. It still doesn’t.
Time for a crowd-sourced plugin that relays copies of what individuals view right from the browser.
Users control which sites they allow it to record, so no privacy worries, especially assuming the plugin is open source.
No automated crawling. The plugin does not drive the user's browser to fetch things. Just whatever a user happens to actually view on their own; some percentage of those views from the activated domains gets submitted up to some archive.
Not every view; maybe 100 people each submit 1% of views, and maybe it's a random selection, or maybe it's weighted by some feedback mechanism where the archive destination can say "Hey, if the user views this particular URL, I still don't have that one yet, so definitely send that one if you see it rather than just applying the normal random chance."
Not sure how to protect the archive itself or its operators.
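The sampling rule itself would only be a few lines. A sketch (the 1% rate, the wanted-list hint, and every name here are made up for illustration, not a real extension API):

    # Hypothetical submission rule for the plugin described above.
    import random

    SAMPLE_RATE = 0.01                        # each user submits ~1% of allowed views
    allowed_domains = {"example-news.org"}    # user-controlled allowlist (made up)
    wanted_urls = set()                       # URLs the archive has said it still needs

    def should_submit(url, domain):
        if domain not in allowed_domains:
            return False                      # user never opted this site in
        if url in wanted_urls:
            return True                       # the archive explicitly asked for this one
        return random.random() < SAMPLE_RATE  # otherwise, plain random sampling

The hard parts are everything around it: trust, abuse, and protecting whoever runs the archive.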
SingleFile does the archiving fairly well.
> no privacy worries
This is harder than you might expect. Publishing these files is always risky because sites can serve you fingerprinting data, like some hidden HTML tag containing your IP and other identifiers.
>SingleFile does the archiving fairly well.
As does Tranquility Reader, if you're interested only in the primary content of the page ... and, usually, in a much smaller footprint ... with a PDF option.
oof good point
For a historical archive, the issue with this is that it could be difficult to ensure that the data being sent from users' devices wasn't modified in some way, leading to an inaccurate archival copy.
Cross-reference. When a site is archived by one client (who visited it directly), request a couple other clients to archive it (who didn’t visit it directly, instead chosen at random, to ensure the same user isn’t controlling all clients).
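One way to picture that cross-referencing check, as a sketch (the quorum size and all names are hypothetical, not any existing archive's protocol): accept a capture only when independently chosen clients submit content with the same hash.

    # Hypothetical server-side check for the cross-referencing idea above.
    import hashlib
    from collections import Counter

    def digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def accept_capture(submissions, quorum=2):
        """submissions: list of (client_id, content_bytes) from distinct clients."""
        if not submissions:
            return None
        counts = Counter(digest(body) for _, body in submissions)
        best_hash, votes = counts.most_common(1)[0]
        return best_hash if votes >= quorum else None   # None = not enough agreement

It doesn't defeat a determined attacker who controls many clients, but it raises the bar well above a single tampered submission.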
Isn't the real problem here the unscrupulous AI scrapers? These sites want to be paid for their content to be used for AI training; if this same content is scraped by the Internet Archive, the AI companies can get the content for free.
It's unfortunate that this undermines the usefulness of the Internet Archive, but I don't see an alternative. IMHO, we'll soon see these AI scrapers cease to advertise themselves, leading to sites like the NY Times trying to blacklist IP ranges as this battle continues. Fun times ahead!
The silver lining is that it's increasingly not worth being archived as well.
We really lucked out existing at a time when the internet was a place for weirdos and enthusiasts. I think those days are well and done.
The internet can't simultaneously be a place for weirdos and enthusiasts, and a vital part of the economy that everyone uses for a huge number of disparate things in daily life. Parts of the internet can be places for weirdos and enthusiasts, but spaces that cater to weirdos and enthusiasts are by necessity not popular or viral spaces.
Agreed. It’s mostly just disposable clickbait masquerading as journalism at this point. Outside of feeding people's FOMO, there's little content worth preserving for history.
Punishing archive.org for archive.today's sins
My first impression is that news companies don't want their content scraped for copyright reasons, and are roundaboutly scapegoating AI.
As a website owner I hate the fact that more than 90% of my traffic is now bots, fake bots, bots masquerading as real visitors, and real visitors who try to use my platform to spam others.
Now that AI companies are using residential proxies to get around the obvious countermeasures, I have resorted to blocking all countries that are not my target audience.
It really sucks. The internet is terminally ill.
Yeah sure, "AI scraping concerns". No, they don't want to get caught secretly editing and deleting articles.
It's obviously not that, or they would have done this years ago. It very clearly is AI scraping concerns. Their content has new value because it's high quality text that AI scrapers want, and they don't want to give it away for free via the internet archive.
They will announce official paid AI access plans soon. Bookmark my words.
Too little, too late. AI scrapers are better and better at acting human. AI scrapers already have a massive corpus; the marginal value of today’s news is low and will remain so long after access is cut off. When they manage to block archive.is too, then I will believe they are at least a little serious.
Brewster’s concerns about the historical record are real and will eventually affect news orgs: their journalism may as well be ephemeral now without separate archiving. If a Wikipedia contributor, for example, has to jump through extra hoops to get a stable link to a Times article, why wouldn’t they end up choosing an equally reliable WaPo article instead?
Tragedy of the commons.
Given The Times and The Guardian are British, they will be archived by the British Library, as it's a legal obligation.
That doesn't mean an American library that doesn't pay authors Public Lending Right fees gets to.
Prof. Jeff Jarvis speaking about copyright for news in front of Congress:
https://www.youtube.com/watch?v=tX26ijBQs2k
Proposed solution:
Sell a "truck full of DAT tapes" type service to AI scrapers with snapshots of the IA. Sort of like the cloud providers have with "Data Boxes".
It will fund IA, be cheaper than building and maintaining so many scrapers, and may relieve the pressure on these news sites.
Even sites with that option already (like Wikipedia) still report being hammered by scrapers. It's the fully funded aligned with the incompetent at work here.
IA has always been in legal jeopardy without offering paid access. For that to work we need to get rid of copyright first.
Or offer it in countries with lax copyright. The industry will find ways to work around it.
But - as another poster pointed out - Wikipedia offers this, and still gets hammered by scrapers. Why buy when free, I guess?
There's a mundane version of this that hits small businesses every day. Platform terms of service pages, API documentation, pricing policies, even the terms you agreed to when you signed up for a SaaS product - these all live at URLs that change or vanish.
I've been building tools that integrate with accounting platforms and the number of times a platform's API docs or published rate limits have simply disappeared between when I built something and when a user reports it broken is genuinely frustrating. You can't file a support ticket saying "your docs said X" when the docs no longer say anything because they've been restructured.
For compliance specifically - HMRC guidance in the UK changes constantly, and the old versions are often just gone. If you made a business decision based on published guidance that later changes, good luck proving what the guidance actually said at the time. The Wayback Machine has saved me more than once trying to verify what a platform's published API behaviour was supposed to be versus what it actually does.
The SOC 2 / audit trail point upthread is spot on. I'd add that for smaller businesses, it's not just formal compliance frameworks - it's basic record keeping. When your payment processor's fee schedule was a webpage instead of a PDF and that webpage no longer exists, you can't reconcile why your fees changed.
> The Financial Times, for example, blocks any bot that tries to scrape its paywalled content, including bots from OpenAI, Anthropic, Perplexity, and the Internet Archive
But then it was not really open content anyway.
> When asked about The Guardian’s decision, Internet Archive founder Brewster Kahle said that “if publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”
Well - we need something like Wikipedia for news content. Perhaps not 100% Wikipedia; instead, Wikipedia to store the hard facts, with tons of verification, plus a news editorial that focuses on free content but in a newspaper style, e.g. with professional (or good) writers. I don't know how the model could work, but IF we could come up with this, then newspapers that wall off their information would become less relevant automatically. That way we win long-term, as the paywalled stuff isn't really part of the open web anyway.
Wikipedia relies on the institutional structure of journalism, with newsroom independence, journalistic standards, educational system and probably a ton of other dependencies.
Journalism as an institution is under attack because the traditional source of funding - reader subscriptions to papers - no longer works.
Replicating the Wikipedia model would require replicating the structure of journalism for it to be reliable. Where would the funding for that come from? It's a tough situation.
> Well - we need something like wikipedia for news content.
The Wikipedia folks had their own Wikinews project which is essentially on hold today because maintenance in a wiki format is just too hard for that kind of uber-ephemeral content. Instead, major news with true long-term relevance just get Wikipedia articles, and the ephemera are ignored.
> we need something like wikipedia for news content
Interesting idea. It could be something that archives first and releases at a later date, when the news isn't as new.
> it was not really open content anyway
Practically no quality journalism is.
> we need something like wikipedia for news
Wikipedia editors aren’t flying into war zones.
Well, and it would be considered "original research" anyway which some admin would revert.
Original reporting is allowed and encouraged by the Wikimedia Foundation sister org Wikinews, which may be cited by Wikipedia.
https://en.wikinews.org/wiki/Wikinews:Original_reporting
Wikinews is on hold nowadays. Original research that is of real long-term relevance can go onto Wikijournal, which does peer review.
Statistically, at least a few of them live in war zones. And I'm sure some of them would fly in to collect data if you paid them for it.
> at least a few of them live in war zones
Which is a valuable perspective. But it's not a substitute for a seasoned war journalist who can draw on global experience. (And relating that perspective to a particular home market.)
> I'm sure some of them would fly in to collect data if you paid them for it
Sure. That isn't "a news editorial that focuses on free content but in a newspaper-style, e. g. with professional (or good) writers."
One part of the population imagines journalists as writers. They're fine on free, ad-supported content. The other part understands that investigation is not only resource intensive, but also requires rare talent and courage. That part generally pays for its news.
Between the two, a Wikipedia-style journalistic resource is not entertaining enough for the former and not informative enough for the latter. (Importantly, compiling an encyclopedia is principally the work of research and writing. You can be a fine Wikipedia, scientific journal, or newspaper editor without leaving your room.)
Those roles seem to be diverging:
- crowdsourced data, eg, photos of airplane crashes
- people who live in an area start vlogs
- independent correspondents travel there to interview, eg Ukraine or Israel
We see that our best war reporting comes from analyst groups who ingest that data from the “firehose” of social media. Sometimes at a few levels, eg, in Ukraine the best coverage is people who compare the work of multiple groups mapping social media reports of combat. You have on top of that punditry about what various movements mean for the war.
So we don’t have “journalist”:
- we have raw data (eg, photos)
- we have first hand accounts, self-reported
- we have interviewers (of a few kinds)
- we have analysts who compile the above into meaningful intelligence
- we have anchors and pundits who report on the above to tell us narratives
The fundamental change is that what used to be several roles within a news agency is now a set of independent contractors online. But that was always the case in secret — eg, many interviewers were contracted talent. We’re just seeing the pieces explicitly and without centralized editorial control.
So I tend not to catastrophize as much, because this to me is what the internet always does:
- route information flows around censorship
- disintermediate consumers from producers when the middle layer provides a net negative
As always in business, evolve or die. And traditional media has the same problem you outline:
- not entertaining enough for the celebrity gossip crowd
- too slow and compromised by institutional biases for the analyst crowd, eg, compare WillyOAM coverage of Ukraine to NYT coverage
https://www.youtube.com/@willyOAM
> a news editorial that focuses on free content but in a newspaper-style
Isn't that what state funded news outlets are?
Framing this as some anti-AI thing is wild. The simpler, more obvious, and more evidenced reason for this is that these sites want to make money with ads and paywalls that an archived copy tends to omit by design. Scapegoating AI lets them pretend that they're not the greedy bad guys here — just like how the agricultural sector is hell-bent on scapegoating AI (and lawns, and golf courses, and long showers, and free water at restaurants) for excess water consumption when even the worst-offending datacenters consume infinitesimally-tiny fractions of the water farms in their areas consume.
Yeah I assume what the news publishers actually care about is the thing where, when someone posts a paywalled news article on hacker news, one of the first comments is invariably a link to an archive site that bypasses the paywall so people can read it without paying for it.
> just like how the agricultural sector is hell-bent on scapegoating AI (and lawns, and golf courses, and long showers, and free water at restaurants) for excess water consumption when even the worst-offending datacenters consume infinitesimally-tiny fractions of the water farms in their areas consume.
When I learned about how much water agriculture and industry uses in the state of California where I live, I basically entirely stopped caring about household water conservation in my daily life (I might not go this far if I had a yard or garden that I watered, but I don't where I currently live). If water is so scarce in an urban area that an individual human taking a long shower or running the dishwasher a lot is at all meaningful, then either the municipal water supply has been badly mismanaged, or that area is too dry to support human settlement; and in either case it would be wise to live somewhere else.
This is a natural response to AI companies plundering the web to enrich themselves and provide no benefit to the sites being scraped.
Seems more like an easy excuse to shut down a means for people to bypass their paywalls. It would be trivial for AI companies to continue getting this data without using the Internet Archive.
I imagine that's a consideration, but there's plenty of pushback against AI companies scraping outside of this.
I’m coming at this from a founder/product angle, not a technical one, so excuse the naive framing.
What worries me isn’t scraping itself, but the second-order effects. If large parts of the web become intentionally unarchivable, we’re slowly losing a shared memory layer. Short-term protection makes sense, but long-term it feels like knowledge erosion.
Genuinely curious how people here think about preserving public knowledge without turning everything into open season for mass scraping.
This partially feels like an intentional pendulum swing from Twitter/Facebook cancel culture and other forms of policing.
I'm thinking in particular about the rise of platforms like Discord where being opaque to search/archiving is seen as a feature. Being gatekept and ephemeral makes people more comfortable sharing things that might get a takedown notice on other platforms, and it's hard for people who don't like you in the future to try to find jokes/quotes they don't like to damage your future reputation.
Clearly very different than news articles going offline, but I do think there's been a vibe shift around the internet. People feel overly surveilled in daily life, and take respite in places that make surveillance harder.
Yup. I recently built something that needs to do low-volume scraping. About 40% success rate - the rest hits bot detection, even on the first try.
Did you have rate limits built in? Ultimately scraping tools will need to mimic humans. Ironic.
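For low-volume work, "rate limits built in" can be as simple as honoring robots.txt and sleeping between requests. A sketch (illustrative only; the user agent and delay are arbitrary, and none of this gets past real bot detection, it just keeps the crawler from being abusive):

    # Polite low-volume fetcher: check robots.txt with the standard library
    # parser and pause between requests.
    import time
    import urllib.robotparser

    import requests

    USER_AGENT = "example-low-volume-bot/0.1"   # hypothetical UA string
    DELAY_SECONDS = 10                          # arbitrary pause between requests

    def fetch_politely(urls, robots_url):
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(robots_url)
        rp.read()                               # fetch and parse robots.txt
        for url in urls:
            if not rp.can_fetch(USER_AGENT, url):
                continue                        # robots.txt says no; skip it
            yield requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
            time.sleep(DELAY_SECONDS)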
I wonder if bots/ai will need to build their own specialized internet for faster sharing of data, with human centered interfaces to human spaces.
IPFS and IPNS already exist.
As someone who has been dealing with SOC 2, HIPAA, ISO 9001, etc., for years, I have always maintained copies of the third-party agreements for all of our downstream providers for compliance purposes. This documentation is collected at the time of certification, and our policies always include a provision for its retrieval on schedule. The problem is when you certified that their policy said X and that they were in compliance, then they quietly change that without sending proper notification downstream to us, and captain lawsuit comes by; we have to be able to prove that they did claim they were in compliance at the time we certified. We don't want to rely on their ability to produce that documentation. We can't prove that it wasn't tampered with, or that there is a chain of custody for their documentation and policies. If a vendor wouldn't provide that information, then I didn't use them. Welcome to the world of highly regulated industries.
What does this have to do with news sites blocking AI scrapers and the Internet Archive?
Are you a bot?
editorialised. Original title (submitted previously a few times correctly by others):
News publishers limit Internet Archive access due to AI scraping concerns
Explain it to me like I’m 5: why is AI scraping the Wayback Machine bad?
The issue of digital decay and publishers blocking archiving efforts is indeed concerning. It's especially striking given that news publishers, perhaps more than any other entity, have profoundly benefited from the vast accumulation of human language and cultural heritage throughout history. Their very existence and influence are built upon this foundation. To then, in an age where information preservation is more critical than ever (and their content is frequently used for AI training), actively resist archiving or demand compensation for their contributions to the collective digital record feels disingenuous, if not outright shameless. This stance ultimately harms the public good and undermines the long-term accessibility of our shared knowledge and historical narrative.
The death of trust on the cloud.
<richevans>How does it feel to live long enough to see all your favorite sites go down in flames?</richevans>
The internet isn't so simple anymore. I think it's important to separate commercial websites from non-commercial ones. Commercial sites shouldn't be expected to be archivable to begin with, unless it's part of their business model. A lot of sites (like Reddit) started off as ad-supported sites, but now they're commercial (not just post-IPO, but they accept payments and sell things to/from consumers). Even for ad-supported sites, there is a difference between ad-supported non-profits and sites that exist to generate revenue from ads. As in, the primary purpose of the site is to generate ad revenue; the content is just a means to that end.
I've said it before, and I'll say it again: the main issue is not design patterns, but the lack of acceptable payment systems. The EU, with its dismantling of Visa and Mastercard, now has the perfect opportunity to solve this, but I doubt it will. They'll probably just create a European WeChat.
I mean, why wouldn’t they? All their IP was scraped for AI training at their own hosting cost. It further pulls away from their own business models as people ask the AI models the questions instead of reading primary sources. Plus, it doesn’t seem likely they’ll ever be compensated for that loss, given the economy is all in on AI. At least search engines would link back.
Those countermeasures don't really have an effect in terms of scraping. Anyone skilled can overcome any protection within a week or two. By officially blocking IA, they ensure IA can't archive those websites in a legal way, while all major AI companies use copyrighted content without permission.
For sure. There are many billions of dollars and brilliant engineers propping up AI, so they will win any cat-and-mouse game of blocking. It would be ideal if sites gave their data to IA and IA protected it from exactly what you say. But as someone who intentionally uses AI tools almost daily (mainly OpenEvidence), IMO blame the abuser, not the victim, for it having come to this.
I'm not blaming the victim, but don't play the 'look what you made me do' game. Making content accessible to anyone (even behind a paywall) is a risk they need to take nevertheless. It's impossible to know upfront whether the content is used for consumption or to create derived products (e.g. writing an article in NYT style, etc.). If this were a newspaper, this would be equivalent to scanning the paper and then training AI on it. You can't prevent scanning, as the process is based on exactly the same phenomenon that makes your eyes see, in other words information being sent and received. The game was lost before it even started.
That is a good question. However, copyright exists (for a limited time) to allow for them to be compensated. AI doesn't change that. It feels like blocking AI-use is a ploy to extract additional revenue. If their content is regurgitated within copyright terms, yes, they should be compensated.
The problem is that producing a mix of personalized content that doesn't appear (at least on its face) to violate copyright still completely destroys their business model. So either copyright law needs to be updated or their business model does.
Either way I'm fairly certain that blocking AI agent access isn't a viable long term solution.
> Either way I'm fairly certain that blocking AI agent access isn't a viable long term solution.
Great point. If my personal AI assistant cannot find your product/website/content, it effectively may no longer exist! For me. Ain't nobody got the time to go searching that stuff up and sifting through the AI slop. The pendulum may even swing the other way and the publishers may need to start paying me (or whoever my gatekeeper is) for access to my space...
Let’s be honest, one of the most-common uses of these archive sites has been paywall circumvention. An academics-only archive might make sense, or one that is mutually-owned and charges a fee for lookup. But a public archive for content that costs money to make obviously doesn’t work.
If that’s the real motive, why don’t they allow scraping after some period, when that news is not as relevant? For example, after 6 months.
> why don’t they allow scraping after some period, when that news is not as relevant? For example, after 6 months
I believe many publications used to do this. The novel threat is AI training. It doesn't make sense to make your back catalog de facto public for free like that. There used to be an element of goodwill in permitting your content to be archived. But if the main uses are circumventing compensation and circumventing licensing requirements, that goodwill isn't worth much.
Enabling research is a business model for many publications. Libraries pay money for access to the publishers’ historical archives. They don’t want to cannibalize any more revenue streams; they’re already barely still operating as it is.
i see, i didn’t consider this angle. thanks for pointing that out.
Bitcoin fixes this.
The end of traditional news sites is coming, at least for the newspaper websites. Future MCP-like systems will generate on-the-fly news sites in your desired style and content. Journalists will have some kind of pay-per-view model provided by these GPT-like platforms, which of course will take too big of a chunk. I can't imagine a WSJ being able to survive.
This is awful, they need to at the very least allow private archivals.
Maybe the Internet Archive would be OK with keeping some things private until some amount of time passes; or they could require an account to access them.
We need something like SETI@home/Folding@home but for crawling and archiving the web or maybe something as simple as a browser extension that can (with permission) archive pages you view.
This exists, although not in the traditional BOINC space: it's ArchiveTeam^1. I run two of their warrior^2 instances in my home k3s cluster via the Docker images. One of them is set to the "Team's choice" project, where it spends most of its time downloading Telegram chats. However, when they need the firepower for sites at imminent risk of closure, it will switch itself to those. The other one is set to their URL shortener project, "Terror of Tiny Town"^3.
Their big requirement is you need to not be doing any DNS filtering or blocking of access to what it wants, so I've got the pod DNS pointed to the unfiltered quad9 endpoint and rules in my router to allow the machine it's running on to bypass my PiHole enforcement+outside DNS blocks.
^1 https://wiki.archiveteam.org/
^2 https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
^3 https://wiki.archiveteam.org/index.php/URLTeam
In the US at least, there is no expectation of privacy in public. Why should these websites that are public-facing get an exemption from that? Serving up content to the public should imply archivability.
Sometimes it feels like AI-use concerns are a guise to diminish the public record, while on the other hand services like Ring or Flock are archiving the public forever.
Ring and Flock are not a standard we should be striving towards. Their massive databases tracking citizens need to go.
Your TV probably does that, and you definitely gave it permission when you clicked "accept" on the terms.
good thing I don't have a TV!
I run an ArchiveBox instance locally. Recommended! https://archivebox.io/
This is a good idea. Not sure what ToS it would violate. But a good idea.
But wait, I thought AI was so great for all industries? Publishers can have AI-generated articles, instantly fix grammar problems, translate them seamlessly to every language, and even use AI-generated images where appropriate to enrich the article. It was going to make us all so productive? What happened? Why would you want to _block_ AI from ingesting the material?
I fear that these news publishers will come after RSS next, as I see hundreds of AI companies misusing the terms of the news publishers' RSS feeds by mass-scraping them for profit.
They do not care, and we will all be worse off for it if these AI companies keep bombarding news publishers' RSS feeds.
It is a shame that the open web as we know it is closing down because of these AI companies.
Dear news publications - if you aren't willing to accept an independent record of what you published, I can't accept your news. It's a critical piece of the framework that keeps you honest. I don't care if you allow AI scraping either way, but you have to facilitate archival of your content - independently, not under your own control.
How is the publisher supposed to fund their operations, let alone make a profit? How about a one-year lock on the archive pages? There are many ways of keeping that record without taking away views and undermining the business model.
The same way they did back in the day, where libraries still existed that allowed people to read newspapers for free.
I kind of doubt that the Internet Archive is really taking very much business away from them. It's a terrible UI for reading the daily news.
The LWN model feels practical here:
> We ask that you grant LWN exclusive rights to publish your work during the LWN subscription period - currently up to two weeks after publication.
News is valuable when it is timely, and subscribers pay for immediate access.
https://lwn.net/op/AuthorGuide.lwn
> How is the publisher supposed to fund their operations, let alone make a profit?
There used to be plenty of newspapers sponsored by wealthy industrialists; the latter would cover the former's gaps between costs and sales, and the former would regularly push the latter's political agenda.
"Objective journalism" is really quite a late invention, IIRC around the time of WW2.
Objectivity was already a principle in the 1890s.
https://en.wikipedia.org/wiki/Journalistic_objectivity
"To give the news impartially, without fear or favor." — Adolph Ochs, 1858-1935
Objectivity is the default state of honest storytelling. If I ask "what happened?" and somebody only tells the parts that suit an agenda, they have not informed me. The partisan press exists because someone has a motive to deviate from the natural expectation of fair storytelling and story recounting.
> Objectivity is the default state of honest storytelling. If I ask "what happened?" and somebody only tells the parts that suit an agenda, they have not informed me.
Already at the level of what stories are covered you have made choices about what's important or not.
Your newspaper not covering your neighbor's lawsuit against the city over some issue, because they find it to be "not important", is already a viewpoint-based choice.
A newspaper presenting both sides on an issue (already simplifying on the "there are two sides to an issue" thing) is one thing. Do you also have to present expert commentary that says that one side is actually just entirely in bad faith? Do you write a story and then conclude "actually this doesn't matter" when that is the case?
There are plenty of descriptions that some people would describe as fair story telling and others would describe as a hit piece. Probably for any article on any controversial topic written in good faith you are likely able to find some people who would claim it's not.
I think it's important to acknowledge that even good faith journalism is filled with subjectivity. That doesn't mean one gives up, you just have to take into account the position of the people presenting information and roll with that.
You make it sound like bias is completely relative and undecidable. But there is a clear line journalists can cross - if they're intentionally misleading their reader, that's bias. It's qualitatively different from neglecting to cover a story or not finding a suitable expert or whatever. It's intentional deception because they want the readers to have wrong knowledge. And they do it all the time.
If an independent press is critical to open societies, perhaps some sort of citizen directed funding is needed to maintain independence from both capital and government?
It's a great question, but they didn't seem to have a problem with this before AI, so I have to assume that the presence of a freely available copy wasn't really impacting their revenue.
Maybe it would be better if these news operations had to find better ways to sustain themselves than the current paradigms. Also, the Internet Archive is not the only archive, and there will be more. This isn't something they can really stop.
Reconfigure human society so that services like news don't need to make a profit and still remain credible.
It's pretty easy to hold publications accountable without forcing them to publish content - just make them publish hashes of their content.
They won't, of course, because they don't want accountability.
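Mechanically it would be trivial, which is rather the point. A sketch of the idea (the normalization step and names are hypothetical, not any publisher's actual scheme): the publisher posts a digest alongside each article, and anyone holding a saved copy can later check it against the published value.

    # Hypothetical content-hash scheme for the "publish hashes" idea above.
    import hashlib
    import unicodedata

    def article_digest(text: str) -> str:
        # Normalize so trivial whitespace/encoding differences don't change the hash.
        normalized = unicodedata.normalize("NFC", " ".join(text.split()))
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def matches_published_hash(saved_copy: str, published_digest: str) -> bool:
        return article_digest(saved_copy) == published_digest

Any silent edit to the article would change the digest, so stealth edits become detectable without the publisher having to host the old text at all.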
I don't know even one news source I "trust." I expect them to push an agenda.
I also don't think they care even a bit. They're pushing agendas, and not hiding it; rather, flaunting it.
People need to abandon the notion of "trust" being a single axis from trustworthy to untrustworthy.
Every source has its biases; you should try to be aware of them and handle information accordingly.
I prefer when the bias is "we don't run xyz story" vs "we run a slanted version of xyz story."
They're both a bias, of course, but one is more palatable.
"everyone is stupid but me" is a bit too prevalent in the tech industry
You are doing it to the parent comment right now.
Why not interpret it to mean something like “no news organization has biases that are fully aligned with my best interests”
Not everyone surely. But some people
Personally, I think people harp on news bias too much.
I think the real problem is that they often don't put events in context, which leads people to misunderstand them. They report the what, not the why, but most events don't just happen one day; they are shaped by years or even decades of historical context. If you just understand the literal event without the background context, I don't think you are really informed.
I consider almost all news to be entertainment unless I need its perspective to make a decision (which is almost never). It is a lot safer to remain uninformed on a subject as it settles than to constantly attempt to be informed.
Information bias is unfortunately one of the sicknesses of our age, and it is one of the cultural ills that flows from tech outward. Information is only pertinent in its capacity to inform action, otherwise it is noise. To adapt a Beck-ism: You aren't gonna need it.
Is it more likely that no one is speaking the truth, or, more likely, to you, the truth looks like an agenda?
What I'm talking about is that the news tries to tell you what to think. You can read headlines on Google News about the same story, and see the bias of the publication in the headline pretty often.
Instead of reporting just the facts, they include opinions, inflammatory language, etc.
Reuters writes in a relatively neutral tone, as an example. Fox News doesn't, and CNN doesn't, as examples of the opposite.
If you don't notice, I doubt you're reading the news. It's part of the offering. Fox does it on purpose, not accidentally.
What is wrong with reading other people's opinions?
Newspapers in my country were always blogs before the internet existed. It's why they are still around and doing quite well: they don't just bring news.
It taints the full story pretty often--they omit details.
Everyone has an agenda. The question is whether they are also reporting facts.
This is the particular thing I care about. If I can count on their facts, I can mostly subtract their agenda.
See: https://app.adfontesmedia.com/chart/interactive
The problem comes in when I can't count on the "facts" being reported.
The records already exist. Check your local library. The entire point of this is that the scrapers undermine the business model.
If anything, we should simply be asking archive.org to limit their access to humans.
Libraries don't keep all periodicals and don't keep them forever. And microfilm is really lossy, unreliable, and difficult to search.
I'm not sure what microfilm has got to do with this. Plenty of national libraries have extensive digital collections of various artifacts - books and even websites. Check out the National Library of Australia as an example: https://www.library.gov.au/discover/what-we-collect/archived...
To hell with contempt of business model. Business models aren't sacred. Besides which, with business models owners capture the newsroom anyway.
As a news publisher (RedBankGreen.com) I’ll tell you that pretty much nobody is in it for the money anymore, at least at the local level.
It’s passion and love of the community, despite the many struggles and drawbacks.
AI bots scrape our content and that drastically reduces the number of people who make it to our site.
That impacts our ability to bring on subscribers and especially advertisers - Google and Meta own local advertising and AI kills the relatively tiny audience we have.
I dread the day that it happens in realtime - hear sirens? Ask AI who already scraped us.
Every business (even news) needs a business model.
Yes, but not every business works, and not every business model works, and not every business model works with every business, etc etc.
It's on the business to find a model that works within the environment of the free market and within the social framework.
If a business model only works by limiting competition, it's a bad model.
If it only works by limiting the rights of consumers, it's a bad model.
If it only works by blocking a legal activity (website crawling and scraping of publicly-facing data, for instance), it's a bad model.
And if their business can't operate otherwise, it's a bad business. No business has an intrinsic right to exist.
If a business model only works by copyright washing, is it a bad model?
> No business has an intrinsic right to exist.
Do AI businesses have an intrinsic right to exist?
I think the question of whether a business is allowed to offer something free only to humans (presumably with advertising) doesn't have a clear best answer; politicians can decide.
News has a business model: do actual journalism. I don't see much reason to fund the people who are giving me the same story as everyone else who received the same press release, with no additional details: I might as well subscribe to the press releases.
And people wonder why we’re all locked in a race to the bottom.
If they don't have a business model, we won't have newspapers left to complain that we don't have archives for.
First thing that came to my mind went along the same reasoning.
The second thing that came to mind was paywall evasion. Any time a news article behind a paywall gets posted here, someone in the comments has the archive link ready to go, because of course they do.
The incentives for online news are really wacky just to begin with. A coin at the convenience store for the whole dang paper used to be the simplest thing in the world.
I suppose that could be solved with a delay. Limit internet archive for articles that are less than a week old.
> Limit internet archive for articles that are less than a week old.
I mean this as a side note rather than a counterargument (because people learn to take screenshots, and because what can you do about particularly bad faith news orgs?): Immediate archival can capture silent changes (and misleadingly announced changes). A headline might change to better fit the article body. An editor's note might admit a mistakenly attributed quote.
Or a news org might pull a Fox News [1][2] by rewriting both the headline and article body to cover up a mistake that unravels the original article's reason for existing: The original headline was "SNAP beneficiaries threaten to ransack stores over government shutdown". The headline was changed to "AI videos of SNAP beneficiaries complaining about cuts go viral". An editor's note was added [3][4]: "This article previously reported on some videos that appear to have been generated by AI without noting that. This has been corrected." I think Fox News deleted the article. (A sketch of how such a change shows up in a snapshot diff follows the links below.)
[1] https://xcancel.com/KFILE/status/1984673901872558291
[2] https://archive.ph/NL6oR
[3] https://xcancel.com/JusDayDa/status/1984693256417083798
[4] https://archive.ph/XEI9E
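Here's a rough sketch of that, assuming you already have two locally saved copies of the article (the file names below are placeholders); a plain line diff is enough to surface a rewritten headline or body:

    import difflib
    from pathlib import Path

    # Hypothetical local snapshots of the same article captured at two different times.
    before = Path("snapshot_2025-11-01.html").read_text(encoding="utf-8").splitlines()
    after = Path("snapshot_2025-11-03.html").read_text(encoding="utf-8").splitlines()

    # A unified diff shows removed/added lines, e.g. a swapped <title> or <h1> headline.
    for line in difflib.unified_diff(before, after, fromfile="before", tofile="after", lineterm=""):
        print(line)

As long as a pre-edit snapshot exists somewhere, the edit is provable.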
That would diminish archival accuracy: an outlet could amend the text without third-party proof.
I don't see the connection to adding the delay. I think the suggestion was to have a snapshot at time of publication but wait a week to make it public.
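A minimal sketch of that idea, assuming snapshots are stored with their capture time and only served once the embargo expires (this is purely illustrative, not how the Internet Archive actually works):

    from datetime import datetime, timedelta, timezone

    EMBARGO = timedelta(days=7)

    # Hypothetical in-memory store: URL -> (capture time, snapshot body)
    snapshots: dict[str, tuple[datetime, str]] = {}

    def capture(url: str, body: str) -> None:
        # Snapshot at publication time, even though it isn't public yet.
        snapshots[url] = (datetime.now(timezone.utc), body)

    def serve(url: str) -> str | None:
        # Return the snapshot only once it is at least a week old.
        record = snapshots.get(url)
        if record is None:
            return None
        captured_at, body = record
        if datetime.now(timezone.utc) - captured_at < EMBARGO:
            return None  # still embargoed
        return body

That preserves the evidentiary value (the capture timestamp predates any later edit) while giving the publisher its window of exclusivity.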
I actually didn't initially think of the parent's objection nor your rebuttal. This is why I like reading HN comments.
If the content was fully paywalled, it wouldn't be possible to archive it (unless the archiver paid for a subscription).
The reason the archiving works is because they expose the content publicly so search engines can index it.
> Any time a news article behind a paywall gets posted here, someone in the comments has the archive link ready to go, because of course they do.
I have no idea why this behavior is even acceptable.
I have seen zero evidence that independent archives “keep news media honest”. In fact, I have on several occasions noticed news media directly contradicting their own stance from just a few years prior, with no mention of the previously published account at all. This is true even for highly respected newspapers of record.
I can indeed find clear records of that in the archives. But what do I do with them? How do I use that evidence to hold news media to account? This is meaningless moral posturing.
Contact the journalist of the new article with the contradicting article? Letters to the editor? Submit an opinion article?
I've contacted multiple journalists over the years about errors in their articles and I've generally found them responsive and thankful.
Sometimes it's not even their fault. One time a journalist told me the incorrect information was unknowingly added by an editor.
I get that it's popular on HN and the internet to bash news media, and that there are a lot of legitimate issues with the media, but my personal experience is that journalists do actually want to do a good job and respond accordingly when you engage them (in a non-antagonistic manner).
The incidents I’m referring to aren’t “errors” though.
If a major article claims that certain groups don’t exist, while the same newspaper published a detailed report about those exact groups and how dangerous they are just two years earlier, it’s not because the journalist wasn’t able to do a 10-second Google search where their own paper’s article would have been among the top results.
> But what do I do with them? How do I use that evidence to hold news media to account?
Contact their rivals with the story, have them write a hit piece. "Other newspaper is telling porkies: here's the proof!" is an excellent story: not one I'd expect a journalist to have time to discover, but certainly one I'd expect them to be able to follow up on, once they've received a tip.
That’s not how publishing works. News outlets (especially those of roughly similar political leaning) very rarely call out each other’s misconduct. In fact, they often seem to operate as a quasi-conglomerate rather than competitors.
That’s good. I don’t like archival sites. Let things disappear.
Yeah... I've noticed data hoarding largely resembles yet another form of death denialism.
If most of the Internet is AI-generated slop (as is already the case), is there really any value in expensing so much bandwidth and storage to preserve it? And on the flip side, I'd imagine the value of a pre-2022 (ChatGPT launch) Internet snapshot on physical media will probably increase astronomically.
The sites that are most valuable to preserve are likely the same ones that are most likely to put up barriers to archiving
Perhaps the AI slop isn't worth preserving, but the unarchivability of news and other useful content has implications for future public discourse, historians, legal matters and who knows what else.
In the past libraries used to preserve copies of various newspapers, including on microfiche, so it was not quite feasible to make history vanish. With print no longer out there, the modern historical record becomes spotty if websites cannot be archived.
Perhaps there needs to be a fair-use exception or even a (god forbid!) legal requirement to allow archivability? If a website is open to the public, shouldn't it be archivable?
Erm, there is still a newspaper stand in the supermarket I go to (Walmart, for the Americans). Not sure if the British Library keeps a copy of the print news I see, but they should!
This is a good thing, IMO.
I am sad about link rot and old content disappearing, but it's better than everything being saved for all time, to be used against folks in the future.
> I am sad about link rot and old content disappearing, but it's better than everything being saved for all time, to be used against folks in the future.
I don't understand this line of thinking. I see it a lot on HN these days, and every time I do I think to myself, "Don't you realize that if things kept on being erased, we'd learn nothing from anything, ever?"
I've started archiving every site I have bookmarked in case of such an eventuality when they go down. The majority of websites don't have anything to be used against the "folks" who made them. (I don't think there's anything particularly scandalous about caring for doves or building model planes)
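For anyone wanting to do the same, a minimal version of that kind of personal archive is just a loop over your bookmark URLs that saves each page to disk (the URLs and directory below are placeholders):

    import urllib.request
    from pathlib import Path
    from urllib.parse import quote

    # Placeholder list; in practice this would come from an exported bookmarks file.
    bookmarks = [
        "https://example.com/dove-care-notes",
        "https://example.com/model-plane-builds",
    ]

    archive_dir = Path("bookmark-archive")
    archive_dir.mkdir(exist_ok=True)

    for url in bookmarks:
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                body = resp.read()
        except OSError as err:
            print(f"skipping {url}: {err}")
            continue
        # Percent-encode the URL so it becomes a safe single-component file name.
        (archive_dir / (quote(url, safe="") + ".html")).write_bytes(body)

It won't capture scripts, images, or paywalled content, but for hobby sites a raw HTML copy is usually enough.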
Consider the impact, though, on our ability to learn and benefit from history. If the records of people’s activities cannot be preserved, are we doomed to live in ignorance?
I don't think so. Most of my original creations were before the archiving started, and those things are lost. But they weren't the kind of history you learn and benefit from--nor is most of the internet.
The truly important stuff exists in many forms, not just online/digital. Or will be archived with increased effort, because it's worth it.
Like it or not, the Internet is today’s store of record for a significant proportion—if not the majority—of the world’s activities.
If you don’t want your bad behavior preserved for the historical record, perhaps a better answer is to not engage in bad behavior instead of relying on some sort of historical eraser.
Behavior that isn't bad becomes bad retrospectively after a regime change.
That's a risk we all take. Not that long ago, homophobia was the norm. Being on the wrong side of history can be uncomfortable, but people do forgive when given the right context.
Think about the stuff archeologists get to work with.
What's that famous quote - those who do not learn from history ...
BUT, it's hard to learn from history if there's no history to learn...
Kind of the "think of the children" argument: most things that are worth archiving have nothing to do with content that can be used against someone in the future. But the raw volume is making it impossible to filter the worthwhile stuff out from the slop (all forms of it, not just AI slop), even with automation (and not AI automation either; we've been doing NLP with regular old ML for decades now).
Man I cannot disagree more. This is a terrible thing.