My Hacker News items table in ClickHouse has 47,428,860 items, and it's 5.82 GB compressed and 18.18 GB uncompressed. What makes Parquet compression worse here, when both formats are columnar?
Sorting, compression algorithm and level, and data types can all have an impact. I noted elsewhere that a Boolean is getting represented as an integer. That's one bit vs 1-4 bytes.
There is also flexibility in what you define as the dataset. Skinnier but more focused tables could save space vs a wide table that covers everything, since a wide table will probably break up compressible runs of data.
Plus, Parquet isn't the least wasteful format; native DuckDB, for instance, compacts better. That's not just down to the compression algorithm, which, as you say, has three main options for Parquet.
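For anyone who wants to poke at this, here is a minimal sketch of those three knobs (types, sort order, codec and level) with pyarrow. The column names ("dead", "by", "time") and the filename are my assumptions based on the usual HN item schema, not this dataset's actual layout:

```python
# Re-encode a monthly Parquet file with narrower types, a sort order that
# creates long runs, and a stronger codec. Column names are assumptions
# based on the usual HN item schema.
import pyarrow as pa
import pyarrow.parquet as pq

table = pq.read_table("2025-01.parquet")

# 1. Narrow the types: a Boolean flag stored as an integer costs 1-4 bytes
#    per value instead of roughly one bit after encoding.
idx = table.column_names.index("dead")
table = table.set_column(idx, "dead", table["dead"].cast(pa.bool_()))

# 2. Sort so similar values sit next to each other, lengthening the runs
#    that run-length and dictionary encoding feed on.
table = table.sort_by([("by", "ascending"), ("time", "ascending")])

# 3. Swap the default snappy codec for zstd at a higher level.
pq.write_table(table, "2025-01.zstd.parquet",
               compression="zstd", compression_level=9)
```

In my experience the sort order often moves the needle more than the codec, precisely because of those compressible runs.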
I recall that this became a big problem for the Homebrew project in terms of load on the repo, to the extent that GitHub asked them not to recommend/default-enable shallow clones for their users: https://github.com/Homebrew/brew/issues/15497#issuecomment-1...
This is likely to be lower traffic, and the history should (?) scale only linearly with new data, so likely not the worst thing. But it's something to be cognizant of when using SCM software in unexpected ways!
"The dataset is organized as one Parquet file per calendar month, plus 5-minute live files for today's activity. Every 5 minutes, new items are fetched from the source and committed directly as a single Parquet block. At midnight UTC, the entire current month is refetched from the source as a single authoritative Parquet file, and today's individual 5-minute blocks are removed from the today/ directory."
So it's not really one big file getting replaced all the time. Though a less extreme variation of that is happening day to day.
Was thinking the same thing. Probably once a day would be more than enough.
If you really want minute-by-minute freshness, a delta file from the previous day should be more than enough.
That is just the archive part. If you would just finish reading the paragraph, you would know that updates since 2026-03-16 23:55 UTC "are fetched every 5 minutes and committed directly as individual Parquet files through an automated live pipeline, so the dataset stays current with the site itself."
So to get all the data, you need to grab the archive and all the 5-minute update files.
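A sketch of that stitching with DuckDB: the dataset id (open-index/hacker-news) and the today/ directory come from this thread, but the glob for the monthly files is my assumption, since the exact layout isn't shown here:

```python
# Read the monthly archive and the 5-minute live blocks as one table.
# The glob for the monthly files is an assumption; only the dataset id
# (open-index/hacker-news) and the today/ directory are given in the thread.
import duckdb

con = duckdb.connect()
items = con.sql("""
    SELECT *
    FROM read_parquet('hf://datasets/open-index/hacker-news/*.parquet')
    UNION ALL
    SELECT *
    FROM read_parquet('hf://datasets/open-index/hacker-news/today/*.parquet')
""")
print(items.aggregate("count(*) AS n").fetchone())
```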
Your family is starving and your dog died of radiation poisoning from the fallout but at least your local LLM can browse this and recommend a good software stack for your automated booby traps.
By posting comments on this site, you are relinquishing your right to that content. It belongs to YC and it is theirs to enforce, not yours. https://www.ycombinator.com/legal/
Create a new account every so often, don’t leave any identifying information, occasionally switch up the way you spell words (British/US English), and alternate using different slang words and shorthand.
Funnily enough, if everyone did this (at least made a new account often), it would prove more destructive to what HN (purposefully) wants to do than deleting the occasional account's data.
And do what I do - paste everything into ChatGPT and have it rephrase it. Not because I need help writing, but because I’d rather not have my writing style used against me.
Perhaps you could use a local translation model to rephrase (such as TranslateGemma). If translating English to English doesn't achieve this effect, then use an intermediate language, one the model is good at, so as not to mangle the meaning too much.
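A minimal round-trip sketch, swapping in small local Marian models (Helsinki-NLP opus-mt) since I can't vouch for TranslateGemma's exact usage; the model pair is an assumption:

```python
# Round-trip "style laundering": English -> German -> English with two small
# local translation models. The model choice is an assumption; any decent
# pair of local MT models should do.
from transformers import pipeline

to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def rephrase(text: str) -> str:
    german = to_de(text)[0]["translation_text"]
    return to_en(german)[0]["translation_text"]

print(rephrase("Funnily enough, everyone doing this would be destructive."))
```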
> At midnight UTC, the entire current month is refetched from the source as a single authoritative Parquet file, and today's individual 5-minute blocks are removed from the today/ directory.
This is great. I've soured on this site over the past few years due to the heavy partisanship that wasn't as present in the early days (eternal September), but there are still quite a few people whose opinions remain thought-provoking and insightful. I'm going to use this corpus to make a local self-hosted version of HN with the ability to a) show inline article summaries and b) follow those folks.
As someone who made a project analysing Hacker News using ClickHouse, I really feel like this is a project made for me (especially the updated-every-5-minutes aspect, which could've helped my project back then too!)
Your project actually helps me out a ton with one of the new project ideas about Hacker News that I had put on the back burner.
I had thought of making a ping service where people can just write @username, and the service detects it and sends mail to that user if they have signed up (similar to a service run by someone from the HN community that mails you every time someone responds to your thread directly, but this time as a sort of ping).
[The idea came when I tried to ping someone to show them something relevant and thought: wait a minute, a ping that sends mail might be interesting. I tried to see if I could use Algolia or some other service to hook things up, but nothing made much sense back then, so the idea stayed in the back of my mind. This dataset sort of solves that by being updated every 5 minutes.]
Your 5-minute updates really make it possible. I will look at what I can do with them in a few days, but I am seeing some discrepancy in the 5-minute updates: the last one in the README seems to be 16 March. I would love to know whether it is really being updated every 5 minutes, because it feels phenomenal if true, and it's exciting to think of the new possibilities it unlocks.
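For what it's worth, the 5-minute blocks would make the core loop of that ping idea pretty small. A sketch, assuming the standard HN item fields ("id", "by", "text"); notify() and the subscriber list are hypothetical:

```python
# Scan a freshly committed 5-minute Parquet block for @username mentions and
# ping opted-in users. notify() and the subscribers set are hypothetical;
# the columns follow the standard HN item fields.
import re
import pyarrow.parquet as pq

MENTION = re.compile(r"@(\w+)")
subscribers = {"alice", "bob"}  # users who opted in to pings

def notify(user: str, item_id: int, author: str) -> None:
    print(f"mail {user}: mentioned by {author} in item {item_id}")

def scan_block(path: str) -> None:
    block = pq.read_table(path, columns=["id", "by", "text"])
    for row in block.to_pylist():
        for name in MENTION.findall(row["text"] or ""):
            if name in subscribers:
                notify(name, row["id"], row["by"])
```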
We have LLMs and links to the TOS; this is easily answerable by _anyone_ on the internet at this point.
Comments and posts are defined as user-generated content; you have no right to privacy over, or control of, that content in any capacity once you post it: https://www.ycombinator.com/legal/
YC in theory has the right to go after unauthorized 3rd parties scraping this data. But YC funds startups and is deeply invested in the AI space. Why on Earth would they do that?
Is it possible to only download a subset? E.g. the Show HNs or the HN whoishiring threads. Those are very useful for classroom data science, i.e. a very useful dataset for students to learn the basics of data cleaning and engineering.
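It should be; you can filter while reading, without downloading everything first. A sketch with DuckDB, where the path glob and the "type"/"title" columns are assumptions:

```python
# Pull only Show HN stories straight from the Hub. For the hiring threads,
# filter on the submitter instead, e.g. WHERE by = 'whoishiring'.
import duckdb

show_hn = duckdb.sql("""
    SELECT id, time, title, url, score
    FROM read_parquet('hf://datasets/open-index/hacker-news/*.parquet')
    WHERE type = 'story' AND title ILIKE 'Show HN:%'
""").df()
```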
> By uploading any User Content you hereby grant and will grant Y Combinator and its affiliated companies
The user content is supposed to be licensed only to Y Combinator and (bleah) its affiliated companies (which are many; all the startups they fund, for example).
I.e. this section says that additional rights to the content you post ALSO go to YC, not that YC guarantees it (and friends) will be the only ones holding those rights, or that it will enforce, on your behalf, who else gets rights to your publicly shared content.
There's a more intricate conversation to be had about GDPR and public data on forums in general, but that's wholly unrelated to what YC's legal page says, and still unlikely to end in an alarming result.
I think that's incorrect. Exclusivity would be something you grant to YC. These terms need to make sense to be valid. Claiming exclusive rights would mean they are forbidding YOU from licensing YOUR rights to anyone else.
Imagine Facebook claiming that by uploading images you are granting them exclusive usage rights to that image. It would mean you couldn't upload it to any other site with similar terms anymore.
Your submissions to, and comments you make on, the Hacker News site are not Personal Information and are not "HN Information" as defined in this Privacy Policy.
Other Users: certain actions you take may be visible to other users of the Services.
Eh, fuck that agreement. I'm kind of old school in that I believe if you put it on the internet without an auth-wall, people should be allowed to do whatever they want with it. The AI companies seem to agree.
Then again, I'm not the guy that is going to get sued...
Legal theory about public data is fun right up until someone with money decides their ToS mean something and files suit, because courts are usually a lot less impressed by "I could access it in my browser" once you've pulled millions of records with a scraper. Scrape if you want; just assume you're buying legal risk.
"I'm kind of old school in that I believe if you put grass on the ground without a fence, people should be allowed to do whatever they want with it. The noblemen with a thousand cows seem to agree."
And that, my friends, is how you kill the commons - by ignoring the social context surrounding its maintenance and insisting upon the most punitive ways of avoiding abuse.
Context is important, but isn't HN's social context, in particular, that the site is entirely public, easily crawled through its API (which apparently has next to no rate limits) and/or Algolia, and has been archived and mirrored in numerous places for years already?
Grass and property require upkeep. Radio waves and electromagnetic radiation do not.
I don't want your dog to piss on my lawn and kill my grass. But what harm does it cause me if you take a picture of my lawn? Or if I take a picture of your dog?
If I spend $100M making a Hollywood movie - pay employees, vendors, taxes - contribute to the economic growth of the country - and then that product gets stolen and given away completely for free without being able to see upside, that's a little bit different.
But my Hacker News comment? It's not money.
I think there are plausible ways to draw lines that protect genuine work, effort, and economics while allowing society and innovation to benefit from the commons.
They already refuse to comply with CPRA, instead electing to replace your username with a random 6(?) character string, prefixed with `_`, if I remember correctly.
I know, because I've been here since maybe 2015 or so, but this account was created in 2019.
So any PII you have mentioned in your comments is permanent on Hacker News.
I would appreciate it if they gave users the ability to remove all of their personal data, but in correspondence and in writing here on Hacker News itself, Dan has suggested that they value the posterity of conversations over the law.
To be incredibly pedantic, to the point of being irrelevant: technically the sign-up page 1) doesn't have a clickwrap "I agree" checkbox, and 2) doesn't link to the TOS anywhere.
That makes the implicit TOS agreement legally confusing depending on jurisdiction.
(Not that it really matters, but I find these technicalities amusing)
The bigger concern is how large the repository's git history is going to get.
See also: https://github.com/orgs/Homebrew/discussions/225
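If the history does get heavy, consumers can sidestep git entirely, since the Hub serves the current files over plain HTTP. A minimal sketch with huggingface_hub, using the dataset id from the links below:

```python
# Fetch only the current Parquet files; no git history involved.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="open-index/hacker-news",
    repo_type="dataset",
    allow_patterns=["*.parquet"],
)
print("data at", local_dir)
```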
> The archive currently spans from 2006-10 to 2026-03-16 23:55 UTC, with 47,358,772 items committed.
That's more than 5 minutes ago, by a day or two. No big deal, but it's a little bit depressing that this is still how we do things in 2026.
The archive data is here: https://huggingface.co/datasets/open-index/hacker-news/tree/...
The update files are here (I know it's called "today", but it actually includes all the update files, which span multiple days at this point): https://huggingface.co/datasets/open-index/hacker-news/tree/...
Probably uncalled for.
They are suggesting that the Hugging Face description should automatically update the date and item count when the data gets updated.
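Something like this as the last step of the publish job would do it. A sketch where the card text and repo id are placeholders of mine; upload_file is the stock huggingface_hub call:

```python
# Rewrite the dataset card's span/count line on every publish so the README
# never goes stale. The card template here is a made-up placeholder.
from datetime import datetime, timezone
from huggingface_hub import HfApi

def refresh_card(item_count: int) -> None:
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    card = (
        "# Hacker News archive\n\n"
        f"The archive currently spans from 2006-10 to {now}, "
        f"with {item_count:,} items committed.\n"
    )
    HfApi().upload_file(
        path_or_fileobj=card.encode(),
        path_in_repo="README.md",
        repo_id="open-index/hacker-news",
        repo_type="dataset",
        commit_message="chore: refresh dataset card stats",
    )
```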
Sample content from users on this page (https://news.ycombinator.com/leaders) and ask the LLM to rephrase it in their voice.
Wouldn't that lose deleted/moderated comments?
Copyright doesn't seem to matter unless you're an IP cartel or a mega cap.
https://www.ycombinator.com/legal/
Mods, enforce your license terms; you're playing fast and loose with the law (GDPR/CPRA).
If it's owned by you and only licensed to HN, shouldn't you be the one enforcing it?
> ... a nonexclusive
That said, there are "no scraping" and "commercial use restricted" carve-outs for the content on HN. Which honestly is bullshit.
I agree. It's the owners of the sites that have to follow rules, not us.
See "User Content Transmitted Through the Site" at https://www.ycombinator.com/legal/