▲ Show HN: Zeekstd – Rust Implementation of the ZSTD Seekable Format (github.com)
165 points by rorosen 1 day ago | 29 comments
rwmj 10 hours ago [-]
Seekable formats also allow random reads which lets you do trickery like booting qemu VMs from remotely hosted, compressed files (over HTTPS). We do this already for xz: https://libguestfs.org/nbdkit-xz-filter.1.html https://rwmj.wordpress.com/2018/11/23/nbdkit-xz-curl/

Has zstd actually standardized the seekable version? Last I checked (which was quite a while ago) it had not been declared a standard, so I was reluctant to write a filter for nbdkit, even though it's very much a requested feature.

simeonmiteff 11 hours ago [-]
This is very cool. Nice work! At my day job, I have been using a Go library[1] to build tools that require seekable zstd, but felt a bit uncomfortable with the lack of broader support for the format.

Why zeek, BTW? Is it a play on "zstd" and "seek"? My employer is also the custodian of the zeek project (https://zeek.org), so I was confused for a second.

[1] https://github.com/SaveTheRbtz/zstd-seekable-format-go

rorosen 10 hours ago [-]
Thanks! I was also surprised that there are very few tools to work with the seekable format. I could imagine that at least some people have a use-case for it.

Yes, the name is a combination of zstd and seek. Funnily enough, I wanted to name it just zeek at first, before I knew that it already exists, so I switched to zeekstd. You're not the first person to ask me if there is any relation to zeek, and I understand how that is misleading. In hindsight the name is a little unfortunate.

etyp 9 hours ago [-]
Zeek is well known in "security" spaces, but not as much in "developer" spaces. It did get me a bit excited to see Zeek here until I realized it was unrelated, though :)
stu2010 11 hours ago [-]
This is cool, I'd say that the most common tool in this space is bgzip[1]. Have you thought about training a dictionary on the first few chunks of each file and embedding the dictionary in a skippable frame at the start? Likely makes less difference if your chunk size is 2MB, but at smaller chunk sizes that could have significant benefit.

[1] https://www.htslib.org/doc/bgzip.html
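The seekable spec doesn't define dictionary frames yet, but the generic skippable-frame container from the zstd format shows what the framing could look like. A hedged Rust sketch (the magic number choice and the dictionary bytes are placeholders, not anything the seekable spec defines):

```rust
/// Wrap arbitrary user data (e.g. a trained dictionary) in a zstd skippable
/// frame: magic number (u32 LE), Frame_Size (u32 LE), then the raw data.
/// 0x184D2A50 is the first of the 16 skippable-frame magic numbers; the
/// seekable seek table already claims 0x184D2A5E, so a dictionary frame
/// would have to pick a different one.
fn skippable_frame(user_data: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(8 + user_data.len());
    frame.extend_from_slice(&0x184D_2A50u32.to_le_bytes());
    frame.extend_from_slice(&(user_data.len() as u32).to_le_bytes());
    frame.extend_from_slice(user_data);
    frame
}

fn main() {
    // Placeholder bytes standing in for a dictionary trained on early chunks.
    let dict = b"hypothetical dictionary contents";
    let frame = skippable_frame(dict);
    assert_eq!(frame.len(), 8 + dict.len());
    // A standard zstd decompressor sees the skippable magic and ignores the frame.
    assert_eq!(&frame[0..4], &0x184D_2A50u32.to_le_bytes());
}
```

Decoders that don't know about the dictionary would skip the frame for free, which is the same trick the seek table itself relies on.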

jeroenhd 10 hours ago [-]
Looking at the spec (https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...), I don't see any mention of custom dictionaries like you describe.

The spec does mention:

> While only Checksum_Flag currently exists, there are 7 other bits in this field that can be used for future changes to the format, for example the addition of inline dictionaries.

so I don't think seekable zstd supports these dictionaries just yet.

With multiple inline dictionaries, one could detect when new chunks compress badly with the previous dictionary and train new ones on the fly. Could be useful for compressing formats with headers and mixed data (e.g. game files, which can contain a mix of text + audio + video, or just regular old .tar files I suppose).

ikawe 4 hours ago [-]
Custom dictionaries are a feature of vanilla (non-seekable) zstd. As I understand it, all seekable-zstd files are valid zstd, so it should be possible?

https://github.com/facebook/zstd?tab=readme-ov-file#the-case...

mbreese 5 hours ago [-]
I’m trying to learn more about the seekable zstd format. I don’t know very much about zstd, aside from reading the spec a few weeks ago. But I thought this was part of the spec? IIRC, zstd files don’t have to have just one frame. Is the norm to have just one large frame for a file and the multiple frame version just isn’t as common?

Gzip can also have multiple “frames” concatenated together and be seamlessly decompressed. Is this basically the same concept? As mentioned by others, bgzip uses this feature of gzip to great effect and is the standard compression in bioinformatics because of it (and is sadly hard-coded to limit other potentially useful gzip extensions).

My interest is to see whether using zstd instead of gzip as the basis of a format would be beneficial. I expect better compression, but I’m skeptical that it would be enough to make it worthwhile.

teraflop 5 hours ago [-]
The Zstd spec allows a stream to consist of multiple frames, but that alone isn't enough for efficient seeking. You would still need to read every frame header to determine which compressed frame corresponds to a particular byte offset in the uncompressed stream.

"Seekable Zstd" is basically just a multi-frame Zstd stream, with the addition of a "seek table" at the end of the file which contains the compressed and uncompressed sizes of every other frame. The seek table itself is marked as a skippable frame, so that seekable Zstd is backward-compatible with normal Zstd decompressors (the seek table is just treated as metadata and ignored).

https://github.com/facebook/zstd/blob/dev/contrib/seekable_f...
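The seek table's 9-byte footer is simple enough to parse by hand. A minimal Rust sketch, with the field layout taken from the seekable-format spec linked above and error handling reduced to Option:

```rust
// Footer layout (last 9 bytes of a seekable zstd file):
//   Number_Of_Frames (u32 LE) | Seek_Table_Descriptor (1 byte) | 0x8F92EAB1 (u32 LE)
const SEEKABLE_MAGIC: u32 = 0x8F92_EAB1;

/// Returns (number of frames, checksum flag) if the tail looks like a
/// seekable zstd footer, None otherwise.
fn parse_footer(file_tail: &[u8]) -> Option<(u32, bool)> {
    let start = file_tail.len().checked_sub(9)?;
    let footer = &file_tail[start..];
    let magic = u32::from_le_bytes(footer[5..9].try_into().ok()?);
    if magic != SEEKABLE_MAGIC {
        return None; // plain zstd, or not zstd at all
    }
    let num_frames = u32::from_le_bytes(footer[0..4].try_into().ok()?);
    let checksum_flag = footer[4] & 0x80 != 0; // Checksum_Flag is the high bit
    Some((num_frames, checksum_flag))
}

fn main() {
    // Hand-built footer: 3 frames, checksum flag set.
    let mut tail = Vec::new();
    tail.extend_from_slice(&3u32.to_le_bytes());
    tail.push(0x80);
    tail.extend_from_slice(&SEEKABLE_MAGIC.to_le_bytes());
    assert_eq!(parse_footer(&tail), Some((3, true)));
}
```

From there, a reader seeks back past `num_frames` fixed-size seek-table entries (plus the skippable-frame header) and reads the per-frame sizes.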

mbreese 3 hours ago [-]
Got it. That’s incredibly helpful. Thank you!

The way that’s handled in the bgzip/gzip world is with an external index file (.gzi) with compressed/uncompressed offsets. The index could be auto-computed, but would still require reading the header for each frame.

I vastly prefer the idea of having the index as part of the file. Sadly, gzip doesn’t have the concept of a skippable frame, so that would break naive decompressors. I’m still not sure the file size savings would be big enough to switch over to zstd, but I like the approach.

threeducks 7 hours ago [-]
Assuming that frames come at a cost, how much larger are seekable zstd files? It would be nice to see a graph based on frame size and for different kinds of data (text, binaries, ...).
mcraiha 7 hours ago [-]
It depends on content and compression options. ZSTD has four types of literals blocks: Raw, RLE, Compressed, and Treeless literals. I assume that the last two might suffer the most if content is split.
tyilo 10 hours ago [-]
I already use zstd_seekable (https://docs.rs/zstd-seekable/) in a project. Could you compare the APIs of this crate and yours?
tyilo 10 hours ago [-]
Correct me if I'm wrong, but it doesn't seem like you provide the equivalent of Seekable::decompress in zstd_seekable which decompresses at a specific offset, without having to calculate which frame(s) to decompress.

This is basically the only function I use from zstd_seekable, so it would be nice to have that in zeekstd as well.

DesiLurker 9 minutes ago [-]
Great, I can use it to pipe large logfiles and store them for later retrieval. Is there something like zcat also?
ncruces 10 hours ago [-]
How's tool support these days for compressing a file with seekable zstd?

Given existing libraries, it should be really simple to create an SQLite VFS for my Go driver that reads (not writes) compressed databases transparently, but tool support was kinda lacking.

Will the zstd CLI ever support it? https://github.com/facebook/zstd/issues/2121

conradev 3 hours ago [-]
This is really cool! It strikes me as being useful for genomic data, which is always stored in compressed chunks. That was the first time I really understood the hard trade-off between seek time and compression.
mgraczyk 5 hours ago [-]
Maybe a dumb question, but how do you know how many frames to seek past?

For example say you want to seek to 10MB into the uncompressed file. Do you need to store metadata separately to know how many frames to skip?

teraflop 5 hours ago [-]
A seekable Zstd file contains a seek table, which contains the compressed and uncompressed size of all frames. That's enough information to figure out which frame contains your desired offset, and how far into that frame's decompressed data it occurs.
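The lookup itself is just a walk (or binary search) over running totals of those sizes. A rough Rust sketch, assuming the per-frame sizes have already been read out of the seek table:

```rust
/// Given per-frame (compressed_size, decompressed_size) pairs from a seek
/// table, return (frame index, compressed offset of that frame, offset into
/// the frame's decompressed data) for a target uncompressed offset.
fn locate(frames: &[(u64, u64)], target: u64) -> Option<(usize, u64, u64)> {
    let (mut c_off, mut d_off) = (0u64, 0u64);
    for (i, &(c_size, d_size)) in frames.iter().enumerate() {
        if target < d_off + d_size {
            return Some((i, c_off, target - d_off));
        }
        c_off += c_size;
        d_off += d_size;
    }
    None // target lies past the end of the stream
}

fn main() {
    // Three frames of 2 MiB uncompressed data each, varying compressed sizes.
    let frames: &[(u64, u64)] = &[(700_000, 2 << 20), (650_000, 2 << 20), (900_000, 2 << 20)];
    // Seeking to 5 MiB lands 1 MiB into frame 2, which starts at compressed
    // byte 700_000 + 650_000 = 1_350_000.
    assert_eq!(locate(frames, 5 << 20), Some((2, 1_350_000, 1 << 20)));
}
```

So answering a read means one table lookup, then decompressing a single frame from its compressed offset and discarding the first `target - d_off` bytes.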
rwmj 5 hours ago [-]
Not sure about zstd, but in xz the blocks (frames in zstd) are stored across the file and linked by offsets into a linked list, so you can just scan over the compressed file very quickly at the start, and in memory build a map of uncompressed virtual offsets to compressed file positions. Here's the code in nbdkit-xz-filter:

https://gitlab.com/nbdkit/nbdkit/-/blob/master/filters/xz/xz...

Imustaskforhelp 4 hours ago [-]
Seekable formats are so cool! I used to wonder about things like a zip transfer that can be paused and resumed, because a friend of mine had this massive zip file (ahem) and it said 24 hours remaining, and I was pretty sure there had to be a way...

Then I learned about criu, and I think criu can technically do it, but IDK. I actually started building that zip project in golang but gave up on it... Pretty nice to know that zstd exists.

It's not a zip file, but it is compressed, and I guess you could still encode the data in such a way that it's essentially a zip in some sense...

This is why I come on hackernews.

b0a04gl 9 hours ago [-]
how do you handle cases where the seek table itself gets truncated or corrupted? do you fallback to scanning for frame boundaries or just error out? wondering if there's room to embed a minimal redundant index at the tail too for safety
throebrifnr 10 hours ago [-]
Gzip has an --rsyncable option that does something similar.

Explanation here https://beeznest.wordpress.com/2005/02/03/rsyncable-gzip/

Scaevolus 9 hours ago [-]
Rsyncable goes further: instead of having fixed size blocks, it makes the block split points deterministically content-dependent. This means that you can edit/insert/delete bytes in the middle of the uncompressed input, and the compressed output will only have a few compressed blocks change.
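Content-defined split points are usually found with a rolling hash over a small window: declare a boundary whenever the hash of the last few bytes matches a fixed pattern, so boundaries track content rather than absolute offsets. A toy Rust sketch of the idea (a plain rolling sum stands in for the stronger rolling hash and the minimum/maximum chunk bounds a production implementation would use; this is not zstd's actual --rsyncable code):

```rust
const WINDOW: usize = 16;
const MASK: u32 = (1 << 6) - 1; // boundary roughly every 64 bytes on average

/// Return chunk boundaries: a split after byte i whenever the rolling sum of
/// the last WINDOW bytes matches MASK. Boundaries depend only on local
/// content, so an edit shifts nearby boundaries instead of moving all of them.
fn split_points(data: &[u8]) -> Vec<usize> {
    let mut sum: u32 = 0;
    let mut points = Vec::new();
    for (i, &b) in data.iter().enumerate() {
        sum += b as u32;
        if i >= WINDOW {
            sum -= data[i - WINDOW] as u32;
        }
        if sum & MASK == MASK {
            points.push(i + 1);
        }
    }
    points
}

fn main() {
    // Pseudo-random but deterministic input.
    let data: Vec<u8> = (0u32..10_000)
        .map(|i| (i.wrapping_mul(2_654_435_761) >> 24) as u8)
        .collect();
    let a = split_points(&data);

    // Prepend 5 bytes: boundaries past the edit shift by exactly 5, so only
    // the chunks near the edit produce different compressed output.
    let mut edited = vec![42u8; 5];
    edited.extend_from_slice(&data);
    let b = split_points(&edited);

    let shifted: Vec<usize> = a.iter().filter(|&&x| x > WINDOW).map(|&x| x + 5).collect();
    let late: Vec<usize> = b.into_iter().filter(|&y| y > WINDOW + 5).collect();
    assert_eq!(shifted, late);
}
```

With fixed-size blocks, the same 5-byte insertion would change every block after the edit; here only the chunk containing the edit differs.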
andrewaylett 8 hours ago [-]
zstd also has an rsyncable option -- as an example of when it's useful, I take a dump of an SQLite database (my Home Assistant DB) using a command like this:

    sqlite3 -readonly "${i}" .dump | zstd --fast --rsyncable -v -o "${PART}" -
The DB is 1.2G, the SQL dump is 1.4G, the compressed dump is 286M. And I still only have to sync the parts that have changed to take a backup.
77pt77 10 hours ago [-]
BTW, something similar can be done with zlib/gzip.
rwmj 8 hours ago [-]
It's true, using some rather non-obvious trickery: https://github.com/madler/zlib/blob/develop/examples/zran.c

I also wrote a tool to make a randomly modifiable gzipped disk image: https://rwmj.wordpress.com/2022/12/01/creating-a-modifiable-...

dafelst 8 hours ago [-]
Sure, but zstd soundly beats gzip on every single metric except ubiquity, it is just straight up a better compression/decompression strategy.
cogman10 6 hours ago [-]
It's pretty impressive how fast zstd has risen and been integrated into just about everything. It's already part of most browsers for compression. Brotli took a lot longer to get integrated even though it's better than gzip as well (but not as good as zstd).