Spent the day at @nllgg. That was great fun, met some cool people! Definitely worth doing more often.

Fewer dependencies and a smaller binary size vs. convenience and less code to write and test.

VNDB.org and Manned.org have now been migrated from PostgreSQL 10 to 11. That was a painless upgrade. :blobcheer:

Parallelism is tricky in any language and any environment. I don't know why I had forgotten that.

I close Firefox and my system's total memory usage goes down from 2GB to 150MB.

This isn't a complaint about Firefox. It's the web.

I didn't expect much complexity in writing a simple job queue for the ChiFS Hub implementation, but it turns out not to be so trivial after all.

My initial approach was to keep a log of finished jobs and run some SQL on that log to extract a queue of jobs to run next. But the queries ended up somewhat complex, inflexible and hard to optimize.

So now I try to schedule the next jobs as soon as one job finishes. Here's my initial buggy and untested attempt to do so as a PostgreSQL trigger.
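
The trigger itself isn't reproduced here, but the idea it implements (schedule the follow-up work in the same transaction that marks a job as finished) looks roughly like this application-side Rust sketch, assuming the rust-postgres crate and a made-up schema and job kinds:

```rust
use postgres::{Client, NoTls, Transaction};

// Hypothetical job kinds and table layout; the real ChiFS Hub schema isn't in this post.
fn complete_job(tx: &mut Transaction<'_>, job_id: i32, kind: &str) -> Result<(), postgres::Error> {
    // Mark the job as finished...
    tx.execute("UPDATE jobs SET finished = now() WHERE id = $1", &[&job_id])?;

    // ...and immediately enqueue whatever should run next, instead of
    // deriving the queue afterwards from a log of finished jobs.
    let next = match kind {
        "fetch_index" => Some("verify_index"),
        "verify_index" => Some("update_stats"),
        _ => None,
    };
    if let Some(next_kind) = next {
        tx.execute("INSERT INTO jobs (kind, created) VALUES ($1, now())", &[&next_kind])?;
    }
    Ok(())
}

fn main() -> Result<(), postgres::Error> {
    let mut client = Client::connect("host=localhost user=chifs dbname=chifs", NoTls)?;
    let mut tx = client.transaction()?;
    complete_job(&mut tx, 1, "fetch_index")?;
    tx.commit()
}
```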

BLAKE2b | Keccak | SHA2 | TTH;
JSON | CBOR | XML | other data exchange format;
gzip | xz | zstandard | lz4 | ...;
Rust | Go | C | C++ | ...;
TOML | JSON | .env | custom config file format;
Async I/O | Threads;
rust-postgres | Diesel.rs;
r2d2 | Arc<Mutex<Vec<Connection>>>;
Rocket.rs | Tiny custom webserver | ...;

And that's just scratching the surface of choices I had to make while working on ChiFS. Indecisiveness is, I suppose, one of the biggest enemies of a developer.

> Opens Zerochan for a little break.
> Sees spam uploads.
> Does some work as a moderator.

That wasn't much of a break. :blobhiss:

Gotta love those high-quality toothpicks that leave more chunks of wood in your teeth than they remove leftovers from dinner.

Me, coding: "I don't really want this code to handle the case where the connection with Postgres is lost or where the query fails for some weird reason, so let's gracefully crash and burn with a panic when that happens."

...how does one crash and burn "gracefully"?
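
In Rust terms that mindset is mostly just .expect() on every Result that isn't worth handling; a minimal sketch, assuming the rust-postgres crate and a made-up jobs table:

```rust
use postgres::{Client, NoTls};

fn main() {
    // "Gracefully crash and burn": .expect() turns any Err into a panic with
    // a message, instead of threading the error through every caller.
    let mut client = Client::connect("host=localhost user=chifs dbname=chifs", NoTls)
        .expect("lost the Postgres connection, giving up");

    let row = client
        .query_one("SELECT count(*) FROM jobs", &[])
        .expect("query failed for some weird reason, giving up");
    let jobs: i64 = row.get(0);
    println!("{} jobs in the queue", jobs);
}
```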

I wrote an abomination and I'm proud of it.

It's been bothering me for a while that streaming JSON parsers are super rare, especially when compared to XML. It's like everyone is in denial that JSON can be used for large amounts of data, too.

As a super idiotic side-effect, the most common approach to storing large amounts of data in JSON is to store it as a newline-separated list of JSON values, despite JSON having a perfectly usable way of encoding a "list of things".
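
To be fair, the newline-separated workaround does stream nicely; a minimal sketch, assuming serde_json and made-up field names:

```rust
use serde_json::{Deserializer, Value};

fn main() {
    // The "newline-separated list of JSON values" format in question.
    let data = r#"
        {"path": "a.txt", "size": 12}
        {"path": "b.txt", "size": 34}
        {"path": "c.txt", "size": 56}
    "#;

    // serde_json can stream whitespace-separated top-level values one at a
    // time, so the whole input never has to be in memory at once. The same
    // works over a file or socket with Deserializer::from_reader.
    for value in Deserializer::from_str(data).into_iter::<Value>() {
        let value = value.expect("invalid JSON value");
        println!("{}: {} bytes", value["path"], value["size"]);
    }
}
```

Streaming a single top-level array, the encoding JSON actually has for a "list of things", is exactly where the tooling gets thin.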

Rewriting some code, expecting it to be more flexible but somewhat slower ("I'll optimize this later"), and suddenly it turns out to be twice as fast as the original implementation.

I used to be fairly good at estimating performance, but evidently I don't know how computers work anymore.

"License: CC0, attribution kindly requested. Blame taken too, but not liability."
- tiny-keccak @ docs.rs/tiny-keccak/1.4.2/tiny

"Compliant implementations MUST support GSSAPI"
- RFC 1928

You can't force me!

I may have too high a tolerance for bugs in the software that I use. Been haunted by [1] for almost a month now. It's super annoying, but for some reason I haven't downgraded or switched or anything yet...

1. github.com/tridactyl/tridactyl

I've been having weird dreams lately.

Yesterday I woke up from a dream in which my PC was on fire. I remember slowly unplugging the thing, taking it to a safe location, and disassembling it (while it was burning) to see which components were affected and making sure that my graphics card was safe (because that thing's expensive!).

Today I woke up dreaming about snapping the laces of my new shoes, getting angry at their quality, and complaining to the shop.

And behold said "horrible Rust hack".

: "You can only safely construct a DST by coercing struct with a size known at compile time"
Me: "Behold my copy-paste power!"

It goes on for 1000 lines. I really ought to be expelled from the programming community for this abomination.
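
For context, the trick being abused looks roughly like this; a minimal sketch with made-up field names, nowhere near the real 1000-line version:

```rust
// A struct whose last field is a dynamically sized type (DST), so the byte
// array lives inline in the allocation instead of behind a second pointer.
struct Entry<T: ?Sized> {
    hash: u64,
    name: T,
}

// Entry<[u8]> can't be built directly; you build a sized Entry<[u8; N]> and
// let the compiler coerce Box<Entry<[u8; N]>> into Box<Entry<[u8]>>. N must
// be a compile-time constant, hence one match arm per supported length.
fn new_entry(hash: u64, name: &[u8]) -> Box<Entry<[u8]>> {
    fn build<const N: usize>(hash: u64, name: &[u8]) -> Box<Entry<[u8]>> {
        let mut buf = [0u8; N];
        buf.copy_from_slice(name);
        Box::new(Entry { hash, name: buf }) // unsized coercion happens here
    }
    match name.len() {
        0 => build::<0>(hash, name),
        1 => build::<1>(hash, name),
        2 => build::<2>(hash, name),
        3 => build::<3>(hash, name),
        // ...and so on, one arm per length you need to support.
        _ => unimplemented!("add more arms"),
    }
}

fn main() {
    let e = new_entry(0xdeadbeef, b"abc");
    assert_eq!(e.hash, 0xdeadbeef);
    assert_eq!(&e.name, b"abc");
}
```

(The const-generic helper is a newer convenience; the part that balloons to 1000 lines is the match with one arm per length.)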

By using checksums for comparison, a horrible Rust hack to construct memory-efficient structs, and reading the Share index twice, I can get the memory requirements for solution 2 down to ~136MB per million files. That's much better, but I'm not sure it's worth it...
