Note that the timing numbers aren't 100% accurate. It's been running for about 16 hours, with probably ~1000 total read errors.

A reminder that Mastodon and the Fediverse do NOT use cryptocurrency, blockchains, NFTs, tokens, coins, mining, web3 or anything like that.

Masto and the Fedi run on traditional servers and use a sustainable network federation model somewhat similar to e-mail (that's why Fediverse addresses look similar to e-mail addresses).

Also a reminder there are no venture capital firms or other investors either. No one owns the network, each server is independent. Masto and Fedi server running costs are paid by their owners, sometimes with donations from users.

No one is getting rich from the Fediverse, it is all volunteers with some getting donations and a few getting modest grants from foundations. Please remember this when you interact with admins or developers.

(There might be some individual users who post about cryptocurrency/blockchain, but the infrastructure this place runs on doesn't use it at all.)

TL;DR: Decentralisation does NOT mean cryptocurrency/blockchain

#Fediverse

So what those statistics mean in my case: there's about 114 MiB of blocks (multiple sectors each) that had some form of read error. An additional 52 GiB of disk space hasn't been read yet (it was skipped over because of read errors), meaning that, at most, I have 52 GiB + 114 MiB of unrecoverable data. 3.59 TiB have been successfully read and stored, out of the total 3.64 TiB drive size.

Here are some visuals for the disk reads, from ddrescue's map file (a list of "block of length X, at offset Y, has status Z" entries).
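
(If you want to reproduce those numbers without the visuals: the map file is plain text, and as far as I know its data lines are just "position size status" in hex, one region per line, after a comment header and a current-position line. Here's a rough Python sketch that tallies bytes per status; the status letters and the "disk.map" filename are my assumptions, so double-check against your own mapfile.)

```python
from collections import defaultdict

# Status letters as I understand ddrescue's mapfile format (double-check yours):
STATUS_NAMES = {
    "?": "non-tried",
    "*": "non-trimmed",
    "/": "non-scraped",
    "-": "bad sector",
    "+": "rescued",
}

def tally(mapfile_path):
    """Sum the bytes in each state from a ddrescue mapfile's data lines."""
    totals = defaultdict(int)
    with open(mapfile_path) as f:
        lines = [ln.split() for ln in f if ln.strip() and not ln.startswith("#")]
    # The first non-comment line is the current position/status; skip it.
    for pos, size, status in lines[1:]:
        totals[STATUS_NAMES.get(status, status)] += int(size, 16)
    return dict(totals)

if __name__ == "__main__":
    for name, nbytes in sorted(tally("disk.map").items()):  # "disk.map" is a placeholder
        print(f"{name:12} {nbytes / 2**30:9.2f} GiB")
```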

After this, it runs backwards to find the rough end of those bad runs, creating the "non-trimmed" set. These are then trimmed (searched sector by sector to pin down their start/end sectors), and whatever is left over is moved to "non-scraped".

Scraping is reading every sector in that area to find which ones are good. After that, you're left with just the bad sectors: read them a number of times (32 in my case) to try to get a good read, or just give up and zero-fill that spot in the output device.

(...)
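
(If it helps to see that retry-or-give-up step as code, here's a toy Python version — NOT ddrescue's actual implementation. The sector size, file descriptors, and offsets are all placeholders.)

```python
import os

SECTOR = 512   # assuming 512-byte sectors
RETRIES = 32   # the retry count mentioned above

def salvage_sector(src_fd, dst_fd, offset):
    """Try to read one sector up to RETRIES times; zero-fill the output if it never works."""
    for _ in range(RETRIES):
        try:
            data = os.pread(src_fd, SECTOR, offset)
            if len(data) == SECTOR:
                os.pwrite(dst_fd, data, offset)
                return True
        except OSError:
            pass  # read error, go around again
    os.pwrite(dst_fd, b"\x00" * SECTOR, offset)  # give up and zero-fill that spot
    return False

# Usage would look something like this (paths are made up):
# src = os.open("/dev/sdX", os.O_RDONLY)
# dst = os.open("disk.img", os.O_RDWR)
# salvage_sector(src, dst, bad_offset)
```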

For those of you who haven't used ddrescue before: it's a program that, like dd, copies from one block device to another. Unlike dd, it's meant for data recovery, with a 5-pass system.

Effectively, it tries to read entire chunks ("blocks") from the device. If it can, great: they get written to the second device (or, in my case, a disk image file). If it can't, well... that's where the fun begins. Every failure is marked, and then it skips forward a bit to read more.

(...)
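
(Same disclaimer as before: this is only a toy sketch of that first "read big chunks, mark failures, skip ahead" pass, not how ddrescue actually does it. The chunk/skip sizes and paths are made up.)

```python
import os

CHUNK = 64 * 1024    # big reads for the fast first pass
SKIP = 1024 * 1024   # how far to jump past a region that errors out

def first_pass(src_path, dst_path, total_size):
    """Copy in big chunks; on a read error, note the region and skip forward."""
    bad_regions = []  # (offset, length) pairs to come back to in later passes
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT, 0o644)
    pos = 0
    while pos < total_size:
        try:
            data = os.pread(src, CHUNK, pos)
            os.pwrite(dst, data, pos)
        except OSError:
            bad_regions.append((pos, SKIP))  # mark the failure...
            pos += SKIP                      # ...and skip forward to keep reading
            continue
        pos += CHUNK
    os.close(src)
    os.close(dst)
    return bad_regions
```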

P.S.: It could also wipe out metadata (covers, posters, NFO files) for the media, but all of that can be re-downloaded / rebuilt a lot more easily than the media files themselves.

And here's the current state of things: (I'll explain in a sec)

It's a Proxmox Backup Server datastore. So either I destroy a backup manifest / metadata (RIP one backup), or I destroy a chunk file, which, after verifying everything, would be RIP to several backups.
But the likelihood of that is relatively small, I think. I'm assuming XFS writes to the disk in ascending order (first LBA to last LBA) on spinning disks, and basically all the errors are in the first 20% of the disk map. Media takes up ~90% of the disk.

And this is what I get.
That 4 TB disk that I used as the NAS transfer drive? Failing.
What's failing? Oh, just my entire media center (eh) and network backups.

Media, meh. 3.3 TiB of data, and so far, about 108 MiB of disk regions to search for bad sectors.
As long as I don't clobber the beginning or end of a video file (the structural areas), it should survive, just with either A/V artifacts or frames being dropped as corrupt.

Backups, 300 GiB. More important, but, well...

(...)

In theory it's maybe slightly better driver-wise, in exchange for some stability issues. At the same time, I'm willing to trust ZFS enough that I'll live with it. It does seem like CORE is going to end up merging into SCALE anyway.

Looks like I just forced my hand out of laziness.

I'm using an XFS-formatted drive as my temporary store while changing NASes. I chose XFS because, in this scenario, it's going to be more performant than plain ext4.

CORE (the FreeBSD-based version) only imports... UFS (Unix / BSD), NTFS, FAT, and ext2 (yes, 2! 3 and 4 are not fully supported).

So I guess I've forced myself into TrueNAS SCALE (the Linux option) just so I don't spend ANOTHER day doing transfers, to do transfers.

NAS moving time. Migrating the data off since the disks are going to be reused.

What could go wrong

Today's being a total Monday. Not only was McDonald's out of Dr Pepper, and not only did the staff not realize that Coke and Diet Coke have a very distinct flavor difference, but they're also running out of Diet Coke syrup, because their Diet Coke is watery.

Next up on the blog backlog:

Cryptography, explained. Start to finish. Hashes, symmetric ciphers, asymmetric ciphers, key exchanges, ECC, ending with a complete dive into PKIX, S/MIME, and a full breakdown of what an X.509 certificate is, how it's encoded, and what *every* part does.

I didn't realize just how huge Blender's .blend files can be.

Then again, having looked at the actual data structure format, the lengths they've gone to for compatibility are incredible... and the actual format they use to do it, even more so (there's literally a thing called DNA and RNA in there, fun fact).

Please boost for reach. So, anyone who is working on FOSS projects like web apps, sites, Linux apps, desktop environments, or other user interfaces: please let me know if you want them tested for accessibility. I can do CLI, web, and GUI testing, plus app testing on Android. I'm running Debian (on a Chromebook, but that's not too important), so just give me the name of the package, or the URL of the site or app. I can also do Flatpak!

#a11y #foss #linux #flatpak #accessibility

Darktable is a free open source raw photo processing app for Linux, Mac and Windows, sort of a libre alternative to Adobe's Lightroom.

You can follow at:

➡️ @darktable

You can download it from the project's website at darktable.org

#DarkTable #Photography #Photos #Raw #Camera #Developer #Development #FOSS #FLOSS #Libre #FreeSoftware #OpenSource #LightRoom #Adobe #Alternatives

Next up on my shopping list: an actual color checker chart... probably a Datacolor 48-patch one.

Sure, it'll make whoever I'm shooting feel like a prisoner for one photo, but after processing my batch of ~250 graduation shots (into 24 usable outs, including two that needed manual stacking / tone mapping between the foreground and background), I think the ability to rely on more than just my camera's preset white balance for accurate colors would help.

Nikon's Preset WB is pretty good though.

@Charadon @Little__Ham
I don't understand why DeltaChat is so little known. It is open source, federated, has automatic E2EE, and it just works.

I change my Ethernet fluid every 3 months or 10,000 gigabytes, whichever comes first.

#network #maintenance #nowYouKnow

So I'm about to have a weird tech stack I'll be using for my IdP, but... almost all through LDAP providers, except for services that don't support LDAP and need something like OIDC / OAuth (or, lord help me, SAML).

Why? Because just slapping your username and password into the login box is less friction than finding the "Sign in with TD-StorageBay SSO" button that redirects you to another page.
