Third Spruce Tree On The Left

No, not that one… up…. right… yes. That one. – Fedi-follow @tezoatlipoca@awadwatt.com

From the Splainer-man not Man-splainin Dept.

If you want me to close the browser tab on something, all you have to do is mention cryptocurrency or blockchain unironically. I recently learned about DAOs and my eyes rolled so hard, right out of their sockets.

And you thought Nigerian prince scams were sad

I get asked about crypto and NFTs and the bitcoins all the time, but I have to explain in layman’s terms, because none of my friends or family members are Computer Science/Engineering graduates. Every article that attempts to explain the topic will sort of talk about the blockchain, and maybe mention “proof of work” or “proof of stake” or “consensus mechanisms”, but they don’t actually explain what those things are or what they actually mean. They just kind of wave their hands like “don’t worry about it” or “MAGIC”. And recently my son asked me to explain exactly what it is, so... here you go.

If you know of someone you love who has any significant part of their savings or retirement portfolio invested in anything to do with ANY cryptocurrency, perhaps you might want to slide this over to them to read.

TL;DR:

  • cryptocurrency / blockchain technologies hold great promise – eventually
  • it’s not magic money that springs out of nowhere; not entirely
  • it IS completely unregulated and uncontrolled (at the moment)
  • we’re still in the early days of this technology; any investment in cryptocurrencies – or to a lesser extent, blockchain technology – is pure speculation; treat it like a night out at the casino – you might make money, but you can also (and likely will) lose it all.

Cryptocurrency is dangerous crap for all the reasons its proponents say it is the best thing since sliced bread. To explain why cryptocurrency is dangerous bullshit, I need to show you where it comes from – seemingly thin air. To show you that it magically coalesces out of thin air, I need to explain how it is produced, and that will take a little time. Then we need to talk about financial economics and money theory for a bit. Then you'll throw your tapioca against the wall and fire up your fidelity.com account to make sure you're nowhere near anything crypto-related.

Blockchain Hocus Pocus would be a great band name

Suppose we have a series of transactions that we want to keep track of. Let’s call this the ledger. Take a bank’s ledger of financial transactions, for example: if you want to make a transaction, it has to be done with the blessing of a central authority – the bank itself (or its computers). The problems with this type of ledger are:

  • there's one point of failure. Bank systems down? No transaction for you!
  • there's one authority. The bank disagrees with your transaction? No transaction for you!
  • there are only a few modes of access. Want to interact with the Bank? Have to use the Bank's apps, ATMs, branch locations or website.

Networked computers were supposed to help us get around problems like this. Back in the late 90s and early 2000s, the problem with downloading digital music, video, and files from central authorities was literally that those websites/services didn't exist yet. Or if they did, they were charging outrageous prices or crippling downloads with so much DRM it made the media unusable. So people created peer-to-peer file sharing systems like Napster and Limewire, and later on, BitTorrent software. P2P file distribution got around all three of our problems.

So what if we could have a whole network of participating computers manage, using peer-to-peer transfers, an open distributed ledger of data instead? A ledger of file transactions. An edit history, medical records, you name it.

Ok. So now we have a bunch of computers – nodes – all participating in our open ledger. If a node wants to make a transaction or add an event or a record to the ledger, it announces it to its peers, and the transaction/event/record slowly gets spread around the whole network. Depending on various internet-y things, it could take a while for announcements of transactions to get around; the order of transactions in YOUR local copy of the ledger might be different from the order in someone else’s. So how do we add new transactions to our ledger in a way that the record becomes indelible? How do we get everyone in the network to agree on the same version of history? How do we make it so no one can tamper with it?

Hashes are only good in data transfer and breakfast.

So we’ve collected together a bunch of pending, unvalidated transactions from our peers. One way computers can make sure that transmitted or archived data doesn’t change over time is to calculate a checksum or a “hash” for a chunk of data. File transfers do this all the time. Before browsers and mobile clients started doing this automatically behind the scenes – and especially back when large files were zipped or chunked into smaller bits and you wanted to make sure each part was downloaded or transmitted properly – you could also download a checksum or hash file to go with each one. You’d run a verification tool on your downloaded data, and if the hash it produced matched the checksum file, you knew your data was good. If not, you’d go download it again.

Note that a hash isn’t encryption. A hash is simply a characteristic fingerprint. The same hash algorithm run on the same data will always produce the same hash value. Always.

So, for any data, we can compute a hash; there are many types, but a common one is an MD5 hash, which represents a chunk of bytes as a 128-bit value. If I calculate an MD5 hash on the previous paragraph, I get the following:

e73435e3a4af75ce6466e5c8a0e5f119

If I remove the last period, the one after Always, the hash changes to:

21437c658fa3f6ff85a086a099b96d90
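You can reproduce this behaviour yourself with a few lines of Python’s standard library. A minimal sketch (using my own sample sentence rather than the paragraph above – the two digests below are the well-known MD5 test vectors for it):

```python
import hashlib

text = "The quick brown fox jumps over the lazy dog."

# Same algorithm, same data: always the same hash.
print(hashlib.md5(text.encode("utf-8")).hexdigest())
# e4d909c290d0fb1ca068ffaddf22cbd0

# Drop just the final period and the digest changes completely:
print(hashlib.md5(text[:-1].encode("utf-8")).hexdigest())
# 9e107d9d372bb6826bd81d3542a419d6
```

Any one-character change anywhere in the input scrambles the whole fingerprint – which is exactly the property that makes hashes useful for tamper detection.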

So what we can do is take a collection of transactions, or records (whatever our ledger is keeping track of) and make a bundle called a block. Then we calculate a hash using the data in our bundle, and by checking the hash value of the block, we can verify it hasn’t been monkeyed with.

If we include information about the previous block (its block ID, maybe ITS hash) along with the transactions in our block, we can effectively chain our blocks together – every block references the last, and so on.
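Here’s a toy sketch of that chaining idea in Python (the block fields and transactions are made up for illustration; real chains carry much more):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize deterministically (sorted keys) so every node
    # computes the same hash for the same block contents.
    return hashlib.md5(json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()

genesis = {"id": 0, "prev_hash": None, "transactions": ["alice pays bob 5"]}
block1 = {"id": 1, "prev_hash": block_hash(genesis),
          "transactions": ["bob pays carol 2"]}

# block1 references genesis: the chain is intact.
assert block1["prev_hash"] == block_hash(genesis)

# Tamper with the genesis block and its hash changes, so block1's
# stored prev_hash no longer matches - the chain exposes the edit.
genesis["transactions"][0] = "alice pays bob 500"
assert block1["prev_hash"] != block_hash(genesis)
```

Because each block bakes the previous block’s hash into its own hash, rewriting any old block would force you to recompute every block after it.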

So who gets to decide what the next block is?

We now have a block of transactions or records – these have been shouted out by nodes who are making or announcing transactions – and we have information about the previous block. Maybe we include some other data like the current time, or some randomly generated number too. Calculating the hash for our prospective block is relatively easy (for a computer that is).

What if we impose some arbitrary rules that the hash for our block has to adhere to? Maybe its hash value has to have 17 0’s in it. Maybe it has to start with six zeros. Maybe it has to have an even number of 1’s. Whatever. Now we’ve imposed a “difficulty” to hash generation. These rules are codified into the algorithm of the particular blockchain.

So, any node that is trying to define what the next block is now has to produce a hash that also meets some arbitrary criteria.

  1. Calculate the hash using the block data.
  2. Does the hash meet the criteria? Yes: yell “BINGO, i has next block!”
  3. No? Fiddle with the block inputs, re-roll the random bits, and go back to step 1.

Depending on the difficulty, it could take many, many attempts to produce a hash that meets the criteria.

When a node DOES calculate a winning hash for ITS block, it literally shouts out and says, “Here’s my block! Here’s a hash that meets the criteria! I call BINGO!”. Then, a few other nodes (how many depends on the rules of the blockchain) double-check by ALSO calculating the hash for the block. This is called “validation”. If they agree, then the node who “did the work” gets the prize.

This is what you may have heard of as “proof of work.” The work here is coming up with block hashes that meet some arbitrary rules, which is computationally expensive. Note that to double check or validate the block’s hash is easy; you just have to do the hash calculation on the block’s content once, not thousands of times like the original miner did (because it had to find a hash that met all the criteria).
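The mine/validate loop above can be sketched in a few lines of Python. This is a toy, not any real chain: real blockchains use SHA-256 (not MD5) and far harder difficulty targets; the “four leading zeros” rule, the nonce, and the sample transaction are all illustrative (the prev_hash is just the digest from earlier reused as a stand-in):

```python
import hashlib
import json

DIFFICULTY = "0000"  # illustrative rule: hash must start with four zeros

def mine(block: dict) -> tuple[int, str]:
    """Re-roll a nonce until the block's hash meets the difficulty rule."""
    nonce = 0
    while True:
        payload = json.dumps({**block, "nonce": nonce}, sort_keys=True).encode()
        digest = hashlib.md5(payload).hexdigest()
        if digest.startswith(DIFFICULTY):
            return nonce, digest  # BINGO, i has next block
        nonce += 1

block = {"prev_hash": "21437c658fa3f6ff85a086a099b96d90",
         "transactions": ["alice pays bob 5"]}
nonce, digest = mine(block)  # typically tens of thousands of attempts

# Validation is cheap: ONE hash calculation, then compare.
payload = json.dumps({**block, "nonce": nonce}, sort_keys=True).encode()
assert hashlib.md5(payload).hexdigest() == digest
assert digest.startswith(DIFFICULTY)
```

The asymmetry is the whole point: finding a winning nonce takes many attempts, but checking someone else’s claimed answer takes exactly one.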

Without this proof of work, the decision of what the next block in the chain should be would come down to a consensus of all the participating nodes. If you can get all YOUR nodes, creating a majority (51%), to say, “No, no, the next block is THIS one” – which features a bogus transaction that benefits you – then you can game the system. Remember, there’s no central authority here; all entities participating in the blockchain participate equally.

Once a “mined” block is re-validated by enough peers, the network decrees: “THIS is the new block, and it contains THESE transactions that have now been validated”. The validated transactions are now considered immutable, fixed in stone, and accepted by all. Then, every node participating in mining throws its work on THIS block away, forgets about the validated transactions, and starts afresh with a new block of any remaining unvalidated transactions, and the cycle repeats.

How quickly your transaction gets validated into a block depends on how many miners are active on the blockchain and how many transactions are being made. Depending on the blockchain algorithm and the mining pool and validation nodes participating at any given time, it could be a few minutes or it could be a few days.

Now we know what the blockchain is and what mining blocks is; in part 2 we discover where the moolah comes from.

Changelog

2024-10-10 – initial

follow –> AT tezoatlipoca AT mas.to, or AT tezoatlipoca AT awadwatt.com to follow this blorg directly on the Fediverse.

A while ago I installed the minimalist RSS reader Yarr (Yet Another RSS Reader) here.

One of the things that appealed to me about Yarr is that you can tell Yarr to run at a specific port from the command line:

#!/bin/bash
# run yarr bound to a specific address/port, with basic auth, logging to /var/log/yarr.log
/usr/local/bin/yarr/yarr -addr "<ip to bind to>:<port>" -auth <user name>:<user yarr pwd not their system pwd> > /var/log/yarr.log 2>&1

(as well as secure it with a password)

I wanted this because I hadn't set up any kind of reverse proxy yet. Well, now I have and it turned out to be a lot easier than I anticipated. I now host:

  • WriteFreely (blog – you're reading it)
  • Yarr – in fact I host 3 of them for family members
  • Navidrome – music streaming host (originally because I hadn’t had any success getting Funkwhale to work)
  • Funkwhale – since got it working: tunez.awadwatt.com
  • Jellyfin
  • a pet project under development

all from the same host behind NGINX. A side benefit to NGINX is that if I create a wildcard subdomain LetsEncrypt certificate, NGINX can use/apply that certificate to everything it reverse proxies. So yay, Yarr gets HTTPS for free! (Also handy: this solved the problem I had been having with WriteFreely not using my self-generated LetsEncrypt certificates – I could only get it to work with the ones IT generates if you enable auto-certs.)

Here's my NGINX config file (mildly redacted). Created with some help from the very excellent Nginx Configuration Generator:

server
{
	listen 443 ssl;
	listen [::]:443 ssl;
	http2 on;
	server_name awadwatt.com www.awadwatt.com;

	# SSL
	ssl_certificate /etc/letsencrypt/live/awadwatt.com/fullchain.pem; # managed by Certbot
	ssl_certificate_key /etc/letsencrypt/live/awadwatt.com/privkey.pem; # managed by Certbot
	ssl_trusted_certificate /etc/letsencrypt/live/awadwatt.com/chain.pem;

	# security
	include nginxconfig.io/security.conf;

	# logging
	access_log /var/log/nginx/access.log combined buffer=512k flush=1m;
	error_log /var/log/nginx/error.log warn;

	# reverse proxy
	location /
	{
		proxy_pass http://127.0.0.1:7035;
		proxy_set_header Host $host;
		include nginxconfig.io/proxy.conf;
	}

	# additional config
	include nginxconfig.io/general.conf;


}

Basically, each “service” that I’m running on a different port gets mapped to a different subdomain that NGINX listens for and redirects. I also carve out the access and error logs for each into their own dedicated files.

server
{
	listen 443 ssl;
	listen [::]:443 ssl;
	http2 on;
	server_name jelly.awadwatt.com;
	ssl_certificate /etc/letsencrypt/live/awadwatt.com/fullchain.pem; # managed by Certbot
	ssl_certificate_key /etc/letsencrypt/live/awadwatt.com/privkey.pem; # managed by Certbot
	ssl_trusted_certificate /etc/letsencrypt/live/awadwatt.com/chain.pem;

	include nginxconfig.io/security.conf;
	access_log /var/log/nginx/jellyfin.access.log combined buffer=512k flush=1m;
	error_log /var/log/nginx/jellyfin.error.log warn;
	location /
	{
		proxy_pass http://127.0.0.1:8096;
		proxy_set_header Host $host;
		include nginxconfig.io/proxy.conf;
	}
	include nginxconfig.io/general.conf;
}


server
{
	listen 443 ssl;
	listen [::]:443 ssl;
	http2 on;
	server_name navi.awadwatt.com;
	ssl_certificate /etc/letsencrypt/live/awadwatt.com/fullchain.pem; # managed by Certbot
	ssl_certificate_key /etc/letsencrypt/live/awadwatt.com/privkey.pem; # managed by Certbot
	ssl_trusted_certificate /etc/letsencrypt/live/awadwatt.com/chain.pem;

	include nginxconfig.io/security.conf;
	access_log /var/log/nginx/navi.access.log combined buffer=512k flush=1m;
	error_log /var/log/nginx/navi.error.log warn;
	location /
	{
		proxy_pass http://127.0.0.1:4533;
		proxy_set_header Host $host;
		include nginxconfig.io/proxy.conf;
	}
	include nginxconfig.io/general.conf;
}


server
{
	listen 443 ssl;
	listen [::]:443 ssl;
	http2 on;
	server_name yarr.awadwatt.com;
	ssl_certificate /etc/letsencrypt/live/awadwatt.com/fullchain.pem; # managed by Certbot
	ssl_certificate_key /etc/letsencrypt/live/awadwatt.com/privkey.pem; # managed by Certbot
	ssl_trusted_certificate /etc/letsencrypt/live/awadwatt.com/chain.pem;

	include nginxconfig.io/security.conf;
	access_log /var/log/nginx/yarr.access.log combined buffer=512k flush=1m;
	error_log /var/log/nginx/yarr.error.log warn;
	location /
	{
		proxy_pass http://famine:5000;
		proxy_set_header Host $host;
		include nginxconfig.io/proxy.conf;
	}
	include nginxconfig.io/general.conf;
}

And this bit just redirects insecure HTTP port 80 traffic permanently to secure HTTPS mappings above.

# HTTP redirect
server
{
	if ($host = www.awadwatt.com)
	{
		return 301 https://$host$request_uri;
	} # managed by Certbot

	if ($host = jelly.awadwatt.com)
	{
		return 301 https://$host$request_uri;
	}

	if ($host = navi.awadwatt.com)
	{
		return 301 https://$host$request_uri;
	}

	if ($host = yarr.awadwatt.com)
	{
		return 301 https://$host$request_uri;
	}


	if ($host = awadwatt.com)
	{
		return 301 https://$host$request_uri;
	} # managed by Certbot


	listen 80;
	listen [::]:80;
	server_name awadwatt.com www.awadwatt.com lists.awadwatt.com jelly.awadwatt.com navi.awadwatt.com yarr.awadwatt.com;
	#include     nginxconfig.io/letsencrypt.conf;

	location /
	{
		return 301 https://awadwatt.com$request_uri;
	}


}

Also shoutout to the NGINX config file Beautifier for making it look purty.

Changelog:

2024-04-18 – initial


(this blog post isn't what you think it's about)

We're not an entirely vegan household. Vegan Lawyer Girlfriend is, obviously. Since I do all the grocery shopping and all the cooking, out of practical necessity the majority of our meals are either entirely vegan or vegan with sometimes a chicken breast or turkey sausage on the side; there's no way I'm buying two of everything. Except for cheese (vegan cheese still needs improvement), lunch meat for the boy and me, and whatever I put on the side on rare occasions, everything else is vegan: butter, milk, all ingredients.

So when VLGF goes away, as she did recently, the boy and I get to indulge a bit in our old carnivorous ways. No pork of course** but a giant roast on clearance or a turkey isn't out of the question.

At the grocery store deli counters you used to be able to buy the bits and sorts left over – you know, the heel or butt of the salami, or the two slices they took off to make your order exactly 400g. While most deli meat might run you $2-3 per 100g, the “ends” were like $0.99 / 100g, which fit my frugal budget perfectly, and I used to get the bits 'n ends all the time for my sandwiches. (And it was always a treat – literal “mystery meat”. Sometimes it was macaroni and cheese loaf (meh) and sometimes it was prosciutto (score!))

Here in Canada, when mandatory nutritional labeling came in a decade or so ago, all the big grocery stores stopped selling the ends; while the nutritional labels for each meat were printed automatically by the scale, they have no way of knowing what meats were in THIS bag of ends. So they just stopped doing it to not run afoul of the inspectors.

Recently I've been trying to boycott Loblaws brand stores (because fuck those guys), so that means no more Valu-Mart right around the corner.

We do have a great Farmer's Market in town – 3 actually. And several Farm Boys; but they're all as pricey if not more expensive than Loblaws. But there is Central Fresh Mart. It's not easy to get to sometimes, but every time I go I forget how good it is. Their meat department is phenomenal.

So anyway, I'm in Central the other day and I see THEY have the deli meat ends, so I'm like fuck yes and I buy some. The boy and I are making our sandwiches this morning and I finished one package of ends and said something like “oh, it's the End of Meat” (I think John Scalzi's End of All Things popped into my head). And then we started joking about They're Made Out of Meat, which is an old Terry Bisson short story (which has been adapted several times).

“I think it works better as The Ends of Meat or The End of Meat” said the boy. “hrmm. maybe The Meat's Ends”.

“The Meat's Ends. heh... the Ends Justify the Meats.”

“oo!” said he, “The Meat's Ends: It's about the people in the human centipede just trying to get by after the events of the movie***. Like how do they take the bus? Would they need a handicapped parking pass? How do they stand in line at the DMV? What would their coworkers say about them behind their backs in the office? Look, it's just three ordinary people sewn together trying to make their way in the world.” And then we fell about laughing for 10 minutes straight.

Ok maybe it seemed funnier this morning.

** – the agreement is VLGF looks the other way on meat in the house so long as it's not pork or veal. And no fish (she had to clean the fish cleaning hut at her family's camp; it was traumatizing). We're slowly reducing our animal usage – we only have meat maybe 1 out of 5 meals now.

*** yeah I know two of them die.


Dr. (it hurts to use that) Jordan Peterson – notorious anti-LGBT+, transphobic, libertarian, incelphilic self-help psychology writer, bullshitter and speaker – was accredited to practice psychology in the province of Ontario, Canada.

YES – as of 2024-02-01, he still has a psychology license from the College of Psychologists of Ontario.

Unfortunately.

If you know any better, msg me on Mastodon at @tezoatlipoca@mas.to or email tezoatlipoca AT gmail.

Why does this matter?

or

Why do you hate Jordan Peterson?

or

Why is Jordan Peterson such a douchebag?

in progress. but he is such a huge asshole


Lemme skip to how to find RSS URLs for:

  • YouTube
  • Substack and WordPress
  • Reddit
  • Medium
  • VideoSift

It used to be, back in the day, that when you wanted to have a presence, a voice on the internet to show people stuff, you had to have a website. And to do that, either you needed the computery skillz to set up your own webserver, or you paid people to do it. Or you used a freemium hosting service like MySpace or Geocities (unrelated: go check out neocities.org, it's back baby).

And then there was the problem of how do people find out when you add new shit to your website? If you were a newspaper, or a company or institution that had STUFF that people wanted to read, it was a bit easier. Chances are people would type www.yourdomain into their web browser manually or have it bookmarked, and they’d visit it every day or so, checking for new stuff. Apart from that the only way news and content spread on the internet was through:

  • someone else mentioned and linked the thing from their website
  • someone sent the link to you via email or over one of the early “chat” programs like ICQ, Microsoft/Yahoo/AIM messenger or the very early (and shitty but good for their times) real-time chat/voice over IP programs like Teamspeak or Skype (before Microsoft bought it and it sucked)
  • you saw the link in a newsgroup, email distribution list, IRC channel or a .plan update. Or a CGI/PHP bulletin board: think self-hosted micro-Reddit websites, one each for people playing certain games or who had foot fetishes; even the nazis; little self-contained bubble echo chambers.

Twittle or TubeTok didn’t exist with their bullshit algorithms so you would only discover something if someone sent it TO you, or you went to go find it.

So into this late-90s, early-2000s void of not getting news shoved into your face arose RSS, or Really Simple Syndication. Basically, anything that produced content, or would periodically produce new or updated content, could also publish an RSS feed. This feed would always exist at the same URL, and all an interested reader on the internet would have to do to subscribe to that website’s stream of very interesting stuff would be to periodically poll that site’s RSS feed and see what’s new.

Here’s the main “all articles” RSS feed for the Toronto Star, found at https://www.thestar.com/search/?f=rss&t=article&c=news.

<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:georss="http://www.georss.org/georss">
    <channel>
        <title>www.thestar.com - RSS Results in news* of type article</title>
        <link>https://www.thestar.com/search/?f=rss&amp;t=article&amp;c=news*&amp;l=50&amp;s=start_time&amp;sd=desc</link>
        <atom:link href="https://www.thestar.com/search/?f=rss&amp;t=article&amp;c=news*&amp;l=50&amp;s=start_time&amp;sd=desc" rel="self" type="application/rss+xml" />
        <description>
		www.thestar.com - RSS Results
		in section(s) news*
		only for asset type(s) of article
	</description>
        <generator>TNCMS 1.72.4</generator>
        <docs>http://www.rssboard.org/rss-specification</docs>
        <ttl>30</ttl>

        <item>
            <title>Libya says it suspended oil production at largest field after protesters forced its closure</title>
            <description>CAIRO (AP) — Production at Libya’s largest oil field was suspended Sunday, the country’s state-run oil company said, after protesters forced the facility to close over fuel shortages.</description>
            <pubDate>Sun, 07 Jan 2024 13:42:38 -0500</pubDate>
            <guid isPermaLink="false">http://www.thestar.com/tncms/asset/editorial/31dcf06d-7fcf-53e4-9520-f2587deed7d8</guid>
            <link>https://www.thestar.com/news/world/africa/libya-says-it-suspended-oil-production-at-largest-field-after-protesters-forced-its-closure/article_31dcf06d-7fcf-53e4-9520-f2587deed7d8.html</link>
            <dc:creator>Samy Magdy The Associated Press</dc:creator>
            <enclosure url="https://bloximages.chicago2.vip.townnews.com/thestar.com/content/tncms/assets/v3/editorial/b/48/b48b8604-36d2-5f31-9535-8bcb5a9d7844/659af12d2f84d.image.jpg?resize=300%2C200" length="67488" type="image/jpeg" />
        </item>

        <item>
            <title>Third shooting in three days in Coquitlam, B.C., sends man to hospital</title>
            <description>COQUITLAM, B.C. - Mounties in Coquitlam, B.C., say a third shooting in as many days has left a man with life-threatening injuries.</description>
            <pubDate>Sun, 07 Jan 2024 13:20:41 -0500</pubDate>
            <guid isPermaLink="false">http://www.thestar.com/tncms/asset/editorial/5012996d-44d7-59fd-8baa-34182011f78b</guid>
            <link>https://www.thestar.com/news/canada/third-shooting-in-three-days-in-coquitlam-b-c-sends-man-to-hospital/article_5012996d-44d7-59fd-8baa-34182011f78b.html</link>
            <dc:creator>The Canadian Press</dc:creator>
            <enclosure url="https://bloximages.chicago2.vip.townnews.com/thestar.com/content/tncms/assets/v3/editorial/a/1e/a1e24cb0-d7f1-549c-a70f-4db470e33610/659aec8c313f8.image.jpg?resize=300%2C200" length="182257" type="image/jpeg" />
        </item>

At the top there’s some metadata about what type/format of RSS stream this is, then some metadata about the website, including some self-referencing info for the news reader application... and then the individual items, one for each thing in the feed. Here’s one:

        <item>
            <title>A dog shelter appeals for homes for its pups during a cold snap in Poland, and finds a warm welcome</title>
            <description>WARSAW, Poland (AP) — With a deep freeze approaching, an animal shelter in Krakow sent out an urgent appeal to people to adopt or at least temporarily shelter some of its dogs until the dangerous cold spell passes.</description>
            <pubDate>Sun, 07 Jan 2024 12:16:44 -0500</pubDate>
            <guid isPermaLink="false">http://www.thestar.com/tncms/asset/editorial/44fd1fe5-8d57-5a19-b00a-18539063092b</guid>
            <link>https://www.thestar.com/news/world/europe/a-dog-shelter-appeals-for-homes-for-its-pups-during-a-cold-snap-in-poland/article_44fd1fe5-8d57-5a19-b00a-18539063092b.html</link>
            <dc:creator>The Associated Press</dc:creator>
            <enclosure url="https://bloximages.chicago2.vip.townnews.com/thestar.com/content/tncms/assets/v3/editorial/1/d2/1d28714f-0754-51fa-a49b-df10f12b5163/659add415e9f0.image.jpg?resize=300%2C200" length="213355" type="image/jpeg" />
        </item>

It’s got a title, a publication date and a description. Depending on the RSS feed, the content of the description could be a traditional news slug (i.e. a short summary or hook-like description of the article to get you to click) or it could be the whole body of the article itself.
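Pulling those fields out is roughly all a feed reader does on every poll. A minimal sketch with Python’s standard library, run against a trimmed-down copy of the item above (the feed title and example.com link are stand-ins):

```python
import xml.etree.ElementTree as ET

# A trimmed-down feed shaped like the Toronto Star one above
rss = """<rss version="2.0"><channel><title>demo feed</title>
  <item>
    <title>A dog shelter appeals for homes for its pups</title>
    <pubDate>Sun, 07 Jan 2024 12:16:44 -0500</pubDate>
    <link>https://example.com/article.html</link>
    <description>With a deep freeze approaching...</description>
  </item>
</channel></rss>"""

root = ET.fromstring(rss)
for item in root.iter("item"):
    # A reader lists the headline and date, and follows <link> on click
    print(item.findtext("title"), "|", item.findtext("pubDate"))
```

A real reader would fetch the feed URL on a timer and remember which `<guid>`s it has already shown you, but the parsing is no more complicated than this.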

Here’s what that news article looks like in the news reader I use (yarr – Yet Another RSS Reader):

browser screenshot showing YARR - Yet Another News Reader - a 3 column view: list of RSS feeds, list of items or articles IN one of those feeds, and the content of one of those items

In the left I’ve got a list of all the RSS feeds I subscribe to, the center is the list of articles (or items) in one of those feeds and the right is the content. By default, the Toronto Star’s RSS feed only shows me the headline, byline and the slug. But if I click that Read Here button

closeup of the YARR interface showing the Read Here button

.. my newsreader goes and retrieves the content of the article and displays it here. Woot.

YARR screenshot showing how YARR has gone and retrieved the full body content of the referenced article for display here, and not on the shitty website

Ok, sometimes this doesn’t get any pictures, but it DOES strip out all the cookie popups, ads, scripting and all the other bullshit. This also (usually) sidesteps any annoying paywalls. The behavior of this depends a lot on a) what newsreader you use and how it behaves b) the nature of the RSS feed and how content is published / linked to it by the content provider.

Well this is pretty cool. Why the hell don’t we get news this way now?

I dunno. The RSS feeds never went anywhere – any content publishing platform that has existed since, oh, 1999 or so has been capable of publishing RSS feeds. And unless the websites that use these platforms deliberately turn them off, they’ve always been there... still are there.

I think between Yahoo/Google/Bing news “home pages” in your browser, and algorithms in Twitter and Facebook shoving news in our faces (or getting people in our social networks to SEND us stuff), we just fell out of the practice of actively going to where the content is and curating our feeds ourselves. It’s easy to zone out in front of the TV when there’s a steady drip of just-engaging-enough on the tube; same thing online. If what the algorithm shoves in your face is just engaging enough, we lose interest in finding the really good stuff that takes a bit of work.

Subscribing to RSS feeds takes back that control. Read only what YOU want to read.

Ok this is nice and all but how do I read these RSS feeds?

Chances are you already have an app that does it. Every major email client made in the last 20 years has had RSS feed reading capabilities, e.g.:

  • Outlook: https://www.howtogeek.com/710645/how-to-use-microsoft-outlook-as-an-rss-feed-reader/
  • Thunderbird: https://blog.thunderbird.net/2022/05/thunderbird-rss-feeds-guide-favorite-content-to-the-inbox/

(Not just these – almost every one I can recall using has been able to, to some degree: Forte, Eudora...)

RSS reader apps: Then, you have purpose built RSS reader applications of which I won’t even bother mentioning because I’m sure you can use a search engine/app store.

The problem with using standalone RSS reader apps or your email client is that if you want to read RSS feed content on another device, you have to copy/clone the news feeds you subscribe to onto that device (although there are OPML files which can be used to export/import these). And then you have to manually pick through what you’ve already read on the other device vs. this device. That leads us to:

RSS ‘Feed Aggregators’ services or apps that synchronize between devices: These apps or services let you subscribe to RSS feeds in one spot and read (and keep track of what you’ve read) across multiple devices. Some are websites, some are apps, some are browser plugins.

Again six seconds in any search engine will bring up dozens of these services, but some examples just off the top of my head are Feedly, Feeder, NewsBlur but there are tons of others.

The problem with these is that you gotta create an account and maybe have a subscription... or deal with ads. And you have to use their APP. Whatever happened to just using a web browser? What if I want to self-host my own? You can do that!

Self host your own RSS news aggregator

There are lots out there, and you can find them easily by searching for self-hosted RSS aggregator. I’ve tried a few, including:

  • FreshRSS – https://github.com/FreshRSS/FreshRSS
  • Tiny Tiny RSS – https://tt-rss.org/

.. and more from https://alternativeto.net/software/feedly/?platform=self-hosted amongst others.

The problem I faced is that I just wanted something simple and single-user that doesn’t require a Docker/K8s container, where reading the feeds could be done through a browser. AND I wanted it served on a custom port – for one, a little security through obscurity; for another, I already run THIS WriteFreely site on 80 and 443, and I didn’t want to futz around with a reverse proxy – yet.

Enter

Yarr – Yet Another RSS Reader

get it: https://github.com/nkanaev/yarr

You can run Yarr on a desktop – it then self-hosts at localhost:port and puts an icon down in your system tray that launches your browser to that address.

Or you can set it up as a linux service, which is what I have done.
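A minimal systemd unit for that might look something like this sketch – the binary path matches the startup script earlier; the bind address, port, service user and credentials are placeholders to substitute for your own:

```ini
[Unit]
Description=yarr - Yet Another RSS Reader
After=network.target

[Service]
# address, port and credentials below are placeholders
ExecStart=/usr/local/bin/yarr/yarr -addr "127.0.0.1:7070" -auth someuser:somepassword
Restart=on-failure
User=yarr

[Install]
WantedBy=multi-user.target
```

Drop it in /etc/systemd/system/yarr.service, then systemctl daemon-reload and systemctl enable --now yarr.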

(see below for that)

Where /How do I find RSS feeds for various sites/platforms?

Ok so you’ve got a suitable reader setup, how do you find RSS feeds? If you’re lucky the website will show it to you:

Snip from the techdirt website showing they have a nice SUBSCRIBE TO MAH RSS FEED RIGHT HERE button

Other times you have to go digging and searching a bit using site search or search engine. For example here’s the Washington Post’s list of RSS feeds:

https://www.washingtonpost.com/discussions/2018/10/12/washington-post-rss-feeds/

Sometimes a blog or a news site doesn’t WANT you to know their RSS address... because if you could use an RSS reader you could dodge their cookies, ads, proprietary apps and other bullshit (and we can’t have that happen!). But even though it’s hidden, I bet the RSS feed is still there; you just have to be sneaky:

Substack / WordPress RSS

Take the blog’s base URL and append /feed. This doesn’t work with sub-substacks or “channels”, i.e. sub-blogs of a larger account.

https://www.snackstack.net + /feed –> https://www.snackstack.net/feed
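That append-a-/feed recipe is trivial to script. Here’s a minimal sketch in bash, using the snackstack.net example above:

```shell
# WordPress and Substack blogs serve their RSS feed at <base url>/feed.
BLOG_URL="https://www.snackstack.net"
FEED_URL="${BLOG_URL%/}/feed"   # strip a trailing slash if present, then append /feed
echo "$FEED_URL"                # https://www.snackstack.net/feed
```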

Reddit RSS

You can subscribe to individual subreddits (OR users) by simply tacking /.rss onto the end of the subreddit or user URL – each new top-level post/thread is an “item” in RSS parlance, but you don’t get the comments and replies (ugh, for those you’ll have to use the Reddit app like a filthy commoner):

https://www.reddit.com/r/canada/.rss
https://www.reddit.com/u/tezoatlipoca/.rss

(how long before they close THIS hole I wonder?)

Medium RSS

Medium is actually up front about theirs, good job Medium. https://help.medium.com/hc/en-us/articles/214874118-Using-RSS-feeds-of-profiles-publications-and-topics

Except that for paywalled content, all you’ll get is essentially the slug.

VideoSift RSS

You can get a stream of everything submitted, but before it’s “sifted”: https://videosift.com/.rss

You can also RSS subscribe to individual user posts: https://videosift.com/motorsports/rss2/newtboy/member.xml and their bookmarks: https://videosift.com/motorsports/rss2/newtboy/memberfav.xml (these are on their user page)

but you can’t follow channels or tags

Youtube RSS

Find the Youtube channel URL you want to follow: https://www.youtube.com/channel/UCRarNme4iUanPLflogg-Ntg

Now take that channel ID off the end and tack it to the end of this: https://www.youtube.com/feeds/videos.xml?channel_id=

voila, RSS feed: https://www.youtube.com/feeds/videos.xml?channel_id=UCRarNme4iUanPLflogg-Ntg

thanks to https://danielmiessler.com/p/rss-feed-youtube-channel/ for this one.
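The whole recipe fits in a couple of lines of bash; a minimal sketch using the example channel above:

```shell
# Build a YouTube RSS feed URL from a channel URL of the form
# https://www.youtube.com/channel/<channel id>
CHANNEL_URL="https://www.youtube.com/channel/UCRarNme4iUanPLflogg-Ntg"
CHANNEL_ID="${CHANNEL_URL##*/}"   # keep only what follows the last '/'
FEED_URL="https://www.youtube.com/feeds/videos.xml?channel_id=${CHANNEL_ID}"
echo "$FEED_URL"
```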

What if I can’t find the RSS feed?

A lot of RSS aggregator services have “scraping” capabilities, where they’ll scrape the webpage in question and generate a feed for you. (Yarr doesn’t do this; it only works on a well-formed RSS feed.)

If you have a website that you want to generate an RSS feed from and can’t figure out how, msg me at @tezoatlipoca@mas.to and maybe I can figure it out... but really all I’ll be doing is using a search engine. And if you figure out how to get RSS from a website I haven’t mentioned, msg me and I’ll add it here!

Yarr linux installation

  1. grab the source and compile or grab the prebuilt binary of your choosing
  2. dump the binary somewhere (/usr/local/bin/yarr/yarr) and chmod appropriately for execution by users
  3. write a user script (/home/<user name>/yarr.sh) that launches said application and chmod it for execution ONLY by that user:

    #!/bin/bash
    # Launch yarr bound to the given address/port with basic auth,
    # logging stdout and stderr to /var/log/yarr.log
    /usr/local/bin/yarr/yarr -addr "<ip to bind to>:<port>" -auth <user name>:<user yarr pwd not their system pwd> > /var/log/yarr.log 2>&1
    
  4. make sure the user above has permission to write to that log file (otherwise yarr won’t run)

  5. If you want to run it as is, just run the script. If you want to install it as a system service that autostarts, create a systemd file that identifies this as a system service:

[Unit]
Description=Yarr.<user>
After=network.target

[Service]
Type=simple
User=<user>
Restart=always
ExecStart=/home/<user name>/yarr.sh

[Install]
WantedBy=multi-user.target
  6. register and start the service.
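For that last step, assuming you saved the unit file above as /etc/systemd/system/yarr.service, the usual systemd commands are:

```shell
sudo systemctl daemon-reload              # make systemd pick up the new unit file
sudo systemctl enable --now yarr.service  # start it now, and at every boot
systemctl status yarr.service             # confirm it is running
```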

When it runs, yarr will host itself at <host/ip>:<port> and it will prompt for the credentials specified on the command line in yarr.sh. Its database of RSS feeds and read status is stored at /home/<user>/.config/yarr/storagedb as a single self-contained sqlite database file, so whatever user/home backup system you have in place will automatically snag your yarr info too.

By changing the user and ports used in the launching script file (and creating suitable systemd entries) you can create a custom yarr instance for multiple users.
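One way to sketch that without copying the unit file per user (this uses a stock systemd feature, not anything yarr-specific) is a template unit saved as /etc/systemd/system/yarr@.service, where systemd expands %i to whatever follows the @ when you enable an instance:

```ini
[Unit]
Description=Yarr.%i
After=network.target

[Service]
Type=simple
User=%i
Restart=always
ExecStart=/home/%i/yarr.sh

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now yarr@alice` runs alice's /home/alice/yarr.sh as user alice, assuming each user's yarr.sh binds a distinct port.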

Changelog

2024-01-07 – initial
2024-01-07A – moved yarr config to bottom; + toc

follow –> AT tezoatlipoca AT mas.to, or AT tezoatlipoca AT awadwatt.com to follow this blorg directly on the Fediverse.

Sometimes we get stuck in a bog. Everything seems impossible, insurmountable, too much. No motivation, no energy, why bother? where to start? The school of No More Zero Days simplifies things down to a single binary. Do one thing, no matter how trivial – make it a Not Zero Day – and you win. Because anything at all is better than nothing.

Many years ago this got posted to the sub-reddit /r/GetDisciplined by user /u/ryans01 in response to a user's post (since deleted) about having a tough couple of days and beating themselves up about it.

This shot to the front page, and even spawned memes, motivational posters, video anaglyphs, and a dedicated sub for disciples at https://www.reddit.com/r/NonZeroDay/

It has been perhaps one of the biggest crutches in my day-to-day life since then, and it has been a huge help to my ADHD neurospicy girlfriend.

TL;DR – No More Zero Days

The Rules:

1. No More Zero Days

Promise yourself that you will do one thing every day that takes you one step closer to your goal(s)

2. Be Grateful To The Three "You"s

Past Self: Thank your past self for the favours they did for you
Present Self: Do your future self a favour - they deserve it!
Future Self: Your absolute best friend, and a great person.

3. Forgive Yourself

It's okay to fuck up. Forgive your past self, and be a better friend for your future self.

4. Exercise and Books

Get your heart rate up and try to read when you can. Also the easiest way to avoid a Zero Day!
Read more...

The anti-woke, anti-EDI Stop SOPs lost the election, but they're still up to their dickish ways.

A few weeks ago I wrote about the epic battle leading up to this year's quadrennial Law Society of Ontario bencher election. Every four years, all 57k lawyers and 10k paralegals in the province elect “benchers” to their governance board.

On one hand, you have the Bencher Good Governance Coalition which wants to do/keep doing “good” things like support and enforce a statement of principles, promote and support Equality, Diversity, and Inclusion initiatives, and outreach programs to bring legal supports to marginalized communities. You know, to fight the legacy of the law profession being a bunch of straight old white dudes, where you can't be openly gay, trans, female, or not white.

On the other hand, you have the Stop SOP team. Anti-woke, anti-.. well.. everything. They feel that EDI programs and adherence to a statement of principles (to not be racist, hateful, phobic dicks) are compelled speech... and cost money they don't want to spend. And they generally just like complaining about everything. If we were south of the border, they'd be Republicans. And their biggest supporter is Dickimus Prime himself, Jordan Peterson.

Well, I am pleased to say that the BGGC won. All 45 slots (20 from Toronto, 20 from outside, and five paralegals).

Like.. it wasn't even close. Here are the votes from the LSO's election results page, vs. the BGGC's slate of candidates (the Stop SOP slate has been removed from their dank website, almost as if they're embarrassed by their shellacking). The red lines are where the BGG slates end.

Results 1 – lawyers from Toronto
Results 2 – lawyers from outside Toronto
Results 3 – paralegals

Now, since the election results, a few of the Stop SOPers have been promoted to fill vacancies in the benchers – one bencher became the new “Treasurer” (the LSO “president” if you will), and three were appointed to actual benches, i.e. made judges.

One of the “promoted” Stop SOP dudes, Murray Klippenstein, is suing the LSO over that 2016 survey/report that was the genesis of the LSO's proposed Statement of Principles.. complaining amongst other things that:

  1. respondent rate was low, at 6%
  2. respondents were self-selected
  3. didn't differentiate between lawyers and paralegals
  4. the actual report was never released outside of the LSO committee on equality and diversity (who crafted the SOP)

Yeah – doesn't like the SOP, so let's attack the statistical validity of the report that suggested its necessity. But Murray needs to stay in his lane on this one.

  1. any statistics wonk will tell you that sampling 10% of your population is only practical up to about 1,000 samples; anything more than 1,000 is overkill. 6% of 57,000 lawyers + 10,000 paralegals is – I know math for lawyers is hard, lemme see – 4,020. So way more than the 1,000 needed for statistical rigor.

  2. So fucking what – if a subset of LSO members feel sexualized, racialized, discriminated against, or marginalized, and a subset doesn't... and if the former are the only ones who respond to a survey on EDI, does it matter that the latter does not? Of course not. The former is saying there's a problem; just because the latter isn't aware of a problem doesn't make the problem not exist. Ludicrous.

  3. Again, so fucking what – does it matter if someone is a lawyer or a paralegal when they're being racialized, sexualized, discriminated against, or marginalized? Of course, it fucking doesn't.

  4. This is the only one where Murray may have a point. But, usually when survey reports aren't openly distributed, there are good reasons. Personal testimonies or unique characterizations that might doxx individuals for one. Yeah, this can be sanitized, but that costs money, and Murray and the SSoP heads are already whining about how $wasteful$ the LSO is. Maybe he should join the EDI committee.

Oh no wait, he doesn't believe it's necessary to have an EDI committee.

Changelog

2023-07-03 – initial
2023-07-03 – bork link line 1

follow –> AT tezoatlipoca AT mas.to, or AT tezoatlipoca AT awadwatt.com to follow this blorg directly on the Fediverse.

You like the style of my blog? It only took me all afternoon. I may have some work catch-up to do this weekend.

So one of the reasons I broke up with WordPress – other than it was heckaspensive to host a domain, it was over-featured, I wanted to self-host, etc., etc. – was because I couldn't futz with the stylesheet for this blog without paying even more money. That's dumb.

I mean all I wanted was monospaced text for snips of code, is that too much to ask for?
Read more...

If you live in #Ontario #Canada and aren't a #lawyer you probably are not aware of the monumental battle going on right now in the province's legal circles. But you should be.

The Law Society of Ontario, #LSO, is the regulatory body for all lawyers and paralegals in the province. All of them: 57k lawyers, 10k paralegals. Uniquely, it's one of the few provincial regulatory agencies with no government representation (why? Because how can you litigate against the government if you're professionally regulated by it?).

So they are entirely self-governing. And for the past few weeks they have been conducting their every-four-years LSO Bencher election: 20 lawyers from Toronto, 20 from outside, and 5 paralegals who set the executive direction of the LSO. These are the benchers. Last day of voting is Friday.

Read more...

This is one of my favourite pictures: Main and Danforth, Toronto, 1965, looking NE; a PCC streetcar has just left the Main St. turnaround loop and is turning westward back downtown along Danforth.

I was sitting with my parents a few years ago at breakfast and we were talking about the house and childhood homes. I knew that my dad (1949) grew up in the east end of Toronto and that his father had been a butcher at Dominion. His mother and aunt had owned and run the flower shop they had inherited from their father. After high school (at Riverdale Collegiate) was over, in the afternoons dad would drive over to the flower shop and make deliveries.

Curious about that, I asked where the flower shop had been. The corner of Main and Danforth. Up popped this picture, courtesy of Beach Metro Community News, David Van Dyke and the Toronto Beaches Branch Public Library. “This look familiar?” I asked as I slid the tablet over. “That’s it,” he smiled, and then he laughed.

Read more...