<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">

<channel>
<title>Sateallia's Domain</title>
<link>https://sateallia.org/</link>
<description>I miss blogs.</description>

<item>
<title>cso-thumbnailer or: How I Learned to Stop Procrastinating and Write the Post</title>
<link>https://sateallia.org/blog/cso-thumbnailer/</link>
<guid>https://sateallia.org/blog/cso-thumbnailer/</guid>
<description>
<![CDATA[
<p>
<p>I lied when I said I intended to publish a blog post alongside each of my releases. Let me make it right.</p>
<p>I have a lot of PSP UMD games. When I back them up on my computer, I back them up as CSO images which are registered as <a href="https://gitlab.freedesktop.org/xdg/shared-mime-info/-/blob/2.4/data/freedesktop.org.xml.in?ref_type=tags#L1783">&quot;application/x-compressed-iso&quot;</a> in <a href="https://gitlab.freedesktop.org/xdg/shared-mime-info">XDG's MIME database</a>. They don't compress as well as CHDs, but they run on real hardware, so I decided to use them.</p>
<p>Most Linux desktop environments (except KDE, which has its own system) use GNOME's thumbnailer standard. I wanted to add some flair to my disc backups, so I ended up writing a CSO thumbnailer. According to the timestamps in the release of cso-thumbnailer, I wrote it in May 2022 and released it in February 2024. It took about two years to release simply because I code dirty, and it took me that long to get around to cleaning up its source code. This blog post was written in April 2025. That's a lot of procrastination. I need to do better.</p>
<p>Let's start by running some sanity checks on the input file. We'll check that the file is not too small, that it starts with the format's magic bytes and that its header is not corrupt.</p>
<pre><code>#define CISO_MAGIC 0x4F534943 // &quot;CISO&quot;
#define ZISO_MAGIC 0x4F53495A // &quot;ZISO&quot;
unsigned char header_pvd[6] = {0x01, 0x43, 0x44, 0x30, 0x30, 0x31}; // 0x01 + &quot;CD001&quot;

ptr_input = fopen(argv[1], &quot;rb&quot;); // binary mode for portability
if (ptr_input == NULL) error_and_exit(&quot;Error opening input file\n&quot;);
fseek(ptr_input, 0L, SEEK_END);
uint64_t size_file = ftell(ptr_input);
rewind(ptr_input);
if(size_file &lt; 32) error_and_exit(&quot;Input file too small to be valid\n&quot;);

unsigned char header_cso[0x18];
if(fread(header_cso, 1, sizeof(header_cso), ptr_input) != sizeof(header_cso)) error_and_exit(&quot;Error reading input file header\n&quot;);
magic = header_cso[0x0] + (header_cso[0x1] &lt;&lt; 8) + (header_cso[0x2] &lt;&lt; 16) + (header_cso[0x3] &lt;&lt; 24); // assemble little-endian 32-bit value
if(magic != CISO_MAGIC &amp;&amp; magic != ZISO_MAGIC) error_and_exit(&quot;File couldn't be identified as CISO or ZISO\n&quot;);

uint64_t size_uncompressed = 0; 
for(int i = 7; i &gt;= 0; --i) { 
	size_uncompressed &lt;&lt;= 8; 
	size_uncompressed |= (uint64_t)header_cso[0x8 + i]; 
}
size_block = header_cso[0x10] + (header_cso[0x11] &lt;&lt; 8) + (header_cso[0x12] &lt;&lt; 16) + (header_cso[0x13] &lt;&lt; 24);
int version = header_cso[0x14];
alignment_index = header_cso[0x15];
if((magic == ZISO_MAGIC &amp;&amp; version != 1) || !size_uncompressed || !size_block) error_and_exit(&quot;Corrupt header in specified CISO/ZISO file\n&quot;);
</code></pre>
<p>The file we want to extract is named &quot;ICON0.PNG&quot;. While every official release I backed up has the image file at the exact same location on the disc, I also have some games with translation patches applied which keep the image file at different locations. To avoid decompressing the entire disc image just to extract a single image file, we can look up the image file's location in the disc image's table of contents section.</p>
<pre><code>#define MB64_IN_BYTES (64 * 1024 * 1024) // parenthesized so the macro expands safely
unsigned char record_icon0[10] = {0x09, 0x49, 0x43, 0x4F, 0x4E, 0x30, 0x2E, 0x50, 0x4E, 0x47}; // 0x09 + &quot;ICON0.PNG&quot;

int total_block = floor(size_uncompressed / size_block);
int limit64mb_block = floor(MB64_IN_BYTES / size_block);
if(total_block &gt; limit64mb_block) total_block = limit64mb_block; // The first 64MB should be enough to locate the file; we can decompress more later if needed
int size_index = (total_block + 1) * sizeof(uint32_t);

buffer_index = calloc(1, size_index);
buffer_output = calloc(1, size_block);
buffer_input = calloc(1, size_block * 2);
file_iso = (int*) calloc(1, size_uncompressed * sizeof(int));
if(!buffer_index || !buffer_output || !buffer_input || !file_iso) error_and_exit(&quot;Couldn't allocate enough memory\n&quot;);
fread(buffer_index, 1, size_index, ptr_input);
decompress(total_block);
	
uint64_t *location_pvd = memmem(file_iso, size_uncompressed * sizeof(int), header_pvd, sizeof(header_pvd));
unsigned char pvd[2048] = {0};
unsigned char magic_iso[8] = {0};
unsigned char first18bytes[18] = {0};
memcpy(&amp;pvd, location_pvd, 2048);
		
uint64_t *location_record_icon0_name = memmem(file_iso, size_uncompressed * sizeof(int), record_icon0, sizeof(record_icon0));
uint64_t *location_record_icon0 = location_record_icon0_name - 4;
memcpy(&amp;first18bytes, location_record_icon0, 18);
int location = first18bytes[2] + (first18bytes[3] &lt;&lt; 8) + (first18bytes[4] &lt;&lt; 16) + (first18bytes[5] &lt;&lt; 24);
location = location * size_block;
int size = first18bytes[10] + (first18bytes[11] &lt;&lt; 8) + (first18bytes[12] &lt;&lt; 16) + (first18bytes[13] &lt;&lt; 24);
</code></pre>
<p>While we're at it, let's check whether we can confirm that the disc image is really a PSP UMD disc backup somewhere around here.</p>
<pre><code>unsigned char magic_psp[8] = {0x50, 0x53, 0x50, 0x20, 0x47, 0x41, 0x4D, 0x45}; // &quot;PSP GAME&quot;

for(int i = 0; i &lt; 8; i++) magic_iso[i] = pvd[8 + i];
for(int i = 0; i &lt; 8; i++) if(magic_iso[i] != magic_psp[i]) error_and_exit(&quot;File couldn't be identified as PSP ISO\n&quot;);
</code></pre>
<p>Now that we know for sure that we're dealing with a PSP UMD disc backup and where its &quot;ICON0.PNG&quot; file is located, we can selectively decompress its location and extract it without decompressing the entire image.</p>
<pre><code>if((location + size + (2048 * 8)) &gt; MB64_IN_BYTES) {
	free(buffer_index);
	free(file_iso);
	
	total_block = floor((location + size + (2048 * 8)) / size_block);
	size_index = (total_block + 1) * sizeof(uint32_t);
	
	memset(buffer_output, 0, size_block);
	memset(buffer_input, 0, size_block * 2);
	buffer_index = calloc(1, size_index);
	file_iso = (int*) calloc(1, (location + size + (2048 * 8)) * sizeof(int));

	if(!buffer_index || !buffer_output || !buffer_input || !file_iso) error_and_exit(&quot;Couldn't allocate enough memory\n&quot;);
	rewind(ptr_input); // go back to the start before re-reading the header and the larger index
	fread(header_cso, 1, sizeof(header_cso), ptr_input);
	fread(buffer_index, 1, size_index, ptr_input);
	decompress(total_block);
}

memcpy(png, (file_iso + (location / sizeof(int))), size);
</code></pre>
<p>Congratulations to us, as we have just extracted a file from another file. Write the boilerplate for the thumbnailer standard and you have a thumbnailer on your hands.</p>
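<p>For completeness, the registration half of that boilerplate is just an entry file dropped into the thumbnailers directory. A minimal sketch, assuming a binary named cso-thumbnailer; the Exec line here is illustrative, not copied from cso-thumbnailer's actual packaging:</p>
<pre><code>[Thumbnailer Entry]
TryExec=cso-thumbnailer
Exec=cso-thumbnailer %i %o %s
MimeType=application/x-compressed-iso;
</code></pre>
<p>Installed under /usr/share/thumbnailers/, this tells the file manager to invoke the binary with the input file (%i), output thumbnail path (%o) and requested thumbnail size (%s) for every file of that MIME type.</p>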
<center><img src="https://sateallia.org/img/cso_thumbnailer.jpg"></center>
<p>Writing a thumbnailer is easy and fun. Why don't we have a lot more of them?</p>
&nbsp;
</p>
]]>
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 +0000</pubDate>
<author>Sateallia</author>
</item>
<item>
<title>The Illusion of Hardware Slowing Down</title>
<link>https://sateallia.org/blog/the-illusion-of-hardware-slowing-down/</link>
<guid>https://sateallia.org/blog/the-illusion-of-hardware-slowing-down/</guid>
<description>
<![CDATA[
<p>
<p>Does <em>your</em> hardware get twice as fast every two years? It does not! I believe that <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore's Law</a>, while historically and industrially important as a metric, is of little use to everyday consumers. Hardware gets faster <em>in periodic bursts, as it is upgraded or replaced</em>. Software inefficiency, on the other hand, keeps growing at a consistent rate. <a href="https://en.wikipedia.org/wiki/Wirth%27s_law#Other_names">May's Law</a>, a <a href="https://en.wikipedia.org/wiki/Wirth%27s_law">Wirth's Law</a> variation, is therefore incorrect. Growing software inefficiency does not merely compensate for Moore's Law; in the real world, excluding a brief period after a hardware upgrade, it overshadows it! This phenomenon creates the illusion that devices get slower with time when they do not. Well, maybe. More on this later.</p>
<p>The phenomenon Wirth's Law describes is a necessity! To explain the reasoning behind this claim, I came up with two possible reasons software gets more inefficient by the day:</p>
<ul>
<li>More lenient coding practices</li>
<li>Abstractions</li>
</ul>
<p>To understand both points better, please visualize a single-axis spectrum in your head with &quot;ease of development&quot; on the left side and &quot;optimized code&quot; on the right side.</p>
<p>More lenient coding practices are trade-offs that exchange optimization for ease of development. How far they land from the right side of the spectrum varies from case to case.</p>
<p>Abstractions, on the other hand, sit at the far left edge of the spectrum. That is not to say they are useless! Abstractions are necessary evils that come into being for valid reasons such as maintaining backwards compatibility, easing cross-platform development to the point that it is now possible in feasible time frames, reducing repeated code and eliminating divided codebases. In fact, I'm actually quite fond of abstractions! <em>Any</em> developer who has ever worked on a cross-platform project will sing abstractions' praises, and having worked on projects like that, I too think they're invaluable. Of course, in an ideal world, I'd be singing <a href="https://en.wikipedia.org/wiki/Progressive_web_app">Progressive Web Apps</a>' praises, but that ship has already sailed. One can't always get what one wants; the world is built on compromises and this is the one we ended up with. The fact remains, however, that abstractions, no matter how necessary or useful they may be, add onto the already complex software stack that is quasi-necessary for feasible development today, resulting in inefficiency.</p>
<p>The inefficiency caused by these two factors is expected to be negated by Moore's Law.</p>
<p>My friends complain about my blog posts being walls of text so I made the following graph as a visual aid to explain my thinking better:</p>
<center><img src="https://sateallia.org/img/graph_law.svg" height="320" width="400"></center>
<p>The very simplified graph above assumes the following:</p>
<ul>
<li>Performance effects caused by <a href="https://en.wikipedia.org/wiki/Transistor_aging">silicon/transistor aging</a> are within the margin of error and therefore negligible.</li>
<li>Software inefficiency and computing power grow at similar speeds within the margin of error. (They do not!)</li>
<li>There are no power throttling measures in place. Some battery-powered devices nowadays use power throttling measures based on battery statistics such as remaining battery capacity in order to maintain a stable battery time throughout the devices' lifetime. There is also temperature related power throttling to consider as many chips nowadays throttle themselves when they measure above ideal temperatures in order to protect their own integrity.</li>
</ul>
<p>I have spent a month asking around and browsing to find a name for this phenomenon but my efforts were unfortunately in vain. If there is a name for it that I am unaware of, feel free to contact me using the e-mail address in this site's footer. Until then, I am taking this opportunity to not so humbly call it &quot;Sateallia's Law&quot;.</p>
<p>It goes something like this:
<em>Hardware relies on periodic updates to catch up with the growing needs of ever more inefficient software, resulting in an unstable performance timeline graph.</em></p>
&nbsp;
</p>
]]>
</description>
<pubDate>Thu, 06 Oct 2022 00:00:00 +0000</pubDate>
<author>Sateallia</author>
</item>
<item>
<title>The Tragedy of Lost Efficiency</title>
<link>https://sateallia.org/blog/the-tragedy-of-lost-efficiency/</link>
<guid>https://sateallia.org/blog/the-tragedy-of-lost-efficiency/</guid>
<description>
<![CDATA[
<p>
<p>The mere thought of the amount of energy waste caused by computers running precompiled binaries not optimized for processors they run on keeps me up at night. There needs to be a better solution than just compiling on-site.</p>
<h2>Potential Solutions</h2>
<p>I see two problems here, each with two potential solutions:</p>
<p><strong>How many build choices will we supply?</strong></p>
<ol>
<li>For every optimization level combination possible.</li>
<li>Only for Processor Supplement ABIs.</li>
</ol>
<p>In an ideal world we would have builds for every possible optimization level combination, but in the real world that's just not practically viable. In April 2021, Arch Linux merged <a href="https://gitlab.archlinux.org/archlinux/rfcs/-/merge_requests/2">an RFC to provide x86-64-v3</a> feature level builds, which also discusses the problems that will be encountered along the way. The efforts to make the RFC a reality are <a href="https://lists.archlinux.org/pipermail/arch-dev-public/2022-January/030646.html">still ongoing</a>.</p>
<p><strong>How will the optimized code be loaded?</strong></p>
<ol>
<li>Every build will be compiled per optimization level and be provided as a different download.</li>
<li>The main application will be compiled normally, but the processor-intensive parts will be compiled with optimizations per feature level and stored in different shared objects.</li>
</ol>
<p>Let's do an experiment to see how hard the second option would be.</p>
<h2>The Experiment</h2>
<p>I'll create a basic function to export later. Let's call this file libshared.c for future reference.</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;math.h&gt;
 
int fun_had(int in1) {
    int ret = in1;
    ret = ret * ret - 1;
    ret = pow(ret, 2);
    return ret;
}
</code></pre>
<p>In the build script I'll build libshared.c in both unoptimized and optimized forms, then convert both builds into shared objects. Let's call this file build.sh for future reference.</p>
<pre><code>rm -r libsharednormal.o libsharedoptimized.o a.out binary libsharednormal.so libsharedoptimized.so 2&gt;/dev/null

gcc -c -Wall -Werror -fpic libshared.c -o libsharednormal.o
gcc -c -Wall -Werror -fpic -march=native -mtune=native -O3 libshared.c -o libsharedoptimized.o

echo $(md5sum libsharednormal.o)
echo $(md5sum libsharedoptimized.o)

gcc -shared -o libsharednormal.so libsharednormal.o -lm
gcc -shared -o libsharedoptimized.so libsharedoptimized.o -lm

rm -r libsharednormal.o
rm -r libsharedoptimized.o

gcc -Wall -O0 -o binary main.c # build the binary normally, no optimizations
</code></pre>
<p>I'll start building our main program, which calls into the shared objects. Let's call this file main.c for future reference.</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include &lt;dlfcn.h&gt;
#include &lt;sys/time.h&gt;

int (*fun_had)(int); // declaring a prototype before the actual function is loaded
struct timeval start, stop;

int main(int argc, char *argv[]) {
    if(argc &lt; 2) return -1;
    int num = strtol(argv[1], NULL, 10); // argv[1] is now stored in num
    
    char toLoad[256] = &quot;&quot;;
    __builtin_cpu_init();
    if(__builtin_cpu_supports(&quot;avx2&quot;)) strcat(toLoad, &quot;./libsharedoptimized.so&quot;);
    else strcat(toLoad, &quot;./libsharednormal.so&quot;);
    printf(&quot;\nlib to load: %s\n&quot;, toLoad);
    
    void *handler_dl = dlopen(toLoad, RTLD_NOW);
    if(!handler_dl) { 
        printf(&quot;dlopen error: %s\n&quot;, dlerror()); 
        return -1; 
    }
    fun_had = dlsym(handler_dl, &quot;fun_had&quot;); // the actual function is now loaded
	
    gettimeofday(&amp;start, NULL);
    int final = 0;
    for(int i = 0; i &lt; 100000; i++) final = fun_had(num);
    printf(&quot;%i\n&quot;, final);
    gettimeofday(&amp;stop, NULL);
    printf(&quot;function took %lu us\n&quot;, (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
    dlclose(handler_dl);
    
    printf(&quot;\n&quot;);
    printf(&quot;loading unoptimized function for testing purposes\n&quot;);
    handler_dl = dlopen(&quot;./libsharednormal.so&quot;, RTLD_NOW); // let's load the unoptimized function for testing
    if(!handler_dl) { 
        printf(&quot;dlopen error: %s\n&quot;, dlerror()); 
        return -1; 
    }
    fun_had = dlsym(handler_dl, &quot;fun_had&quot;);
	
    gettimeofday(&amp;start, NULL);
    int final2 = 0;
    for(int i = 0; i &lt; 100000; i++) final2 = fun_had(num);
    printf(&quot;%i\n&quot;, final2);
    gettimeofday(&amp;stop, NULL);
    printf(&quot;function took %lu us\n&quot;, (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
    dlclose(handler_dl);
    
    return 0;
}
</code></pre>
<p>Most of this code is testing boilerplate, so some explanation might be necessary here.</p>
<p>I'll go over the shared object loading first:</p>
<pre><code>#include &lt;dlfcn.h&gt;
int (*fun_had)(int);

void *handler_dl = dlopen(&quot;./libshared.so&quot;, RTLD_NOW);
if(!handler_dl) { 
    printf(&quot;dlopen error: %s\n&quot;, dlerror()); 
    return -1; 
}
fun_had = dlsym(handler_dl, &quot;fun_had&quot;);
</code></pre>
<p>This snippet here will create a prototype, open the shared object file and then finally point our prototype to the function exported from the shared library.</p>
<p>To feed the above snippet, I also need to pick which shared library to load:</p>
<pre><code>char toLoad[256] = &quot;&quot;;
__builtin_cpu_init();
if(__builtin_cpu_supports(&quot;avx2&quot;)) strcat(toLoad, &quot;./libsharedoptimized.so&quot;);
else strcat(toLoad, &quot;./libsharednormal.so&quot;);
</code></pre>
<p>For testing purposes I used <code>__builtin_cpu_supports</code> to pick between them, using AVX2 support as the distinction. There are many other ways to do this, including probing <code>/proc/cpuinfo</code>, but for simplicity's sake I'll go with this.</p>
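<p>For the curious, the <code>/proc/cpuinfo</code> route boils down to scanning the &quot;flags&quot; line for a whole-word match. A self-contained sketch, with a canned flags string standing in for the real file and a has_flag helper of my own invention (not part of the experiment):</p>
<pre><code>#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

// Returns 1 if `flag` appears as a whole word in a cpuinfo-style flags line.
// Plain substring matching would be wrong: &quot;avx&quot; is a prefix of &quot;avx2&quot;.
int has_flag(const char *flags_line, const char *flag) {
    size_t len = strlen(flag);
    const char *p = flags_line;
    while((p = strstr(p, flag)) != NULL) {
        int starts = (p == flags_line) || (p[-1] == ' ');
        int ends = (p[len] == '\0') || (p[len] == ' ') || (p[len] == '\n');
        if(starts &amp;&amp; ends) return 1;
        p += len; // keep scanning past this partial match
    }
    return 0;
}

int main(void) {
    // In a real probe this line would come from fgets() over /proc/cpuinfo.
    const char *sample = &quot;fpu vme de pse avx sse sse2 avx2&quot;;
    printf(&quot;avx2: %s\n&quot;, has_flag(sample, &quot;avx2&quot;) ? &quot;yes&quot; : &quot;no&quot;); // prints &quot;avx2: yes&quot;
    return 0;
}
</code></pre>
<p>The same helper then decides which shared object name to hand to <code>dlopen</code>.</p>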
<p>Finally, I need to call the function. For testing purposes, I'll also measure the time it takes to call the function.</p>
<pre><code>#include &lt;sys/time.h&gt;
struct timeval start, stop;

gettimeofday(&amp;start, NULL);
int final = 0;
for(int i = 0; i &lt; 100000; i++) final = fun_had(num);
printf(&quot;%i\n&quot;, final);
gettimeofday(&amp;stop, NULL);
printf(&quot;function took %lu us\n&quot;, (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
</code></pre>
<p>Now I'll try running it.</p>
<pre><code>$ ./build.sh &amp;&amp; ./binary 59
84f6ada061366dc244fb474c5ba50347 libsharednormal.o
83851c036b3a04d53ff016e2cca48cad libsharedoptimized.o

lib to load: ./libsharedoptimized.so
12110400
function took 182 us

loading unoptimized function for testing purposes
12110400
function took 1901 us
</code></pre>
<p>This is not practical, at all! The runtime feature level detection code would evolve into spaghetti if we were to use it for real.</p>
<p>For more practical purposes, I envision a library that takes care of all of this, including mechanisms for building objects for every desired CPU feature level combination and picking between them at runtime.</p>
<h2>Steps Already Taken</h2>
<p>GNU C Library has something very similar! They call it <a href="https://sourceware.org/pipermail/libc-alpha/2020-June/115250.html">glibc-hwcaps</a> and it's very cool! Unfortunately I'm of the opinion that C libraries should be POSIX only and stuff like this should be handled at compile time or at runtime via libraries. In fact, GNU C Library uses <a href="https://www.gnu.org/software/libc/manual/html_node/Tunables.html">tunables</a> to do something similar internally already. Take a look at its <a href="https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86_64/multiarch/ifunc-impl-list.c">ifunc-impl-list.c</a> for some examples.<br>
You may also be interested in reading about <a href="https://www.kernel.org/doc/html/latest/arm64/elf_hwcaps.html">ARM64 ELF hwcaps</a> and <a href="https://kernel.org/doc/html/v6.0-rc2/powerpc/elf_hwcaps.html">POWERPC ELF hwcaps</a>.</p>
<p>On the compiler side of things, GCC has had x86-64-vX feature level support since <a href="https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=324bec558e95584e8c1997575ae9d75978af59f1">October 2020</a>. I do not use or follow other compilers so I will not be able to comment on them.</p>
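<p>Also worth noting on the GCC front: the <code>target_clones</code> function attribute already automates the pick-at-runtime part within a single binary, compiling one clone per listed target and dispatching through the same ifunc mechanism glibc uses. A minimal sketch reusing this post's fun_had (the clone list is just an example; this assumes an x86-64 Linux glibc toolchain):</p>
<pre><code>#include &lt;stdio.h&gt;

// GCC emits one body per listed target plus an ifunc resolver that picks
// the best supported clone once, at load time.
__attribute__((target_clones(&quot;avx2&quot;, &quot;default&quot;)))
int fun_had(int in1) {
    int ret = in1;
    ret = ret * ret - 1;
    ret = ret * ret; // same as pow(ret, 2) in the original, minus libm
    return ret;
}

int main(void) {
    printf(&quot;%d\n&quot;, fun_had(59)); // prints 12110400, matching the experiment
    return 0;
}
</code></pre>
<p>No dlopen dance, no hand-rolled feature detection; the loader resolves the right clone before main even runs.</p>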
<p>Likewise, there may be other efforts on the distribution side of things but I will not be able to comment on them as I don't use or follow them.</p>
<p>The future looks bright!</p>
&nbsp;
</p>
]]>
</description>
<pubDate>Wed, 31 Aug 2022 00:00:00 +0000</pubDate>
<author>Sateallia</author>
</item>
<item>
<title>Middlemen Problem Towards Internet as a Necessity</title>
<link>https://sateallia.org/blog/middlemen-problem-towards-internet-as-a-necessity/</link>
<guid>https://sateallia.org/blog/middlemen-problem-towards-internet-as-a-necessity/</guid>
<description>
<![CDATA[
<p>
<p>I think it's safe to say that the internet has become a &quot;quasi-necessity&quot; today. That is to say, while it is not necessarily a &quot;necessity&quot;, it is definitely not a &quot;luxury&quot; either. Banking, instant messaging, geo-locating and routing, <b>governmental applications</b>... While the question of whether internet access should be a human right is a good debate topic, that's not what I'm going to rant about today.</p>
<h2>Middlemen</h2>
<p>Imagine a world where it is impossible to be a functioning member of society without an internet connection. Not like today, where it's merely impractical; think actually impossible. You can't buy a house that doesn't come connected to a remote smart-housing server. You can't pay your taxes without internet access. You can't do your banking; you can't even open a savings account without internet. It's all digital.</p>
<p>Let's look at a few examples of how it would go:</p>
<h2>Mobile Applications</h2>
<p>There are a few features that I currently have to use my bank's mobile application for:</p>
<ul>
<li>Depositing/withdrawal using QR codes</li>
<li>Contactless payments using the phone</li>
<li>Applying for offers and promotions</li>
<li>Opening a branch-free banking account</li>
</ul>
<p>While I don't mind them being application exclusive features, there is one thing I do mind: the method I have to obtain the app. I <strong>have</strong> to obtain it through my operating system vendor's application store, meaning that to do banking with my bank, I also need to enter a contract with my OS vendor.</p>
<p>Since I can't just rant about drawbacks without also suggesting, let's also take a look at a few potential solutions:</p>
<h2>F-Droid/Linux style distribution</h2>
<p>In an ideal world, I would be able to have a store client that retrieves application packages from decentralized servers that all speak the same protocol. While this is already the reality within the open source community (e.g. <a href="https://f-droid.org/">F-Droid</a>, Linux package managers as per the topic header), it is simply infeasible in practice for real-world applications. Why?</p>
<ul>
<li>The lack of security within such package manager systems would be inexcusable, especially considering the high security needs of applications that deal with things such as banking or governmental processes. Consider that even today, while they're relatively niche, there have been malicious or bad PPAs and AUR scripts (not necessarily intentionally so). If you have enough users, some of them are guaranteed to make typos or follow malicious links!</li>
<li>Technical skills needed to operate such package managers. Probably the biggest reason this approach wouldn't work.</li>
</ul>
<p>Making each repository its own application wouldn't work either since then you either have to:</p>
<ul>
<li>Register with <em>that</em> repository's owner, shifting the problem</li>
<li>If you're maintaining your own repository, get users to sideload your app.</li>
</ul>
<h2>Sideloading</h2>
<p>While the term &quot;sideload&quot; implies a danger of intrusion that is not merited, that is the least of our worries here. Most instant messaging apps I use (even the proprietary ones) do provide direct package downloads for my mobile OS. I do appreciate that. The same arguments from the previous model apply here, but there is one other thing to consider with this method: server connections.</p>
<p>There are currently two methods of delivering push notifications to a mobile application:</p>
<ul>
<li>An always-on server connection handled by the application. Requires your app to always run in the background. Certainly not the most battery efficient (considering the lack of optimization brought on by today's &quot;sprints&quot;), so very much discouraged, even at the OS level nowadays.</li>
<li>Centralized vendor provided connections. The norm nowadays. The device only needs to maintain one connection to a main server and all notifications are routed through that path.</li>
</ul>
<p>Do you see the issue here?</p>
<h2>Internet of Things / Smart Homes</h2>
<p>There are many, many, many examples of a vendor going out of business or deprecating an IoT device, making it essentially a paperweight. Locally hosted IoT is not a solution either, considering you then either lose the ability to contact the device from outside the local network or essentially expect your users to be system administrators.</p>
<h2>Dependency on Transportation</h2>
<p>As the human population grew and expanded throughout the planet, an industry for the transportation of goods emerged. That is fine. That is acceptable. What isn't is the dependency on these things. A society <em>needs</em> to be self-sufficient; transported goods should only be pleasant bonuses. Fortunately for me, I don't need to theorize examples for this topic, as it has already happened in history more than enough times (e.g. the <a href="https://en.wikipedia.org/wiki/2021_Suez_Canal_obstruction">2021 Suez Canal obstruction</a>).</p>
<h2>History of Middlemen</h2>
<p>Historically, lumber and coal companies used to pay their employees with &quot;<a href="https://en.wikipedia.org/wiki/Company_scrip">company scrips</a>&quot;, issued by them and only accepted in company stores owned by them. While this was solved in the United Kingdom with the &quot;<a href="https://en.wikipedia.org/wiki/Truck_Acts">Truck Acts</a>&quot;, it took until 1938 for the United States to solve this problem with the &quot;<a href="https://en.wikipedia.org/wiki/Fair_Labor_Standards_Act_of_1938">Fair Labor Standards Act of 1938</a>&quot;. The problem of middlemen isn't anything new; it just keeps reappearing with every new unregulated industry.</p>
<h2>The Problem of Middlemen</h2>
<p>Vendors <em>do</em> go out of business. Devices <em>do</em> become obsolete. It is simply foolish to depend on a vendor to provide you with lifelong service. Look at <a href="https://twitter.com/internetofshit">Internet of Shit</a> when you're bored. Why are we, as a society, progressing towards relying solely on vendors to distribute our software? There needs to be a better way, one that doesn't compromise security or require developers to register with distributors. I'll think about it.</p>
<p><strong>UPDATE (5 Oct 2022):</strong> I did at first consider default pre-trusted repositories, as F-Droid comes with a few of its own, and the model seems to work fine for operating systems providing root certificates (even though it has historically been abused a few times by both manufacturers and trusted developers on pre-trusted repositories). This solution, however, does not scale. At all.</p>
<p>Have you ever seen one of those &quot;Install This App at X Store&quot; kind of banners? In fact, <a href="https://f-droid.org/tutorials/add-repo/">F-Droid has something very similar</a>! I think this could work! It does not, however, solve the security aspect of this model, as now you have to make sure your users find <em>your</em> site to discover your repository links, and not something malicious instead.</p>
&nbsp;
</p>
]]>
</description>
<pubDate>Tue, 29 Mar 2022 00:00:00 +0000</pubDate>
<author>Sateallia</author>
</item>
<item>
<title>resistormaid and the Curse of Knowledge</title>
<link>https://sateallia.org/blog/resistormaid-and-the-curse-of-knowledge/</link>
<guid>https://sateallia.org/blog/resistormaid-and-the-curse-of-knowledge/</guid>
<description>
<![CDATA[
<p>
<p>After releasing <a href="https://sateallia.org/software/">resistormaid</a>, all the friends I showed it to had the exact same question: &quot;No one made this before? Surely someone made this before.&quot; That <em>is</em> the logical reaction. It's just a command-line resistor calculator; there is exactly zero percent chance that no one made it before. And I had the same answer every time as well: &quot;Well, let's look for one together.&quot; Every time, we find nothing. Which begs the question... Why can't I find one? There exist websites and apps that do this, sure, but surely someone made one before the internet era.</p>
<p>Then one of my friends jokingly said, &quot;I feel like there is no way someone didn't do some resistor CLI app though... It's probably called something that doesn't make sense or some obscure dated joke&quot;.</p>
<p>&quot;That's what READMEs are for!&quot; I hear you say. Unfortunately, it's not that simple. Even if the solution to this in the software industry <em>is</em> that simple, this problem isn't only relevant there.</p>
<h2>It applies to all media</h2>
<p>The time a piece of content is consumed and the time it's produced don't necessarily have to be close.</p>
<p>How many times have you read a classic book that didn't require you to pull out your phone, or read a wall of text by the translator or editor, to understand what obscure fact a one-liner was referencing? Many of them are practically unreadable because of this problem. How many times have you seen a fresh meme in a brand new video game? Big production media goes through months if not years of approvals, production, revisions and polish, so by the time you get to play it, the meme's long dead. How many times have you watched an old movie and laughed at the decades-old political joke?</p>
<p>One should not rely on the audience having the same experiences as the author. It is important to appeal to the lowest common denominator, which is human nature, if you want the content to survive. In-jokes, political commentary, knowledge of a subject, humorous memes, pop culture references; these will only narrow the audience. Not in the sense of the immediate audience, no. Think human history. Think descendants. Think broader.</p>
<p>Let's look at Moby Dick as an example. On its surface, it's an adventure thriller, making it enjoyable to the common man. At its core, it's a complex take on and commentary about human nature and ego. Herman Melville (shamefully) died penniless! Only years after his death did it gain the attention it deserved, as people started delayering the onion and saw it for what it is. Even though no one hunts whales nowadays, it's still enjoyable.</p>
<h2>Immortality through Writing</h2>
<p>Just as a chain is only as strong as its weakest link, content is only as relevant as its most dated part. Relying on a passing reference to pull content together is sabotaging its lifespan. It is, in a sense, trading immortality for viral marketing. Such content will be forgotten to time, unable to get its points across. The experiences the artist is trying to share will be lost, and events their wisdom would have avoided will be repeated. Therefore, the artist should avoid relying on passing things to elaborate their points if they wish to make their writings last.</p>
<h2>Mortality through Writing</h2>
<p>Mind you, I'm not denying the worth of passing elements. There <em>is</em> a need for short-lived or specifically targeted content. That's fine, as long as the content is <em>supposed to be</em> short-lived or specifically targeted.</p>
<p>Remember how I said &quot;It is important to appeal to the lowest common denominator, which is human nature, if you want the content to survive.&quot; before? That's the big if.</p>
<h2>Conclusion</h2>
<p>Writing something balanced for both the current era and the coming ones is hard. I don't exactly have a solution to this. I'm sorry. I also apologize for the whimsical writing; I realize my sentences keep swinging from subject to subject.</p>
<p>How many of you would recognize the last paragraph as a theatre play reference? I'd doubt any figure higher than 5%. That's exactly the point I'm trying to get across.</p>
<p>Also, don't take this post too seriously. I'm just emptying my mind here.</p>
&nbsp;
</p>
]]>
</description>
<pubDate>Tue, 08 Mar 2022 00:00:00 +0000</pubDate>
<author>Sateallia</author>
</item>


</channel>
</rss>
