Dictionaries
I've been posting elsewhere about my recent adventures in setting up dictionaries on my computer.
On my computer I keep 21 dictionaries that can be queried using the command "dict". I could simply use the freely available ones at dict.org, either by visiting the website or by using my dict command to query their online dictionaries, but in the past my internet would often go down, which was particularly annoying if I was writing at the time and needed to check a dictionary. So I keep the dictionaries locally on my machine, and can look things up whether I have a net connection or not.
Last night I was looking up the spelling of an unusual word which didn't appear in any of my local dictionaries, so I figured I'd see if it was in any of the 166 dictionaries at dict.org. I used their pseudo-dictionary "all" to force a search of all their dictionaries, including all their translation dictionaries. (It was found in the English-Finnish translation dictionary. I hadn't bothered to install any translation dictionaries.)
One thing I don't have is the ability to check all my local dictionaries (those on my computer) with a single query. I must find out how to do that. It would be really convenient.
Oh, and you're probably wondering why I keep 21 dictionaries on my computer. Well, most of them are specialised. Here is the list of my local dictionaries:
bouvier - Bouvier's Law Dictionary, Revised 6th Ed (1856)
devils - The Devil's Dictionary ((C)1911 Released April 15 1993)
easton - Easton's 1897 Bible Dictionary
elements - Elements database 20001107
foldoc - The Free On-line Dictionary of Computing (27 SEP 03)
gazetteer2k-counties - U.S. Gazetteer Counties (2000)
gazetteer2k-places - U.S. Gazetteer Places (2000)
gazetteer2k-zips - U.S. Gazetteer Zip Code Tabulation Areas (2000)
gazetteer - U.S. Gazetteer (1990)
gcide - The Collaborative International Dictionary of English v.0.48
hitchcock - Hitchcock's Bible Names Dictionary (late 1800's)
jargon - Jargon File (4.3.1, 29 Jun 2001)
journo - Free journalism dictionary, version 1.1
minimal - Minimal Dictionary (I created this)
moby-thesaurus - Moby Thesaurus II by Grady Ward, 1.0
vera - Virtual Entity of Relevant Acronyms (Version 1.9, June 2002)
web1913 - Webster's Revised Unabridged Dictionary (1913)
wn - WordNet (r) 2.0
world02 - CIA World Factbook 2002
world95 - The CIA World Factbook (1995)
www - Who Was Who 5000BC to Date
If you have "dict" installed on your computer and you list the dictionaries available at dict.org, it will list 166 of them. The first 15 are various specialised dictionaries (law, computing terms, thesaurus, English dictionaries, USA places, etc.). Another 148 are translation dictionaries (e.g. Croatian-English, suomi-português, etc.), but the last 3 dictionaries are special:
english - English Monolingual Dictionaries
trans - Translating Dictionaries
all - All Dictionaries (English-Only and Translating)
If you check their details these last three are zero bytes in size. I wanted to find out how to set up these kinds of pseudo-dictionaries so I could search all my local dictionaries at once for a word. A Google search didn't turn anything up, and rummaging around in the documentation at dict.org turned up little. (I remembered that years ago it took me a lot of effort to find the documentation at dict.org.)
So I asked ChatGPT-4o. It was eager to help and confidently told me how to do what I needed... except it was completely wrong. I spent much of the day trying its various suggestions until I finally decided to cut my losses and give up on the AI.
Somehow, I had forgotten that when I'd last looked into dict a while back I'd saved a whole lot of data that I'd found online. (Thank you to my obsessive nature.) So I spent some hours reading through the documentation, which I should have done right at the beginning. And I found the solution. It is unbelievably simple. Add a slightly different entry to the list of databases -- mine is in /var/lib/dictd/db.list.
Most of the entries are like this:
database foldoc {
  data /opt/dict-dbs/foldoc.dict.dz
  index /opt/dict-dbs/foldoc.index
}
"foldoc" is the name of that dictionary of computing terms. The data file of words and its index are kept locally on my machine in /opt/dict-dbs. I have 21 of these kinds of entries.
To get a pseudo-database I simply need to add:
database_virtual comp {
  database_list "foldoc,jargon"
}
In this case I'm defining a virtual database, which I name "comp", that I can search for computer terms. It contains a list of other databases to be searched. In this case they are foldoc and jargon -- the two computing dictionaries I have. The list of databases to be searched must be comma-separated and enclosed in double quotes.
Now if I want to look up a computing term (like "fork") I can use:
dict -d comp fork
which will search both foldoc (the Free On-line Dictionary of Computing) and jargon (The Jargon File) for their entries on fork.
So simple.
It's a pity I spent so much time on the AI's wild goose chases. I should have referred first to my enormous store of documentation. It's slow and frustrating to read documentation, but it would have ended up saving me many hours of time.
Nevertheless, I learned in my failed explorations with the AI that I can use various search strategies with dict, such as regular expressions and soundex. That's exciting!
-----
Again, I'm such a dummy. Turns out I didn't need to spend all that time making virtual databases for dict just to be able to search all 21 of my local databases for some word. All I needed to do was not specify a database. 😃
For example, if I want to search for "fork" in WordNet I give this command:
dict -d wn fork
But if I want to search in all my databases I just have to give:
dict fork
Interestingly, it doesn't work exactly the same with the online databases. That simpler command searches just the conventional English databases and ignores the translation databases. In that case,
dict -d all fork
will search for "fork" in all the online databases.
But my effort wasn't entirely wasted. Apart from learning a lot, I created a whole bunch of virtual databases that let me restrict a search for words to my 2 dictionaries of computer terms, or my several databases of geographical places, or one of several other virtual databases.
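For example, a geographical pseudo-dictionary might look something like this (the name "geo" and the exact database names here are illustrative -- use whatever names appear in your own database list):

```
database_virtual geo {
  database_list "gazetteer,gazetteer2k-counties,gazetteer2k-places,gazetteer2k-zips"
}
```

After restarting dictd, "dict -d geo someplace" searches just the gazetteers.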
So, yay! 🙂
New teeth
https://newatlas.com/medical/tooth-regrowing-human-trial/
No sound!!!!
I keep several operating systems on my computer, so I booted into another one, to check. Gratifying surprise! The sound worked. So it was not a hardware problem... thank goodness.
So back to my preferred operating system. Why isn't sound working in it? I tried everything I could think of, but was reluctant to spend hours searching the net, so I began by asking ChatGPT-4o if it had seen anything on the net about this particular problem. It spent a little while searching the net and came back with 8 suggestions gleaned from its search.
The first suggestion I'd already tried.
The second suggestion (to check that alsamixer was not muted) solved the problem. How it had got muted I have no idea, but now everything is okay again.
Whew! My relief is overwhelming. :)
Scrolling between desktops in JWM, and terminal text editors
In previous versions of Puppy Linux, scrolling the mouse wheel while the pointer was over the desktop backdrop switched virtual desktops. In the latest version of JWM this behavior is no longer the default. (JWM is Joe's Window Manager -- a very capable, but small and fast, window manager.) I have spent days, on and off, trying to work out how I might reinstate this behavior. ChatGPT, when I enlisted its help, sent me off on multiple wild goose chases, weirdly making up commands that don't exist. It was worse than useless, because I wasted time on its hallucinations.
I even managed to crash my computer using a command very similar to the one that was the actual solution. Scarily, I couldn't start up the graphical interface. I had to use the commandline to fix the problem, which would have been easy if this version of Puppy had a simple terminal text editor. Previous Puppies had mp, or Nano, or another minimal text editor I can't remember the name of just now, but this one didn't even have tiny, ancient ed! The only text editor it had for the terminal was vi, which is decidedly not minimal. I ended up using sed (stream editor), which every Linux has. It is a weird little program -- more an editing language than what you'd think of as a text editor, but it did the job. My GUI (graphical user interface) restarted. Whew!
This morning I finally solved the original problem:
<Mouse context="root" button="4">ldesktop</Mouse>
<Mouse context="root" button="5">rdesktop</Mouse>
So simple! Now I can switch desktops with a simple scroll of the mouse wheel again.
For safety, I've installed some simple terminal-based text editors so I don't get caught having to edit something without the GUI again:
micro
ne
nano
Each of them is small and fast and has syntax highlighting, which makes life very easy.
- micro lets you navigate with keyboard or mouse. The keybindings are mostly pretty easy to remember.
- ne uses menus accessed via the escape key. The keybindings are either control keys (e.g. ^s) or meta (alt) keys (e.g. [Q).
- nano is the easiest to use, with only 12 commands I think, all displayed constantly at the bottom of the screen. If started with options -S and -a (nano -Sa test.txt) it softwraps text at spaces, which is very handy to have. This is the one I'm most likely to use.
As an afterthought, I've also installed ed, which is a weird little line editor, the predecessor to vi. I'll probably never use it -- I've really only used it a couple of times before -- but it is tiny, and there's the off-chance that I might need it...
Hmmm... times like this make me wonder, again, whether investing time into learning vi might be worthwhile. It is extremely powerful and many old-time programmers swear by it, but it was made long before the "Common User Access" standards were developed, which let us use more or less the same key-combinations in most programs (Ctrl-S to save, Ctrl-Q to quit, Ctrl-C to copy, Ctrl-V to paste, and so on). As a result vi uses lots of totally weird key combinations that nothing else does. I have taught myself to use it before, but it doesn't take long for my memory to fade, and then I have to relearn it all over again.
Eek and Meek
Eek & Meek by Howie Schneider.
It is astonishingly difficult to find his cartoons online.
the pleasure of solving problems, and the impact of AI
I think I may have noticed something, though it might be my imagination... Perhaps I'm over-tired, but I seem to be finding it difficult to program things that I've done many times before. I don't think it's the early stages of Alzheimer's, because I still have my sense of smell (I think that's one of the first things to go when Alzheimer's strikes). As I say, it may just be that I'm tired most of the time these days. However, a thought struck me after reading an article by John Gibbins recently. When I'm faced with a difficult programming problem I generally just read manual pages and search the net for info until I solve it, but lately I've turned to AI to get the solutions I need more quickly.
Wild dogs are very smart. They will often solve problems that domesticated dogs are unable to, but domesticated dogs have humans. When they are unable to solve a problem they deliberately turn to their human. This makes the domesticated dog way smarter than the wild dog... but only if it has a human to fall back on.
In relying on AI to help me solve problems am I "domesticating" myself out of my ability to solve problems? Or am I stepping up to higher levels of problems that I wouldn't have previously solved, but now am able to using AI as a tool?
Interesting questions.
InternetArchive books, jpeg2000 format, and mtpaint
https://archive.org/details/1800-mechanical-movements-devices-and-appliances_202005
Depending on the kind of book I'm fetching I'll get one format or another. The InternetArchive almost always has a link to "SINGLE PAGE PROCESSED JP2 ZIP". I generally don't use that link, even though it is a zip file of the original page-image scans in extremely high quality JPEG2000 format, because I have almost nothing that can view the images or convert and resize them to something smaller and more usable. ImageMagick needs a special "delegate" converter which I haven't been able to get working. Netpbm's converter fails to even read the images. G'MIC needs a special computer-vision library which I'm reluctant to add to my computer. And none of the dedicated viewer programs I have will even display them.
Just two programs will load jpeg2000 images: Gimp, and mtpaint.
I'd kinda expected Gimp would be able to. It is big and extremely capable -- able to do much, much more than I could learn in one lifetime.
The great surprise was mtpaint -- a tiny, exceedingly fast, deceptively simple, paint program. And I dimly recalled from somewhere that mtpaint could be used from the commandline without opening the GUI (graphical user interface). So I typed into the terminal:
mtpaint --help
It responded with:
mtPaint 3.49.13
Usage: mtpaint [option] [imagefile ... ]

Options:
  --help       Output this help
  --version    Output version information
  --cmd        Commandline scripting mode, no GUI
  -s           Grab screenshot
  -v           Start in viewer mode
  --           End of options
There it was, commandline scripting mode! I'd been greatly underestimating little mtpaint.
Next, I quickly looked through its manual to see what I could learn about its scripting mode, but didn't look closely enough so I missed it.
It occurred to me to ask ChatGPT-4o. It seemed very helpful, giving lengthy descriptions of how to use mtpaint scripting. So then I sat down to try its examples... but none of them worked.
So now I examined the manual more closely and found chapter 10 discussed scripting. It turns out EVERY example GPT gave was wrong, and not just in minor ways, but in every way. That really surprised me.
The only thing left was to patiently read through the manual's chapter on the topic.
I'm quite pleased at its scripting language. It uses the GUI as its reference. Commands are based upon it. So if I want to know the way a command should be written I simply find it in the program's GUI and that will define it for me. Very smart. For example if I want to load a jp2 image, scale it to 22%, with proportions fixed so height and width scale together, then save as an ordinary jpg image at 90% quality, then I write this on the commandline:
mtpaint --cmd -file/open="00.jp2" -image/scale w=22% fix=1 -file/as="00_22.jpg" f=jpeg q=90
The "file" menu in the GUI contains the "open" and "save as" commands, and the "image" menu contains the "scale" command. The other settings are part of those commands in the GUI, so on the commandline they immediately follow those commands, each given with an unambiguous key (for example both "width" and "w" work fine).
Using mtpaint on the commandline I can quickly and easily convert and rescale hundreds of jp2 images to usable size and format using a simple looping command. Yay!
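That looping command can be a simple shell for-loop. Here's a sketch (written as a dry run that just prints each mtpaint command; remove the echo to actually do the conversions):

```shell
# Dry run: print the mtpaint conversion command for each .jp2 file given,
# deriving the output name from the input name. Remove "echo" to convert.
convert_jp2() {
    for f in "$@"; do
        echo mtpaint --cmd -file/open="$f" -image/scale w=22% fix=1 \
             -file/as="${f%.jp2}_22.jpg" f=jpeg q=90
    done
}

convert_jp2 *.jp2
```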
One last thing. I mentioned that I didn't have a program that would simply display jpeg2000 images. I was wrong. I noticed mtpaint has another little surprise. If I make the mimetype command for all jp2 images
mtpaint -v "$@"
then mtpaint displays them in its own fast, simple picture viewer.
I have great new respect for mtpaint. I'd been underestimating it awfully.
All 256 of Wolfram's 1D cellular automata
Only 15 rules (30, 45, 73, 75, 86, 89, 101, 110, 124, 135, 137, 149, 169, 193, 225) produce chaos in their universes. Even so, it is surprising how many of them do. No wonder Wolfram thinks he might be able to derive rules for this universe from similar kinds of 3D cellular automata.
Here is my quick awk program. It writes out .pbm format images, which most picture viewers should be able to display.
#! /bin/awk -f
# ca.awk
#
# Miriam
# Friday 2023-11-03 01:14:30 pm
# updated 2024-11-03
#
# cellular automaton - 1D life
#
# This runs through all 256 rules (0 to 255)
# and writes them out as .pbm images.
#
# One of the most interesting rules is rule 30.
# rule 30 = 01111000
#             7   6   5   4   3   2   1   0
#            111 110 101 100 011 010 001 000
#             0   1   1   1   1   0   0   0
#
# Each bit represents how the next bit should be set or reset
# depending on the state of the bit at that position and its 2 neighbors.
# The generations are drawn under previous generations to produce these 2D patterns.

BEGIN {
    for (rule=0; rule<256; rule++) {
        # settings
        wide=512
        high=512
        wrap=wide+1
        FS=""

        # turn the rule into an array r[] of binary digits
        b=1
        for (i=0; i<8; ++i) {
            if (and(rule, b)) r[i]=1 ; else r[i]=0
            b = lshift(b,1)
            #printf "%s", r[i]
        }
        #print " (the rule in binary)"

        # create the pbm file
        # the header
        print "P1" >rule ".pbm"
        print "# cellular automaton created with awk 2024-11-03" >>rule ".pbm"
        print wide " " high >>rule ".pbm"

        # the body of the image
        # the first line is special: all 0s with a single 1 in the middle
        s=""
        for (x=1; x<wide/2; ++x) s=s "0"
        s=s "1"
        for (x=1; x<wrap/2; ++x) s=s "0"
        print s >>rule ".pbm"

        # ---------- the rest of the body --------------
        for (y=1; y<high; ++y) {
            split(s, old)   # make old array from string
            s=""            # clear the working string
            # start building the next line
            for (x=1; x<wide+1; ++x) {
                # make the edges wrap
                i=x ; m=x-1 ; p=x+1
                if (m<1) m=wide
                if (p>wide) p=1
                # calculate the index into the rule
                t=(old[m] * 4) + (old[i] * 2) + old[p]
                s = s r[t]
            }
            print s >>rule ".pbm"
        }
        print "" >>rule ".pbm"
        close(rule ".pbm")  # close each file so we don't run out of file descriptors
    }
}
eval without eval
Today I encountered one of awk's disadvantages: I needed something that worked like the eval() function many languages have, which would let me read in a string and execute it. In this case I had a long list of variables and the values I wanted stored in them, and I needed to be able to read them into my program.
I tried a few ideas, including a few proposed by ChatGPT-4o. None worked... until I came up with a solution that even GPT-4 hadn't heard of. I made the settings file (named ".canvas") into a fake awk file with
function settings(){
at the top, then a list of the variable assignments, and the last line the closing curly brace.
In my main awk file I have
@include "./.canvas"
near the top of the file and whenever I want to read the variables into my program I simply have
settings()
in my program. Easy peasy. Yay! :D
Adventures in C programming with ChatGPT-4o
I tried breaking the program into smaller functions that could be solved in isolation, but that surprisingly made things worse because they got out of step, with one part calling another with the wrong parameters.
All this became moot when I re-read the manual of "feh", a small image-display program which is supposed to update the display any time the image is altered. My version of feh doesn't. It seems the binary I installed didn't have that ability, so I recompiled the source code, and yay! My freshly compiled version of feh does what I've been struggling to do with the help of GPT-4 for more than a week.
So... why do I want a program to continually update any changes to the display of netpbm files?
There are 6 variants of the netpbm format, and 3 of them are plain text. The pixels are represented as strings of space-separated "0" (white) and "1" (black) for monochrome (pbm), strings of space-separated decimal numbers for gray-scale (pgm), or strings of space-separated decimal numbers representing red, green, and blue (ppm).
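To make that concrete, here's a tiny hand-written plain pbm image -- a 4x4 checkerboard (P1 marks the plain monochrome variant):

```shell
# Write a 4x4 monochrome checkerboard as a plain-text PBM file.
cat > /tmp/check.pbm <<'EOF'
P1
# 4x4 checkerboard: 0 = white, 1 = black
4 4
0 1 0 1
1 0 1 0
0 1 0 1
1 0 1 0
EOF
```

Any program that edits that text edits the image.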
My favorite general-purpose programming language is currently awk, which shines best at handling text. But I want to do more graphics in the near future. Using awk to manipulate a text-based image format is a perfect solution... if I can have a way to immediately display my results. I now have that last part of the puzzle: feh.
Yay!
warping images
Today I think I've solved a crucial part of the problem. I used a simple pattern of waves and distorted them in a way that, if taken further, can create marble and granite textures.
Here is how I did it:
I began by making a very orderly heightfield of straight, parallel, sawtooth waves (so named because they ramp up from black to white in a linear gradient, then down again in a linear gradient, which if you graph their values look like the teeth of a saw). I made that using ImageMagick. Here's how...
Make a gradient:
convert -size 512x32 gradient: -rotate 90 ramp.png

Duplicate it reversed horizontally to make a single full sawtooth wave:
convert ramp.png \( +clone -flop \) +append sawtooth_1.png

Duplicate it multiple times to make 8 waves:
convert sawtooth_1.png +clone +append +clone +append +clone +append sawtooth_8.png

Okay. That's the nice orderly image that I want to mess up and distort. For the next step I need a displacement map to tell G'MIC how much to displace each pixel in the image I want to distort. G'MIC can produce a number of noise images, and its "turbulence" command is perfect for making a nice displacement map.
Generate a blank canvas of 512x512x1 (X,Y,Z) with one color channel so that it is grayscale (3 channels would make it RGB color). Then add turbulent noise with a radius of 80 pixels, so that large structures dominate. Leave all the other parameters at their default settings.
gmic 512,512,1,1 turbulence 80 -o turbulence_80.png

Now we warp the orderly image with the turbulent noise image :
gmic turbulence_80.png sawtooth_8.png -warp[-1] [-2],1,2,2,1 -n[-1] 0,255 -o sawtooth_turbulence_80_warp.png

Ta-da!!!!
Hard to believe this took me so long to work out. Interestingly, GPT-4 was almost no help at all... other than the fact that it introduced me to G'MIC, which I hadn't heard of before. GPT-4 slowed me down by sending me off on several wild goose chases with its terrible efforts to write code.
finding files on my computer
I'm very careful to store all this data sensibly. Everything is named descriptively and stored in directories that form a branching tree of logical categories. All this is great when I can remember exactly where a file is stored and what it is named, but I have absurd amounts of data. If I saved a file, say, 30 years ago and I can't remember what I called it and what category I'd stored it under, that item is pretty much lost. A few times this has happened to me and I had to turn to Google to find what the original was called, so that I could then locate it on my backup drives.
I'd often thought it would be great to have a search program like Google on my own machine. Over the years I've tried making something a bit like it. The Linux commands "grep" and "find" are useful, but they are awkward. The "find" command, in particular, has a number of peculiarities that make it annoying to use.
Some decades back I wrote a very simple script, which I called "findfile", or "ff", which did most of what I needed a search program to do, but I've become dissatisfied with it. Often I need to search on multiple terms; for example "donald duck" gets right to the results I'm after because it ignores all the other people named Donald, all the unrelated waterfowl, and any ducking performed by people. But my program allowed only one search term. When absolutely necessary I often got around this by piping the results into grep and having it look through them for the second word. This works, but is awkward.
Another thing I felt a great need for was being able to use regular expressions in my searches. Regular expressions can be incredibly powerful.
So, recently I completely rewrote my findfile program. It now takes any number of search terms, like Google does, but it also allows me to use regular expressions. Yay! The result is still just a small bash script, but it has made my life so much easier now. I never realised just how much I needed it.
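The core of the idea can be sketched in a few lines of bash (this is just a sketch of the technique, not my actual script): treat each argument as a case-insensitive extended regex and keep only the paths that match all of them.

```shell
# Sketch of a multi-term findfile: list every path under the current
# directory, then filter the list through one grep per search term
# (each term is a case-insensitive extended regular expression).
ff() {
    local results term
    results=$(find . 2>/dev/null)
    for term in "$@"; do
        results=$(printf '%s\n' "$results" | grep -iE -- "$term")
    done
    printf '%s\n' "$results"
}
```

So "ff donald duck" lists only paths containing both terms, and regex terms like "ff 'd.ck'" work too.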
I also rewrote my old, similar search program, that looked for text inside files. It's named, appropriately, "findtext", or "ft". It, too, can now take multiple search terms and use regular expressions.
formatting text
When outputting text to the terminal, most commands don't do smart-wordwrapping; however, text is generally much easier to read if words are NOT broken at the ends of lines. Here is a way to make any text output have smart-wordwrapping.
In your .bashrc add this line:
alias ww="fmt -s -w $(tput cols) - "
It uses the fmt command to wordwrap text according to how many columns wide (-w) the terminal window is (tput cols). The -s option prevents it joining short lines, so it only splits long lines. The final '-' makes it take its input from the standard input. (One caveat: because the alias is in double quotes, $(tput cols) is evaluated when .bashrc is read, so the width is whatever the terminal was at startup; use single quotes instead if you want the width checked each time you run it.)
Now, when you output any text to the terminal just pipe it through ww and it will display as smart-wordwrapped, for example:
cat test.txt | ww
Don't use it to output to a file though, unless you want the extra linefeeds inserted into your file for some reason.
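To see what fmt is doing, here's the same command with a fixed width of 30 columns (the alias just fills in your terminal's real width):

```shell
# Wrap a long line at word boundaries, at most 30 columns wide.
# -s splits long lines but never joins short ones.
printf 'The quick brown fox jumps over the lazy dog again and again and again\n' |
    fmt -s -w 30
```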
A big shortcoming of the fmt command is that it can't undo its formatting: there's no option to remove all the newlines inside paragraphs.
I solve this using a surprisingly complicated-looking sed command:
sed -z -r 's/[ \t]*\n[ \t]*/\n/g; s/([^\n])\n([^\n])/\1 \2/g'
It's simpler than it looks though.
The -z option treats any zero bytes as end-of-line markers, so since most text files don't contain any zero bytes it treats the entire file as a single line. This lets us treat newline characters as just any other character.
The -r option lets sed use extended regular expressions. This simplifies things because it means a lot of regular expression characters don't have to be prefixed with the '\' escape character.
The sed command breaks down into 2 instructions. They're both substitute commands: s/search/replace/g
The 's' means substitute. The text to be searched for and the text to replace it are both surrounded by '/' characters. And the 'g' means global replace, so every occurrence is replaced
The first command is:
s/[ \t]*\n[ \t]*/\n/g
It searches for any amount of spaces and/or tabs [ \t]* that immediately precede or follow a newline character and deletes those, leaving just the newline character \n. The asterisk means any number of the preceding pattern [ \t].
The second command does the real work:
s/([^\n])\n([^\n])/\1 \2/g
It is a bit weird. It marks some patterns with parentheses () so that sed can make use of what they match. If the first character of a pattern inside square brackets is '^' then it means the reverse, so [^\n] means any character that is not a newline character. So this search pattern means any newline character with ordinary characters on either side of it.
The replace pattern is a lot simpler. The first remembered character is put back into the output, the newline is replaced by a space, and the second remembered character is put back into the output.
So this replaces every lone newline with a space, but doesn't touch paragraph breaks, which have multiple newline characters.
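Here it is on a tiny sample: two wrapped lines forming one paragraph, followed by a paragraph break:

```shell
# Joins the two wrapped lines into one paragraph but leaves the
# blank-line paragraph break alone.
printf 'first line\nsecond line\n\nnew paragraph\n' |
    sed -z -r 's/[ \t]*\n[ \t]*/\n/g; s/([^\n])\n([^\n])/\1 \2/g'
```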
It took me quite a while to work out what is a pretty simple command. I tried a number of different languages. In the end sed rules them all, despite its dense-looking form.
I should rewrite it to let it ignore short lines, such as those in poetry... but that gets a bit complicated. I would have to find lines that are shorter than "normal" and leave them alone. What is "normal" or average line length for the file? How much shorter qualifies it as verse? Tricky.
Conversations with GPT4
I began by discussing the Michelson-Morley experiment and the general feeling that it disproved the luminiferous aether idea, and whether it really did disprove it, considering modern ideas of space, such as it being curved by mass, the cosmological expansion of space itself, and the quantisation of space at the smallest scales could be seen as reintroducing the idea of the aether.
The conversation then moved on to the Big Bang hypothesis, its problems, and why a version of the Steady State hypothesis might explain the universe. That moved on to quasars and their redshift seeming to place them at great distances. Could that redshift be explained by other things and why is redshift attributed sometimes to the doppler effect and other times to cosmic expansion?
I learned a lot. Need to think on it all much more, and obviously need to learn much more.
This felt like the old conversations I used to have with my nerdy friends when I was a teen... half a damn century ago... I didn't realise how much I missed those. It was nice doing it again.
where grep comes from
About a third of the way through the tutorial it gave an example of how to display all occurrences of lines that matched a regular expression. (If you don't know what a regular expression, or regex, is: it is a specialised, concise way of matching particular text.) The example used "re" to represent the regular expression. The command is:
g/re/p
Lightbulb moment!
For those unused to Linux and other Unix-like systems, there is an often-used tool to search for and list matching text in files. It has the very weird name of "grep". Not so weird now. :)
Oh, by the way, while reading the tutorial I suddenly realised I'd been thinking about the problem the wrong way. It was easily solved using sed:
sed -i '/ | /a \\n' textfile
-i means in-place. It replaces the input file with the output file.
' ' sed commands should be surrounded by single quotes.
/ / enclose the regular expression (in this case space, vertical bar, space).
a is the command to insert a line after the current one.
\ introduces the text to be inserted
\n is the special character for a newline
the textfile is the file operated on (it was actually called "eplist.txt")
One of the wonderful things about sed is that it repeats the command through the file for every occurrence of the search text. It is extremely fast, even on very large files.
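For example, on a tiny made-up sample in the same format (the real file was eplist.txt):

```shell
# Insert a blank line after every line containing " | ".
# Shown here on stdin rather than editing a file in place with -i.
printf '1 | Pilot\n2 | The Gang\n' | sed '/ | /a \\n'
```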
kdenlive rabbithole
I'm getting close to deaf -- so much so that I can no longer watch videos without subtitles. Unfortunately, about a year ago one of my favorite YouTubers released a 2-hour video without subtitles, and for some reason blocked YouTube from making automated subtitles.
Last night I watched a video by another of my favorite YouTubers that demonstrated that the video editing program "kdenlive" can now automatically subtitle a video from its soundtrack. This was exciting!
This morning I installed kdenlive on my computer... but it doesn't have any subtitling functions. The version for my computer is pretty old, created in 2017. So I went online to the kdenlive website and downloaded the latest version and tried it.
https://kdenlive.org
I won't go into why, but suffice to say it doesn't work on my machine. I ended up downloading 6 older versions and none worked on my computer. There are many more older versions, but I didn't bother. The first version to introduce speech-to-text subtitling was one of those I tried (version 22.04).
This was way more complicated than my description indicates. In the end, I'm convinced I need a better computer and a newer Linux.
A new short story: "gone"
https://miriam-english.org/stories/short-stories/gone/gone.html
It's science fiction, like most of my work, of course, but I experiment with something a bit different in this story. If I've done it properly you won't really notice.
The story idea came to me when I listened to a piece by the musician Epoch Collapse: "Prelude to the Morphing of the Night After" (I've linked to it at the top of the story).
I would greatly appreciate any feedback here or via the comment link at the end of the story.
-
If you want to hear that or any other works by Epoch Collapse, check out his YouTube channel at:
https://www.youtube.com/@epochcollapsedariusroberte8907