Linux file tree and over-complexificatoryization
For some time now I've been trying to learn how to use Linux well... though, I have to admit, not in a concentrated, concerted effort. I've been attacking it piecemeal, hoping to absorb the info simply by immersion. It has worked well with many things I've done in the past, but it doesn't seem terribly successful in helping me get to grips with the file tree layout in Linux. I keep getting the impression that it owes a lot to bandaids patched onto other bandaids. This is one of the major problems with Windows... at least until Microsoft started all over again with Windows NT.
In Linux there are all these bin and sbin directories -- the two at the root of the filesystem, then another couple in /usr, and yet another couple in /usr/local. I keep wondering, why so many? Other systems work fine keeping all their core executables in just one, or maybe two directories.
And the names of the major directories:
etc - for configuration files. Why not call it something descriptive, like... oh, I don't know, "config"??
usr - contains files that should not be changed by the user. Huh??? It is for files that you don't want to be overwritten when the system is upgraded.
var - is for temporary files, like log files and such... like tmp except it isn't. Hmmm...
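For what it's worth, here's a rough summary of the roles the Filesystem Hierarchy Standard assigns to these directories. This is just a sketch of the standard's intent, written as a little Python table, and individual distributions bend the rules to varying degrees:

    # Rough sketch of the roles the Filesystem Hierarchy Standard (FHS)
    # gives to the directories discussed above. Simplified, and real
    # distributions deviate from it to varying degrees.
    FHS_ROLES = {
        "/bin":            "essential user commands, needed even before /usr is mounted",
        "/sbin":           "essential system-administration commands",
        "/usr/bin":        "the bulk of user commands installed by the distribution",
        "/usr/sbin":       "non-essential administration commands",
        "/usr/local/bin":  "commands installed locally by the administrator",
        "/usr/local/sbin": "locally installed administration commands",
        "/etc":            "host-specific configuration files",
        "/usr":            "shareable, mostly read-only files owned by the distribution",
        "/var":            "variable data that persists: logs, spools, caches",
        "/tmp":            "temporary files that need not survive a reboot",
    }

    for path, role in FHS_ROLES.items():
        print(f"{path:17} {role}")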
What is worse, different flavors of Linux depart from the "standard" layout to greater or lesser degrees.
There comes a point when the time required to learn all this exceeds its potential usefulness. And unfortunately, all too often, learning this stuff is a bit like that joke: to understand recursion, first you must understand recursion.
The explanations in Linux manuals often require that you already understand what is being described in order to understand the explanation. [sigh]
Now, I don't think I'm a stupid person. I've taught myself almost 20 computer languages, and become proficient with about half of those. Some of those languages are assembly languages. I've designed and built digital circuits to perform a number of functions. If I have problems coming to grips with Linux, how are people who aren't even interested in computers going to fare?
Puppy is a lot easier to get my mind around than most of the other Linuxes I've fiddled with, but even it still leaves a lot to be desired.
It makes me positively nostalgic for the Amiga and for OS-9. Now there were a couple of clean operating systems. Not that they didn't have problems and shortcomings, but it really didn't take much effort to understand the layout and function of the system as a whole. A day or so invested in it and you were up and running, doing useful stuff. With Linux, Windows, and other "modern" operating systems you had better hope nothing ever goes wrong and that you never need to track down a configuration problem, because you could be looking at days wasted in solving it.
no subject
But anyway, let me explain the reasoning, perhaps.
Unix generally makes perfect sense, but to understand precisely how Unix makes sense tends to require some explanation. HTH!
no subject
I read the bit you brought to my attention. Good points. I've recently been thinking about what makes for reliability in a computer environment, and a number of those points align well with things I've come up with myself. It has given me more food for thought.
Hmmm... it took me quite a while to locate information on the filesystem layout in OpenBSD. Eventually a chance comment by someone in a forum led me to 'hier'. I searched the OpenBSD site for it. Nothing. I Googled for it. Nothing. Eventually I tried a man page for it, and there it was.
See? This is what I mean. An average person would have stopped looking ages back. We need a whole different approach to computing. It is like we've put the old-style horse and buggy makers in charge of transportation:
"I want a machine that can pull with about 150,000 watts of power."
"Okay, that'll require a team of two hundred horses. In order to manage that effectively you'll need a special harness designed in this pattern, and hire this many stable-hands and use a much bigger cart to carry enough food for all those horses."
"But there must be a simpler way."
"Nope. I can assure you that there are good and logical reasons for each of these decisions."
"But I've heard tell of a kind of machine that uses controlled explosions to produce the power of 200 horses, in a package the size of a single horse, powered by just liters of a volatile fluid."
"That's crazy talk. Look, I assure you we can't have high power without a big team of horses. I'm an expert. Can't be done. Nuh-uh. If it could be done I'd know about it because it would be written up in the horse-drawn carriages guidebook."
no subject
But I do agree with you that a different ground-up approach is needed. Something powerful and clean, but coated properly in the right sugar so that users aren't put off. I've personally been wanting to do something like this for a long time, but it requires a lot of resources, and I don't just mean that word in the computing sense.
Linux is not reaching this stage. It may be functional, and possibly powerful, but everything atop it is, quite frankly, messy.
no subject
In my early computing days I had a Tandy Color Computer (CoCo). I've always had a particular fondness for that machine, for a few reasons.
1. It used what I still think is one of the sexiest processors ever developed: the 6809. The team at Motorola who designed the 6809 spent ages researching the best possible instruction set, and did a brilliant job. Writing assembler for the 6809 is almost like working with a high-level language. It is beautiful, clean, and incredibly compact.
2. The old CoCo had perhaps the first memory management system in a home computer: banks of RAM or ROM that could be switched in and out of its 64K memory space in a single clock cycle. (There's a rough sketch of the idea just after this list.)
3. It had OS-9. A multitasking, multiuser operating system... on a 64K machine!!! This meant the OS consisted of the essentials. Absolutely no bloat.
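As a loose illustration of the bank switching in point 2, here's a toy Python model: a fixed 64K address space whose upper half is a window that can be pointed at different physical banks. The sizes and addresses are made up for illustration; they are not the CoCo's actual memory map.

    # Toy model of bank-switched memory: a 64K address space whose upper
    # 32K window can be remapped to any of several physical banks by a
    # single register write. Addresses and sizes are illustrative only.
    WINDOW_BASE = 0x8000   # start of the switchable window
    WINDOW_SIZE = 0x8000   # 32K window

    class BankedMemory:
        def __init__(self, num_banks=4):
            self.low = bytearray(WINDOW_BASE)       # fixed lower 32K
            self.banks = [bytearray(WINDOW_SIZE) for _ in range(num_banks)]
            self.current = 0                        # currently selected bank

        def select_bank(self, n):
            # On real hardware this is one register write; here it is
            # just an index change.
            self.current = n

        def read(self, addr):
            if addr < WINDOW_BASE:
                return self.low[addr]
            return self.banks[self.current][addr - WINDOW_BASE]

        def write(self, addr, value):
            if addr < WINDOW_BASE:
                self.low[addr] = value
            else:
                self.banks[self.current][addr - WINDOW_BASE] = value

    mem = BankedMemory()
    mem.write(0x9000, 0xAA)     # lands in bank 0
    mem.select_bank(1)
    print(mem.read(0x9000))     # prints 0: bank 1 has different contents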
Recently a bunch of CoCo enthusiasts have got together and rebuilt OS-9 from the ground up (because the original is still a proprietary product), called it NitrOS-9 (http://www.nitros9.org), and released it as free, open-source software. (Amazingly, it runs considerably faster than the original OS-9.) This seems to me the perfect basis on which to build a really useful OS (though I need to look into whether it uses cooperative [bad] or pre-emptive [good] multitasking). I'm slowly being forced to the conclusion that something like this is absolutely needed.
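To make that cooperative-versus-pre-emptive distinction concrete, here's a toy Python contrast (nothing to do with NitrOS-9 itself, just an illustration of the two scheduling styles): in the cooperative version every task has to hand control back voluntarily, so one task that forgets to yield freezes the lot; in the pre-emptive version the scheduler interrupts tasks on a timer, so even a busy loop can't hog the machine.

    # Toy contrast between cooperative and pre-emptive multitasking.
    import threading
    import time

    # Cooperative: round-robin over generators that must yield by hand.
    def task(name, steps):
        for i in range(steps):
            print(f"cooperative {name}: step {i}")
            yield                 # hand control back; omit this and the whole loop hangs

    def cooperative_scheduler(tasks):
        queue = list(tasks)
        while queue:
            current = queue.pop(0)
            try:
                next(current)     # run until the task's next yield
                queue.append(current)
            except StopIteration:
                pass              # task finished

    cooperative_scheduler([task("A", 2), task("B", 2)])

    # Pre-emptive: OS threads are interrupted by the kernel's timer, so
    # even a busy loop cannot stop the other thread from running.
    stop = threading.Event()

    def busy_loop():
        while not stop.is_set():  # never yields voluntarily
            pass

    def chatty():
        for i in range(3):
            print(f"pre-emptive: step {i}")
            time.sleep(0.01)

    hog = threading.Thread(target=busy_loop)
    talker = threading.Thread(target=chatty)
    hog.start()
    talker.start()
    talker.join()
    stop.set()
    hog.join()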
At the moment, adding more parts to something makes the whole thing riskier. Computers have become like chains -- they're only as strong as the weakest link. A few things have been done to help, like pre-emptive multitasking, user privileges, garbage collection, dynamic typing, online context-sensitive help, and human-readable error messages, but heaps more needs to be done. A computer crashing or a program locking up should be events of real concern, not a shrug, a sigh, and a reboot. We have become used to the big bad. We deserve better.