However, the package description contains this:
WARNING: Please do not run `tmpreaper’ on `/’. There are no protections against this written into the program, as that would prevent it from functioning the way you’d expect it to in a `chroot(8)’ environment.
After you install the package, you need to manually edit /etc/tmpreaper.conf and remove or comment out the SHOWWARNING=true line to actually activate it. Also review the other settings in that file.
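That edit can be scripted. Here is a minimal sed sketch, demonstrated on a throwaway copy of the file; on a real system you would run the same sed (as root) against /etc/tmpreaper.conf:

```shell
# Demonstrated on a throwaway copy; point it at /etc/tmpreaper.conf
# (as root) on a real system. -i.bak keeps a backup of the original.
conf=$(mktemp)
echo 'SHOWWARNING=true' > "$conf"        # toy stand-in for the real file
sed -i.bak 's/^SHOWWARNING=true/#&/' "$conf"
cat "$conf"                              # now reads: #SHOWWARNING=true
```

The `&` in the replacement reuses the matched text, so the line is commented out rather than rewritten.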
At least some versions of Ubuntu, and possibly Debian, do not install tmpreaper by default. I assume that is in accordance with the “principle of least surprise”, but this policy may bother system administrators familiar with Red Hat or other systems where /tmp is automatically cleaned out by default. Note that /tmp and other directories are still cleaned at boot time by the default /etc/init.d/bootclean (Debian) or /etc/init.d/*-bootclean.sh (Ubuntu) scripts.
The equivalent on Red Hat and its derivatives is ‘tmpwatch’, which is installed by default on those systems.
The main motivation for writing lbzip2 was that I didn’t know about any parallel bzip2 decompressor that would exercise multiple cores on a single-stream bz2 file (i.e. the output of a single bzip2 run) and/or on a file read from a non-seekable source (e.g. a pipe or socket). Thus lbzip2 started out as lbunzip2, but with time it gained multiple-workers compression and single-worker decompression features. Due to the input-bound splitter of its multiple-workers decompressor, it should scale well to many cores even when decompressing.
Originally, the target audience for lbzip2 was experienced users and system administrators: up to version 0.15, lbzip2 deliberately worked only as a filter. Now at 0.17, lbzip2 is mostly command-line compatible with bzip2, except that it doesn’t remove or overwrite files it didn’t create. If lbzip2 gets a chance to enter the Debian alternatives system as an alternative for bzip2, I’ll add this feature. In any case, you are encouraged always to verify lbzip2’s output manually before (or instead of automatically) removing its input, both when compressing and when decompressing. I also recommend perusing the README, installed as /usr/share/doc/lbzip2/README.gz on Debian, before eventually switching over to lbzip2.
As lbzip2 was chiefly created for speeding up decompression of single-stream bz2 files and/or decompression from a pipe, I’ll provide examples of decompression first. Since basically all free software tarballs are available on the net as tar.bz2 files, I’ll choose (not surprisingly) a kernel tarball.
The “traditional” method:
wget \
  http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.31.1.tar.bz2
tar --use=lbzip2 -x -f linux-2.6.31.1.tar.bz2
The overlapped method:
wget -O - \
  http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.31.1.tar.bz2 \
  | tee -i linux-2.6.31.1.tar.bz2 \
  | tar --use=lbzip2 -x
If wget fails to download the tarball for some reason (at which point at least tar will complain), you should remove the partially decompressed tree and fall back to the traditional method. To avoid losing the already downloaded part, pass -c to wget.
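A sketch of how that failure check might be scripted, in case you want to automate the fallback. Note that `false` and `true` here are stand-ins for a failing or succeeding wget, and tee/cat stand in for the rest of the pipeline; these stand-ins are my own, not from the article:

```shell
# Portable sketch: capture the first command's exit status through the
# pipeline with a status file, since plain sh only reports the status
# of the last command in a pipeline.
fetch() {
    st=$(mktemp)
    # stand-in for: wget -O - URL | tee -i file.tar.bz2 | tar --use=lbzip2 -x
    { "$1"; echo $? > "$st"; } | tee /dev/null | cat > /dev/null
    read wget_status < "$st"
    rm -f "$st"
    return "$wget_status"
}

if fetch false; then
    echo "download ok"
else
    echo "download failed: remove the partial tree, retry with wget -c"
fi
```

With bash you could use `set -o pipefail` or `${PIPESTATUS[0]}` instead; the status-file trick works in plain sh/dash too.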
Another example might be the import of a Wikimedia Dump file, perhaps with a pipeline like this:
lbzip2 -d < enwiki-latest-pages-articles.xml.bz2 \
  | php importDump.php
Finally, a compression/backup example with verification at the end:
tar --format=pax --use=lbzip2 -c -f tree.tar.bz2 tree
tar --use=lbzip2 --compare -f tree.tar.bz2 -v -v
Hypothetically, with lbzip2 as the configured bzip2 alternative, we should be able to replace --use=lbzip2 with the well-known -j GNU tar option.
I posted a longish mail with feature analyses and performance measurements to the debian-mentors mailing list. To reiterate what I said there: fundamentally, lbzip2 was created to fill a performance gap left by pbzip2.
After working on lbzip2 for a while, I found out that p7zip can decompress single-stream bz2 files in parallel, but (the last time I checked) it couldn’t scale above four threads, and it refused to read bz2 files from a pipe.
Bzip2 compression and decompression performance is very sensitive to the cache size that is dedicated to a single worker thread (i.e. a single CPU core). To my limited knowledge, this implies that among commodity desktops, lbzip2 performs best on multi-core AMD processors.
lbzip2 does have shortcomings. They are either inherent in the design or ones I deem unimportant. I have tried to document them all. Please read the debian-mentors post linked above, the README file, and the manual page.
As said above, I didn’t originally intend lbzip2 as a drop-in replacement for bzip2. Even though it is almost there now, you should nonetheless get to know it thoroughly before deciding to switch over to it.
Various versions of lbzip2 are available for Debian (squeeze and sid) and Ubuntu (karmic and lucid).
You should be able to install lbzip2 on lenny too; it shouldn’t break anything. I used the following commands:
cat >>/etc/apt/sources.list <<EOT
deb http://security.debian.org/ testing/updates main
deb http://ftp.hu.debian.org/debian/ testing main
EOT
apt-get update
apt-get install lbzip2
Upstream releases are announced on the project’s Freshmeat page. I distribute the upstream version to end-users from my recently moved home page, which also links to other distributions’ lbzip2 packages.
A development library version is very unlikely. You can work around this by communicating with an lbzip2 child process over pipes via select(), and by checking its exit status via waitpid() after receiving EOF. This is not an unusual method; see, for example, gpg’s many --[^-]*-fd options.
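The same verify-before-trust pattern can be sketched at the shell level, where `wait` is the shell's analogue of waitpid(). In this sketch gzip stands in for lbzip2 (so the example runs even where lbzip2 isn't installed); the pattern is identical:

```shell
# gzip stands in for lbzip2; the verify-before-trust pattern is the same:
# run the decompressor asynchronously, consume its output, then collect
# its exit status with wait before trusting the result.
workdir=$(mktemp -d)
printf 'payload\n' | gzip -c > "$workdir/demo.gz"

gzip -dc "$workdir/demo.gz" > "$workdir/decoded.txt" &
pid=$!

wait "$pid"                      # shell-level waitpid()
echo "decompressor exit status: $?"
```

Only once the exit status is known to be zero should the output (or the removal of the input) be acted upon.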
I encourage you to test lbzip2. The upstream README describes the test method in general; let me instantiate that description here specifically for Debian.
Necessary packages, in alphabetical order:
Recommended packages, in alphabetical order:
Create a test directory (you will need lots of free space under that directory), and under it a well-compressible big file. For example:
mkdir -m 700 -v -- "$TMPDIR"/testdir
tar -c -v -f "$TMPDIR"/testdir/testfile.tar /usr/bin/ /usr/lib/
Then issue the following commands, utilizing the test file created above. As this could take several hours, I suggest entering a screen session first. Your machine should be otherwise unloaded during the test, both IO- and CPU-wise.
cd /usr/share/lbzip2
dash test.sh "$TMPDIR"/testdir/testfile.tar
Any errors encountered during the test are either handled or cause a fatal abort. In particular, utilities that refuse to decompress from a pipe are handled.
Estimated disk space usage: when writing this article, I executed the above commands with a 100 MB test file. (You should aim for at least 1 GB.) The test directory ended up being 250 MB in size. (Here M stands for 2^20 bytes and G for 2^30.)
Estimated time span: supposing
then the full test should take around
S * (1879 + 2098 * 2 / N) * T / 240
seconds.
Estimated peak memory usage: N * 50 MB should be a very safe bet.
To view the test report:
less -- "$TMPDIR"/testdir/results/report
The only obscure entries in the table should be the “ws” ones. They mean “workers stalled” and give a percentage of how many times the (de)compressor worker threads tried to start munching a block but had to go to sleep because there was no block to munch. Anything above 1-2% usually implies some bottleneck and shows that lbzip2 couldn’t fully exhaust your cores. This shouldn’t occur, but if it does and lbzip2 and pbzip2 have performed similarly in the compression tests, then the bottleneck is in your system, not lbzip2.
Many people, however, ignore periodic backups because they find them too much of a hassle. That’s why the backup procedure must be fully automated, requiring no user intervention at all.
Backupninja is a backup system that provides excellent automation and configuration facilities. You only need to instruct Backupninja once, and it will silently take up the duty of defending your valuable data. This can be done by editing the configuration files directly, or via a nice console wizard called ninjahelper, which also lets you test the backup actions interactively.
Backupninja doesn’t do the hard work itself, but rather relies on specialized tools like rdiff-backup and duplicity, thus following the Unix way. There is built-in support for specialised backup actions, including things like the backup of Subversion repositories, or LDAP, MySQL, and PostgreSQL databases. It can do remote, incremental backups, as well as burn them to CDs or ISO images.
But the best part is that Backupninja is capable of learning new powerful skills, just by reading user-provided shell scripts. For example, I use the following script to dump important package information of my Debian system:
#!/bin/sh
dpkg --get-selections > /var/backups/dpkg-selections
if [ $? -ne 0 ]
then
    error "dpkg selections dump failed"
else
    info "dpkg selections dump done"
fi
aptitude search -F %p '~i' > /var/backups/apt-installed && \
aptitude search -F %p '~i!~M' > /var/backups/apt-installed-manual && \
aptitude search -F %p '~i ~M' > /var/backups/apt-installed-auto
if [ $? -ne 0 ]
then
    error "installed package list dump failed"
else
    info "installed package list dump done"
fi
Note the use of some special functions: debug, info, and error. They put descriptive messages into the log file, which lets me quickly verify that fresh backups have actually been created. I’ve been using Backupninja to back up my personal data for a long time.
Pros:
Cons:
Sometimes you know you just need to change a single line or only a few things in a file, and for sure you don’t need syntax highlighting, Gnome VFS integration, or a plugin manager. Then you can save a few seconds and start Leafpad instead of the usual Gedit/Kedit. Leafpad is a very simple GTK+ editor that can do search/replace and line numbering and, yes, lets you change the default font. Actually, as a result of creeping featurism, printing was added to Leafpad in version 0.8.
Leafpad always starts in less than a second, in contrast to 3-4 seconds for gedit on my computer. And for just removing a single line, that makes a difference.
Since leafpad has an installed size of 672k, giving it a try will surely not clutter your hard drive.
Leafpad has been in Debian since at least Etch, and in Ubuntu since Dapper Drake.
timeout (part of the SATAN package) is a nice little tool to terminate/send a signal to a process after a given time.
It usually takes two arguments: the first is the time limit in seconds, the second the program to start. Any trailing options are then passed to the started program.
It accepts a single numerical option which specifies what signal to send — be careful as its default is SIGKILL.
Quite useful on many occasions. For example, to collect strace statistics for process PID over the next 300 seconds:
timeout -2 300 strace -tt -c -p PID
Or ensure that your kids don’t play childsplay all day long (of course, you need to make sure that they won’t be able to restart it ;)):
timeout 3600 childsplay
A similar program is timelimit.
The package has been available in Debian for ages (at least since Etch) and in Ubuntu since at least Dapper.
yeahconsole is a “quake-like” dropdown terminal emulator wrapper for X. Originally written to complement the author’s window manager (yeahWM), it can be used anywhere, and is lightweight and dependency-free.
yeahconsole can be invoked by itself (in which case it simply starts your preferred terminal emulator) or with the -e (execute) argument. Once started, the default hotkey to drop down the terminal is Ctrl-Alt-y.
yeahconsole can be configured via your ~/.Xresources file, in the format:
yeahconsole*foo: value
Type yeahconsole -h to view possible resources and their default values. Some highlights:
term: Your preferred terminal emulator; xterm and urxvt are supported.
xOffset, screenWidth, consoleHeight: Set the placement and size of the terminal. Offset and width are measured in pixels, height in lines.
aniDelay, stepSize: Delay and step-size settings for the slide animation. Setting stepSize to 0 disables the animation.
toggleKey, keyFull: Hotkeys to drop down the terminal. Set to Control-Alt-y and Alt-F11 by default, respectively.
See the man page for more; see also the man pages for xterm and urxvt and their respective resources. Particularly note that if urxvt is used as the terminal emulator, pseudo-transparency is supported.
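Putting the resources above together, a hypothetical ~/.Xresources snippet might look like the following (the values shown are illustrative, not the defaults; check yeahconsole -h and the man page for exact names and key syntax, and reload with xrdb -merge ~/.Xresources afterwards):

```
! Illustrative values only; verify against `yeahconsole -h`
yeahconsole*term:          urxvt
yeahconsole*screenWidth:   1280
yeahconsole*consoleHeight: 12
yeahconsole*stepSize:      0
yeahconsole*toggleKey:     Control+Alt+y
```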
Yakuake (featured in another debaday article) and Tilda: for KDE and Gnome, respectively. Yakuake is a wrapper for Konsole, and Tilda for libvte (on which Gnome-terminal is based). Both are highly useful and, in some respects, more full-featured, but both carry obvious overhead (and dependencies), especially if you’re not using KDE or Gnome. For instance, both Yakuake and Tilda have tabs, a feature which yeahconsole lacks. However, this writer has found yeahconsole + screen to be a much more lightweight, configurable, and ultimately satisfying solution.
yeahconsole has been available in Debian since at least Etch, and in Ubuntu since Gutsy. It is unknown to this writer whether yeahconsole is in active development, but it seems to be bug-free.
Most sysadmins will agree that having a file integrity checker is a good idea; the problem is that such checkers are usually a giant pain to get working and keep up to date. Thus they sit perpetually on the “to do” list, and then you don’t have one when you need it. (Hint: after the intrusion is too late.)
Enter fcheck, which Just Works out-of-the-box with the exception of the “major gotcha” detailed below, and with only a little care and feeding.
When installed, it creates the file database (DB) and then runs from cron every two hours. When it sees a change, it sends email (via cron) and then rebuilds the DB by itself, so you won’t get the same alert next time. That’s a potential security issue: if you lose that email, you’ve missed your one and only alert. Also, if some files change all the time (like /etc/mtab, /etc/printcap, and /etc/samba/smbpasswd) you will get alerted about them on every run until you exclude them. The configuration file supports file includes, so keeping a custom fcheck.local file is a breeze.
You will get a large alert message after an aptitude *-upgrade command, which is a great way to validate your change control policy (yup, stuff was changed when it was supposed to be; or: Who the heck is messing with my server?!?).
The existing package does not include logcheck ignore files, so if you’re using the logcheck package (and you should be, on a server) you’ll get alerts about DB rebuilds unless you add an ignore line (see the samples below).
The default config file is not bad, and adding new files and directories for fcheck to monitor is really easy, though including directories is a bit subtle in that they are only checked recursively if listed with a trailing ‘/’. See the examples below for things I usually add.
There is also a major gotcha, reported in this bug report. It turns out that a missing exclude for /lib/udev/devices/ makes the install hang at “Building fcheck database (may be some time)…”, or a check hang at “PROGRESS: validating integrity of /lib/”, leaving a ton of fcheck processes clogging up your system. See the bug and the samples below for the fix.
Because of the easy failure mode of a single email before the DB update, and the lack of cryptographic protection of its component files, it’s not the most secure program in the book. But it is drop-dead easier than anything else I looked at. In my book, “easy and used” beats “such a pain I never got around to it” any day :-). And if you want to, it’s not that hard to make it more secure by keeping off-line copies of the DB, configuration, and Perl script, and adjusting the cronjob NOT to rebuild after changes.
If you run a server you should be using fcheck and logcheck. And probably tmpreaper, etckeeper and maybe monit too. To summarise:
a sample fcheck.cfg that is a bit more comprehensive
Debian: since at least Etch: 2.7.59-8
Ubuntu: since at least Dapper: 2.7.59-8
Edit /etc/fcheck/fcheck.cfg and add at the bottom:
# Tweak the main file if needed, then add this near the bottom.
# In addition to the defaults in this main file, also:
CFInclude = /etc/fcheck/fcheck.cfg.local
Create /etc/fcheck/fcheck.cfg.local containing:
# In addition to the defaults in '/etc/fcheck/fcheck.cfg':

# Track changes to crontabs (may want to limit to some users on busy systems)
# Note trailing '/' for recursive check of this directory
Directory = /var/spool/cron/

# This stuff changes too often
Exclusion = /etc/package.list
Exclusion = /etc/printcap
Exclusion = /etc/motd
Exclusion = /etc/mtab
#Exclusion = /etc/samba/smbpasswd

# for DHCP:
Exclusion = /etc/resolv.conf

# BUGFIX, per https://bugs.launchpad.net/ubuntu/+source/fcheck/+bug/47408
# Can't hurt to have this just in case
Exclusion = /lib/udev/devices/
Only if you are also using the logcheck package, create /etc/logcheck/ignore.d.server/fcheck.local:
# Ignore fcheck rebuild notices
# Note that this should be one single line:
^\w{3} [ :0-9]{11} \w+ fcheck: "INFO: Rebuild of the fcheck database /var/lib/fcheck/fcheck\.dbf begun for \w+ using config file /etc/fcheck/fcheck\.cfg"
Logcheck periodically checks various syslog (or other) log files and picks up where it left off the last time. During each run it takes the new messages and looks for “known bad” things but first removes stuff that “looks bad but isn’t” and saves the messages as “this is known to be bad.” Then it rewinds, removes the known bad it just collected, removes the “known good” and stuff that “looks bad but isn’t” and saves whatever is left as “unknown.” Then it emails you the results.
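The pass described above can be sketched with plain grep on a toy log. This is only an illustration of the idea (the file names and patterns below are invented for the example); real logcheck reads its pattern files from /etc/logcheck/ and is considerably more careful:

```shell
# Toy reimplementation of the logcheck pass described above.
work=$(mktemp -d)
cat > "$work/sample.log" <<'EOF'
sshd: Failed password for root
cron: job started normally
kernel: I/O error on sda
EOF
printf '%s\n' 'Failed password' 'error' > "$work/violations"   # "known bad"
printf '%s\n' '^cron: ' > "$work/ignore"        # "looks bad but isn't"/known good

# known bad: matches a violation pattern and no ignore pattern
grep -f "$work/violations" "$work/sample.log" \
    | grep -v -f "$work/ignore" > "$work/known-bad"
# unknown: whatever is left after removing known bad and ignored lines
grep -v -f "$work/violations" "$work/sample.log" \
    | grep -v -f "$work/ignore" > "$work/unknown"

cat "$work/known-bad"
```

Here the sshd and kernel lines end up in the "known bad" report, the cron line is filtered out, and nothing is left over as "unknown".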
Over time, as you tune your files, you end up only being alerted to known bad or new (not yet classified) stuff. Brilliant. I even did a (cheesy) Windows port of it.
Originally written by Marcus J. Ranum and Fred Avolio as frequentcheck.sh for the TIS Gauntlet firewall toolkit, it was adapted by Craig Rowland and applied to system logs. It spent some time as logsentry, part of Psionic’s Abacus/Sentry tools, until Psionic was bought by Cisco and the tools moved to SourceForge. The version in Debian is a rewrite, which was then inherited by Ubuntu.
But the best part about the Debian/Ubuntu implementation is that almost all of the patterns you need are already Just There™. I usually only have to add a handful to work around odd things I’m doing or minor bugs. See the example at the bottom.
If you run a server you should be using fcheck and logcheck. And probably tmpreaper, etckeeper and maybe monit too. Articles about all these tools will be published soon, stay tuned!
As for drawbacks, it should be noted that logcheck may require some tuning, especially on a workstation or on newer distro versions, and that it may not scale to a large number of servers.
It is also worth mentioning that there are a variety of commercial and Managed Security Monitoring solutions that will scale and provide more information about events, but none are this easy.
The logcheck package is available in Debian since at least Etch, and in Ubuntu since at least Dapper. See also the logcheck-database package.
/etc/logcheck/ignore.d.server/LOCAL.ignore (lines wrapped for readability)
# /usr/sbin/logcheck automatically removes blank lines and comments.
# See 'man run-parts' for file name restrictions.
# For testing, create a sample log file and:
#   su -s /bin/bash -c "/usr/sbin/logcheck -tsol sample" logcheck
#   e.g.: su -s /bin/bash -c "/usr/sbin/logcheck -tsol /tmp/mylog" logcheck

# DHCP Client lease renewals
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ dhclient: New
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ dhclient: DHCP(REQUEST|ACK)
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ NetworkManager: DHCP daemon state is now 3 \(renew\) for interface

# NTP, usually: 4001/0001
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ ntpd\[[0-9]+\]: kernel time sync status change [0-9]+

# Syslog restarts (morning or all)
^\w{3} [ 0-9]{2} 07:[45][:0-9]{4} [._[:alnum:]-]+ syslogd 1\.5\.0#[0-9]ubuntu[0-9]: restart\.
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ syslogd 1.5.0#[0-9]ubuntu[0-9]: restart\.

# fcheck
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ fcheck: "INFO: Rebuild of the fcheck database /var/lib/fcheck/fcheck\.dbf begun for [._[:alnum:]-]+ using config file /etc/fcheck/fcheck\.cfg"

# lm-sensors (normal)
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+\] CPU[01]: Temperature/speed normal
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+\] Machine check events logged

# Wireless
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ NetworkManager: \(eth1\): supplicant connection state:
Nullmailer is a minimal MTA (Mail Transport Agent) that provides mail delivery services to programs (cron jobs, system integrity checkers, log inspectors, etc.) on a host that otherwise does not require a full MTA like Exim or Postfix. Do not confuse an MTA with programs like Evolution or Thunderbird, which are MUAs (Mail User Agents): programs that offer an interface for a human to write email.
Nullmailer is one of those packages that create a “well duh” moment when you find out about it. Normally, hosts with no MTA can’t send mail, which turns out to be a Bad Thing in terms of finding out when things like cron jobs break, or for monitoring logs or files. So you go and install a minimal system, then wonder why it’s being so quiet. Well, no MTA, no email. But Exim, Postfix or another full MTA is overkill and might be tedious to maintain. What you really need is just a basic MTA to send messages to the real mail server.
That’s nullmailer.
The package will prompt for your remote mail server and create /etc/nullmailer/remotes, where you can also specify authentication details. You probably also want to create /etc/nullmailer/adminaddr to receive, in one mail account, all mail destined for your local host. Each file is a one-liner that contains pretty much what you’d expect:
$ cat /etc/nullmailer/adminaddr
I_get_roots_mail@example.com
$ cat /etc/nullmailer/remotes
mail.example.com
There are also several other files that may be used by nullmailer: /etc/nullmailer/defaultdomain and /etc/nullmailer/defaulthost, in case you don’t already have /etc/mailname. For a complete list of control files, see the nullmailer(7) man page. Detailed information can be found in the man pages for each part of nullmailer: nullmailer-queue(8), nullmailer-inject(1), and nullmailer-send(8).
Pros:
Cons:
Other alternatives:
Nullmailer has been available in Debian at least since Etch, and in Ubuntu Universe since Dapper.
Article submitted by Paul Wise. DebADay needs you more than ever! Please submit good articles about software you like!
Chromium B.S.U. is a top-down, fast-paced, high-action scrolling space shooter. In this game you are the captain of the cargo ship Chromium B.S.U., responsible for delivering supplies to the troops on the front line. Your ship has a small fleet of robotic fighters which you control from the relative safety of the Chromium vessel.
You control the robotic fighters with your mouse and repel wave after wave of different kinds of enemy ships. Launch Chromium B.S.U. from the Applications / Games / Arcade menu and start a new game. You will soon be sending volleys of weapon fire toward the enemy ships while protecting yourself with super shields, waiting for powerups to get closer or dodging fire from the larger enemy ships at the end of each level:
If you keep getting killed, quitcher whinin’, you ninny! It’s supposed to be hard! Seriously, Chromium B.S.U. is intended to be a 15-minute adrenaline rush/mental cleanser. Frequent doses of explosions can be very therapeutic. There are always kamikaze attacks or the BIG RED BUTTON if you get into particularly nasty trouble.
The chromium package has been in Debian since Lenny and in Ubuntu since Dapper, but it has recently been renamed to chromium-bsu. Chromium B.S.U. is an old favourite of the Linux gaming community that had been neglected until recently. The project is still looking for developers, especially to help move it off obsolete libraries like libglpng and fix the rest of the bugs.