
dstat: versatile tool for generating system resource statistics

January 11th, 2009 edited by Vicho

Article submitted by András Horváth. We’re running out of articles! If you like Debian Package of the Day please submit good articles about software you like!

During my work with computers, I like to keep an eye on the usage of system resources across my network. Sometimes a running process takes up too much CPU, or disk I/O climbs too high. To get a clear picture of the resources being used by a client, I used to combine ifstat(1), top(1) and iostat(1).

Since I found out about dstat, I can check all the system resources used by my computers in one place. dstat prints each type of resource in a separate column on a single line, so it is very easy to see the system load at a glance.

Quoting from the website:

Dstat is a versatile replacement for vmstat, iostat, netstat, nfsstat and ifstat. Dstat overcomes some of their limitations and adds some extra features, more counters and flexibility. Dstat is handy for monitoring systems during performance tuning tests, benchmarks or troubleshooting.

Dstat allows you to view all of your system resources instantly, you can eg. compare disk usage in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval).

Here is a sample output that I made on my computer:

screenshot of dstat

Though dstat only gives global statistics about the system resources currently in use, it can replace several tools at once. Most of the time you would run it without any parameters, which also makes it very easy to remember :)

Pros (compared to other programs):

  • All kinds of resource statistics in one single line.
  • No parameters needed in most cases.
  • CSV files can be generated easily to create charts in OpenOffice or Gnumeric.
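As a hedged sketch of the usage described above (assuming dstat is installed; the file name stats.csv is just an example):

```shell
# Default run: CPU, disk, net, paging and system stats, one line per second
dstat

# Same columns, sampled every 5 seconds instead
dstat 5

# Take 60 one-second samples and also log them to a CSV file,
# which can then be charted in OpenOffice or Gnumeric
dstat --output stats.csv 1 60
```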

Cons:

  • No per-process statistics

Official packages have been available in both Debian and Ubuntu for a very long time.

Posted in Debian, Ubuntu | 6 Comments »

tellico: collection manager for books, videos, music, and a whole lot more

January 4th, 2009 edited by Vicho

Article submitted by Dean Serenevy. We are running out of articles! Please submit good articles about software you like!

You’ve heard of book and movie collection organizers, but Robby Stephenson’s tellico is a general purpose collection manager. This application can be used to store information about arbitrary collections of whatever tickles your fancy. Tellico is available from the tellico package in Debian since Sarge and in Ubuntu since Dapper. Tellico is a KDE application, but works fine in other desktop environments.

The Basics

Like any good book or movie collection application, tellico presents the user with a multi-pane window that groups entries by some customizable criterion (I’ve grouped by director below), lists entries by some fields (customizable), and shows thumbnails.

Typical movie database

Selecting a list entry shows a more detailed view and a larger thumbnail. Clicking the image in this view launches your image editor.

Viewing an entry

Most of the built-in collection types include search sources to make adding new entries easy. Tellico has default search sources for Amazon.com (US, Japan, Germany, United Kingdom, France, and Canada), IMDb (movie database), z39.50 servers (bibliographic database), SRU servers (bibliographic database), PubMed (Medicine bibliography), CrossRef.org (bibliographic database developed by a consortium of publishers), and some others. You can also write your own script that performs the search and returns entries in a supported format.

Searching for every Debian user's favorite movie

The search box will filter results based on regular expression queries. Complex filters can be named and will be saved with the collection file.

So many movies, so little time

Beyond Books and Movies

Tellico’s built-in list of collection templates is already quite impressive. It provides default templates for books, bibliographies, videos, music, video games, coins, stamps, trading cards, comic books, and wines. However, users are free to modify, add, or remove fields in these collections or even create custom collections with arbitrary fields.

For example, I keep a collection of hyperplane arrangement examples in a custom tellico file. Tellico happily keeps a fully group-able and search-able record of my coefficient fields, polynomials, and other fields.

Arrangements of hyperplanes

Editing a custom entry looks just like editing a standard record type. Fields are grouped by customizable categories.

Editing a record

Modifying the collection fields is wonderfully simple. Your fields may be any of several types, including text, paragraph, choice, checkbox, table, URL, date, and image. Field upgrading is supported between compatible field types.

Fields may be auto-formatted as names or titles if you wish. You can also control whether the field should support auto-completion (using existing entries in your collection), multiple values, or whether the field should appear in the grouping combo box.

Editing the fields in a collection

The paragraph field type supports basic HTML markup (used here in my bibliography collection). The red letters are KDE’s spell-check attempting to be useful.

HTML markup in a paragraph field

I use the table field type in my recipe collection.

Using a table for an ingredient listing

Beyond the Application

Tellico can import and export data to and from many sources (Bibtex, CSV, PDF metadata, Alexandria, …). It can export your collections (even custom collections) to HTML and generate HTML reports in several styles. Tellico even has limited support for sending citations to OpenOffice.org Writer (though I have never used this feature).

Moreover, since Tellico stores its data in a fully documented XML file you can write XSLT or use any XML parser to transform the data file however you like.

Tellico supports loan tracking for any collection type. It is also translated into more than ten languages.

The not so good

Tellico is somewhat laggy when loading hundreds or thousands of images from disk and occasionally when switching from thumbnail view to entry view. However, switching between entries is always fine and collections with fewer images are quick and responsive.

Alternatives

There are many special-purpose collection managers (most of which are listed on the tellico homepage), but tellico is one of the earlier general purpose managers. Some applications (such as GCstar) are becoming more general-purpose as they mature. Others (such as Stuffkeeper) are simply younger applications and are not yet stable. Tellico is a well-designed application and therefore can give even the special-purpose collection managers a run for their money.

Posted in Debian, Ubuntu | 7 Comments »

atool: handling archives without headaches

December 28th, 2008 edited by Vicho

Article submitted by Paulus Esterhazy. Last article of 2008! We hope 2009 will be full of good articles about Debian and Ubuntu packages. But we can’t do it without your help, please submit good articles about software you like!

Have you ever wrestled with tar(1) and other Unix archive tools? Wondered why every tool has its own arcane syntax and nonstandard behavior? And why on earth is it impossible to use unzip(1) to unpack multiple archive files?

The good news is that, in the Unix universe, you can be sure that someone else has asked the same question before and, perhaps, solved it. And so it is: the atool package supplies a set of commands that hide the complexities and let you work with compressed file archives in a sensible manner.

Arguably the most useful commands included are apack, aunpack and als, which, as their names suggest, create an archive, extract it, or list its contents. In addition, acat uncompresses an archive file and writes the contents to standard output, whereas adiff compares two archives and shows the differences between their contents. These commands work as you would expect them to, and the author has stuck to Unix conventions where possible.

The details, however, are worth a look. Some examples:

  • aunpack archive.tgz: unpacks all the files in the archive. If the author of the archive was so inconsiderate as to put multiple files in the archive’s root, the command automatically creates a directory and moves the files inside.
  • aunpack -e archive1.tgz archive2.zip: unpacks each archive.
  • apack archive.tar.bz2 *.txt: creates a new compressed archive containing all text files in the current working directory.
  • als archive.rar: shows the names of the files contained in the archive.

Note that for each atool command the archive file name precedes the names of the files to add or extract on the command line. Compare aunpack -e archive1.tgz archive2.tgz and aunpack archive1.tgz file.txt.

As you can see, atool commands automatically determine the file type by looking at the extension, but they resort to using file(1) if the simpler heuristic fails (you can override the guess using the -F switch). Most commonly used archive types are supported, including tar+gzip, tar+bzip2, zip and rar; a notable omission in the version available in Debian Sarge and Ubuntu 8.04 is the relatively new LZMA compression format (lzma(1)), but the active upstream author has already added support for it. You can also extract a .deb package by forcing the ar archiving method using the switch -F a.
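A hedged sketch of the basic commands in action (assuming the atool package is installed; the file names are just examples):

```shell
# Work in a throwaway directory with two small files
cd "$(mktemp -d)"
echo hello > a.txt
echo world > b.txt

# apack picks tar+gzip from the .tgz extension
apack demo.tgz a.txt b.txt

# List the contents without unpacking
als demo.tgz

# The archive has multiple files at its root, so aunpack
# creates a directory for them instead of littering the cwd
mkdir unpacked && cd unpacked
aunpack ../demo.tgz
```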

Atool is blessed with the virtue of simplicity and its options are explained in the helpful manpage, which thankfully doesn’t follow the Unix convention of leaving out examples. Here’s one last gem from the documentation. If you frequently work with archives you get from the internet, you probably follow this procedure: Check archive type, check that the archive contains a top-level directory, unpack the archive, change to the directory extracted. These steps can be combined by adding the following function definition to your $HOME/.bashrc or $HOME/.zshrc:

aunpack () {
  TMP=$(mktemp /tmp/aunpack.XXXXXXXXXX)
  atool -x --save-outdir="$TMP" "$@"
  DIR="$(cat "$TMP")"
  [ -n "$DIR" ] && [ -d "$DIR" ] && cd "$DIR"
  rm -f "$TMP"
}

After adding these lines, you can “reload” the configuration file in your shell using source ~/.bashrc or source ~/.zshrc. Now running aunpack automatically changes the current directory to the one just extracted. Note that a shell function is necessary to achieve this behavior because a directory change is effectively useless unless it is performed in the context of the running shell.

Atool was written in Perl by Oskar Liljeblad. It is available in all current Debian and Ubuntu releases. Besides atool, there are a few other tools that aspire to be the Swiss army knife of archivers, for example deco. These programs, however, are not as full-featured and mature as atool.

Posted in Debian, Ubuntu | 8 Comments »

watch (from procps): execute a program at regular intervals, and show the output

December 21st, 2008 edited by Vicho

Article submitted by Kris Marsh. If you celebrate Christmas, you can give to Debian Package of the Day a nice present: a good article! :-)

Ever wanted to monitor a directory every second and see how file sizes change from second to second? Or, for that matter, run any program once a second and highlight its differences over time? Well, you can, and you have been able to for a long time: watch is installed by default on the majority of Linux distributions. It is part of the procps package, available in Debian and Ubuntu.

Here is an example for checking a directory:

watch ls -l

To highlight changes in each program run, you can use the -d flag:

watch -d ls -l

And to run the command every N seconds, use -nN (by default, watch runs every 2 seconds):

watch -n1 -d ls -l

Finally, to make the diff highlighting “sticky” (i.e. stay on permanently after a change is detected), use: -d=cumulative

Other examples:

  • Watch your log directory for changes
    watch -d=cumulative -n1 ls -lt /var/log
  • Watch for new email
    watch -n60 from
  • Monitor free memory
    watch -n10 free -m
  • Monitor established connections
    watch -n1 -d 'netstat -an | grep ESTABLISHED'

… you get the point. If you’re a system administrator, or just maintain Linux machines in general you’ll probably spot a bunch of places where you can use this straight away.

Posted in Debian, Ubuntu | 6 Comments »

ferm: a straightforward firewall configuration tool

December 14th, 2008 edited by Tincho

Article submitted by David A. Thompson. We’re running out of articles! If you like Debian Package of the Day please submit good articles about software you like!

Grumble… a postgresql server on an old Sun workstation isn’t visible to another old Sun workstation which (in theory…) is storing data on the postgresql server. The culprit was a misconfigured firewall. Rather than wading through a bunch of iptables commands, it seemed time to revisit the world of iptables front-ends on the off-chance there was an undiscovered treasure I’d missed on earlier visits. It turns out that there was one: ferm.

A revisit to firestarter, a straightforward GUI interface, ended when firestarter segfaulted and then, when started again, automatically started its firewall. Fortunately, I had altered the firestarter rule set and opened port 22 before firestarter segfaulted. Otherwise I would have been hundreds of miles away from an inaccessible server. After firestarter crashed again with a memory error, I decided to move on…

Like several other firewall front-ends, ferm is aware of the issues associated with working on servers hundreds of miles away from one’s physical location. Ferm starts with a default configuration which leaves the default SSH port open. Even better, ferm has a ‘try-before-you-buy’ feature (shared with a few other packages such as firehol): ferm --interactive activates a specific ruleset and, if the user doesn’t confirm within 30 seconds, the system reverts to the previous ruleset.

Rather than using a GUI (e.g., firestarter, gnome lokkit, guarddog, kmyfirewall, knetfilter, …), ferm is configured via a text configuration file and can be controlled in a straightforward manner from the console. This may be a desirable feature on a box with limited disk space, as GUI front-ends generally require X Window System packages, often along with several KDE- or GNOME-related packages.

My main concern wasn’t with whether the application had a GUI or console interface but was with whether the application facilitated straightforward configuration of an iptables ruleset (translation: it shouldn’t take 20 min of reading documentation to get a simple firewall up). Other front-ends (e.g., shorewall and firewall builder) appear to be designed for complex rule-sets and require a substantial investment of effort to learn the syntax of configuration files or a ‘rule-making language’.

Along with ferm, another front-end, firehol, also seemed to hit the mark with respect to straightforward syntax. Unfortunately, firehol ended up being a time sink: in my experience, preparing a firehol configuration file that didn’t trigger multiple errors from firehol/iptables was not straightforward. In contrast, ferm gave me no such problems. A few tweaks of the default configuration file, primarily opening a few ports, were all it took:

  proto tcp dport ssh ACCEPT;
  proto tcp dport http ACCEPT;
  proto tcp dport https ACCEPT;
  proto tcp dport postgres ACCEPT;

A simple /etc/init.d/ferm restart and things were running smoothly. Minimal effort, satisfying results…

The bottom line is that, for simple rulesets, using ferm is definitely easier than preparing iptables rules by hand. However, ferm can also be used to put together more complex firewall rulesets. It uses a reasonably powerful configuration language (including support for variables, function definitions, and arrays) which facilitates addressing more complex situations than the one I faced. To top it off, ferm seems to be under active development with bugs being squashed and features being added relatively regularly.
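As a hedged sketch of what those structures look like (the addresses and port list here are made up for illustration, not part of the setup described above):

```
# Variables are declared with @def; arrays expand into multiple rules
@def $ADMIN_NET = 192.0.2.0/24;
@def $WEB_PORTS = (http https);

table filter {
    chain INPUT {
        policy DROP;

        # Keep established connections and local traffic working
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;

        # SSH only from the admin network, web ports from anywhere
        saddr $ADMIN_NET proto tcp dport ssh ACCEPT;
        proto tcp dport $WEB_PORTS ACCEPT;
    }
}
```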

ferm has been available in Debian since Etch and in Ubuntu since Dapper.

Update, editor’s note: I’d like to add my personal experience with ferm to this article. Being a SysAdmin, I’ve been using netfilter/iptables for many years, after migrating away from ipchains; and the day I found ferm my work changed completely. To me, being able to write your rules in clean structures, with blocks, variables and ‘functions’, is by far the most important feature of ferm. Thanks to this, I was able to write very complicated rule-sets which were still readable, to the point that the more junior SysAdmins, with little experience with netfilter, had no difficulty modifying them to open up ports or create a new NAT rule.

Having said that, a warning to newcomers to netfilter: there’s no tool that will magically allow you to write non-trivial rule-sets if you don’t understand the underlying technology. You will be able to manage your home server, but if you want to do more serious work, you’ll need to really understand how TCP/IP works and then read a lot about the details of routing and packet filtering in Linux. Having seen many people get frustrated by this, it is better for you to know up front that this beast is quite tricky.

Posted in Debian, Ubuntu | 13 Comments »
