Librenix
Information for Linux System Administration 

Librenix T-Shirts and Coffee Mugs!


For today's example of my (semi)elite C programming skilz, I submit for your inspection the Librenix T-Shirts! Yes, I created the images on these shirts and coffee mugs entirely with C code. While the code isn't up to the standards *cough* of my open source Space Tyrant project, at least the output is colorful and not entirely textual!


(Click either image to see the T-Shirts, Coffee Mugs, etc.)

(If you like the images but don't care for 'librenix' on your shirt, these same styles are available for all 50 US state names as well as with the signs of the zodiac here)

(and here are some modern prints)
-Ray, June 6, 2010 (Updated: May 13, 2014)

Writing syslog messages to MySQL

Written by Rainer Gerhards (2005-08-02)

Abstract

In this paper, I describe how to write syslog messages to a MySQL database. Having syslog messages in a database is often handy, especially when you intend to set up a front-end for viewing them. This paper describes an approach using rsyslogd, an alternative, enhanced syslog daemon that natively supports MySQL. I describe which components need to be installed and how to configure them.

Background

In many cases, syslog data is simply written to text files. This approach has some advantages, most notably that it is very fast and efficient. However, data stored in text files is not readily accessible for real-time viewing and analysis. For that, the messages need to be in a database. There are various ways to store syslog messages in a database. For example, some have the syslogd write text files which are later fed into the database by a separate script. Others have written scripts that take the data (via a pipe) from a non-database-aware syslogd and store it as it arrives. Still others use a database-aware syslogd and have it write the data directly to the database. In this paper, I use that "direct write" approach. I think it is superior, because the syslogd itself knows the status of the database connection and thus can handle it intelligently (well ... hopefully ;)). I use rsyslogd to accomplish this, simply because I initiated the rsyslog project with database-awareness as one of its goals.

One word of caution: while message storage in the database provides an excellent foundation for interactive analysis, it comes at a cost. Database i/o is considerably slower than text file i/o. As such, directly writing to the database makes sense only if your message volume is low enough for a) the syslogd, b) the network, and c) the database server to keep up with it. Some time ago, I wrote a paper on optimizing syslog server performance. While that paper talks about Windows-based solutions, the ideas in it are generic enough to apply here, too. So it might be worth reading if you anticipate medium to high traffic. If you anticipate really high traffic (or very large traffic spikes), you should seriously consider forgetting about direct database writes - in my opinion, such a situation needs either a very specialised system or a different approach (the text-file-to-database approach might work better for you in this case).

Overall System Setup

In this paper, I concentrate on the server side. If you are thinking about interactive syslog message review, you probably want to centralize syslog. In such a scenario, multiple machines (the so-called clients) send their data to a central machine (called the server in this context). While I expect such a setup to be typical when you are interested in storing messages in a database, I do not describe how to set it up; that is beyond the scope of this paper. If you search a little, you will find many good descriptions of how to centralize syslog. If you do that, it is a good idea to do it securely, so you might also be interested in my paper on SSL-encrypted syslog message transfer.

No matter how the messages arrive at the server, their processing is always the same. So you can use this paper in combination with any description for centralized syslog reporting.

As I already said, I use rsyslogd on the server. It has intrinsic support for talking to MySQL databases. For obvious reasons, we also need an instance of MySQL running. To keep us focussed, the setup of MySQL itself is also beyond the scope of this paper. I assume that you have successfully installed MySQL and have a front-end at hand to work with it (for example, phpMyAdmin). Please make sure that it is installed and actually working, and that you have a basic understanding of how to handle it.

Setting up the system

You need to download and install rsyslogd first. Obtain it from the rsyslog site. Make sure that you disable the stock syslogd, otherwise you will experience some difficulties.

It is important to understand how rsyslogd talks to the database. In rsyslogd, there is the concept of "templates". Basically, a template is a string that includes some replacement characters, which are called "properties" in rsyslog. Properties are accessed via the "Property Replacer". Simply put, you access a property by including its name between percent signs inside the template. For example, if the syslog message is "Test", the template "%msg%" would be expanded to "Test". Rsyslogd supports sending template text as an SQL statement to MySQL. As such, the template must be a valid SQL statement. There is no limit on what the statement might be, but there are some obvious and not so obvious choices. For example, a template "drop table xxx" is possible, but does not make an awful lot of sense. In practice, you will always use an "insert" statement inside the template.

An example: if you would like to store just the msg part of the full syslog message, you have probably created a table "syslog" with a single column "message". In such a case, a good template would be "insert into syslog(message) values ('%msg%')". With the example above, that would be expanded to "insert into syslog(message) values ('Test')". This expanded string is then sent to the database. It's that easy, no special magic. The only thing you must ensure is that your template expands to a proper SQL statement and that this statement matches your database design.
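
To make the mechanism concrete, here is a hedged sketch of how such a custom template might be declared with the legacy $template directive (using its sql formatting option) and then referenced from a database selector line. The paper itself relies on rsyslogd's built-in default template, so the template name and the single-column table below are purely illustrative:

$template sqlMsgOnly,"insert into syslog(message) values ('%msg%')",sql

*.* >database-server,database-name,database-userid,database-password;sqlMsgOnly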

Does that mean you need to create the database schema yourself and must fully understand rsyslogd's properties? No, that's not needed, because we anticipated that folks are probably more interested in getting things going than in designing them from scratch. So we have provided a default schema as well as built-in support for it. This schema also offers an additional benefit: rsyslog is part of Adiscon's MonitorWare product line (which includes both open source and closed source members). All of these tools share the same default schema and know how to operate on it. For this reason, the default schema is also called the "MonitorWare Schema". If you use it, you can simply add phpLogCon, a GPLed syslog web interface, to your system and have instant interactive access to your database. So there are some benefits in using the provided schema.

The schema definition is contained in the file "createDB.sql". It comes with the rsyslog package. Review it to check that the database name is acceptable for you. Be sure to leave the table and field names unmodified, because otherwise you need to customize rsyslogd's default sql template, which we do not do in this paper. Then, run the script with your favourite MySQL tool. Double-check that the table was successfully created.
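
If you prefer the command line over a graphical front-end, the script can also be fed directly to the mysql client; a hedged sketch, assuming createDB.sql sits in the current directory and that you connect as a user allowed to create databases:

mysql -u root -p < createDB.sql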

Next, we need to tell rsyslogd to write data to the database. As we use the default schema, we do NOT need to define a template for this. We can use the hardcoded one (rsyslogd handles the proper template linking). So all we need to do is add a simple selector line to /etc/rsyslog.conf:

*.* >database-server,database-name,database-userid,database-password

In many cases, MySQL will run on the local machine. In that case, you can simply use "127.0.0.1" for database-server. This can be especially advisable if you do not need to expose MySQL to any process outside of the local machine: you can simply bind it to 127.0.0.1, which provides a quite secure setup. Of course, rsyslogd also supports remote MySQL instances. In that case, use the remote server name (e.g. mysql.example.com) or its IP address. The database-name is "syslog" by default; if you have modified the default, use your name here. Database-userid and -password are the credentials used to connect to the database. As they are stored in clear text in rsyslog.conf, that user should have only the least possible privileges - it is sufficient to grant it INSERT privileges on the systemevents table only. As a side note, it is strongly advisable to make rsyslog.conf readable by root only; if you make it world-readable, anybody could obtain the password (and possibly other vital information) from it. In our example, let's assume you have created a MySQL user named "syslogwriter" with a password of "topsecret" (just to say it bluntly: such a password is NOT a good idea...). If your MySQL database is on the local machine, your rsyslog.conf line might look like this:

*.* >127.0.0.1,syslog,syslogwriter,topsecret
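
As a hedged aside on the least-privilege advice above, a user like "syslogwriter" could be created and given just the required right with a single statement from the mysql client. The database and table names (and their case) must match whatever createDB.sql actually created on your system, and the '127.0.0.1' host clause assumes MySQL runs on the same machine as rsyslogd:

mysql -u root -p -e "GRANT INSERT ON syslog.systemevents TO 'syslogwriter'@'127.0.0.1' IDENTIFIED BY 'topsecret';"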

Save rsyslog.conf, restart rsyslogd - and you should see syslog messages being stored in the "systemevents" table!

The example line stores every message in the database. Especially if you have a high traffic volume, you will probably want to limit the number of messages being logged. This is easy to accomplish: the "write to database" action is just a regular selector line. As such, you can apply normal selector-line filtering. If, for example, you are only interested in messages from the mail subsystem, you can use the following selector line:

mail.* >127.0.0.1,syslog,syslogwriter,topsecret

Review the rsyslog.conf documentation for details on selector lines and their filtering.

You have now completed everything necessary to store syslog messages to the MySQL database. If you would like to try out a front-end, you might want to look at phpLogCon, which displays syslog data in a browser. As of this writing, phpLogCon is not yet a powerful tool, but it's open source, so it might be a starting point for your own solution.

On Reliability...

Rsyslogd writes syslog messages directly to the database. This implies that the database must be available at the time of message arrival. If the database is offline, no space is left, or something else goes wrong, rsyslogd cannot write the database record. If rsyslogd is unable to store a message, it performs one retry. This is helpful if the database server was restarted: the previous connection was broken, but an immediate reconnect succeeds. However, if the database is down for an extended period of time, an immediate retry does not help. While rsyslogd could retry until it finally succeeds, that would have a negative impact. Syslog messages keep coming in, and if rsyslogd were busy retrying the database, it would not be able to process them. Ultimately, this would lead to the loss of newly arrived messages.

In most cases, rsyslogd is configured not only to write to the database but to perform other actions as well. In an always-retry scenario, none of those other actions would be carried out. As such, the design of rsyslogd is limited to a single retry. If that does not succeed, the current message is not written to the database and the MySQL database writer is suspended for a short period of time. Obviously, this leads to the loss of the current message as well as all messages received during the suspension period. But they are only lost in regard to the database; all other actions are still carried out correctly. While not perfect, we consider this a better approach than the potential loss of all messages in all actions.

In short: try to avoid database downtime if you do not want to experience message loss.

Please note that this restriction is not specific to rsyslogd. All approaches to real-time database storage share this problem area.

Conclusion

With minimal effort, you can use rsyslogd to write syslog messages to a MySQL database. Once the messages have arrived there, you can interactively review and analyse them. In practice, the messages are also stored in text files for longer-term archival, and the database is cleared out after some time (to avoid it becoming too slow). If you expect an extremely high syslog message volume, storing it in the database in real time may overwhelm your database server. In such cases, either filter out some messages or think about alternative approaches involving non-real-time database writing (beyond the scope of this paper).

The method outlined in this paper provides a solution that is easy to set up and maintain for most use cases, especially with low to medium syslog message volume (or fast database servers).

Feedback Requested

I would appreciate feedback on this paper. If you have additional ideas or comments, or if you find bugs, please let me know.

Copyright

Copyright (c) 2005 Rainer Gerhards and Adiscon.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license can be viewed at http://www.gnu.org/copyleft/fdl.html.

-rgerhards, August 4, 2005 (Updated: March 21, 2007)

Scripting: A parallel Linux backup script


This example bash shell script demonstrates a simple method of creating backups of multiple filesystems to multiple tape devices simultaneously. While the script presented writes to four tape drives in parallel, it can easily be modified to write to other device types and to create a different number of backup streams. The script is set up for the bash shell under Linux, but modifying it for another variety of Unix should simply be a matter of changing the locations of utility files such as tar, echo, cp, and sleep.

The script can be downloaded from http://librenix.com/scripts/par.tar.sh. Download the file now and load it into an editor as this article will refer to it frequently. Also, you may want to modify bits of it to match your filesystem names and your devices.

The first line of the script looks like this:
 #!/bin/bash
If the bash shell isn’t in the /bin directory on your system, you’ll need to modify this line. Enter the command ‘which bash’ now to verify the location of bash. My Fedora Linux system and my Mac OS X system both have bash in /bin, but my FreeBSD system does not. If you have a non-Linux flavor of Unix, you’ll probably need to use the ‘which’ command to verify the location of each command used in the script (a quick check is sketched after the list below). The commands used are:
 bash
cd
sleep
echo
date
tar
wait
ls
wc
Note that ‘wait’ and ‘cd’ are usually implemented as internal shell commands and may not have external commands associated with them. If that is true for your system, leave ‘cd’ and ‘wait’ with no directory prefix just as they are in the original script.
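
As a quick sanity check, here is a hedged sketch (not part of the original script) that prints where each external utility lives on your system:
 for cmd in bash tar sleep echo date ls wc
 do
   which $cmd
 done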

Now, the first command in the script resets the current working directory to ‘/’:
 cd /
Since the script prefixes each directory to be backed up with a ‘.’ to represent the current working directory, starting out at ‘/’ is necessary. The reason for this precaution is that some implementations of the tar command will only restore files from a tar archive into the exact directory that was specified when the file was backed up. By prefixing the names with a ‘.’ we preserve the ability to recover the files into any subdirectory we want, without overwriting the original files.
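
Because the paths on tape are stored relative to ‘.’, you can later restore into any scratch directory. A hedged sketch (the device and directory names are illustrative, not taken from par.tar.sh):
 mkdir -p /tmp/restore
 cd /tmp/restore
 tar -xvf /dev/st0 ./dir1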

Immediately after the ‘cd /’ command is where you would put any commands to shut down all services that must be quieted prior to a backup. The example script has a (commented out) command to initiate an Oracle database shutdown followed by a ‘sleep’ command to allow time for the shutdown to complete. The example database shutdown and the following delay probably don’t apply to your system. Obviously, you’ll have to add commands yourself to stop any applications that might interfere with the backup.

Next, we use the ‘date’ command to create two sets of four tiny files to stick at the start and end of each tape. Note that the presence of a ‘date.#’ file at the beginning of each tape lets you quickly find out when a tape was created and on which drive. The ‘zzzz.#’ files, appended to the end of each tape, only serve to let you easily verify that a backup completed without overrunning the end of the tape.

Next, we start the four actual ‘tar’ backup commands, each with sample directories named ‘./dir1’, ‘./dir2’, etc. Of course, you’ll need to modify the list of directories to match the actual directories you wish to back up. Note that you’ll probably want to balance the directory sizes so that all of the largest directories aren’t on the same tape. Also, note that each ‘tar’ command is run in the background and logs to a tar.#.log file in the /tmp directory. Obviously, you might want to put the logfiles somewhere else.

After each ‘tar’ command there is an entry like ‘TASK0=$!’ or ‘TASK1=$!’. These arbitrarily-named ‘TASK’ variables store the process ID of the ‘tar’ command just started (in bash, ‘$!’ expands to the process ID of the most recently launched background job) so that the script can wait for them with the four ‘wait’ commands that follow in the next block of code. There, we have the four ‘wait’ commands waiting on the $TASK0, etc., variables. (The addition of the ‘$’ to each TASK# shell variable is not a typo -- it’s necessary to read back the contents of the variable.)
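
Putting the backup and wait steps together, the core parallel pattern looks roughly like this hedged sketch (only two of the four streams are shown, and the tape device, directory, and log file names are illustrative rather than copied from par.tar.sh):
 /bin/tar -cvf /dev/st0 ./dir1 ./dir2 > /tmp/tar.0.log 2>&1 &
 TASK0=$!
 /bin/tar -cvf /dev/st1 ./dir3 ./dir4 > /tmp/tar.1.log 2>&1 &
 TASK1=$!
 wait $TASK0
 wait $TASK1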

Next, after the script has waited for the completion of each of the four ‘tar’ commands, it appends some information to a history file for later reference. It stores the date of the backup, the filesize of the logfile, and the number of files backed up on each tape to each of four history files. While the script will overwrite the logfiles (tar.#.log) each time it is run, it will append these three lines to each of the four history files (tar.#.history).
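
For the first stream, that bookkeeping might look roughly like this hedged sketch (the real script’s exact commands may differ; ‘wc -l’ counts the lines of the verbose tar log, i.e. the number of files written):
 date >> /tmp/tar.0.history
 ls -l /tmp/tar.0.log >> /tmp/tar.0.history
 wc -l /tmp/tar.0.log >> /tmp/tar.0.history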

The final steps in the script are commented out. Those are the commands necessary to restart any applications that were brought down for the backup. Again, in the example we assume an Oracle database needs to be restarted. You’ll need to add the commands necessary to start any applications that were stopped at the beginning of the script.
-Ray, April 10, 2005

Missing the point of the Mac Mini


I've read several articles and numerous comments over the past week detailing just how overpriced Apple's new Mac Mini is. Reviewers seem to conclude that because they can assemble a PC of similar performance to the Mini for less money, the new Mac simply costs too much.

What they have not done, however, is duplicate the Mac Mini in any important way. The closest comparison I've seen pitted the Mini against a machine 2.5 times its size. At least that reviewer understood that size matters. I'm a fan of small systems. I own two Biostar iDEQ cubes, one Shuttle, and three Book PC's. The Book PC's are the oldest and most obsolete, of course, with the fastest one containing a Pentium III 667. I've gotten rid of several systems over the past two years that were faster than the Book PC's. Why keep the slower computers while getting rid of systems up to twice as fast, you might ask.

I've kept the Book PC's because they are so small that keeping them around isn't a burden. They take about as much space on a bookshelf as an unabridged dictionary. I have one, currently disconnected, functioning only as a monitor stand for the system I'm using right now. A Book PC will fit in my briefcase; I've hauled them around with me as if they were laptops. With a dozen computers around the house, space is precious and small is beautiful.

The Book PC's are 3.2" x 10.5" x 11.9". That's 4.8 times the volume of a Mac Mini. The Mac is truly tiny. I've worked to build fast, small, quiet Linux systems for years now. The iDEQ 200V is the cheapest system I've made that is fast, quiet, and runs Linux without complaint. Without software and with only the on-board graphics chip, it cost about the same amount as the Mac Mini. At 12.5" x 7" x 8", however, it is much larger than the Mini and weighs several times as much.

I challenge the anti-Mini crowd to build a PC of any shape that displaces approximately the same volume as the Mini plus its power supply. Then, compare prices again. Small form factor (SFF) computer fans are clearly going to notice this machine and buy a few truckloads of them. In the SFF market, even ignoring the software, this machine is clearly a bargain.

SFF computer fans who are committed to Windows will still covet this system; a few of them might even make the switch to OS X just to get one. I even expect some SFF Linux geeks to buy them because they're tiny, cheap, and can run Linux. Conclusion: the anti-Mini reviewers and posters are not SFF people.

Next, the Mini is an affordable and typically stylish Mac. A smallish PC does not run OS X. The Mini comes with OS X and will make a great second (or third) computer for many Mac users. I use Linux as my primary desktop OS (SuSE 9.2 Professional for the last three weeks, Fedora Core 2 the previous year) and FreeBSD and Linux (Fedora, Slackware) for my servers. I'm hardly a Mac guy but, as a Unix geek, I'm perfectly fine with OS X. I used a Mac as my primary desktop for a couple of weeks after a recent move.

Many Mac users -- at least those who need a second system -- will find the price -- and the size -- of this system quite appealing. Clearly, the negative reviewers and posters are not OS X users.

Therefore, I've come to the conclusion that these anti-Mac Mini arguments are coming from people who appreciate neither of the core characteristics of the machine. They don't understand the appeal of the SFF systems market, nor are they OS X / Mac users.

Apple, on the other hand, appreciates both, and it has produced an impressively priced small form factor OS X system.

I wish Apple responsive suppliers with scalable production facilities. They will surely need them in order to satisfy the demand for the Mac Mini.
-Ray, January 21, 2005

Articles are owned by their authors.   © 2000-2012 Ray Yeargin