Wednesday, April 25, 2007

Synergy Japanese Keyboard Support (Kana key)

For the impatient:

I patched and compiled Synergy to enable the Kana key on Japanese keyboards to switch between input methods. I also created a Windows installer of the patched version for those Windows users who do not know what a patch is or what compiling means. You can find the installer here.

Unfortunately for Linux users, I have no idea how to create packages for any distribution, so you will have to compile from source. Fear not: it is not complicated at all, and I explain how to do it below.


A dual screen hack (revisited):

In one of my previous posts I talked about how I used X2VNC/Win2VNC in combination with RealVNC/X11VNC to share a single keyboard and mouse between several computers, even when those computers ran different operating systems.

The above programs work well, but I have experienced some problems, like not being able to use the Japanese keys on my keyboard and some responsiveness degradation (laggy mouse reaction) when the machine running the VNC server is under heavy load.

Due to these small problems I started looking for an alternative and found Synergy. This application is small and does not require an additional VNC server. It runs on Windows, Linux and Mac, and the transition between screens is smooth even when the computers are under heavy load. Another thing I like a lot is the centralized configuration (only one server), which makes it easy to configure more than two computers. Creating a setup with more than three computers using X2VNC/Win2VNC could be very frustrating. There are also GUI configuration tools for Windows, Linux and Mac that make setup even easier (here).

There are a lot of resources on the Internet that explain how to install and use Synergy to create dual screen systems (here, here and here), so I won't explain that here. What I will explain is how to get the Japanese keys to work with Synergy.

Some videos of Synergy in action can be found here.

Japanese key support in Synergy

The current version of Synergy does not handle the Japanese keys used to change the input method (i.e. ローマ字/漢字). There is a Japanese keyboard (Hankaku/Zenkaku-Kanji and Kana key) patch in the patches section of the Synergy SourceForge page, but no binary releases are available yet. I took the time to patch and compile the source code to enable these keys on my Japanese keyboard in Linux and Windows, and here are the instructions on how I did it.

Instructions for Linux Kubuntu Desktop

1. Download the source code from here. The latest version was 1.3.1 at the time of this writing.

2. Make sure you have installed the development tools and the XTest development package

sudo aptitude install build-essential
sudo aptitude install libxtst-dev

3. Unpack and edit the source code

tar xvfz synergy-1.3.1.tar.gz
cd synergy-1.3.1

4. Edit the source code to add Hankaku/Zenkaku-Kanji and Kana key support

Open the lib/platform/CMSWindowsKeyState.cpp file and search for the following three lines:

/* 0x0f2 */ { kKeyNone }, // OEM specific
/* 0x0f3 */ { kKeyNone }, // OEM specific
/* 0x0f4 */ { kKeyNone }, // OEM specific

and replace them with these corresponding lines

/* 0x0f2 */ { kKeyOEMCopy }, // VK_OEM_COPY
/* 0x0f3 */ { kKeyZenkaku }, // VK_OEM_AUTO
/* 0x0f4 */ { kKeyZenkaku }, // VK_OEM_ENLW


Open the lib/synergy/KeyTypes.h file and search for a line like:

static const KeyID kKeyDelete = 0xEFFF; /* Delete, rubout */

and add the following below:

static const KeyID kKeyOEMCopy = 0xEF27; /* OEMCopy(Kana Key) */

You can actually add this line of code anywhere inside the KeyTypes.h file, but to keep things in order I added it right after the kKeyDelete declaration.
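If you prefer not to edit by hand, the two changes above can be scripted. Here is a minimal sketch using GNU sed; `patch_kana` is a hypothetical helper name of my own, and the file paths assume the synergy-1.3.1 layout described above:

```shell
# Hypothetical helper: apply the two Kana-key edits from step 4 with
# sed instead of a manual editor session.
patch_kana() {
  src="$1"   # lib/platform/CMSWindowsKeyState.cpp
  hdr="$2"   # lib/synergy/KeyTypes.h

  # Map the three OEM-specific virtual keys onto Kana/Zenkaku handling.
  sed -i \
    -e '/0x0f2/ s|kKeyNone }, // OEM specific|kKeyOEMCopy }, // VK_OEM_COPY|' \
    -e '/0x0f3/ s|kKeyNone }, // OEM specific|kKeyZenkaku }, // VK_OEM_AUTO|' \
    -e '/0x0f4/ s|kKeyNone }, // OEM specific|kKeyZenkaku }, // VK_OEM_ENLW|' \
    "$src"

  # Declare the new KeyID right after the kKeyDelete declaration.
  sed -i '/kKeyDelete = 0xEFFF/a static const KeyID kKeyOEMCopy = 0xEF27; /* OEMCopy(Kana Key) */' "$hdr"
}

# Usage, from inside the synergy-1.3.1 directory:
#   patch_kana lib/platform/CMSWindowsKeyState.cpp lib/synergy/KeyTypes.h
```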

5. Do the configure/make/make install dance.


./configure -x-includes /usr/include -x-libraries /usr/lib --prefix=/usr
make
sudo make install


6. Synergy is now installed on your Kubuntu system and you can proceed to configure it. Refer to the links in the previous section for instructions.

Instructions for Windows

1. To compile Synergy on Windows you need Visual C++ (Express Edition), which can be downloaded for free from here.

2. To create the installer we need NSIS. Simply download the installer from here and install it.

3. Download the source code (synergy.1.3.1.zip) and unpack it somewhere. Edit the file "dist/nullsoft/installer.mak" inside the source tree to make sure it points to the correct NSIS installation path. Simply open the installer.mak file with any text editor and check that the path to makensis is correct:

NSIS="C:\Program Files\NSIS\makensis"

This example assumes you installed NSIS in the default path.

4. Open the synergy.dsw VC++ project. Say yes to all the requests to convert the project format, then edit the source code as directed below.

Open the platform -> Source Files -> CMSWindowsKeyState.cpp file and search for these three lines:

/* 0x0f2 */ { kKeyNone }, // OEM specific
/* 0x0f3 */ { kKeyNone }, // OEM specific
/* 0x0f4 */ { kKeyNone }, // OEM specific

and replace them with these corresponding lines

/* 0x0f2 */ { kKeyOEMCopy }, // VK_OEM_COPY
/* 0x0f3 */ { kKeyZenkaku }, // VK_OEM_AUTO
/* 0x0f4 */ { kKeyZenkaku }, // VK_OEM_ENLW


Open the libsynergy -> Header Files -> KeyTypes.h file and add the following line:

static const KeyID kKeyOEMCopy = 0xEF27; /* OEMCopy(Kana Key) */

just after this line:


static const KeyID kKeyDelete = 0xEFFF; /* Delete, rubout */

If you followed the Linux instructions you will notice that these changes are exactly the same as the ones we made in Linux.

5. Make sure the compiler is set to the Release configuration (i.e. select Release in the project properties) and build the "all" and "installer" projects. To compile, right click on the project names (all and installer) and select Build from the context menu.

6. After both builds finish there will be a "SynergyInstaller.exe" file inside the "build" directory of the source tree. Execute this file to install Synergy as normal.

7. Proceed to configure (see references in the previous section); you will now be able to switch from romaji to kana using the Kana key available on all Japanese keyboards.

Mac Instructions

Unfortunately I do not have a Mac to try this patch on, but there are instructions on how to compile Synergy on a Mac inside the doc/compiling.html file. If someone succeeds in applying this patch on the Mac I would like to hear about it.

Sunday, April 22, 2007

Receive Log reports via Email (Ubuntu)

When deploying servers on the hostile Internet, a good administrator is faced with the need to monitor all the log files the server produces, to ensure it is working correctly and to detect any security threats.

This can be a really time consuming task, as a busy server can produce several megabytes worth of log files per day, and if there is more than one server, checking log files one by one is totally impractical, not to say useless.

To alleviate the burden of checking big log files every day I installed Logwatch, and it has proven very useful. It gives a complete summary of all your log files with the most relevant information, well presented with per-service sub-sections.

In Ubuntu Server installing the logwatch package worked out of the box for almost all my running services (Courier-POP, Postfix, OpenSSH). The relevant command is:

sudo aptitude install logwatch

The only service it did not work for was the web server, because Logwatch is configured to use Apache log files by default while I use Lighttpd as my web server.

How to Configure Logwatch to parse Lighttpd log files (Ubuntu)

The easiest way to customize Logwatch is to create an override.conf file inside the /etc/logwatch/conf/ directory. To tell Logwatch to parse the Lighttpd log files, we create the override.conf file and add the following:

logfiles/http: LogFile = lighttpd/*access.log.1
logfiles/http: LogFile = lighttpd/*access.log
logfiles/http: Archive = lighttpd/*access.log.*.gz

This assumes the Lighttpd log files are inside the "/var/log/lighttpd" directory. If they are not, change the paths to reflect the location on your system. You can add as many log files as you want (e.g. virtual domains) by adding all three entries above for each log file.
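Writing those three entries for several virtual domains gets repetitive, so here is a small sketch that generates them; the `lw_entries` function name is my own invention, not part of Logwatch:

```shell
# Hypothetical helper: print the three override.conf entries for each
# access log passed in (one per virtual domain). Log paths are given
# relative to /var/log, as in the example above.
lw_entries() {
  for log in "$@"; do
    printf 'logfiles/http: LogFile = %s.1\n' "$log"
    printf 'logfiles/http: LogFile = %s\n' "$log"
    printf 'logfiles/http: Archive = %s.*.gz\n' "$log"
  done
}

# Usage:
#   lw_entries 'lighttpd/*access.log' 'lighttpd/example.com-access.log' \
#     | sudo tee -a /etc/logwatch/conf/override.conf
```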

Now you will get some nice reports about the web usage on your server. Make sure you read the HOWTO-Customize-LogWatch file to learn more about Logwatch. This file is usually inside the "/usr/share/doc/logwatch" directory in .gz format. To read it you can use the command:

zcat /usr/share/doc/logwatch/HOWTO-Customize-LogWatch.gz | less


Get Logwatch reports via email

By default Logwatch sends the reports it generates to root. To send the reports to a different local user or external email address you can edit the "/etc/aliases" file like:


# Added by installer for initial user
root:   myuser, mygmail@gmail.com



and then rebuild the aliases database:


sudo newaliases



In the example above all Logwatch reports will be received by the local user "myuser", so you can access the reports via the mbox file at "/var/mail/myuser", and by the external address mygmail@gmail.com, which you can read using the Gmail web interface.

Important note: by default the mail transfer agent of (K)Ubuntu does not allow relaying messages to external addresses (e.g. Gmail addresses). To change this you can follow the instructions to set up a small personal mail server here.

Logwatch vs Logcheck

I installed both programs, and for me Logwatch is far more useful than Logcheck. Logcheck will only parse the log files related to security (i.e. auth.log) and simply send you an email with the access denied entries. The information Logcheck provides is no different from what I get by looking at the log files directly.

Logwatch, on the other hand, provides relevant information not only about security issues but also about all the services running on the server. The information is well summarized and presented in a way that makes it easy to get both a general and a detailed view of the server's status and operation.

How to make Webalizer work with Lighttpd in Ubuntu server

To get more visually compelling statistics about your web server usage patterns you can use Google Analytics, which is a powerful tool. But if you prefer a simpler yet still powerful alternative, I recommend installing Webalizer.

In Ubuntu server, if you are using Lighttpd instead of Apache, make sure to change the configuration file (/etc/webalizer.conf) to point to the corresponding log file (i.e. LogFile /var/log/lighttpd/access.log.1) or it won't work.
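The change amounts to a couple of lines in /etc/webalizer.conf. A sketch: the log path comes from the text above, while the output directory is just an assumed example you should adapt to your system.

```
LogFile   /var/log/lighttpd/access.log.1
OutputDir /var/www/webalizer
```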

Logwatch example report


################### LogWatch 7.1 (11/12/05) ####################
Processing Initiated: Thu May 31 06:25:02 2007
Date Range Processed: yesterday
( 2007-May-30 )
Period is day.
Detail Level of Output: 5
Type of Output: unformatted
Logfiles for Host: makarena
##################################################################

--------------------- courier mail services Begin ------------------------

Connections: 100 Times
Protocol POP3 - 100 Times
Host 192.33.11.109 - 1 Time
Host 199.120.17.15 - 8 Times
Host 195.10.13.49 - 3 Times
Host 13.9.18.11 - 88 Times



Logins: 96 Times
Protocol POP3 - 96 Times, 3790856 Bytes
User paprika - 1 Time, 21511 Bytes
Host 14.8.13.19 - 1 Time, 21511 Bytes
User mondongolia - 88 Times, 3389830 Bytes
Host 13.9.18.11 - 88 Times, 3389830 Bytes
User juanito3 - 7 Times, 379515 Bytes
Host 124.10.17.1 - 6 Times, 379515 Bytes
Host 124.10.13.4 - 1 Time, 0 Bytes



---------------------- courier mail services End -------------------------


--------------------- Cron Begin ------------------------



Commands Run:
User root:
run-parts --report /etc/cron.hourly: 24 Time(s)
[ -d /var/lib/php4 ] && find /var/lib/php4/ -type f -cmin +$(/usr/lib/php4/maxlifetime) -print0 | xargs -r -0 rm: 48 Time(s)
[ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -r -0 rm: 48 Time(s)
test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily: 1 Time(s)

---------------------- Cron End -------------------------


--------------------- httpd Begin ------------------------

35.45 MB transferred in 2953 responses (1xx 0, 2xx 2766, 3xx 151, 4xx 36, 5xx 0)
1817 Images (9.34 MB),
17 Documents (11.90 MB),
867 Content pages (9.56 MB),
16 Redirects (0.00 MB),
236 Other (4.66 MB)

Attempts to use known hacks by 1 hosts were logged 1 time(s) from:
20.11.61.11: 1 Time(s)


A total of 1 sites probed the server
20.11.61.11

Requests with error response codes
404 Not Found
/%7Enbalan/JSP/: 1 Time(s)
/_vti_bin/shtml.exe/_vti_rpc: 1 Time(s)
/_vti_inf.html: 1 Time(s)
/comment: 2 Time(s)
/en/home/photo_gallery: 1 Time(s)
/en/user: 3 Time(s)
/favicon.ico: 2 Time(s)
/imagefile/filepath/1110/small/IMG_0189.jpg: 1 Time(s)
/imagefile/filepath/1112/small/IMG_0191.jpg: 2 Time(s)
/imagefile/filepath/1113/small/IMG_0192.jpg: 1 Time(s)
/robots.txt: 17 Time(s)
/~nbalan/Concurrency/html/CIC: 2 Time(s)
/~nbalan/Concurrency/html/index2.html: 1 Time(s)
/~shda/toppage.html: 1 Time(s)

A total of 9 ROBOTS were logged

---------------------- httpd End -------------------------

--------------------- pam_unix Begin ------------------------

cron:
Sessions Opened:
root: 121 Time(s)

sshd:
Sessions Opened:
admin: 3 Time(s)

su:
Sessions Opened:
(uid=0) -> nobody: 3 Time(s)


---------------------- pam_unix End -------------------------


--------------------- POP-3 Begin ------------------------


[POP3] Connections:
=========================
Host | Connections
------------------------------------------------------------- | -----------
::ffff:14.8.13.19 | 1
::ffff:13.9.18.11 | 88
::ffff:84.10.1.15 | 6
::ffff:8.10.13.9 | 1
---------------------------------------------------------------------------
96



[POP3] Logout stats (in MB):
============================
User | Logouts | Downloaded | Mbox Size
--------------------------------------- | ------- | ---------- | ----------
hbr | 7 | 0.36 | 0
admin | 1 | 0.02 | 0
sna | 88 | 3.23 | 0
---------------------------------------------------------------------------
96 | 3.62 | 0.00

---------------------- POP-3 End -------------------------


--------------------- postfix Begin ------------------------



3716764 bytes transferred
204 messages sent
204 messages removed from queue

Top ten senders:
1 messages sent by:
root (uid=0):


SASL Authenticated messages from:
unknown[13.9.18.11]: 2 Time(s)


Connections lost:
Connection lost while CONNECT : 1 Time(s)

---------------------- postfix End -------------------------


--------------------- SSHD Begin ------------------------


Users logging in through sshd:
admin:
13.9.18.15: 2 times
14.7.13.19: 1 time

---------------------- SSHD End -------------------------


--------------------- Disk Space Begin ------------------------

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/Ubuntu-root 227G 33G 183G 15% /
/dev/sdb1 367G 36G 313G 11% /mnt
/dev/sda5 228M 24M 193M 11% /boot


---------------------- Disk Space End -------------------------


###################### LogWatch End #########################


As you can see, the report is well structured and provides relevant information about the SMTP/POP services, web statistics and access security. It even reports that someone is probing the server and that a known vulnerability has been tested against it... now I can take action, like blocking that IP address from all access to the server using iptables.

Spotting this particular security threat among thousands of megabytes of log files would have been far more difficult.

Saturday, April 21, 2007

Linux Package Manager Quick Reference

Equivalent commands for Fedora/CentOS and Debian/(K)Ubuntu:

Install from a package file
  Fedora/CentOS:     rpm -i (package file)
  Debian/(K)Ubuntu:  aptitude -S (package file) -i

Install/update from a package file
  Fedora/CentOS:     rpm -U (package file)
  Debian/(K)Ubuntu:  dpkg -i (package file)

Update from a package file
  Fedora/CentOS:     rpm -F (package file)
  Debian/(K)Ubuntu:  aptitude -S (package file) -u

Downgrade a package
  Fedora/CentOS:     rpm -U --oldpackage (package file)
  Debian/(K)Ubuntu:  dpkg --force-downgrade -i (package name)

Reinstall a package
  Fedora/CentOS:     rpm -Uvh --replacepkgs (package file)
  Debian/(K)Ubuntu:  aptitude --reinstall install (package name)

Remove a package
  Fedora/CentOS:     rpm -e (package name)
  Debian/(K)Ubuntu:  dpkg -r (package name)

Remove a package (installed via repositories)
  Fedora/CentOS:     yum -C remove (package name)
  Debian/(K)Ubuntu:  aptitude remove (package name)

Update package information
  Fedora/CentOS:     yum makecache
  Debian/(K)Ubuntu:  aptitude update

Update currently installed packages
  Fedora/CentOS:     yum -C update
  Debian/(K)Ubuntu:  aptitude safe-upgrade

Install a repository package
  Fedora/CentOS:     yum -C install (package name)
  Debian/(K)Ubuntu:  aptitude install (package name)

Upgrade to the next distribution release
  Fedora/CentOS:     rpm -Uvh *****-release-n-n.noarch.rpm; yum upgrade
  Debian/(K)Ubuntu:  aptitude dist-upgrade

Search repository packages
  Fedora/CentOS:     yum -C list | grep (string)
  Debian/(K)Ubuntu:  aptitude search (regex)

List installed packages
  Fedora/CentOS:     rpm -qa  (or: yum list installed)
  Debian/(K)Ubuntu:  dpkg -l

Find the package a file belongs to
  Fedora/CentOS:     rpm -qf (file name)
  Debian/(K)Ubuntu:  dpkg -S (file)  (or: apt-file find (file))

List the files that belong to a package
  Fedora/CentOS:     rpm -ql (package name)
  Debian/(K)Ubuntu:  dpkg -L (package name)

Get package details
  Fedora/CentOS:     yum -C info (package name)
  Debian/(K)Ubuntu:  aptitude show (package name)

Clean the cache
  Fedora/CentOS:     yum clean all
  Debian/(K)Ubuntu:  aptitude clean

Search packages by name
  Fedora/CentOS:     yum -C search (string)
  Debian/(K)Ubuntu:  apt-cache search (string)

Tuesday, April 10, 2007

OMNet++ Development in Windows

I use OMNeT++ for research purposes, mostly in Linux, but recently I needed to run it in a Windows environment. The instructions on how to install OMNeT++ on Windows are not very helpful, and it took me a while to get it right. Here are the instructions I followed to get OMNeT++ running on Windows 2000 and Windows XP.

Windows 2000 preparations

To get an OMNeT++ development environment on Windows 2000 you need the most recent service pack (SP4Express_EN.exe). If you don't have it, simply download and install it.

You also need at least version 6.0 SP1 of Internet Explorer that you can download for free here.

If you are using Windows XP make sure you have the newest service pack installed too.

Setting up the development environment

To compile OMNeT++ itself and our own models we need a compiler. Fortunately Microsoft offers free versions of their IDEs (Integrated Development Environments). I will use Visual C++ Express Edition to develop OMNeT++ on Windows.

Download and install it from this URL. You also need to register your copy, which is free and easy to do if you already have a hotmail.com account.

Next Download and install the Platform SDK that contains some needed header files.

Installing OMNet++ binary in Windows

(optional): To get support for images in Neddoc you need to install Ghostscript before installing OMNet++ . Download and install it from here.

Get the OMNeT++ 3.3 win32 binary (exe) from http://www.omnetpp.org/filemgmt/viewcat.php?cid=2 and install it.

(Caution 1): For some reason the binary available on the OMNeT++ web page gives me a CRC error when I execute it. To bypass the CRC check, run the installation binary from the command prompt with the /NCRC switch:

c:\> omnetpp-3.3-win32.exe /NCRC

(Caution 2): When asked for the installation directory, make sure to choose a path with no blank spaces, as spaces may cause problems when compiling.

During the installation you will be asked for the Ghostscript binary. Select the path where you installed Ghostscript, or simply skip this step if you do not have it installed. Then you will be asked to choose the Visual C++ release you are going to use. If you followed these instructions you must have Visual C++ 2005 installed, so choose vc-81. If you have Visual C++ 2003 (aka 7.1), choose the vc-71 release. Visual C++ 6.0 is known to have problems compiling OMNeT++ models, so upgrading to a newer version of Visual C++ is recommended.

Create a quick launch link (optional)

I like to create a .bat file that sets up the necessary environment variables for OMNeT++ development. To do this I simply create a batch file "omnetenv.bat" in the C:\ root that contains the following:

call "C:\Program Files\Microsoft Visual Studio 8\VC\vcvarsall.bat"

SET PATH=%PATH%;C:\Program Files\Microsoft Platform SDK for Windows Server 2003 R2\Bin
SET INCLUDE=%INCLUDE%;C:\Program Files\Microsoft Platform SDK for Windows Server 2003 R2\Include
SET LIB=%LIB%;C:\Program Files\Microsoft Platform SDK for Windows Server 2003 R2\Lib

The first command calls the vcvarsall.bat script that comes with Visual C++, which sets up all the environment variables needed for C++ development. We also need to add the Platform SDK bin, include and lib directories to the respective environment variables. Make sure to change the directory paths to reflect your installation paths.

To call this omnetenv.bat script I create a desktop shortcut by selecting "Create new shortcut" in the Desktop context menu (i.e. right click) and filling in the following items:

Command = %comspec% /k C:\omnetenv.bat
Name = OMNeT++ Development

Clicking on this new shortcut will open a Command Prompt window (cmd.exe), call the omnetenv.bat script and wait for input. From this point on we can start developing OMNeT++ models. If you are new to OMNeT++ I recommend following the TicToc tutorial.

If you do not have a text editor to code consider using SCiTe.

Mobility Framework in Windows

Once OMNet++ is installed you may consider using the Mobility Framework to develop sensor network and ad hoc network models.

Simply download the Mobility Framework source code (mobility-fw2.0p3.zip) and uncompress it somewhere. Make sure the directory path you choose has no blank spaces in it (i.e. avoid Program Files).

Enter the source directory, edit the mkmk.cmd file and make sure the OMNET_ROOT variable points to the root directory where you installed OMNeT++. Make sure you have the OMNeT++ development environment set up (i.e. run omnetenv.bat) and run the following commands:

mkmk.cmd
nmake -f Makefile.vc all

This will setup and compile the Mobility Framework. Once it finishes you can proceed to create Mobility Framework models using the template files and the Makefile.gen file as described in the documentation.

Scite Tips on Windows

When I program on a Windows machine my favorite editor is Vim, and my second favorite is SciTE. For anyone who finds Vim complicated, SciTE is by far the best editor I have ever used.

SciTE is small and fast, runs on multiple platforms, and supports folding, tabs, auto-completion and syntax highlighting for a large number of languages.

If you have read my previous posts you will have noticed that I am a console guy; that is, I do most of my computer interaction using commands in a console (Linux) or command prompt (Windows).

One of the problems I found with SciTE is that it could not open files as tabs when invoked from the command prompt. Instead, SciTE would create a new editor window for each invocation of the command "scite program.cpp". This can quickly clutter the desktop when working on programming projects that involve a large number of files (i.e. source, header and make files).

Fortunately there is a little application that allows us to open files as tabs rather than opening new editor windows.

Open files as Tabs on existing SciTE window

Download scitecmd and copy it anywhere in your PATH. For some reason this file has no .exe extension, so rename it to add the extension (i.e. scitecmd.exe). Once this application is in your PATH you can open new files as tabs in the current SciTE window (if one is available) with the following call:

scitecmd program.cpp

If no SciTE window is open, scitecmd will open a new one, and all subsequent calls to scitecmd will open files as tabs in that window.

Create aliases (Optional)

As a Vim fan on Linux I like to create two shortcuts to the scite and scitecmd commands that resemble the commands used in Vim to open files:

doskey vi=scite $*
doskey split=scitecmd $*

The first alias lets me open new files in their own windows by calling "vi program.cpp" as I would in Linux, and the second alias, "split program.cpp", opens the file as a tab in any current SciTE window. This is my own preference; you can use any aliases you see fit.

SJIS Support in SciTE (optional)

The computer I use at work has a Japanese version of Windows XP that uses a very strange character encoding. This is no problem since I write my program comments in English, but when I get files with Japanese text from my co-workers, SciTE displays them as garbage.

To make SciTE display the Japanese text correctly I must add the corresponding character encodings in the configuration files. I learned how to do this by reading this post.

1. Open SciTE
2. From the menu select Options -> Open User Options file -> SciTEUser.properties
3. Write the following text

code.page=932
character.set=128

4. Save the file from the menu with File -> Save
5. Restart SciTE

Windows with the Power of Linux

No matter how hard we try to avoid it, there will always be a time when we are forced to work with Windows PCs. There will always be a friend or family member who needs help with their Windows machine, or the school/work project that uses only Windows development tools (i.e. Visual C++).

Forced to work with Windows for several years, I have learned to configure it in a way that makes it more enjoyable to use (i.e. make Windows more like Linux), and here I present my configuration to all the unfortunate Linux/Unix users forced to use Windows at work or school.

The main problem I have with Windows is that by default it does not offer much functionality and flexibility. Even after adding some additional software packages to Windows, I am 400% more productive in a Linux console using the set of GNU utilities than using, for example, MS Excel.

Just as an example, in Linux it is easy to find the number of lines that contain a certain pattern in a CSV file with a single command line (i.e. cat file.csv | grep patt | wc -l). I really don't know how to do that in Excel, or how to compare two Excel sheets and extract the differences to a third sheet (i.e. diff or comm in Linux). Granted, I am not an expert Excel user and this is just a simple example, but I can handle far more complex files with console commands and small scripts that would be far more complicated to process with any GUI alternative.
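Those one-liners can be tried out on throwaway files. Here is a sketch; using `comm` for the "extract the differences to a third sheet" step is my suggestion, assuming both sheets are exported as CSV:

```shell
# Count lines matching a pattern, as in the text.
printf '%s\n' 'a,1' 'b,2' 'a,3' > demo.csv
cat demo.csv | grep a | wc -l   # the pipeline from the text: prints 2
grep -c a demo.csv              # same count in a single command

# Extract the differences between two exported sheets into a third file.
printf '%s\n' 'a,1' 'b,2'         > old.csv
printf '%s\n' 'a,1' 'b,2' 'c,3'   > new.csv
sort old.csv > o.txt
sort new.csv > n.txt
comm -13 o.txt n.txt > added.csv  # lines that appear only in new.csv
```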

To alleviate Windows' lack of functionality and flexibility, I simply install as many Linux tools as I can on my Windows installation to give it that Linux feeling. This gives me the power and flexibility of Linux combined with the Windows development tools I need for my daily tasks.

Cygwin? Thanks but no thanks

When talking about Linux tools on Windows, we all know about Cygwin. Cygwin is good if you want to sideline Windows completely and pretend you are running a full installation of Linux. The Windows OS goes from being a complete OS to being only a hardware and X11 interface for Cygwin.

You have your home directory inside the Cygwin environment (different from the Windows home) and you can install all sorts of tools, applications and even complete servers inside the Cygwin environment. But since everything is hosted inside the Cygwin environment, it is difficult to integrate Windows applications with Cygwin and Cygwin applications with Windows.

For example, trying to set up a C++/C# development environment in Cygwin using the Visual C command line tools can be a challenging task. I also had a lot of trouble getting the Windows DDK (Driver Development Kit) environment to work inside Cygwin.

All I needed was the ability to use Linux commands/applications like ls, cat, wc, grep, find and vim to handle my programs' source code, and to use Windows build tools like nmake and cl to compile them.

GNU Core Utils

This is the set of GNU utility commands I love so much, all contained in a single package: UnxUtils.zip.

To enable these utilities in the Windows command shell, I simply extract the contents of the "/usr/local/wbin" directory (without the paths) into a directory listed in my PATH (i.e. C:\Apps\UnxUtils). Once the utilities are in my PATH I can start using them to manage source code files as I do in Linux.

There is also another, more up-to-date port of these utilities (the GnuWin32 project), but I have been using the UnxUtils package for years without problems so I haven't tried the new ones.

If you are using Windows Vista you may be better off trying the new GnuWin32 utilities, as their home page claims Vista compatibility. I am not sure whether UnxUtils supports Vista.

Wget

The UnxUtils package contains a version of Wget, but it is a little outdated. I prefer Heiko Herold's version, which supports SSL and can be found here.

Simply open the Zip file and extract wget.exe, the two .dll files and cacert.pem into the UnxUtils directory (i.e. C:\Apps\UnxUtils), overwriting the existing files. This way we replace the wget that comes with UnxUtils with this new one.

Subversion

No serious development should be done without a CVS or SVN repository. To install Subversion on Windows, simply download and run the installer:

http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91

Ruby

I have been a Ruby fan for a long time, even before Rails made it popular. I use it for almost everything, from complete GUI applications to small scripts that process text files. Installing Ruby on Windows is easy using the one-click installer: http://rubyforge.org/frs/?group_id=167&release_id=10461

With this installer you get much more than just Ruby. You also get RubyGems and a lot of libraries and extensions for developing rich Ruby applications. You can even use RubyGems to install Rails and Mongrel to develop web applications.

Vim

I am a Vim user and without it I am almost useless as a programmer. Well, not that useless, but after years of trying other editors I always come back to Vim. On Windows I use the gVim self-installing file from here.

During installation make sure you install the .bat files for command line use. Without these .bat files you won't be able to call vim from the command prompt.

OpenSSH

OpenSSH is a powerful tool for transferring files securely from one machine to another. It is also available as a Windows installer that you can get here.

This installer includes some Cygwin command utilities like ls.exe, mkdir.exe and rm.exe that are needed by the OpenSSH server. If you install the server, these commands will be in your PATH and may conflict with those of the UnxUtils package.

To solve this you can install only the client, or, if you need the server, make sure the UnxUtils directory is listed before the OpenSSH bin directory in your PATH variable.

X Forwarding with OpenSSH

The Windows version of OpenSSH above supports X forwarding, which allows me to use the GUI applications on my home computer (Kontact, Kiten, Kopete, etc.) remotely from the Windows PC I use at work/school.

I blogged about this in a previous post, so look there if you are interested. I will only mention that if you use Xming as the X11 server, remember to set the DISPLAY environment variable to match the display Xming uses (i.e. SET DISPLAY=127.0.0.1:0.0).

Tab-completion in Windows 2000

This is an old, well-known trick, but for completeness I mention it here. In Linux and Windows XP we have a very useful feature called tab completion. With it we can start typing any command, file name or path name and press Tab to have the rest completed automatically. This is very useful, especially in Windows, where paths are long and contain spaces (i.e. C:\Program Files, C:\Documents and Settings).

In Windows 2000 this feature exists but is not enabled by default. To enable tab completion in Windows 2000:

1. Hit the "Start" button
2. Select "Run"
3. enter "regedit", hit OK
4. Expand "My computer" (by clicking the little [+] beside it)
5. Expand HKEY_LOCAL_MACHINE
6. Expand SOFTWARE
7. Expand Microsoft
8. Expand Command Processor
9. Double-click "CompletionChar"
10. Replace the value that's there with 9 (ASCII equivalent of the TAB key)
11. Click OK
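The steps above can also be done without clicking through regedit, by importing a .reg file. This is a sketch of the equivalent change; back up your registry before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor]
"CompletionChar"=dword:00000009
```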

Now open a command prompt, start typing a command or directory path and press Tab to cycle through the possible completions.

These instructions were copied from Anders' blog post.
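If the reg.exe utility is available (it ships with the Windows 2000 Support Tools and is built into Windows XP and later), the same registry change can be made in one step from a command prompt. This is just an equivalent shortcut for the steps above, not part of the original instructions:

```
reg add "HKLM\SOFTWARE\Microsoft\Command Processor" /v CompletionChar /t REG_DWORD /d 9 /f
```

Open a new command prompt afterwards for the change to take effect.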

Setting The Environment

On my Windows machines I usually create a batch script (C:\LINUXENV.BAT) where I configure some environment variables, like PATH, to include the newly installed UnxUtils. I also add some other variables for development (not shown here) and some command shortcuts.

REM Put the UnxUtils tools at the front of the PATH
SET PATH=C:\Apps\UnxUtils;%PATH%
REM Point X11 clients at the local XMing server
SET DISPLAY=127.0.0.1:0.0
REM Linux-style command shortcuts
doskey alias=doskey $*
doskey clear=cls
doskey vi=vim $*

There are two ways to use this script. We can open a command prompt as normal in Windows (i.e. Start->Run->CMD.exe) and call C:\LINUXENV.BAT from within it, or we can create a shortcut that we can just click (see next section).

Creating A Shortcut

Instead of running the CMD.exe shell and then calling the C:\LINUXENV.BAT batch script to set up our environment, we can simply create a shortcut on our Desktop that, when clicked, opens a command shell (CMD.exe), loads LINUXENV.BAT and leaves the prompt waiting for us to input commands.

In the Desktop context menu (i.e. right click) choose to create a new shortcut. When asked for the command to run, put:

%comspec% /k C:\LINUXENV.BAT

And for the name put anything you like (i.e. Linux CMD). Now when you click that shortcut you will get a Windows command prompt, but with Linux power and flexibility.

Windows OSS Applications

These days there are many OSS applications that run natively on Windows and can make it even more functional and flexible. The best part is that most of these applications run equally well on Windows and on Linux, making the transition from one OS to the other less complicated.

For a comprehensive list of Windows OSS applications check this link: http://shsc.info/usefulwindowssoftware.

My personal favorites are:

1 - Firefox loaded with web development plugins
2 - Thunderbird
3 - ClamAV or AVG
4 - Spybot Search & Destroy
5 - Amaya for some casual HTML/CSS editing of my blog.
6 - SciTE editor (if Vim is not available)
7 - XMing to enable SSH X-Forwarding

Friday, April 06, 2007

File System Benchmarks for Postfix Mail Server

After some research I decided to use the XFS file system to host my next mail server (Postfix). This decision was based only on comments and suggestions I read in forums and on some benchmarks of Linux file systems published on the Internet [1].

I installed my new mail server with Postfix as MTA [2] and Courier as POP server [3], and left a free 60GB partition for benchmark testing. This way I was able to extract some hard numbers to validate my decision. The results show XFS is indeed the best option when running a mail server, at least for Postfix.

Test Environment

The test setup is very simple: I have one server with Ubuntu Server 6.06 LTS installed with the most recent updates and a second client machine with Kubuntu Feisty running Postal/Rabid [4] software used to load the server with SMTP/POP requests.

The machines specifications are:






         Server Machine               Client Machine
OS       Ubuntu Server 6.06 LTS       Kubuntu Feisty Fawn
CPU      Intel Xeon 2.40GHz           Intel Pentium III 800MHz
RAM      512MB                        256MB
Disk     SEAGATE ST373307LC 73408MB   Maxtor 6Y120L0 SCSI 122942 MB


On the server machine I had Ubuntu Server installed with Postfix and Courier-POP configured, and added 500 user accounts with Maildir mailboxes.

The client side is my desktop PC running Kubuntu Feisty where I simply run postal and rabid to load the server with a lot of SMTP and POP requests. I apply no limits on the number of requests per minute as we are testing the disk input/output throughput rather than the mail service itself.

Testing Procedure

On the server side I followed these steps:

  • Create a test partition with one of the file systems under consideration. Special care was taken to use the creation options known to increase each particular file system's performance, as stated in various benchmarks and forums.

    • mkfs.ext3 -J size=100 -m 1 -O dir_index,filetype,has_journal

    • mkfs.reiserfs -b 4096 -s 16386

    • mkfs.xfs -f -l size=64m -d agcount=16

    • mkfs.jfs -s 64



  • Mounted the newly created partition on a testing mount point. Again, I tried my best to use the recommended mount options to improve each file system's performance. In the case of ext3 and reiserfs I tested all three journaling modes they support.

    • mount -t ext3 -o noatime,nodiratime,data=journal

    • mount -t ext3 -o noatime,nodiratime,data=ordered

    • mount -t ext3 -o noatime,nodiratime,data=writeback

    • mount -t reiserfs -o noatime,nodiratime,notail,data=journal

    • mount -t reiserfs -o noatime,nodiratime,notail,data=ordered

    • mount -t reiserfs -o noatime,nodiratime,notail,data=writeback

    • mount -t xfs -o noatime,nodiratime,logbufs=8

    • mount -t jfs



  • Copied the necessary files from the root partition to the testing partition to create a fully functional chroot environment, and chrooted into it.

  • Once inside the chroot environment I started all services: first deleting all the syslogs, then starting sysklogd, postfix, courier-authdaemon and courier-pop, in that order. This way all mailboxes, logs and configuration files are read/written on the file system under test.
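As an illustration, the chroot steps above could look roughly like the outline below. The mount point /mnt/test and the exact set of copied directories are assumptions for the sketch, not the precise commands used:

```
# Copy enough of the root file system to make the chroot functional
cp -a /bin /sbin /lib /etc /usr /var /mnt/test/
chroot /mnt/test /bin/sh

# Inside the chroot: clean the logs, then start the services in order
rm -f /var/log/syslog /var/log/mail.*
/etc/init.d/sysklogd start
/etc/init.d/postfix start
/etc/init.d/courier-authdaemon start
/etc/init.d/courier-pop start
```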



On the client side I simply start postal and rabid to send unlimited SMTP and POP requests to the server:

postal -m 5 -r 24000 -t 5 -c 3 -s 0 server_ip userlist.txt
rabid -p 5 -c 5 -r 24000 -s 0 server_ip userlist.txt

The userlist.txt file contains the 500 user accounts, with passwords, that were created on the server after installation. The postal command above establishes five SMTP connections to the server and sends as many messages as it can, with sizes between 0 and 5 kbytes, to all 500 users. On the other side, rabid establishes as many POP connections to the server as it can from five different processes, downloading a maximum of five messages per connection.

Both postal and rabid were executed at the same time on the client machine to send concurrent SMTP/POP requests to the server. After 10 minutes we start "vmstat" on the server to obtain CPU and RAM load statistics:

sudo vmstat 60 30 > /var/log/mail.vmstat

This command outputs CPU, disk and RAM load statistics of the server for 30 minutes in one-minute intervals and saves them to the mail.vmstat file.

Once vmstat finishes we take the mail.log and mail.vmstat files and analyze them using some custom Ruby scripts.
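The kind of per-minute counting those scripts do can be sketched in a few lines of awk. The sample log lines below are made up for illustration; the real analysis parsed the actual mail.log:

```shell
# Create a tiny sample of Postfix log lines (made up for illustration)
cat > /tmp/mail.log.sample <<'EOF'
Apr  6 12:01:03 mail postfix/local[123]: 0A1B2C: to=<user001@example.com>, status=sent (delivered to maildir)
Apr  6 12:01:41 mail postfix/local[124]: 0A1B2D: to=<user002@example.com>, status=sent (delivered to maildir)
Apr  6 12:02:15 mail postfix/local[125]: 0A1B2E: to=<user003@example.com>, status=sent (delivered to maildir)
EOF

# Count successful deliveries per minute: field 3 is HH:MM:SS,
# so key the count on HH:MM for every line marked status=sent
awk '/status=sent/ { split($3, t, ":"); count[t[1] ":" t[2]]++ }
     END { for (m in count) print m, count[m] }' /tmp/mail.log.sample | sort
```

For the sample above this prints two lines, one per minute, with the delivery counts for that minute.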

Results


This graph shows the average number of successful SMTP transactions per minute between Postfix (server) and postal (client). Each connection carries up to 3 messages with sizes between 0 and 5 kbytes. No rocket science is needed to see that XFS outperforms all other file systems by a large margin. I managed to get a maximum of 1700 messages delivered per minute using XFS, while JFS follows with about 900 messages per minute. ReiserFS and Ext3, in all journaling modes, never passed 700 messages per minute.


This graph shows the average number of successful POP transactions between Courier-POP (server) and rabid (client) per minute. Each connection established by rabid will download and delete 5 messages from the server.

We can see here that, contrary to the SMTP results, ReiserFS and Ext3 are the best performers with very high averages. XFS and JFS do not perform as well, but at 1800 and 1300 transactions per minute respectively they are not that bad.

The poor showing of XFS and JFS here agrees with the fact that both perform badly when deleting large numbers of small files. If we did not delete the messages from the server after download (or used IMAP instead of POP) we might get better numbers.


This graph shows the same as the first graph, but instead of connections per minute it shows the actual number of bytes transmitted per minute. This confirms that we actually read/write messages from the disk.

Because we ran postal and rabid at the same time we do not have valid statistics for the number of bytes per minute downloaded via POP: the POP download rate is limited by the SMTP upload rate. In other words, the POP client cannot download more from the server than what the SMTP client sends to it.

This is easy to verify in the mail log files, where you will see a lot of POP connections that retrieved zero bytes. You can also verify it by graphing the number of bytes per minute for POP: all bars will be equal to or slightly lower than, but never larger than, the SMTP values.



Here we have the CPU utilization (real) obtained by running "vmstat" on the server while it was receiving SMTP/POP requests. We see that XFS is less CPU intensive than all the other file systems except JFS, which outperforms the rest by a large margin.

This result is surprising, as most benchmarks state that XFS is the most CPU hungry of these file systems. I took this CPU value from the vmstat output by adding the user and system CPU utilization columns. If this is not a good CPU utilization measurement please let me know.


I am not sure what this graph represents, but the "vmstat" man page describes the value as "wa: Time spent waiting for IO". Since we are interested in file system performance I assume this value is important, so I show it here for those who know what it is. To my understanding it represents the average CPU time per minute wasted waiting for IO, so high values are bad and low values are good. Comments on this graph are appreciated.

Conclusions

XFS simply rocks as a file system for SMTP and POP servers, period. The results I got showed that XFS is the best for delivery of large amounts of mails, it is good for POP and does not consume all CPU.

After Thoughts

These results clearly show that the file system used can greatly affect application performance (mail in this case). It would be interesting to test other applications that make heavy use of the disk, like file servers and/or database servers, or to test other SMTP (qmail) or POP (Cyrus, Dovecot) servers and compare their performance over different file systems against Postfix/Courier.

This is my first benchmark, so I am no expert and may have made mistakes. Any comments to improve or correct my tests are welcome.

References

[1] Best File System for Server Usage
[2] Basic SMTP Server Setup In Kubuntu
[3] Basic POP Server Setup In Kubuntu
[4] Postal - SMTP and POP benchmark program.

Wednesday, April 04, 2007

Use time command to get CPU usage too.

The Linux time command is a nice little utility that tells you how much time (CPU and real time) a command or program takes to finish. For example, to find out how long it takes your computer to find a certain file in your home directory you can use:

time find /home -name "filename"

real 0m16.365s
user 0m0.208s
sys 0m0.492s

The time command simply runs whatever program you pass it, along with all the parameters you give it, and hands you back some statistics about the program's execution.

If you read the manual page you will see that time can tell you a lot more than just the execution time of programs/commands.

In Bash the way to make time print other values in other formats is to set the "TIMEFORMAT" variable before calling time. For example, to get the above time values plus the CPU percentage we use:

export TIMEFORMAT="%E real,%U user,%S sys, %P cpu"

time find /home -name "*.txt"

7.184 real,0.292 user,0.504 sys, 11.08 cpu

Note that the -f switch described in the man page belongs to the external /usr/bin/time program and does not work with the time builtin of the Bash version that comes with Ubuntu Server and Kubuntu Edgy/Feisty.
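As a quick check that TIMEFORMAT is being honored, the report can be captured and inspected. The time builtin writes its report to stderr, so that is what we redirect (the file name /tmp/timeformat.out is arbitrary):

```shell
# TIMEFORMAT only affects bash's builtin time, so run it under bash
# explicitly; the builtin writes its one-line report to stderr.
bash -c 'TIMEFORMAT="%R real, %U user, %S sys, %P cpu"; { time sleep 0.2; } 2> /tmp/timeformat.out'
cat /tmp/timeformat.out
```

The printed line follows the format string, e.g. real/user/sys seconds followed by the CPU percentage.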