Tag Archives: linux

KeePassX: The Perfect Password App

Recently, I’ve been having some trouble with passwords. Either the login name is a string I never use, and therefore never commit to memory (like my real phone number that I mask with Google Voice), or the password policy forces me to use a password that I’ll never remember (like sites that keep track of your past passwords, or require 11 characters of alternating symbols, letters and numbers, etc.). Since I use spamgourmet, any site that requires an email address as a username is another puzzle – sometimes I even have to log in there to find the right one. Also, I have a concern that if I die, my wife will have real trouble getting into all my accounts, so it would be nice if I could just leave her one password that gives her access to all that information. So, I broke down and started using a password organizer app. Now, I have always been averse to using these applications for a variety of reasons (online companies having all your passwords, plaintext in swap space / memory, keyloggers, insecure encryption, etc.), but I managed to find one that’s open source, never caches my master password, is widely used, and is extremely cross-platform. KeePassX is the name, and it’s available in Ubuntu. Installing it is left as an exercise to the reader. Once you get in there and add a few passwords, it starts to look something like this:

It allows you to mask both your usernames and passwords (both optionally) from the top-level view. It has clipboard capabilities, so you can just copy your password to the clipboard by clicking a button, and never see it on the screen in plain text. The security is really well done. But the big realization today was that there’s an Android app! The app only needs the kdb file from any instance of the application, and of course the password to decrypt it. It’s available in the market too! But how do you sync changes between your main desktop and your phone? Dropbox! Using the Dropbox mobile app, I simply synced the kdb file onto the phone, and then opened it. KeePassDroid popped up and asked if I wanted to make it the default database, and I checked the box. Done.

Now, whenever I make a change, it syncs over Dropbox like magic.


Fill an LVM volume group completely with a single logical volume

I learned a cool LVM trick today – how to resize a logical volume to use a certain percentage of a volume group.  Since I just have one logical volume in the group, I did the following:

[root@nyu ~]# lvextend -l +100%FREE /dev/diclonius/vector
  Extending logical volume vector to 7.28 TiB
  Logical volume vector successfully resized

Thanks Redhat Documentation!
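One thing to remember is that extending the LV doesn’t grow the filesystem sitting on top of it. What that step looks like depends on the filesystem – for ext3/ext4 it would be something like this (the filesystem type here is an assumption, and lvextend’s -r flag can do both steps at once):

resize2fs /dev/diclonius/vector
# or, in a single step:
# lvextend -r -l +100%FREE /dev/diclonius/vector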

In other news, I ran a performance test on my 5-drive RAID-5 using HD204UI drives from Samsung:

[root@nyu tmp]# dd if=/dev/zero of=foo count=5 bs=$((1024*1024*1024))
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 18.1434 s, 296 MB/s
[root@nyu tmp]# dd if=foo of=/dev/null                         
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB) copied, 15.2682 s, 352 MB/s

So, 350MB/sec reads and 296MB/sec writes! Not bad!!
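If you want to repeat a test like this without the page cache skewing the read numbers, dropping the caches first (or reading with O_DIRECT) is a quick way to do it – a sketch, not what was run above:

sync && echo 3 > /proc/sys/vm/drop_caches   # flush dirty pages and drop the page cache (as root)
dd if=foo of=/dev/null bs=1M iflag=direct   # re-read the file, bypassing the cache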

Why RAID-Z isn’t appropriate for me (or for almost any home user)

So, ZFS is cool.  OpenSolaris derivatives are cool.  RAID-Z is cool.  But it lacks one simple feature that other software RAID solutions handle – the ability to grow an existing array by adding disks to it (widening the stripe).  For instance, let’s just postulate that you have three 2TB hard disks in a RAID-5, and you want to add 2 more to make a 5-disk volume.  Well, with ZFS, you have 2 options:

  • Back up everything on the current volume, destroy it, and create a 5-drive RAID-Z from scratch
  • Buy another 2TB drive, create a new vdev out of the 3 new drives, and add it to the zpool

Now, at first, the second option doesn’t sound too bad – until you realize that you’ve basically created a false RAID-Z2 (RAID-6), since you’ve got 2 parity disks.  It’s false because if 2 disks fail in the same vdev, you’re cooked, even though you could lose one in each and be fine.  Also, you’re wasting money on an extra disk when you’re a simple home user who wants to scale in small increments.
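Concretely, that second option boils down to something like this (the pool and disk names are made up for illustration):

# pool "tank" already contains one 3-disk raidz vdev;
# this bolts a second, independent 3-disk raidz vdev onto it
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0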

Neither of these issues is a problem for larger deployments – they generally already have disk space for backups (or already have all the data backed up in the first place), or are building the entire thing from scratch to store future data.  Buying extra disks isn’t a problem either – they have money.  Home users do not.

So, until this is possible, I’ll be using mdadm or a similar solution on OpenFiler or another Linux-based OS.  This is a real shame; I really wanted to start using OpenIndiana.
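For comparison, this is the kind of in-place reshape mdadm is happy to do (device names made up; the reshape runs for many hours, and you want a backup first anyway):

mdadm /dev/md0 --add /dev/sde1 /dev/sdf1                                # add the two new disks
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0.backup   # reshape the 3-disk RAID-5 into a 5-disk one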

Maximizing rsync performance between Linux and Solaris

I am now the proud owner of an OpenIndiana server, and I’ve been moving files to it over gigabit ethernet for the past few hours. During this time, I’ve made some important realizations, and I figured I’d note them here for everyone’s benefit.  My transfers started off at about 10MB/s sustained, which is right around 100Mbit/s speeds, but on a gigabit network.

1. Ethernet Cables

Something we don’t think about too often these days is the type/quality of Ethernet cable we’re using in our homes.  I certainly thought I was using CAT5e until I actually looked today and found my desktop machine was hooked up with a plain-Jane CAT5 cable.  Yuck – that’s in the garbage now.  After that change, I noticed a small improvement in sustained transfer speed, but still holding at around 12MB/s.

2. MTU

If you have 2 gigabit cards that support it, and a network switch that supports it, you can get better speeds by increasing the maximum transmission unit of your network card.  In Linux, we do it like this:

hank☢barad-dur:~ % sudo ifconfig eth0 mtu 8000
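(On distributions that have retired ifconfig in favor of iproute2, the equivalent would be:)

sudo ip link set dev eth0 mtu 8000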

In Solaris, or its derivatives, you do it like this:

root@nyu:~ # ifconfig e1000g0 mtu 8170

You also have to enable that mtu in /kernel/drv/e1000g.conf! I found that out thanks to this post. It’s quite easy – this is what mine looks like:

MaxFrameSize=2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0; 
# 0 is for normal ethernet frames. 
# 1 is for upto 4k size frames. 
# 2 is for upto 8k size frames. 
# 3 is for upto 16k size frames.

Each position in the list corresponds to the instance number in the interface name, so the first entry is e1000g0 (set to 2 in this example), the second is e1000g1, and so on. My switch only supports 9K jumbo frames, so the 8K setting was fine.
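To check that jumbo frames actually make it across the switch, a don’t-fragment ping sized just under the MTU is a quick sanity test (8000 bytes minus 28 bytes of IP and ICMP headers; the address is whatever sits at the other end of the link):

ping -M do -s 7972 192.168.1.8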

This got me a little more stability, but I was still basically capped at 100Mbit (13MB/s). Time to roll out the big guns!

3. Rsync compression

The -z option in rsync compresses files before they’re sent.  I have nice beefy CPUs on both ends, so I thought that wouldn’t hurt – I was completely wrong about this.  For some reason it slows down the transfer by about 50% here.  CPU usage is very low on both machines, so this is really confusing, but as a general rule, do not use compression when transferring files with rsync on a LAN.  So, now that it’s off, I’m up to about 200Mbits/s.  Not bad, but we can do better!

4. Rsync method

So, when you run an rsync like this:

rsync -arxWh --progress . root@192.168.1.8:/diclonius/data

You’re telling rsync to log in (using rsh or ssh) to 192.168.1.8 using the root account. Now, if rsh is selected, then everything is peachy and you’ll get great rates. But if ssh is selected, you’ll pay for the encryption, and your throughput will be reduced significantly (not to mention CPU usage will be higher). There’s a fix for this – on the destination system, run an rsync daemon. The instructions to do so can be found all over, but these were helpful for me. I set up the rsyncd.conf and secrets file, and just ran rsync --daemon, which put itself in the background. I then executed this on the sending machine:

rsync -arxWh --progress . rsync://hank@192.168.1.8/data

And immediately got another 10MB/s (!!) bump in speed. So, now files are cruising over the network at around 300Mbits/s, which is good enough for now. If I didn’t have a crappy Marvell onboard network interface on my host machine, and actually got a real gigabit card (I have a PCI-E one in the mail that will supposedly do full gigabit), this would be a lot faster. For now, I’ll just have to deal with 1/3 of its potential.
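A minimal rsyncd.conf along those lines looks something like this (the module name, user, and path come from the commands above; the remaining settings are just reasonable defaults, not necessarily the exact file used here):

# /etc/rsyncd.conf
uid = root
gid = root
use chroot = yes

[data]
    path = /diclonius/data
    read only = false
    auth users = hank
    secrets file = /etc/rsyncd.secrets

The secrets file is just user:password pairs, one per line, and it has to be chmod 600 or rsync will refuse to use it.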

Telling bash to step aside for zsh

So, let’s say that you want to change your shell to zsh, but fall back to bash if it isn’t available on whatever system you’re using. This is useful if you use something like NIS or LDAP with home directory NFS, since you’ll be sshing around and bringing your .bashrc with you everywhere. The solution is pretty simple – just add this to the bottom of your .bashrc:

hash zsh 2>&- && exec zsh

Update: This breaks X-windows. Need to figure out why…
Update: My esteemed colleague informed me that I should check for an interactive shell first – the shells X spawns at login are non-interactive (an unset PS1 is the give-away), so this keeps them on bash:

if [ ! -z "$PS1" ]; then
  hash zsh 2>&- && exec zsh -l
fi

OpenIndiana Jones

So, after a bunch of research about building a DIY NAS, I decided to buy a whole bunch of hardware to do so. But, the real question was which software to use. FreeNAS seems to be the most popular solution, and I heard it was better than something called OpenFiler. Then I stumbled across NexentaStor, which is free for any NAS less than 18TB in size, which is fine for me. I was basically ready to go with that, but then I heard about OpenIndiana and the napp-it web gui. Basically, OpenIndiana is the result of OpenSolaris getting closed by Oracle. Since Oracle shut down the openness, the last open version of the operating system has been “sporked” into OpenIndiana. I just installed it in a VM, and I’m impressed, especially with the pool management of zfs.

But, being a hardcore Linux user for about 8 years, I’ve gotten used to certain things working a certain way. This post is just a little note to myself, and to others potentially, about what I didn’t like about the base install, and how I fixed it.

vim

So, nicely, the machine comes with vim 7.2 installed, which is fantastic.  The problem is it’s in compatible mode by default.  Gotta shut that down.  Solaris apparently keeps the vimrc file hidden away in /usr, so we have to do this:

echo "set nocompatible" | sudo tee -a /usr/share/vim/vimrc

I also added the following lines for good measure to the same file using vim:

syntax on
set bg=dark
set ts=4
set sw=4

Now I have a real working copy of my favorite editor. That’s more than half-way to happiness for me. More to come.

Update: grep

So, now that I’m getting settled, I’ve been doing a bunch of shell work, and there’s something I noticed:

root@nyu:/etc # grep -R 2,2 *
grep: illegal option -- R 
Usage: grep -hblcnsviw pattern file . . .

That’s right – the default grep is crappy Solaris grep, not good old GNU grep! So, I checked it out, and the way to solve this is to use ggrep, which I will alias to grep, of course.

root@nyu:/etc # alias grep="ggrep"
root@nyu:/etc # grep 
Usage: ggrep [OPTION]... PATTERN [FILE]...
Try `ggrep --help' for more information.
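An alternative, assuming the usual OpenIndiana layout, is to put the GNU userland first in PATH so that plain grep (and friends) resolve to the GNU versions without any aliases:

export PATH=/usr/gnu/bin:$PATH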

mkvmerge + mplayer Sadness Fix

I have been having some issues with newer versions of mkvmerge creating files that make mplayer cry. I finally messed around with the options enough to discover what (I think) was the problem. Header compression reduces the output file size, but there are some compatibility problems with files that have this option enabled. Here’s what I finally ended up with:

mkvmerge -o video.mkv --default-language eng \
  --compression -1:none --default-duration 0:41.708ms \
  --nalu-size-length 0:4 -A video.264 --compression -1:none \
   audio.dts subs.srt

Note that this doesn’t set the language for the audio and subtitle tracks, but there’s only one of each, so I don’t really care. The key point is that since this is an elementary H.264 stream, I have to set the default-duration and nalu-size-length options manually, since they’re not encoded in the file. This default-duration corresponds to a 23.976fps frame rate, so you have to adjust it for your source.
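(The math is just 1000 ms divided by the frame rate: 1000 / 23.976 ≈ 41.708 ms, which is the value above; a 25fps source would use --default-duration 0:40ms instead.)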

Really, the header compression option should be disabled by default; I have no idea why it’s on. It does save about 30MB of space in the output file, but that’s moot if it ruins compatibility.

LXPanel Plugins: Simplified

I started using LXDE last night, and I’m really liking it. It seems to take a whole ton less memory than Gnome did, and as a result my Intel Atom box runs a lot smoother because it doesn’t have to continuously swap. Anyway, I’ve been customizing some things, and I eventually found myself in plugin development land.

LXPanel is the gnome-panel equivalent for LXDE. There doesn’t seem to be a Trash can plugin for it, and I think that’s just sad. So, I decided to learn how these plugins are coded and make one of my own. I’m not sure if I’ll end up succeeding, but at least it will be a learning experience either way.

I found this page, which outlines a simple plugin that doesn’t do anything. It seems to be a good starting point. I followed the directions and ended up looking at an autoconf project that required a lot of work to get running in Ubuntu. Even though I finally got it to compile, I decided it took way too long to do so. I converted it to a scons project, and now it’s just one directory with a simple build script:
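The SConstruct boils down to something like this (a minimal sketch – the plugin source name and the lxpanel header location are assumptions, not necessarily what the real project uses):

# SConstruct: build an lxpanel plugin as a plain .so
# CPPPATH is wherever the lxpanel plugin headers live on your system.
env = Environment(CPPPATH=['/usr/include/lxpanel'])
env.ParseConfig('pkg-config --cflags --libs gtk+-2.0')
# No "lib" prefix, so lxpanel can load the result directly as a plugin.
env.SharedLibrary('example', ['example.c'], SHLIBPREFIX='')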

The shared object that’s output is exactly the same size as the one created by autoconf, so that’s good enough for me at the moment. Time to keep hammering away. The code for the example is available here for reuse.

Moving the mouse with Python

I’ve been using KDE4 in Ubuntu, and I really like it – it’s slick, has everything I need, and it all seems to gel together pretty well. Yet, the power management is really starting to make me angry. I’ve turned all of it off, checked ps and other tools for any signs that it’s still running, and despite my efforts, my screens still turn black after 10 minutes unless mplayer is running. So, I decided to fix that using Python, which actually turns out to be pretty nifty. I got the original idea from here, and modified it to loop a bit. Here’s the result:
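What follows is a minimal sketch of that idea using python-xlib – not necessarily the exact original, and the timing values are arbitrary:

#!/usr/bin/env python
# Nudge the pointer by one pixel and back every few minutes so the
# screen never counts as idle. Stop it with Ctrl-C.
import time
from Xlib import display

d = display.Display()
root = d.screen().root
while True:
    ptr = root.query_pointer()
    root.warp_pointer(ptr.root_x + 1, ptr.root_y + 1)
    d.sync()
    time.sleep(0.2)
    root.warp_pointer(ptr.root_x, ptr.root_y)
    d.sync()
    time.sleep(240)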

This moves the mouse around just a tiny bit, and works well enough to watch flash video for extended periods of time. To kill it, just hit Ctrl-c. Thanks, Python!

Extracting M2TS length from a BDMV directory in Linux

I was having the hardest time getting various programs to echo the runtime of m2ts files in Linux, and it turns out someone wrote a parser for the files in the BDMV/PLAYLIST directory, which have all of this information.

  • Get bdtools. I got Version 1.4. You can find it here.
  • ./configure && make && sudo make install
  • Try running mpls_dump. I got this error when running:
    mpls_dump: error while loading shared libraries: libbd-1.0.so.1: cannot open shared object file: 
    No such file or directory

  • To fix it, do this:
    echo "/usr/local/lib" | sudo tee -a /etc/ld.so.conf
    sudo ldconfig