Even for the Bash aficionado, the mkfifo command is likely to be one of the lesser used in your collection. It creates a pipe for sharing data, connecting two running utilities with a kind of command line wormhole. Data sent into one end will instantaneously appear at the other.
Before we look at how to use it, it's worth going over how we typically see pipes. If you've used the shell for anything other than scaring your friends with cat /dev/random, you'll be used to the idea of pipes. They're most often used to stream the output of one program into the input of another. A common use is when there is too much textual output from one command to read. Piping this output into another command - usually either less or more - lets you pause and page through the output in your own time:
cat /var/log/messages | less
In this instance, the pipe is temporarily created for the execution of a single command, but using mkfifo it's possible to create persistent pipes that you can use for similar tasks.
The 'fifo' part of the command refers to the nature of the pipe - the data that's first in is first out. Creating the pipe itself is as easy as typing mkfifo, followed by the name you wish to call it. It's also possible to set the permissions for the pipe (using the --mode parameter) so you can restrict access. Once the pipe is created, you just need to route data into it. Here's a brief example. First we create the pipe, and use tail -f to output any data that's sent to it:
mkfifo fifo_pipe
tail -f fifo_pipe
The next step, usually from another console or user account (if the permissions have been set), is to send data to the pipe. Typing echo "This is a test" >> fifo_pipe will send the test message, which will itself be output by the tail command we attached to the pipe.
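If you want to restrict who can write to the pipe, the --mode option mentioned above takes a normal octal permission mask. A minimal sketch (the pipe name is just an example):
mkfifo --mode=600 private_pipe
ls -l private_pipe    # the leading 'p' in prw------- confirms it's a pipe, readable and writable only by you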
Remote control MPlayer
There are two types of people in this world: those who think MPlayer is the best media player in the history of existence, and those who are wrong. One of MPlayer's lesser-known features is the ability to control it from a console, a shell script or even over the network. The secret to this trick is in MPlayer's -slave option, which tells the program to accept commands from the stdin stream instead of keystrokes. Combine this with the -input option, and commands are read from a file, or a FIFO. For instance, try this in one terminal:
mkfifo ~/mplayer-control
mplayer -slave -input file=/home/user/mplayer-control filetoplay
Then, in another terminal or from a script, enter:
echo "pause" > ~/mplayer-control
This command will pause the currently running MPlayer, and issuing the command again will resume playback. Note that you have to give the full path of the control file to MPlayer, with /home/user and so forth, because ~/mplayer-control alone won't work. There are plenty of other commands you can send to MPlayer - indeed, any keyboard operation in the program triggers a command that you can use in your control script. You can even operate MPlayer from another computer on the network, using SSH or Netcat. See this example:
ssh user@host "echo pause > mplayer-control"
Here, we log in to a remote machine (host) with the username user, and run a command to send pause to the remote machine's MPlayer control file. Of course, this can be made much faster if you have SSH key authentication enabled, as you don't need to give the password each time.
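pause isn't the only command the slave protocol understands - running mplayer -input cmdlist prints the full list. A few more examples, assuming the same control pipe as above:
echo "seek 30" > ~/mplayer-control    # jump forward 30 seconds
echo "mute" > ~/mplayer-control       # toggle muting
echo "quit" > ~/mplayer-control       # shut MPlayer down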
Share files the easy way
File sharing with Samba or NFS is easy once you've got it set up on both computers, but what if you just want to transfer a file to another computer on the network without the hassle of setting up software? If the file is small, you may be able to email it. If the computers are in the same room, and USB devices are permitted on both, you can use a USB flash drive, but there is also another option.
Woof is a Python script that will run on any Linux (or similar) computer. The name is an acronym for Web Offer One File, which sums it up fairly well, as it is a one-hit web server. There's nothing to install; just download the script from the homepage at www.home.unix-ag.org/simon/woof.html and make it executable, then share the file by entering:
./woof /path/to/myfile
It will respond with a URL that can be typed into a web browser on another computer on the network - no software beyond a browser is needed. Woof will serve the file to that computer and then exit (you can use the -c option to have it served more times). Woof can also serve a directory, like so:
./woof -z /some/dir
This will send a gzipped tarball of the directory, and you can replace -z with -j or -u to get a bzipped or uncompressed tarball. If others like Woof and want to use it, you can even let them have a copy with:
./woof -s
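On the receiving machine you don't even need a browser - anything that speaks HTTP will do. The address and port below are just placeholders for whatever URL Woof prints when it starts:
wget http://192.168.1.10:8080/myfile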
Find lost files
Have you ever saved a file, maybe a download, then been unable to find it? Maybe you saved it in a different directory or with an unusual name.
The find command comes in handy here:
find ~ -type f -mtime 0
will show all files in your home directory modified or created today. By default, find counts days from midnight, so an age of zero means today.
You may have used the -name option with find before, but it can do lots more. These options can be combined, so if that elusive download was an MP3 file, you could narrow the search with:
find ~ -type f -mtime 0 -iname '*.mp3'
The quotation marks are needed to stop the shell trying to expand the wildcard, and -iname makes the match case-insensitive.
Incorrect permissions can sometimes cause obscure errors. You may, for example, have created a file in your home directory while working as root. To find files and directories that are not owned by you, use:
find ~ ! -user ${USER}
The shell sets the environment variable USER to the current username, and a ! reverses the result of the next test, so this command finds anything in the current user's home directory that is not owned by that user. You can even have find fix the ownership:
find ~ ! -user $USER -exec sudo chown ${USER}: {} \;
The find man page explains the use of -exec and many other possibilities.
Bandwidth hogs
Have you ever noticed that your internet connection goes really slowly, even though you're not downloading anything? Because of the way most asymmetric broadband connections are set up, if you saturate the upload bandwidth, downloads become almost impossible.
This is because of the way the traffic is queued by the modem and the ISP. Even the slowest and lowest-bandwidth operations, like using a remote shell or looking up a DNS address, become painfully slow or time out. If you're using something like a BitTorrent client to upload, you can limit the upload rate, which avoids this problem. Some other programs, like rsync, have a similar feature, but many do not. Also, running two such programs will still cause problems, if each has been told it can use 90% of your upstream bandwidth.
One solution is a handy script called Wonder Shaper, which uses the tc (traffic control) command to limit overall bandwidth usage to slightly below the maximum available. Get it from http://lartc.org/wondershaper, put the wshaper script somewhere in your path - /usr/local/bin is a good choice - and edit the start of the script to suit your system. Set DOWNLINK and UPLINK to just below your maximum bandwidth (in kilobits/s) and run it. You should now find that heavy uploads, like putting photos on Flickr, no longer drag your modem to its knees. When you're happy with the settings, set it to run at boot with whatever method your distro uses.
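As an illustration, the variables at the top of the wshaper script look something like this - the interface name and the rates here are only examples, so substitute figures just below what your own connection can manage:
DOWNLINK=7000   # downstream limit in kilobits/s
UPLINK=400      # upstream limit in kilobits/s
DEV=eth0        # the interface that faces your modem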
Fix broken passwords with chroot
Whether you're a sysadmin in charge of mission-critical data centres or a home tinkerer, Live CDs are wonderful to have around for when you get into trouble. If you manage to mess something up, you can boot from a Knoppix, Ubuntu, GRML or one of several other Live CDs, mount your hard disk partitions and edit whatever files are needed to recover from your troubles. However, there are some things that can't be fixed this easily, because they need you to be in the system that needs fixing.
The solution is to use the chroot (change root) command, which sets up a working environment within a directory. Note that the root in the name refers to the root directory, not the root user (or superuser), although the root user is the only one allowed to run this command. Chroot sets up a 'jailed' system within the specified directory, one that has no access to the rest of the system and thinks that the given directory is the root directory. To fix a broken password, for example, you could boot from a Live CD, mount your disk's root filesystem at /mnt/tmp and do this:
sudo -i
mount --bind /dev /mnt/tmp/dev
mount -t proc none /mnt/tmp/proc
chroot /mnt/tmp /bin/bash
The first line is needed to become root on Ubuntu. The next two make the /dev/ and /proc directories available inside the chroot, and the last one enters the chrooted directory, running a Bash shell. Now you can run passwd, or any other command you need, and finish off with logout or press Ctrl-D to exit.
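Inside the chroot you are effectively root on the installed system, so resetting a password is just a matter of running passwd. A minimal sketch (the username is hypothetical), including tidying up the mounts afterwards:
passwd someuser                       # set a new password for the locked-out account
exit                                  # leave the chrooted shell
umount /mnt/tmp/proc /mnt/tmp/dev     # undo the extra mounts before rebooting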
Password-free SSH
Using SSH to connect to a remote computer is convenient, but it has a couple of drawbacks. One is that you have to type the password each time you connect, which is annoying in an interactive shell but unacceptable with a script, because you then need the password in the script. The other is that a password can be cracked. A long, complex, random password helps, but that makes interactive logins even more inconvenient. It's more secure to set SSH up to work with no passwords at all. First you need to set up a pair of keys for SSH, using ssh-keygen like this to generate RSA keys (change the argument to dsa for DSA keys):
ssh-keygen -t rsa
This creates two files in ~/.ssh: id_rsa (or id_dsa) with your private key and id_rsa.pub with your public key. Now copy the public key to the remote computer and add it to the list of authorised keys with:
cat id_rsa.pub >> ~/.ssh/authorized_keys
You can then log out of the SSH session and start it again. You will not be asked for a password, although if you set a passphrase for the key you will be asked for that. Repeat this for each user and each remote computer. You can make this even more secure by adding
PasswordAuthentication no
to /etc/ssh/sshd_config. This causes SSH to refuse all connections without a key, making password-cracking impossible.
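If your distribution ships the ssh-copy-id helper, it copies the key and appends it to authorized_keys on the remote machine (with sensible permissions) in one step:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remotehost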
Block script kiddies
Are you fed up with your system log filling up with reports of hundreds (or even thousands) of failed SSH login attempts as script kiddies try to get on your machine?
These do no harm as long as they fail, but they're still annoying. Thankfully, there are a number of ways to avoid them. The best - provided you will never need SSH access from outside your network - is to close port 22 on your router; then no-one can get in. Another option is to run a program like Fail2ban (http://fail2ban.sourceforge.net) or DenyHosts (www.denyhosts.net). These watch your log files for repeated failed login attempts from the same IP address, then add that IP address to your firewall rules to block any further contact from there for a while.
The third option is ridiculously easy. Attempts to crack SSH generally assume it runs on the standard port 22; change that to a random, high-numbered port and the crack attempts magically disappear. Edit /etc/ssh/sshd_config and change the Port directive to something like this:
Port 31337
and restart sshd. The only drawback of this is the inconvenience of having to add this port number to the ssh command every time you log in, but you can use an alias to take care of that:
alias myssh='ssh -p 31337'
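An alternative to the alias is a per-host entry in ~/.ssh/config, which also works for scp and sftp; the host names here are only examples:
Host myserver
    HostName myserver.example.com
    Port 31337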
Reclaim disk space
Filling a partition to 100% can have an unpleasant effect on your system. When services and other programs cannot write to their log files, or cannot save data in /var, you could be in trouble. These programs won't be able to save their data, and typically quit out (or, in some extreme cases, crash dramatically!). To avoid this, the ext2 and ext3 filesystems reserve 5% of their capacity for only root processes to use. This is a good idea, but 5% is a lot on large drives - for instance, it's 25GB on a 500GB drive. Also, there is no need to reserve any space on a filesystem not used for root files, such as /home.
The good news is that not only is this 5% not hardcoded into the filesystem, it can be changed on the fly without disturbing your data and files. Tune2fs is used to tune various parameters of an ext2 (or ext3) filesystem. It can be used to change the volume label or the number of mounts between forced runs of fsck, and a host of other, more esoteric settings, but the options we are interested in here are -m and -r. The former changes the percentage of filesystem blocks reserved for the root user, while the latter uses an absolute number of blocks. So:
tune2fs -m 2 /dev/sda1
reduces the reserved area to 2% of the filesystem, which may be more appropriate if you have a large / or /var filesystem. If you're using a drive of 500GB or larger, this is the best option.
This line of code:
tune2fs -r 0 /dev/sda1
sets the filesystem to have no reserved blocks, a good setting for /home, which doesn't need a reserved area for the superuser.
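If you want to check what a filesystem is currently set to before changing anything, tune2fs can also report its settings:
tune2fs -l /dev/sda1 | grep -i 'reserved block count'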
Create packages
Downloading an application's source code and compiling it yourself is a straightforward task with 90% of the software out there, but it can cause problems with dependencies. While the various package managers have ways of working around this, there is another way.
When building from source using the standard autotools method of ./configure && make && make install, install CheckInstall first. You can get this from www.asic-linux.com.mx/~izto/checkinstall, although it may well be in your distro's repositories. Run this instead of make install and, instead of installing the new files directly to your filesystem, it first builds a package and then installs that. CheckInstall works with Deb, RPM and Slackware packages. You can specify the type in a config file, or it will ask when you run:
./configure && make && checkinstall
Apart from the package type, CheckInstall asks for some other details. Most of these are optional, or can be left at the defaults, but make sure the name matches the older version you are replacing, otherwise your package manager will get confused. Installing with CheckInstall also makes it simpler to remove the package, as there is no need to keep the source directory around, and some programs don't have a make uninstall option anyway.
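Because the result is a normal package, removal later goes through the package manager rather than the source tree. On a Deb-based system, for example, it's something like this - the package name is whatever you told CheckInstall to use:
sudo dpkg -r mypackage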
Get your cds in order
True Unix hackers know that changing directory can be done in all sorts of different ways, and with all sorts of different features, so soon everyone learns that the humble cd command can actually be their best friend. You should already know that cd ~ takes you to your home directory, but real hackers don't waste two keystrokes on nothing: just type cd to get the same result. If you just straighten that squiggle a little, ~ becomes - and you get cd -, the command to navigate to and from the previous directory.
For more advanced users, cd - just isn't enough, because it only lets you go between the current directory and the previous one. A better system is to use pushd and popd in place of cd. So, rather than typing cd mydir use pushd mydir - it remembers the directory where you were, then cds into mydir. You can run this on all sorts of different directories, and Bash will remember your entire trail. When you want to step backwards, just type popd to go to the previous directory.
Finally, don't you hate it when you're in a symlinked directory and you've no idea where you are? Worse, running pwd to print the working directory makes it look like you're not in a symlink at all. If this happens to you, just use the -P parameter (ie pwd -P) to make it resolve the symlink and show you where you really are. And if you want to cd into the real directory rather than the symlink, just use cd `pwd -P`.
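A quick illustration of the directory stack in action (the paths are arbitrary):
pushd /etc        # remember where we are, then change to /etc
pushd /var/log    # push again; the stack now holds two saved directories
dirs -v           # list the stack, numbered
popd              # drop back to /etc
popd              # and back to where we started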
Reverse SSH
SSH is one of the most versatile tools for Linux, but most people only ever use it one way - to use the server to send data to the client. What you might not know is that it's also possible to switch the usual logic of SSH and use the client to send data to the server. It seems counterintuitive, but this approach can save you having to reconfigure routers and firewalls, and is also handy for accessing your business network from home without a VPN.
You'll need the OpenSSH server installed on your work machine, and from there you need to type the following to tunnel the SSH server port to your home machine:
ssh -R 1234:localhost:22 home_machine
You'll need to replace home_machine with the IP address of your home machine. We've used port number 1234 on the home machine for the forwarded SSH session, and this port needs to be both free to use and not blocked by a local firewall. Once you've made the connection from work, you can then type the following at home to access your work machine:
ssh workusername@localhost -p 1234
This will open a session on your work machine, and you will be able to work as if you were at the office. It's not difficult to modify the same procedure to access file servers or even a remote desktop using VNC. The only problem you might find is that the first SSH session may time out. To solve this, open /etc/ssh/ssh_config (or ~/.ssh/config) on your work machine and make sure it contains 'TCPKeepAlive yes' and 'ServerAliveInterval 60' so that the connection doesn't automatically drop.
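If you'd rather not edit a config file, the same keep-alive setting can be passed on the command line when the tunnel is created; this is a sketch of the same reverse tunnel with the option added:
ssh -o ServerAliveInterval=60 -R 1234:localhost:22 home_machine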
Safe-delete command aliases
We all know that horrible feeling: you type rm * and as your finger heads towards the Enter key, the horrifying realisation that you are in the wrong directory hits you, but you can only watch helplessly as your finger completes its short but destructive journey, sending your files to a swimming oblivion of zeros and ones.
By default, many Unix commands are destructive. rm deletes files, cp and mv overwrite them without hesitation or mercy. There are options to add a level of safety - the -i or --interactive arguments for the above three commands will ask you to confirm your intent after each step - but if you had time to stop and think about using them, you'd have time to check you were in the right directory or whatever. If you'd like these to be the default, add these lines to /etc/profile or ~/.bashrc:
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
to have these commands run with the -i option by default. You can always use -f if you want to enable maximum destructiveness.
Aliasing a command to itself is not limited to preventing file armageddon - you can also add options that improve the output of a command, such as adding -h to ls or df to give sizes in human-readable KB, MB or GB.
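For example, these lines (again for /etc/profile or ~/.bashrc) make df and du report in human-readable units and add a long-listing shortcut:
alias df='df -h'
alias du='du -h'
alias ll='ls -lh'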
Clarify your codecs
The trouble with having lots of video files is that they're often using lots of different file formats - and there are dozens of different codecs to encode both the video and audio streams.
You're probably familiar with the wonderful MPlayer, but what you may not know is that there's a sister utility called MEncoder. It's built from the same code base as MPlayer and as a result is capable of converting to and from all the same formats as its accomplished sibling. MEncoder runs from the command line, and can be a little less than intuitive for the beginner; there are just so many parameters. Just take a look at MEncoder's man page!
The mencoder command basically uses four different parameters to work out how you want to convert your file. The first part is the input; the second is the output video codec; the third is the output audio codec; and the final parameter is the command's output. A typical MEncoder command looks like this:
$ mencoder input.avi -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate=1200 -oac copy -o output.avi
This looks hard, but it isn't really. input.avi is the file to be processed, and -ovc lavc tells MEncoder which output video codec to use. The parameters after -lavcopts are the options for that codec. In this case, we specify MPEG4 (equivalent to DivX) with a variable bit-rate of 1200. The -oac copy part is where the output audio codec goes, but in this case we're simply copying the audio stream from the source file unchanged, and -o output.avi, the final parameter, names the file that MEncoder writes.
The great thing about MEncoder is that it really takes advantage of your Linux system. For example, you can use a television input for the source file, or pass the video through a filter. You can even remove the bars you see in widescreen films using the crop command.
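As a sketch of that last point, MPlayer's cropdetect filter will suggest crop values for a letterboxed file, which you can then feed to MEncoder's crop filter - the geometry below is purely illustrative:
mplayer -vf cropdetect film.avi
# watch the console for a suggested -vf crop=... line, then use it:
mencoder film.avi -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200 -oac copy -vf crop=704:432:8:72 -o film-cropped.avi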
Smart untarring
There's a routine to extracting tarballs that starts with opening a console, changing to the directory of your tarball and then typing the tar command, followed by the switches for whichever archive you're trying to extract. This is where there's a slight problem. Admittedly, it's not a big one, but when you do this enough times, it starts to become a real annoyance. The trouble is that you need to be able to remember what kind of archive you're un-tarring before you auto-complete the file name. It's usually either bz2 or gz, but you need to specify either a 'j' (for bz2) or a 'z' (for gz) before you know which you're dealing with.
We can script our way around this by using the file command to determine the file type, and then passing this through a conditional 'if' to determine the correct command for extraction. You could choose to embed your default switches into the script, but in this case they're just passed on to the command. The script starts by defining the file type, using the following code:
#!/bin/bash
FILE_TYPE=$(file -b "$2" | awk '{ print $1 }')
With the -b switch, the file command returns only a brief line of data, with the first string being the actual file type. This is extracted from the line by piping the output through awk. We then just need to use 'if' to execute the correct command:
if [ "$FILE_TYPE" = "bzip2" ]; then
tar "$1j" "$2"
elif [ "$FILE_TYPE" = "gzip" ]; then
tar "$1z" "$2"
fi
Obviously, it's easy to add your own types and make this part more comprehensive. You now need to save your script with a convenient name (we chose lfx) and place it in your path (such as ~/bin). Un-tarring a file is then as easy as typing:
$ lfx xvf ~/testfile.bz2
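Extending the script is just a matter of adding more branches keyed on whatever file -b reports for a format. For instance, an extra test for xz archives might look like this (assuming your version of tar supports the J flag):
elif [ "$FILE_TYPE" = "XZ" ]; then
tar "$1J" "$2"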
Old favourites in Bash
It's always worth revisiting forgotten bash commands. Three of the most useful that seem to have fallen from common use are cut, paste and the translate command, tr. cut and paste do exactly what you'd expect, and though they sound mundane it's surprising how powerful they can be when used either in the command line or within a script.
cut is generally a little more useful than the paste command. Running cut takes part of a line and redirects it to the standard output. By default, it uses tab as the field separator, but this can be changed using -d, and fields are selected using the -f flag.
paste effectively allows you to merge contents in columns, like a vertical cat. The best way to see how this works is to create two text files, each with three separate lines of data. The output of paste will be the contents of the first file in a column to the left of the second.
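For instance, a quick demonstration using two throwaway files (the names and contents are arbitrary):
printf 'red\ngreen\nblue\n' > colours
printf '1\n2\n3\n' > numbers
paste colours numbers    # prints each colour alongside its number, tab-separated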
The tr command is used for deleting extraneous output, such as spaces or tabs. The most useful option is -s, which squeezes repeated occurrences of a single character down to one. Take the output of ls -al, which generates a long directory listing including the size of files, padded with spaces for better-looking output. The tr command can be used to remove this padding, leaving a single space character for field separation.
Here's an example of how these commands can work together:
ls -al --sort=size /usr/bin | tr -s ' ' | cut -d ' ' -f 5,8
The long output of ls is sorted and then piped to the translate command. This removes the padding, leaving each field separated by a single space. cut then uses the space character as a field delimiter, and takes fields 5 and 8 from the output. What you get is a list of files, sorted by size, displaying only the size and the filename.
Remote windows
The X Window System uses a client-server model to create a display. Most of the time you don't notice, because the client and the server are running on the same machine, but it was developed this way so that X clients (the applications) and the X server (the display) could be on different machines. You could think of it as a thin-client setup, where the machine with just a keyboard and a monitor runs the X server, and the applications run on a central machine. The positive side-effect is that this remote functionality is just underneath the surface of your Linux box.
SSH forwards X window sessions automatically, which means that if you start an application on a remote machine from an SSH console, the application window will appear locally. The window is communicating with the remote machine using the X protocol, which is why there is a delay every time you resize the window or click within the user interface.
xterm -display :0 -e klamav &
If the above piece of code is run from an SSH console connected to a remote machine, it would open Xterm and run KlamAV on the remote screen rather than your local one - you wouldn't be able to see it on your screen. This is useful if you need to start an application remotely, such as an email client or virus checker.
The important part of the command is the display parameter. Here, this is :0, which is the first screen on the remote system. This is because X uses IP addresses and ports to specify a destination, and we've simply omitted the address, which implies the local machine. You could use localhost:1 to specify the second screen.
The -e parameter that follows will run an application from the opened Xterm, opening KlamAV on the same screen as the Xterm console. You could also use the nohup command, so that when the SSH session is terminated, the application that's running remotely won't be.
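To pull a remote program's window on to your local desktop instead, ask ssh to forward X explicitly - the remote host and the program here are only examples, and X11 forwarding has to be permitted in the remote machine's sshd_config:
ssh -X user@remotehost xclock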
Sdrawkcab
The usual command for reading a text file is cat (or less if you want to read it page by page, but that's not what we're talking about here). This starts at the start and ends at the end, which is pretty logical but not always what you want. If you want to read a file backwards (say when reading a log file and you want the most recent entries first) just run cat backwards. That's right: tac does the same as cat but backwards.
What if you don't want any particular order, but want the lines of the output randomly mixed up? For that we use the command shuf. Now, this may not be particularly useful with log files (OK, it's completely useless with log files), but what if you have a list of music files you want to pass to a music player? The input doesn't have to be a file, it can be standard input, so you can play your Ogg Vorbis files in random order with one of:
ls -1 ~/music/**/*.ogg | shuf | mplayer -playlist -
or:
mplayer $(ls -1 ~/music/**/*.ogg | shuf)
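One caveat: the recursive ** pattern only does its job in Bash when the globstar shell option is enabled (it arrived in Bash 4 and is off by default), so if the pattern doesn't descend into subdirectories, switch it on first:
shopt -s globstar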
Gconf wallpapers
Gconf works rather like the Windows registry editor, and enables you to get access to many of the hidden options and settings behind an app that aren't editable in any other way. You can browse all the possible parameters by firing up Gconf-editor from a console. This is a front-end to the thousands of settings that Gnome keeps in the background.
To find the path to your desktop background, open the Desktop folder, followed by Gnome and Background. This will display a list of settings that are applicable to your desktop background. This includes how your image is scaled, opacity value etc. The path to the image is found under the picture_filename parameter.
The clever part is that you can change these settings from the console, and therefore from your own scripts. Once you've found the parameter you want to change using Gconf-editor, use gconftool-2 to change it and synchronise the change so it's updated immediately. The following command will change your background to test.png:
gconftool-2 --type string --set /desktop/gnome/background/picture_filename test.png
We've used exactly the same path to the parameter that we used when navigating the folders in Gconf-editor. The type parameter defines the value as a string because the filename is just text. You could swap set with get to display the path to your current desktop image. Now try changing icons, setting the default file manager mode, or even adding email accounts in Evolution.
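Reading a value back works the same way; for instance, to print the path of the current wallpaper:
gconftool-2 --get /desktop/gnome/background/picture_filename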
Nice, nice baby
Most Linux users know of the nice command but few actually use it. Nice is one of those commands that sound really good, but you can never think of a reason to use. Occasionally though, it can be incredibly useful. Nice can change the priority of a running process, giving it a greater or smaller share of the processor. Usually this is handled by the Linux scheduler. The scheduler guarantees that processes with a higher priority (like those that involve user input) get their share of the resources. This should ensure that even when your system is at 100% CPU, you will still be able to move the windows and click on the mouse.
The scheduler doesn't always work smoothly, however; certain tasks can take over your computer. This could be a wayward find command that's triggered by a distro's housekeeping scripts; or encoding a group of video files that makes your computer grind to a halt.
You'd typically hunt these processes out with the top command before killing them. Nice presents another, more subtle and more useful option: it reduces the offending task's priority so that your system remains usable while still serving the offending process. Running a command with a lower priority is as easy as entering:
nice -n 10 updatedb
This runs updatedb with a niceness of 10, reducing its share of the processor (higher nice values mean lower priority). If you run top, you can see the nice value under the column labelled 'NI'.
If you wish to reduce the priority of a program that's already running, you need to use the renice command with the process ID:
renice 10 -p 17082
17082: old priority 0, new priority 10
This also lowers the process's priority by 10 and, depending on the nice values of the other processes, lessens the share of CPU time it gets relative to the other tasks.
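The same idea works for any long-running job you start yourself; for example, an archiving run kept at the lowest priority (the paths are just placeholders):
nice -n 19 tar czf photos-backup.tar.gz ~/photos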
SSH by proxy
Cryptographic tunnels are a useful way to establish a secure connection between your local PC and a remote machine or server. If you use VNC, the remote desktop client, you've probably already burrowed your way through a tunnel; a sensible technique is to use SSH, which is commonly employed for remote logins.
One of the best uses of SSH tunneling is to access Webmin, the remote config tool that runs on a web server. You can change almost anything on your system using Webmin, so it's unwise to leave it open to the internet. But if you close it off, you lose the ability to configure your machine. You can get around this limitation by tunneling with SSH from the port that Webmin uses to your local machine, like so:
ssh -L 8090:localhost:10000 remotehost
Just point your web browser at https://localhost:8090 to connect to your remote Webmin server. You could also forward a proxy service using SSH. If you were at a location where you couldn't access Google or eBay, for example, you could create a tunnel to the proxy server and browse from there. Most distributions include a proxy server, such as Squid. This needs to be installed and running on the remote machine first. Squid uses port 3128, so the command to tunnel Squid would look something like this:
ssh -L 8090:localhost:3128 remotehost
It's then just a matter of configuring your browser to use localhost:8090 as the proxy server, and all subsequent web requests will be passed through the SSH tunnel. Using a proxy server in this way enables you to connect to other machines on the proxy's local network, such as 192.168.1.1, and that also includes services like router configuration servers.
No one can hear you screen
Virtual terminals are like children: having one, two or even three brings joy to your life, but more than that puts a strain on your resources. When working remotely, some people miss having the ability to open multiple terminals, so they simply open many SSH connections to the same machine. Not only is this a waste of bandwidth, it's also a sign you're a newbie - which you're not, right? Veterans know there's a much better way to open multiple terminals, and it comes in the form of the GNU screen program. To get started, open up a terminal, type screen, then hit Enter. Your terminal will be replaced with an empty prompt and you may think nothing has changed, but actually it has - as you'll see.
Type any command you like, eg uptime, and hit Enter. Now press Ctrl+a then c, and you should see another blank terminal. Don't worry, your old terminal is still there, and still active; this one is new. Type another command, eg ls.
Now, press Ctrl+a then 0 (zero) - you should see your original terminal again. As you can see, Ctrl+a is the combination that signals a command is coming - Ctrl+a then c creates a new terminal, and Ctrl+a then a number changes to that terminal. You can use Ctrl+a then Ctrl+a to switch to the previously selected window, Ctrl+a then Ctrl+n to switch to the next window, or Ctrl+a then Ctrl+p to switch to the previous window. To close windows, just type exit.
When your last window is closing, you also exit screen and it will print 'screen is terminating' to remind you. Alternatively - and this is the coolest thing about screen - you can press Ctrl+a then d to detach your screen session. Then, from another computer later on, use screen -r to pick up where you left off, with all the programs and output intact just as you left it - magic!
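If you juggle more than one detached session, it helps to give them names; these screen options are worth knowing (the session name is arbitrary):
screen -S mail    # start a session named 'mail'
screen -ls        # list your running and detached sessions
screen -r mail    # reattach to the named session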
Better than a browser
If you often need to retrieve pages from the net and find that using a browser is like using a sledgehammer to crack an egg, then wget is for you. Its info page soberly describes it as a utility for the non-interactive download of files from the web; but what they're trying to say is that sometimes it works better than using a browser. You can use wget in a script to download web pages or files, and it's perfect for synchronising local web archives. You don't have to use it in a script either - it works just as well when executed directly from the shell (http://wget.sunsite.dk).
The most straightforward use for wget is to simply download a file referenced by a URL:
$ wget http://localhost/somefile.tar.gz
This should present you with a text-based download bar. Unfortunately, if the site uses the HTTP protocol, wget won't support wildcards, so you couldn't use *.gz for downloading multiple files (but you could if the site used FTP instead). wget is used most often to mirror a whole website. Here's an example for downloading a site:
$ wget --mirror -p --html-extension --convert-links http://localhost
Wget traverses the site and downloads the content into the current directory. The --mirror argument enables options suitable for mirroring a website - in particular, recursion for traversing the whole website tree. --html-extension is used for sites that use either CGI scripts to generate HTML, or ASP files that need to be renamed after they're downloaded. If wget recognises the contents, it will just add the .html extension.
After the transfer has finished, wget goes through the local files to change any remote references so the site can be viewed offline.
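Two other wget options worth knowing for large downloads: -c resumes a partially downloaded file, and --limit-rate stops the transfer from hogging your connection. For example:
wget -c --limit-rate=200k http://localhost/somefile.tar.gz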
Killing zombies
If you spend any time looking at your process list, sooner or later you're going to come across one that's labelled 'defunct'. Before we explain what a defunct process is and how to remove it, here's a brief overview of how to query the process table using the ps command.
Typing ps ux will list all the processes attributed to the current user, and you can specify another user name with ps U username. One of the most common uses for ps is to list all the processes running on the system, and this can be achieved by using ps aux. Breaking this command down, the a will list all the processes rather than those of a single user, the u is the level of details returned for each process, and x lists processes such as daemons that weren't started from a terminal.
A defunct process is one that was started by another process (the parent), but has finished without the parent waiting for completion. This can happen if the parent process has hung or crashed.
Defunct processes are also known as zombies, and listed with a 'Z' status in the output from ps. They're not quite as destructive as the living dead, as they consume almost no system resources, but on a system that's always turned on, such as a server, they can become equally distracting. The key to killing a defunct process is to first kill the parent, which will be listed in the output of ps with the addition of -l for long output. Parent processes can be identified under the PPID column, as opposed to the PID column for the process ID. These are identifiers attached to each process running on your system. They can be killed using another common shell command, kill -9, followed by the PPID. Obviously this will stop the parent task, so first make sure it's not essential. Once the parent process has been killed, the system init process should send the correct signal to the defunct process, which should terminate automatically.
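To see whether you have any zombies, and which processes are their parents, you can ask ps for just the fields you need - the STAT column shows Z for a zombie. A minimal sketch:
ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'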
Safe keys
We rely on encryption to keep our data safe, but this means we have various keys and passphrases we need to look after. A GPG key has a passphrase to protect it, but what about filesystem keys or SSH authentication keys? Keeping copies of them on a USB stick may seem like a good idea, until you lose the stick and all your keys enter the public domain. Even the GPG key is not safe, as it is obvious what it is and the passphrase could be cracked with a dictionary attack.
An encrypted archive of your sensitive data has a couple of advantages: it protects everything with a password (adding a second layer of encryption in the case of a GPG key) and it disguises the contents of the file. Someone finding your USB key would see a data file with no indication of its contents. Ccrypt (http://ccrypt.sourceforge.net) is a good choice for this, as it gives strong encryption and can be used to encrypt tar streams, such as:
tar -c file1 file2... | ccencrypt > stuff
and extract it again with ccdecrypt.
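Decryption is the reverse operation - ccrypt prompts for the passphrase, and the filename matches the one used above:
ccdecrypt < stuff | tar -x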
Editor redirection
If you run a Debian-based distribution like Ubuntu, have you ever wondered what the mysterious /etc/alternatives directory is for? If you take a look at its contents, you'll find that it's full of some of the most common system commands. But if you look closely, each file is really a symbolic link to the real location of the command elsewhere in the filesystem. This directory is full of links because the original Debian developers didn't want to assume one tool would be used over any other. They used the cron utility to highlight the problem.
Cron is used to schedule events to run at certain dates and times, and it does this by opening a text editor from which you need to add your own jobs. But the big question for the Debian developers was 'Which text editor?' For Linux users, there's no simple answer and it's a question that's caused too many Monty Python-esque Judean Popular People's Front-style flame wars and too much wasted time to provide a definitive answer. Whether users prefer Emacs, Vi or Nano, a mandate to choose one over the other is always going to cause problems.
The solution was /etc/alternatives. If you type crontab -e on Ubuntu, it actually loads the newbie-friendly Nano editor. But if you look closely, crontab is actually launching the editor command located in /usr/bin, which is itself a link to /etc/alternatives/editor.
As you might have guessed, this file is a link to the real editor - in this case it's /usr/bin/nano. This is a careful sidestep of the issue of which editor to choose, as all you have to do to change the default editor is change the link to point at your favourite rather than Nano. There's even a command that can perform this task for you. Type update-alternatives --set editor /usr/bin/vim to switch the editor to Vim, for instance. You can also list the acceptable alternatives using the --display editor parameter, and it's exactly the same for all of the other commands that reside within the /etc/alternatives directory.
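For example, to see what's registered and to switch between the candidates interactively:
update-alternatives --display editor         # show the current target and the alternatives
sudo update-alternatives --config editor     # pick a new default from a menu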
Playing with time
How many of your machines managed to successfully negotiate the transition out of daylight savings time in October? It's important, because there's more to time than the shiny clock sitting in the corner of your desktop panel - your system is regulated by running things at a certain time. Be it the time embedded in a sent email or the timestamp on a file, everything depends on your system clock.
The simplest way of checking your system clock is to use the date command. When date is executed on the command line, you get a single line of output that contains the date and time in an abbreviated form. You can use this output format to set the date and time as input for the date command, but it's also easily customised. Various options can be used to input or output anything from the time in nanoseconds to which century we're in.
The last field in the output from the date command will tell you which time zone your machine is configured for. If you're in the UK, hopefully this reads 'BST' for British Summer Time at this time of year. The configuration file for this can be traced to /etc/timezone, which will contain a description for your location. For BST, this is likely to be 'Europe/London'. If this is wrong, you can choose a more suitable time zone from the /usr/share/zoneinfo/ directory. This directory includes a list of many of the more popular places to live on the planet, sorted by continent and country.
There are two clocks on board your system. One is the system clock, and this is the one probed by date. The other is the hardware clock, which lives on your motherboard (it's the one you set in the BIOS) and keeps the time while your computer is turned off. The system clock takes its time from the hardware clock as part of the boot procedure. You can query, and set, the hardware clock using the hwclock command, and by typing:
hwclock --systohc
you can set the hardware clock to the same time as the system clock.
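A couple of related commands: date's + format strings let you pull out exactly the fields you want, and hwclock can also work in the other direction, setting the system clock from the hardware clock:
date +"%Y-%m-%d %H:%M:%S %Z"    # print date, time and time zone in a custom format
hwclock --hctosys               # set the system clock from the hardware clock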
Killing time
Once you start using the command line, you use ps time and again for managing your process list. Just typing ps will list the processes that belong to the current session, which unless you're running anything in the background will just be two: the Bash shell (if that's your choice), and the ps command itself. This isn't much use: most people use ps ux to display all the processes they own, and ps aux for listing every system process.
It's easy to find the process you're looking for by passing the output of ps into grep, as with ps aux | grep konqueror. As with zombie processes, you typically go hunting for processes when they start to misbehave, before issuing a kill -9 pid to kill off the offender. pid is the process identification number, as listed in the output from the ps command.
But there is another option - using a command called pidof to get the process ID of a process you know is running. Using Konqueror as an example, you would just type pidof konqueror. The output will look something like the following:
pidof konqueror
18380 18021 24825 13081 6478 6473 6472
This means that there are seven instances of Konqueror currently running, and each number is the process ID for each instance. The larger the number, the more recent the process. For example, you could kill the last executed Konqueror by typing kill -9 18380.
One of the most useful aspects to pidof is that you can use it to work out the process ID when you can't manually sift through the output from ps. This is perfect for scripts that need to find and kill a process, or maybe give them a higher or lower system priority, without having to waste time looking through the output of ps.
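For example, in a script you could feed pidof straight into kill or renice - here the -s option asks pidof for just a single process ID rather than the whole list:
kill -9 $(pidof -s konqueror)      # kill just one of the running instances
renice 10 -p $(pidof konqueror)    # lower the priority of every instance at once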
Lazarus raised
There's nothing quite like that feeling of horror when you get the
No usable partitions/No OS found
message from the over-helpful BIOS. It usually takes a few seconds to sink in - your hard drive has failed, or is failing, and it no longer boots to the operating system. There are many reasons why this could have happened, and each varies in the severity of potential data loss. With a broken hard drive, you might lose everything. But it could also be the result of a simple boot loader error or an overzealous distro installation. In these cases, there's a good chance your data may survive intact - but what do you do? Those of you who keep timely backups of your data can sit back, smile smugly and restore your hard work from the latest backup. But despite knowing how important it is, most of us never seem to get around to backing up the data we spend our lives collating. If ever there was a time for the Linux Live CD, this is it.
Live CDs are stuffed full of tools that can be used to resurrect a hard drive, and many of these Linux tools rival or surpass the functionality of most commercial solutions. The first thing to do is mount the lost drive from the Live CD.
We'd suggest using PCLinuxOS, as it's the best distro we've seen for finding and mounting wayward partitions. It also does a good job of finding Windows NTFS partitions on the same drive. PCLinuxOS will automatically detect any partitions it finds and mount them on the desktop. You should then be able to copy your data to a safe place. If this doesn't work, your saviour is going to be typing testdisk from a root console.
Testdisk is one of the most underrated Linux tools, and can really make the difference between losing and keeping everything. It's perfect for restoring broken MBRs and for rebuilding partition tables.
Paint by numbers
We all like a little bit of colour in our lives, and just because the Linux command line is a text interface to the inner-workings of your system, it doesn't mean that it needs to suffer the same monochrome fate as printed text. This tip will show you how to escape!
There are various ways to add colour, and one of the most popular is accomplished with the help of a command called dircolors. If this spelling offends you (American readers, look away), you could always use a symlink similar to the following to amend it:
sudo ln -s /usr/bin/dircolors /usr/bin/dircolours
Dircolors will make different file types appear as a rainbow of colours when you run the humble ls command. If you execute the dircolors command on its own, the output is a confusion of file types and secret codes. These will look something like pi=40;33: or *.ogg=01;35:. The first part of each entry is the file type, and the second part (after the = symbol) consists of two values that represent a foreground and background colour. If you're confused by some of the cryptic abbreviations in the first part, typing 'dircolors --print-database' provides more verbose output - revealing that pi=40;33: will colour the 'pipe' symbol (pi) with a black background (40) and a yellow foreground (33), for example.
If you look closely at the output from dircolors, you will see that it starts with LS_COLORS= and ends with export LS_COLORS. This is because the command is doing nothing more than setting a large environment variable with its list of file types and colours. You could save this output, and add it to the end of your .bashrc file in your home directory to set these colours automatically. But once you've run the dircolors command, your command prompt should start looking like a honey saturated tennis ball in a bucket of hundreds and thousands (aka sprinkles).
Oh, if you don't see any colour after that, try typing ls --color=auto.
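The usual one-liner for your .bashrc wraps the two steps - generating the LS_COLORS variable and exporting it - into a single eval, followed by the ls alias:
eval "$(dircolors -b)"
alias ls='ls --color=auto'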
Guaranteed screenshots
We often have a problem with illustrating games, because the game takes over the display and keyboard and, unless the developers have included an internal screenshot function, it can be hard to grab the contents of the screen and save it to a file. Even when there is a windowed game mode, as with Cold War, you still need to find a way to break the keyboard away from the game and give control back to the desktop before you can use Gnome or KDE's screenshot utilities.
There is a solution for when you can't escape the clutches of an application that's taken over your X Windows session. The clue is that even when you can't get back to your desktop, you can nearly always get back to one of the virtual terminals waiting patiently in the background. Pressing Ctrl+Alt+F1 will switch from your desktop to the text-based login of the first virtual terminal. These terminals hark back to when Unix was a predominantly multi-user environment, and the 'virtual' refers to the fact they are on the local machine rather than a remote dumb terminal.
Other virtual terminals are accessible by substituting F1 with F2-F6, and you can get back to your desktop by switching to the seventh virtual terminal, Ctrl+Alt+F7, which happens to be running your X session. What does this have to do with taking screenshots? Well, as you can get to a command line, you are now able to take a screenshot using one of the many ImageMagick tools you find installed on your system by default.
Here's the command to execute:
chvt 7;sleep 10;import -display :0.0 -window root image.png
This switches to the virtual terminal running X (chvt 7), waits ten seconds, then uses ImageMagick's import command to dump the contents of the screen to image.png. Sorted!
The great SSH escape
It's a common situation with SSH: you open a connection, start a series of jobs, and then realise you need to forward a port through the current session. The answer is to use an escape sequence while connected with SSH to change certain settings without needing to reconnect.
An escape sequence is just a series of characters that instruct the utility you're using (in this case SSH) to escape from what it's doing and perform a utility-specific task. You're most likely to have come across escape sequences while using the shell. The most useful escape sequence for SSH is executed by you pressing the tilde (~) symbol, followed by a capital C (by holding Shift down at the same time as 'c'). You won't see anything in the session until you've completed the escape sequence, at which point the prompt will change to 'ssh>'. This is to signify that you've been dropped into the SSH command line. From here you can connect a port on the remote machine with a port on the local machine, and tunnel the data between the two through the secure SSH connection.
You can use this technique to tunnel the data from a Squid proxy server through SSH to a local port on your machine using the -L argument, so typing -L8090:localhost:3128 would tunnel Squid to local port 8090 without restarting the SSH session. You can also list forwarded ports by using the ~# escape sequence, and cancel a forwarding from the ssh> command line with -KL followed by the local port. To cancel the Squid tunnel we just created, type -KL8090.
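In practice the exchange looks something like this - press Enter, then type ~ followed by a capital C at the start of a line, enter the forwarding rule at the ssh> prompt, and carry on with your session:
~C
ssh> -L8090:localhost:3128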
Redirect the masses
Even if you're a Linux beginner, it's likely you've already used some form of redirection while using the command line. Redirection uses the > and < symbols to send a command's output to a file, or to feed a file to a command's input. If you type dmesg >local.log, the contents of the kernel ring buffer (the output from the dmesg command) will be redirected into the local.log file rather than displayed on the screen. If you use two > symbols, the output of the dmesg command would be concatenated on to the end of the file rather than used to overwrite it. Using the < symbol works the other way around, sending the contents of a file to a command's standard input. There's also the 2> symbol, which redirects the standard error stream - the 2 comes from the number given to each file descriptor. Naturally, 0 is standard input, 1 is standard output and 2 is the standard error output. This is useful because it enables you to filter out error conditions generated by a command, while still sending output to a log file.
Here's an example using find. The common permission errors that result from find not having access rights are sent to the black hole of the null device, whereas successful results are output to the screen:
find / -name '*.jpg' 2>/dev/null
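You can also point the two streams at different files, so the useful results and the errors end up in separate logs (the filenames are arbitrary):
find / -name '*.jpg' > found.log 2> errors.log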