Tuesday, September 29, 2015

Transfer Files through an SSH terminal on Windows such as PuTTY to/from UNIX/Linux/BSD remote host using sz and rz Zmodem commands

Have you ever wished you could transfer files through your SSH terminal window without having to run a separate SFTP-type program?  This is a pretty old-school method that doesn't seem widely used these days, judging by how rarely the "sz" and "rz" commands are even available on a system, but it's likely that you can install them yourself.  The letters in these commands stand for "send Zmodem" and "receive Zmodem."

On the Windows client side, one could simply use SecureCRT, which appears to support Zmodem by default and would probably make the Windows side of this much simpler.  If both the client and server sides are non-Windows systems, that can also simplify matters.

Because SecureCRT is not free software, many people use PuTTY or one of its forks: "KiTTY" or "Le Putty."  One reason these forks exist is precisely the addition of support for Zmodem transfers.

Windows Client Side:

The KiTTY program can be found here: http://www.9bis.net/kitty/?page=ZModem
Le Putty home page: http://leputty.sourceforge.net/

In both cases there is no Windows installer; it's just a [zipped] .exe program that includes or requires the "sz.exe" and "rz.exe" executable files as well.

With both KiTTY and Le Putty the Zmodem settings are saved within the particular session profile.  With KiTTY the saved sessions/settings seem to be shared with those of PuTTY if it has also been used.*  Be sure to save your session info with your chosen Zmodem-transfer settings.

The settings for Zmodem transfers include the local folder for downloads to arrive in as well as the path and command line options for the client-side sz and rz commands, which should include the full path as well as the .exe extension.

For example, you might place your rz.exe and sz.exe files in a folder like C:\sshz, along with the KiTTY or Le PuTTY program files.  In the settings you might set your Zmodem download folder to "C:\sshz\downloads" and the command lines for sz to "C:\sshz\sz.exe" and for rz to "C:\sshz\rz.exe" respectively.
(note: it would probably be wiser to use a different directory than just C:\sshz, maybe something in your user profile or My Documents folder.  This is just an example path.)

That's about it for the Windows client side of things, though I would also note that in order to actually start a Zmodem upload from the client after the "rz" command has been used on the server, you must right-click on the terminal window's title-bar with your mouse pointer and then choose "Zmodem Upload" and select the file to be sent.  Likewise, to download a file after starting "sz" you'd need to right-click the title bar and select Zmodem Receive.

Remote host server side:

If the server already has the rz and sz (or lrz and lsz) commands installed and available, your work is done for you there.  If not, you can install them yourself from source, even without root access.  Personally, I find it quite handy to be able to do this: building and installing software without root access.

At the time of writing, the source tarball is available at: http://ohse.de/uwe/releases/lrzsz-0.12.20.tar.gz

If you do indeed have root access and wish to install lrz and lsz on the system, there might be a package available for your distribution.  With Debian or Mint or Ubuntu, for example, you can just install it using apt-get like so:

apt-get install lrzsz

When installing from source, however, the "--prefix=$HOME" option on the ./configure command line below is necessary when operating without root access.  In the following example I did not have root access to my web hosting machine, so I used that option.

The commands I used to install lrzsz are as follows:

cd ~
mkdir gets
cd gets
wget http://ohse.de/uwe/releases/lrzsz-0.12.20.tar.gz
tar -xzvf lrzsz-0.12.20.tar.gz
cd lrzsz-0.12.20
./configure --prefix=$HOME
make
make install
find . | grep lsz

(And of course what the commands are doing is: first navigating to the user's home directory, then creating a folder called "gets" for keeping files downloaded with wget, then entering that folder, then using wget to download the source tarball, then unpacking the contents with tar, then entering the new lrzsz folder, then building and installing it with configure/make/make install, and finally locating the binary executable with find.)

After that I used "alias" so that when I type the commands "sz" or "rz" the lsz/lrz programs are used, like so:

alias sz='~/gets/lrzsz-0.12.20/src/lsz '
alias rz='~/gets/lrzsz-0.12.20/src/lrz '

Those lines could be added to your .bash_profile or .bashrc file, for example, to ensure that they're aliased every time you log in.
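To make that permanent without opening an editor, the alias lines can be appended idempotently.  Here's a small sketch; it uses a scratch file so it's safe to try, and you'd point RC at your real ~/.bashrc to use it:

```shell
# Append the two aliases to a shell init file, skipping any that are
# already present (grep -qxF matches the whole line literally).
RC=$(mktemp)   # stand-in for ~/.bashrc; change this for real use
for a in "alias sz='~/gets/lrzsz-0.12.20/src/lsz'" \
         "alias rz='~/gets/lrzsz-0.12.20/src/lrz'"; do
  grep -qxF "$a" "$RC" || printf '%s\n' "$a" >> "$RC"
done
cat "$RC"   # shows the two alias lines, added exactly once
rm "$RC"
```

Running the loop a second time adds nothing, so it's safe to keep in a setup script.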

Note that in this scenario, using "sz" on the remote host requires that you specify the file to be sent on the command line, as in "sz filename", whereas "rz" doesn't require a file name.  "rz" may require the "-y" option to overwrite a file if one with the same name already exists in the working directory.  After a transfer there may be a bit of confusing text in the console, which can be cleared up by pressing ENTER a couple of times to see the prompt again, or by using the "clear" command.  "Ctrl-C" does not abort sz or rz once started, but they will time out within a few minutes if the transfer is not begun.

[Edit] Please note that the permissions for transferred files may not be as expected.
I find myself using "chmod 644 filename" often after uploading files.
This point could be an important security consideration.
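A quick way to check and fix the mode from the shell, demonstrated on a scratch file (stat -c is the GNU coreutils form; BSD stat uses different flags):

```shell
f=$(mktemp)
chmod 600 "$f"       # pretend this is the odd mode the upload arrived with
chmod 644 "$f"       # owner read/write, world read
stat -c '%a' "$f"    # prints the octal mode: 644
rm "$f"
```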

There are a number of command line options for both "rz" and "sz" which can change their behavior, and they can be listed by using "rz --help" or "sz --help" respectively, even if the "man pages" or the "info" command do not provide documentation.

*(Unless it's the portable KiTTY package available from portableapps, which seems to keep its own data separate and isolated from that of PuTTY.)

Links and references:

SecureCRT info: https://www.vandyke.com/products/securecrt/ssh_file_transfer.html
KiTTY Zmodem specifics: http://www.9bis.net/kitty/?page=ZModem&theme=none
Le Putty: http://leputty.sourceforge.net/
lrzsz: https://ohse.de/uwe/software/lrzsz.html
helpful page: https://waltonr.wordpress.com/2011/12/08/howto-lz-rz-function-sexy-commandline-file-transfers/
helpful page: http://sourceforge.net/p/leputty/support-requests/1/

Tuesday, September 22, 2015

Oracle VirtualBox on Debian Wheezy Installation Fail and Solution

Upon attempting to install Oracle VirtualBox on a Debian GNU/Linux system, you might run into a colorful error:

[FAIL] Starting VirtualBox kernel modules[....] No suitable module for running kernel found ... failed!

After a bit of searching I found a sort of solution to my problem which was:

      Install "module-assistant" and install VirtualBox from the "backports" repository.

While this is an official resource, using a backport might not be the ideal solution, which I'll come back to below with a "Second solution."  In this case I am using Debian 7 "wheezy," and the backport would come from "jessie" or "stretch."

First solution:

Run these commands to get the Module Assistant program ready:

apt-get install module-assistant

m-a prepare

I believe it was actually unnecessary to install headers with the following command, but I've included it just in case it's helpful in other situations:

apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,')
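For reference, that sed pattern ('s,[^-]*-[^-]*-,,') strips the leading version and revision from the kernel release string, leaving just the flavor that the headers package is named after:

```shell
# "uname -r" returns something like "3.2.0-4-amd64" on wheezy;
# the sed strips "3.2.0-4-" and leaves "amd64".
echo "3.2.0-4-amd64" | sed 's,[^-]*-[^-]*-,,'
# -> amd64, so the package becomes linux-headers-amd64
```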

Then to add the backports repositories to sources.list we could use a command like

nano /etc/apt/sources.list

and add the following text:

# backported
deb http://http.debian.net/debian/ wheezy-backports main contrib

Next it's time to try installing the VirtualBox package again, with:

apt-get -t wheezy-backports install virtualbox

This worked well, despite the FAIL message being displayed again just before it installed the correct kernel modules and finished successfully.  That's when I realized it had installed an older version: VirtualBox 4.3.18, instead of the then-latest 5.0.2.

Later I realized that you can just go to the VirtualBox website and find their latest version in a neat Debian package, so I removed VirtualBox with apt-get and removed the backports entry from sources.list.

Second Solution:

Details were found at:

 https://www.virtualbox.org/wiki/Linux_Downloads

In addition to having the dkms package installed, installing VirtualBox in this way requires adding a line to sources.list for installing directly from Oracle, such as:

#Oracle VirtualBox
deb http://download.virtualbox.org/virtualbox/debian wheezy contrib

It also requires installing Oracle's public key for apt-secure, as described on their page, like so:

wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -

Even after all that, in my case there was still a warning about the source being unverifiable but then it was easy to install virtualbox 5 (actually 5.0.2, at the time) with the simple commands:

apt-get update
apt-get install virtualbox-5.0

And it was a breeze, worked like a charm.

Compared to using a backport of an older version of VirtualBox, this method may be preferable.

It's probably safe to assume that this has been a problem for more people than just you and me; recently I noticed there is actually an entire distribution based primarily on the combination of VirtualBox and Debian.  It's called Robolinux.

I wish I'd recorded links to the other sites that were helpful to me with this so I could credit them here, but this should be enough information to get around the error message or to get the application installed, anyway.


Saturday, April 5, 2014

Performing an action on all files within the current folder and its sub-directories, specifically "touch" in this case

Today I was googling to find out how to change the modification date of some files without having to run the "touch" command on each individual file, and without having to write a Perl script or something else to get it done.

It turns out it's another case of the "find" command to the rescue.  It's interesting to me how often I think a basic command like "ls" or "touch" should have (and probably does have) a certain capability built-in, but the "find" command makes it easier and saves me from having to learn every single feature of every single command.

In this case, the command used to perform a simple "touch" on all files in the current folder and in all sub-folders, recursively, was this:

find . -exec touch {} \;

Looking at this, I can see that the word "find" with the period has the simple and familiar effect of listing all files etc. in the present working directory* as well as those in its subdirectories.

Apparently the -exec option specifies that a command should be run on each file in the list, instead of the default action of "find" which is simply listing the files, or printing the files' names to the console output.

The word "touch" in that command line is the command to be run on the files, and the curly brackets "{}" indicate where each file's name/path is placed on the hypothetical command line relative to the command itself.  In effect, "touch filename" is run once for each file found.

The "\;" (an escaped semicolon) is a bit more liable to confuse, but I think I understand why it's escaped.  If it were not escaped, the shell would treat the semicolon as the end of the entire command line rather than passing it along to "find."  Since it is escaped, it reaches "find" itself, where it marks only the end of the -exec option's arguments, so the "find" command could still have more options or operators trailing it.
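One caveat worth adding: a bare "find ." also hands directories (and "." itself) to touch.  Restricting the match with -type f, and using the "+" terminator to batch many files into a single touch invocation, is a common refinement.  A quick sketch on a throwaway tree:

```shell
# Build a tiny tree, then touch only the regular files in it
mkdir -p /tmp/touchdemo/sub
echo hi > /tmp/touchdemo/sub/file.txt
find /tmp/touchdemo -type f -exec touch {} +   # "+" batches arguments
find /tmp/touchdemo -type f                    # only the file matched, not the dirs
rm -r /tmp/touchdemo
```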

I'll go ahead and give credit where it's due, I found this information at the following page: http://www.unix.com/unix-for-dummies-questions-and-answers/108598-touch-all-files-subdirectories-recursive-touch.html 

* Most of you will know this already: "pwd" is another neat command, especially useful when your command prompt isn't telling you what your "present working directory" is.

Friday, March 28, 2014

Discovering or Finding the Source or Origin Domain or IP Address

Often when I want to find a domain or country of origin based on an IP, or find that kind of information based on a domain name, I'll use a handy little website called DNSgoodies.  It occurs to me, though, that a typical Linux system already has commands available to provide such information, as far as it is available from the DNS servers being used.  (And if a well-informed DNS service is running on the local system, it's even easier and less network-intensive to perform such a query locally.)

The good old command "nslookup" is usually available in Linux as well as in Windows, though it's much more useful for finding the IP address based on a known domain name, or for testing whether or not DNS-lookup functions are working properly, and less useful for finding information about a domain or origin based on a known IP address (at least as far as I know.)

One command for doing it the other way, i.e. for finding an unknown domain name from a known IP address, is simply "host."  Host works both ways: it can provide an IP from a name, or a name from an IP.  Typing a command such as the following should return a relevant domain name:

host 98.138.252.30

It returns something like:

ir2.fp.vip.ne1.yahoo.com

I found that IP address with the command:

nslookup www.yahoo.com

Another command that can be used this way is "dig," commonly used with the "-x" switch for reverse lookups, and sometimes with switches like "+noall" and "+answer" to shorten the output, like so:

dig +noall +answer -x 98.138.252.30

To be honest I'm not entirely clear on all of the subtle differences between all these commands nor about the exact meanings of all the possible information they provide, at this time, but for my purposes it's good enough info to make note of.
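Worth noting: host and dig both need a reachable DNS resolver.  For a quick local sanity check, "getent hosts" consults the system's normal name-service order (including /etc/hosts), so it works even offline:

```shell
# 127.0.0.1 should resolve via /etc/hosts on nearly any Linux system
getent hosts 127.0.0.1
# typically prints something like: 127.0.0.1       localhost
```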

Tuesday, March 25, 2014

Cloud Storage for Linux part 1

Cloud storage for Linux users is a little different than it is for users who are on the more "mainstream" operating systems like Windows, Mac OS, or Android, but it's not necessarily more limited.

Today I was researching cloud storage services which cater specifically to Linux, meaning they have an official client program for use on a Linux operating system.  Surprisingly, I found there are very few that offer any free service as well as a client or official support for Linux: Dropbox, Wuala, SpiderOak, Ubuntu One, Minus, etc.

There are several major services which support Linux directly in this way if you're willing to pay them, such as JustCloud, SugarSync, etc., though personally I find the free services much more interesting and useful for keeping multiple/redundant backups without it costing many times the price of just having a lot of physical hard disks hanging around.

Fortunately for users of Linux, having a client program or application specific to each internet-based storage service is probably not as important as it is to users of the aforementioned mainstream OS's.

Linux users can simply choose services which offer access via the more standardized transfer protocols such as FTP, WebDAV, Swift, S3, etc., and then choose the cloud-storage providers that are accessible and usable in these ways.  There is really no need for an OS-specific client program when the service offers/allows access through these avenues, because there are already so many FTP clients [for example] that are available for Linux.

That being said, it's no wonder there aren't more service-specific clients available for Linux.  It can be intimidating indeed, from a software development point of view, to try to support Linux and BSD and other such OS's, because the distributions and flavors of Linux are so varied, and often different in ways that can be subtle and/or obtuse.  I think, though, that this reason for avoiding it is not well founded, especially considering that packages can be distributed or obtained in a "noarch" format that is specific to an OS or distro (e.g. Debian) but independent of any hardware architecture, as well as in a "portable binary" kind of format that is specific to the hardware architecture but not constrained by any brand or distribution of Linux, as long as it is indeed Linux.

It seems to me that with those two options, it should be a relatively small challenge to make software for Linux that can be used with most any distribution.

Tuesday, March 18, 2014

"thread.error: can’t start new thread" workaround, CentOS yum, /etc/hosts.deny , echo \n and printf >> append file

BACKGROUND:

While observing network activity on a VPS/proxy server of mine, using the command

netstat -vT

to see a view of current connections and checking on recent failed or succeeded attempts to log in or log on to the system using commands like 

cat /var/log/secure | grep 'Accepted password for root' | tail

and

cat /var/log/secure | grep 'Failed password ' | tail -n 20

and such,

I noticed what appeared to be an automated script or bot trying to gain access to the system by repeated logon attempts, using what I assume someone believes are the most commonly used usernames and passwords.  I suppose this would be called hammering, or perhaps a remote brute-force password-discovery attack of some kind.
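To see at a glance who is doing the hammering, the failed attempts can be tallied per source IP.  A hedged sketch, using sample log lines in place of /var/log/secure; the awk assumes the usual sshd "Failed password ... from <IP> port ..." line format:

```shell
# Sample sshd log lines standing in for /var/log/secure
cat > /tmp/secure.sample <<'EOF'
Mar 18 01:00:01 vps sshd[111]: Failed password for root from 198.51.100.9 port 4000 ssh2
Mar 18 01:00:05 vps sshd[112]: Failed password for admin from 198.51.100.9 port 4001 ssh2
Mar 18 01:00:09 vps sshd[113]: Failed password for root from 203.0.113.7 port 4002 ssh2
EOF
# Pull the word after "from" (the source IP) and count occurrences
grep 'Failed password' /tmp/secure.sample \
  | awk '{for(i=1;i<=NF;i++) if($i=="from") print $(i+1)}' \
  | sort | uniq -c | sort -rn
rm /tmp/secure.sample
```

On the real log, just swap /tmp/secure.sample for /var/log/secure.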

Even with an automatic method like this, the chances of someone gaining access in this way were slim to none in this situation.  My password is surely not on the list of "most commonly used passwords ever in the history of the Earth," unless it's an extremely long list that somehow includes the majority of passwords used in contemporary terrestrial history.

The offending party was only racking up about 1 attempt every 4 seconds, so yeah, there was not much chance at all of their hitting it blindly or randomly like that.

However nonthreatening it may have been, it was still bothering me, if only because my logs were being filled up with garbage and becoming harder to read and monitor.

I got the idea that it should be a simple task to block the person's domain or IP address and preserve the simplicity of my security logs.  I figured that this login service was probably operating with the TCP protocol or at least within a TCP wrapper, so I just navigated to the /etc folder and examined the "hosts.deny" file first.

I'm terrible with using "vi" to edit so I tried to edit it using "nano" instead.  Apparently "nano" was not installed, I got one of those "command not found" errors.  D'oh!

(I actually added the entries I'd wanted to into the "hosts.deny" file without using vi and without using nano or any other text editor, at this point.  I'll describe that below but first I'll recount the tale of figuring out how to get nano in and working.)

Well, this was a perturbing obstacle at first, because I'd never actually bothered to check which operating system that machine was running, so I had to determine the distribution of Linux before I could use a package manager to install nano via the command line.  While in the /etc directory, I used the command

ls | grep 'release'

which returned the line "redhat-release" , and upon using the "cat" command to see what tasty little tidbits of information were waiting inside that little file, it returned with

"CentOS release 5.9 (Final)"

which was very helpful in determining that the package management utility I'd want to use is called "yum," which is cute enough.   ...Maybe too cute, that's an even cuter name for a command than "pacman" isn't it?
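As an aside, newer distributions offer a more uniform answer than grepping /etc for release files: most modern systems ship /etc/os-release (which postdates CentOS 5.9, so this is a sketch for newer boxes), a simple key=value file that can be sourced:

```shell
# Read the distro name from /etc/os-release if present,
# falling back to the older *release* files otherwise
if [ -r /etc/os-release ]; then
  ( . /etc/os-release && echo "$NAME $VERSION_ID" )
else
  cat /etc/*release* 2>/dev/null | head -n 1
fi
```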

FOREGROUND:

I tried to install the package for "nano" using the command 

yum install nano

but instead of the computer telling me how happy it was to have done what I'd wanted, it gave me some backtalk error like this:

"thread.error: can't start new thread"

So my first thought was that I must be severely limited in the number of processes or threads I could run with my user account, which was a disappointing thought because I was logged in as the one and only "root" himself.  I punched in a command to have it tell me the maximum number of processes I'm allowed to run at a time,

ps -L -u root | wc -l

which returned with "15," and that gave me pause because even though it seemed like a low number for that, I still couldn't really believe that "yum" would require more than one or two simultaneous processes just to install nano, even if it were simply a forking front-end for "rpm" or something.

[2015 edit: I believe the above command returns the number of currently-running processes rather than the result of actually probing for limitation]

I quickly found another page which explained that this error was [at least in this kind of situation] usually due to  a lack of free memory, or available RAM, and not due to a limitation on the number of separate threads or processes that a user is allowed to be running at once.
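Next time, distinguishing "out of memory" from "out of processes" up front could save the head-scratching.  A small sketch comparing the two (free's column layout is the procps one; $4 is the "free" column):

```shell
# Low free RAM is the usual culprit behind "can't start new thread"
free -m | awk '/^Mem:/ {print "free MiB: " $4}'
# The per-user process/thread ceiling, for comparison
echo "max user processes: $(ulimit -u)"
```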

The solution, I read, was to disable a plugin for yum called "fastestmirror" which is supposed to determine which mirror or which source of package files is the fastest one.  There were several ways listed for disabling the plugin, and I chose to try the method which is to simply use an option in the command line, like so:

yum --disableplugin=fastestmirror update

After issuing this command, I found it was trying to update the entire OS, or at least the entire collection of installed packages, which was not what I'd wanted.  I decided to just let it finish, but it produced another error message and failed anyhow.

I tried again with the appropriate operation for installing nano, instead of the "update" operation, thusly:

yum --disableplugin=fastestmirror install nano

Sadly, that also failed to perform as I'd wished, but without terminating: it just stopped and never completed the command.  After waiting long enough to determine that it was not going to finish, I simply logged in to the system remotely again in another terminal window and issued a "reboot" command.  That worked splendidly for putting an end to the "yum" command, and I entertained hopes that it would also clean up any mess left behind by yum's couple of difficulties.

After waiting for long enough to satisfy myself that it'd had enough time to reboot, I logged in again and issued the same command as shown above for disabling the "fastestmirror" plugin and installing "nano."

This time, it worked!  I ran the nano program, one of my favorites, and was very pleased.  I would have then continued to edit the "hosts.deny" file if not for the fact that I'd already added the badguys' IP to that file and successfully blocked the would-be intruder.

FLASHBACK:

To add entries to the "hosts.deny" file without a text editor I figured I'd just use a console command to print the appropriate line of text to the console but then redirect that console output to the end of the target file.

The line I wanted to add to "/etc/hosts.deny" [with the "\n" symbolizing an EOL/newline and with a funnier-sounding domain name than I was really blocking that day] is the following:

ALL: *.koreanbunghole.net\n

The "echo" command didn't seem to want to process the "\n" as a newline when I tested it without the redirect-append operator, so instead I used the "printf" command as follows (note this was all being done from within the "/etc" directory):

printf "ALL: *.koreanbunghole.net\n" >> hosts.deny

In order to refresh my memory and familiarize myself again with all the functions etc. being employed here, I first tried it on a test file called "testfile.txt.txt" instead of on the actual "hosts.deny" file a few times, and then used "cat" to see what was in the test file and confirm my expectations.  Having to work without a text editor breeds caution, I guess.
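That cautious workflow can be sketched on a scratch file like so.  (Plain echo's treatment of "\n" varies between shells — bash's builtin needs -e — which is exactly why printf is the portable choice.  Note also that hosts_access suffix matching uses a leading dot rather than an asterisk:)

```shell
tmp=$(mktemp)                                # stand-in for hosts.deny
printf 'ALL: .example.net\n' >> "$tmp"       # block a whole domain (leading dot)
printf 'ALL: 203.0.113.7\n'  >> "$tmp"       # block a single address
cat "$tmp"                                   # confirm before doing it for real
rm "$tmp"
```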

To be honest: initially I'd neglected to include the asterisk and first period, and I found when I looked at the /var/log/secure file that they were still trying.   I appended a second line using their actual IP address instead of the domain name, checked again, and sure enough I found that they had been stopped by that.



Tags: Failed password , thread.error: can’t start new thread , Accepted password for root , /var/log/secure , login attempts , yum , rpm , vi , nano , CentOS , redhat , release , Linux , VPS , ssh, tunneling , proxy , echo , printf , \n , newline , new line , EOL