Use your PuTTY SSH Key on macOS

I had a problem authenticating to my SSH server with a private key: it asks for a password even though none should be needed. Here is the log:

mememe@Mac:~$ ssh -v -i ~/.ssh/id.ppk root@remote.machine.com
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/<username>/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug1: Connecting to remote.machine.com [999.1.1.1] port 22.
debug1: Connection established.
debug1: identity file /Users/<username>/.ssh/id.ppk type -1
debug1: identity file /Users/<username>/.ssh/id.ppk-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH*
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5-etm@openssh.com none
debug1: kex: client->server aes128-ctr hmac-md5-etm@openssh.com none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 43:xx:72:xx:xx:b0:xx:8a:xx:xx:xx:xx:xx:xx:a7
debug1: Host 'remote.machine.com' is known and matches the RSA host key.
debug1: Found key in /Users/<username>/.ssh/known_hosts:5
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /Users/<username>/.ssh/id.ppk
debug1: key_parse_private_pem: PEM_read_PrivateKey failed
debug1: read PEM private key done: type <unknown>
debug1: No more authentication methods to try.
Permission denied (publickey).

So it appears that I was trying to use a PuTTYgen-generated key file with OpenSSH, and OpenSSH cannot read PuTTY's .ppk format.
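
A quick way to confirm the format (this check is my addition; PuTTY private keys begin with a distinctive header, whose version number may vary):

$ head -1 ~/.ssh/id.ppk
PuTTY-User-Key-File-2: ssh-rsa

Therefore, without Windows at hand, I had to convert the key with the following steps: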

1) Install PuTTY, which brings puttygen along (MacPorts required):

$ sudo port install putty

2) Convert the PuTTY key into an OpenSSH-format key:

$ puttygen id.ppk -O private-openssh -o id_openssh 
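
While at it, two things worth doing (my additions, not strictly required): puttygen can also export the public half in OpenSSH format if the server's authorized_keys still needs it, and OpenSSH refuses private keys with loose permissions:

$ puttygen id.ppk -O public-openssh -o id_openssh.pub
$ chmod 600 ~/.ssh/id_openssh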

Conversion done. Let's try again:

$ ssh -i ~/.ssh/id_openssh root@remote.machine.com

This time it works for me.


Simple Pair Programming

There are nice write-ups about pair programming setups out there; I have seen a few myself.

I needed a simpler and quicker solution, so here is what I came up with.

Add this to the end of /etc/bash.bashrc:

exit() {
    # outside tmux, really exit; inside tmux, detach instead of killing the shell
    if [[ -z $TMUX ]]; then
       builtin exit
    else
       tmux detach
    fi
}

close() {
    # force a real exit even from inside tmux
    builtin exit
}

if [[ -z $TMUX ]]; then
    # not inside tmux yet; the # patterns below keep the grep processes
    # out of their own results (grep -v "#####" also matches "#######")
    if [[ $(ps -ef | grep -e "#######" -e tmux | grep -v "#####" | wc -l) -gt 0 ]]; then
       # a tmux server is already running: attach to it
       exec tmux attach
    else
       # no session yet: start one ("-2" forces 256-color mode)
       exec tmux -2
    fi
fi
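
With this in place, pairing is just two people SSHing into the same account: the snippet execs each login shell straight into the shared tmux session. A minimal sketch of the flow (user and host names are made up):

$ ssh pair@devbox.example.com    # both partners land in the same tmux session
$ exit                           # detaches, leaving the session running for the partner
$ close                          # really ends your shell when you want out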

Resources I was inspired by:

[1] https://coderwall.com/p/powgbg [for detaching tmux instead of exiting]
[2] http://collectiveidea.com/blog/archives/2014/02/18/a-simple-pair-programming-setup-with-ssh-and-tmux/  [Good one]
[3] http://superuser.com/questions/456187/connecting-a-tmux-pane-to-a-remote-server


Hadoop with a Problematic Startup

With my single-node cluster configuration, Hadoop (0.20.2) does not start up cleanly. Here is the status:

# netstat -an | grep LISTEN | grep tcp
tcp        0      0 0.0.0.0:46631           0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:40072           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:50060           0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:50030           0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:42004           0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN

Nothing is listening on TCP port 9000. From the logs I see that the TaskTracker takes off without a problem, but all the other daemons are in trouble.

The NameNode could not start:

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-localhost/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
INFO org.apache.hadoop.ipc.Server: Stopping server on 9000

The DataNode says it cannot connect:

INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:9000 not available yet, Zzzzz...
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).

Of course, the secondary NameNode is down too:

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).

I looked at Michael Noll's post about installing Hadoop on Ubuntu and tried once more with IPv6 support disabled, but no luck. After that, I went through the Hadoop cluster configuration guide, which points out options such as dfs.name.dir and dfs.data.dir. I think the problem is that Hadoop's data files get deleted, since by default they live under the local /tmp.

I edited conf/hdfs-site.xml to add these properties:

    <property>
         <name>dfs.name.dir</name>
         <value>/hometohadoop/hadoop-0.20.2/logs/transLogs</value>
    </property>
    <property>
         <name>dfs.data.dir</name>
         <value>/hometohadoop/hadoop-0.20.2/dataDir</value>
    </property>

Now Hadoop is told to store the relevant files under these directories instead of some volatile location under /tmp. Make sure the folders actually exist:

# mkdir -p /hometohadoop/hadoop-0.20.2/dataDir
# mkdir -p /hometohadoop/hadoop-0.20.2/logs/transLogs
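
If the daemons run as a dedicated user rather than root, that user must also own the new directories (the hadoop user and group here are an assumption on my part):

# chown -R hadoop:hadoop /hometohadoop/hadoop-0.20.2/dataDir /hometohadoop/hadoop-0.20.2/logs/transLogs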

Then I restarted. Now it says:

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
 java.io.IOException: NameNode is not formatted.

That makes me format the NameNode (which of course deletes all data), so that I will finally have a stable data store:

# bin/hadoop namenode -format

Now I restart Hadoop again… and this time:

INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
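
As a quick sanity check (the same netstat as at the beginning), port 9000 should now show up among the listeners:

# netstat -an | grep LISTEN | grep 9000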

***

Good…

***

Real-time Process Charter

I tried to find a utility that draws a real-time chart of what is going on with one specific process. I would like to see how much virtual memory is consumed because of threads…

So here is the first step (just change PROCESS_NAME to the name of your process):

watch -n 1 -p 'ps -e -O "%cpu,cputime,sgi_p,%mem,rss,sz,vsz,start,nlwp" | grep -e PROCESS_NAME -e "%CPU" -e "###########" | grep -v "###########" | sed "s/[ \t]\{1,\}/\t/g"'

That was the tab-separated version. Here is a version for the human eye:

ps -e -O "%cpu,cputime,sgi_p,%mem,rss,sz,vsz,start,nlwp" | grep -e PROCESS_NAME -e "%CPU" -e "###########" | grep -v "###########"

Those # characters are just a trick to keep grep from matching itself in the process list.

I wanted to use gnuplot-py to draw the real-time chart of CPU and memory consumption, but these days I am out of time. Maybe later, or maybe someone else will finish it.
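
Since the gnuplot-py part never got finished, here is a rough sketch of the same idea with plain gnuplot instead (this is my substitution, not the original plan; PROCESS_NAME and the file names are placeholders): a background loop samples CPU and memory once per second, and a tiny gnuplot script keeps redrawing the growing data file:

$ while true; do ps -o %cpu=,%mem= -C PROCESS_NAME >> stats.dat; sleep 1; done &
$ cat > live.gp <<'EOF'
plot "stats.dat" using 0:1 with lines title "%cpu", "stats.dat" using 0:2 with lines title "%mem"
pause 1
reread
EOF
$ gnuplot live.gp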

Mounting Samba Paths

I have installed VMware Workstation on my laptop, since lately I have been looking forward to using Linux in the office environment. There have been problems using Linux for my daily job, though.

The company uses a Microsoft Exchange server for mail. However, Ubuntu, the Linux distribution I am using, struggles with the web proxy; in particular, I am unable to use Evolution's Exchange add-on through it. This could be a problem in the long run, because mails will carry attachments and meeting requests, and managing mail should not be this troublesome.

On the other hand, another problem is waiting, I suppose: character encoding. For instance, I can write my mails with ISO-8859-1 characters but not ISO-8859-9 ones. And what am I supposed to do with Excel files? Is OpenOffice's support good enough to import them? We'll see…

For now, using Ubuntu in a VM with 512 MB of dedicated RAM feels good. I can do anything in Linux from the command shell. In the future I might use CrossOver for Office, or some other way to manage mail will turn up.

By the way, for the record, I mount my Windows shared folder under Linux with:

# mount.smbfs //[my_windows_host]/[shared_folder] /media/smb -o username=[windowsUserName],password=[windowsUserPassword],workgroup=[windowsMachineWorkgroup],ip=[windowsMachineIP],iocharset=iso8859-9,ro
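
For example, with placeholder values filled in (the host, share, and credentials here are made up), and making sure the mount point exists first:

# mkdir -p /media/smb
# mount.smbfs //winbox/Documents /media/smb -o username=alice,password=secret,workgroup=OFFICE,iocharset=iso8859-9,ro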

About the options:
ip is optional.
iocharset is optional, but I strongly recommend setting it explicitly.
ro means read-only; you can also use rw for read-write, but I do not suggest rw on NTFS. As far as I know, writing to NTFS is still experimental.