Friday, December 16, 2011

Logging request parameters in CodeIgniter

The application I'm currently working on provides services to mobile devices.  As such, I constantly need to check what values the devices are POST-ing to the application.  Instead of adding a logging call to every controller function, I decided to implement a hook in the application.

CodeIgniter supports hooks out of the box (see the hooks section of the CodeIgniter user guide for details).  Here's my hook class that performs the logging:

class Logger {

    private $CI;

    public function __construct() {
        // Grab a reference to the CodeIgniter super object
        $this->CI =& get_instance();
    }

    public function request_logger() {
        // The URI that was requested, e.g. services/promos/add_comment
        $uri = $this->CI->uri->uri_string();

        // Dump all POST parameters into a readable string
        $params = trim(print_r($this->CI->input->post(), TRUE));

        log_message('info', '==============');
        log_message('info', 'URI: ' . $uri);
        log_message('info', '--------------');
        log_message('info', $params);
        log_message('info', '==============');
    }
}
To enable hooks, set $config['enable_hooks'] = TRUE; in config/config.php (if it isn't already), then register the hook in the config/hooks.php file.  Here's the hooks.php entry that enables the Logger class above:
$hook['post_controller_constructor'] = array(
    'class'    => 'Logger',
    'function' => 'request_logger',
    'filename' => 'Logger.php',
    'filepath' => 'hooks'
);

I chose "post_controller_constructor" as it is called after the controller is instantiated but before any of its methods are executed.

The output below shows how the entries look in the application logs:
INFO  - 2011-12-15 05:09:24 --> URI: services/promos/add_comment
INFO  - 2011-12-15 05:09:24 --> Array
(
    [param1] => value1
    [id] => dj3243hasdgasdg
    [msg] => Testing testing
)

Copying a running VM to external host

A friend had a small incident the other day where KVM/QEMU seg-faulted and lost communication with a running VM.  The VM was still running, but it was no longer writing to persistent storage.  We decided to do a live backup of the running VM.  Although this is definitely NOT recommended, we were left with no other choice as shutting down the VM would have meant losing all changes made. 

The VM was running Linux with non-LVM partitions.

To start, run the following command as root in the VM to make a copy of the drive and send it over SSH to an external host:
dd if=/dev/sda bs=1k conv=sync,noerror | gzip -c | ssh -c blowfish myuser@extern.host "dd of=/images/damaged-vm.gz bs=1k"
Once that's done, log in to the host the image was transferred to and decompress it:
# gunzip damaged-vm.gz
You can try to either boot up the image via virsh or mount it via loopback to copy any files.  We chose the latter. 

Run the following commands as root.

Find the first unused loop device:
# losetup -f
/dev/loop0
Now we setup the loop device against the image that was transferred:
# losetup /dev/loop0 /images/damaged-vm
Use kpartx to map the partitions:
# kpartx -av /dev/loop0
add map loop0p1 : 0 29333504 linear /dev/loop0 2048
add map loop0p5 : 0 1380352 linear /dev/loop0 29337600
Proceed to mount the partitions:
# mount /dev/mapper/loop0p5 /mnt/p5
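When you're done copying files out, the mappings can be torn down again (assuming the same device and mount point as above):
# umount /mnt/p5
# kpartx -dv /dev/loop0
# losetup -d /dev/loop0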


Thursday, December 15, 2011

Using mysqldump to export CSV file

By default, mysqldump outputs SQL table dumps.  If you ever need to export a table (or even a database) in CSV format using only mysqldump, here's the quick and easy way without using any additional clients:
mysqldump -u root -p --fields-terminated-by="," --fields-enclosed-by="" --fields-escaped-by="" --no-create-db --no-create-info --tab="." information_schema CHARACTER_SETS
mysqldump will generate two files (the file names are based on the table name):
  • CHARACTER_SETS.txt
  • CHARACTER_SETS.sql
The actual CSV output is in the .txt file.  Here's the output of the command above (trimmed for brevity):
big5,big5_chinese_ci,Big5 Traditional Chinese,2
dec8,dec8_swedish_ci,DEC West European,1
cp850,cp850_general_ci,DOS West European,1
hp8,hp8_english_ci,HP West European,1
koi8r,koi8r_general_ci,KOI8-R Relcom Russian,1
latin1,latin1_swedish_ci,cp1252 West European,1
latin2,latin2_general_ci,ISO 8859-2 Central European,1
[..]
If you get the following error when running the command, point --tab at a location that the "mysql" user (or whichever user the MySQL server process runs as) can write to (e.g. /tmp), as shown after the error below.

mysqldump: Got error: 1: Can't create/write to file '/home/mike/CHARACTER_SETS.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'
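For example, re-running the dump with --tab pointed at /tmp (which the server can usually write to) gets around the error:
mysqldump -u root -p --fields-terminated-by="," --fields-enclosed-by="" --fields-escaped-by="" --no-create-db --no-create-info --tab="/tmp" information_schema CHARACTER_SETS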
Now for a description of the options used:
  • --fields-terminated-by: String to use to terminate fields/columns.
  • --fields-enclosed-by: String used to enclose the field values.  I set it to nothing (no enclosing quotes) as that suits my needs.
  • --fields-escaped-by: Character used to escape special characters, e.g. tabs, nulls and backspace.  See the MySQL documentation for more info on escape sequences.
  • --no-create-db: Do not print DB creation SQL.
  • --no-create-info: Do not print table creation SQL.
For more comprehensive info on the mysqldump command, see the mysqldump page in the MySQL reference manual.

Monday, December 12, 2011

Using logback to log to a syslog server

In my previous posts, we configured an rsyslogd server to accept remote connections via TCP/UDP, as well as an rsyslogd instance that writes to a remote rsyslogd server.  We'll now take this further by logging to that rsyslogd server from Java using the logback library (http://logback.qos.ch/). 

Besides the logback JAR files, you'll need to grab the latest SLF4J libraries from http://slf4j.org/ as well.  Below are the JAR files required to get logback working:
  • logback-core-1.0.0.jar
  • logback-classic-1.0.0.jar
  • slf4j-api-1.6.4.jar
Pay attention to the version of SLF4J you're using.  If you get the following exception when running your program, you're most likely using an older SLF4J library:
SLF4J: The requested version 1.6 by your slf4j binding is not compatible with [1.5.5, 1.5.6]
SLF4J: See http://www.slf4j.org/codes.html#version_mismatch for further details.
Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.helpers.MessageFormatter.arrayFormat(Ljava/lang/String;[Ljava/lang/Object;)Lorg/slf4j/helpers/FormattingTuple;
    at ch.qos.logback.classic.spi.LoggingEvent.<init>(LoggingEvent.java:114)
    at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:468)
    at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:424)
    at ch.qos.logback.classic.Logger.info(Logger.java:628)
    at com.test.RemoteTest.main(RemoteTest.java:11)
On to the logback.xml configuration file.  It's a very basic configuration with just two appenders, STDOUT and SYSLOG; the SYSLOG appender is the one we're interested in.
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder
            by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
        <syslogHost>myhost</syslogHost>
        <facility>USER</facility>
        <suffixPattern>[%thread] %logger %msg</suffixPattern>
    </appender>

    <root level="debug">
        <appender-ref ref="SYSLOG"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
  • syslogHost - syslog server host to log to
  • facility - identify the source of the message
  • suffixPattern - format of the log message
More info can be obtained in logback's manual on appenders: http://logback.qos.ch/manual/appenders.html#SyslogAppender.

The Java class to test the appenders:
package com.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RemoteTest {
    static final Logger logger = LoggerFactory.getLogger(RemoteTest.class);
   
    public static void main(String[] args) {
       
        logger.info("hello world");
    }
}
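To compile and run the test class with the three JARs on the classpath, something along these lines should do (a sketch; adjust the JAR paths, and keep logback.xml on the classpath, e.g. in the current directory):
$ javac -cp slf4j-api-1.6.4.jar:logback-classic-1.0.0.jar:logback-core-1.0.0.jar com/test/RemoteTest.java
$ java -cp .:slf4j-api-1.6.4.jar:logback-classic-1.0.0.jar:logback-core-1.0.0.jar com.test.RemoteTest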
The rsyslogd host should have the following entry once you run the RemoteTest class:
Dec 12 16:16:37 my-noteb [main] com.test.RemoteTest hello world

rsyslog: Logging to remote server

Now that we've set up the rsyslogd server to accept incoming connections (NOTE: UDP somehow didn't work for me; I had to configure rsyslogd to listen on TCP instead), we can configure the "client" rsyslogd instance to log to a remote server.

In the "client" server, we'll need to edit the same configuration file /etc/rsyslog.conf.  I've added the highlighted lines below:
[...]
###############
#### RULES ####
###############

# Log all messages to this rsyslogd host
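# A single @ would forward over UDP; the double @@ forwards over TCP (which is what we set up earlier)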
*.* @@myhost:514

#
# First some standard log files.  Log by facility.
#
auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
[...]
The rest of the config file remains unchanged.  Reload/restart the service once we're done:
myclient:~# /etc/init.d/rsyslog reload
Reloading enhanced syslogd: rsyslogd.
We can now use the logger command to send log messages to the syslog.  Here's what I used:
# logger -t CLIENT_TEST "This is a test to test the test"
The following entry should be logged in myhost's /var/log/syslog file:
Dec 12 15:17:52 myhost CLIENT_TEST: This is a test to test the test

rsyslog: Enabling remote logging service in Ubuntu

Newer versions of Ubuntu (since 9.10, according to the rsyslog wiki: http://wiki.rsyslog.com/index.php/Ubuntu) come with rsyslog instead of sysklogd.  I was trying to enable remote logging the sysklogd way by adding the "-r" option in the startup script.  That obviously didn't work :)

What you'll need to do is just uncomment 2 lines in the /etc/rsyslog.conf file:
# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
That's if you want to provide UDP syslog service.  Uncomment the following 2 lines if you want to provide TCP syslog service:
# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
Once you've made the changes, either reload or restart the rsyslogd service:
myhost:/etc# /etc/init.d/rsyslog reload
Reloading enhanced syslogd: rsyslogd.
I chose to enable UDP for my server.  We'll use netstat to check whether rsyslogd is listening on the specified port:
myhost:/etc# netstat -tlnup | grep 514
udp        0      0 0.0.0.0:514             0.0.0.0:*                           13282/rsyslogd
udp6       0      0 :::514                  :::*                                13282/rsyslogd
rsyslogd is indeed listening on the expected port and protocol. 
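If you want to fire a quick test datagram at the listener from another machine before setting up a real client, something like this should do the trick (a sketch; assumes netcat is installed, and <14> is just a user.info priority value):
$ echo "<14>nc-test: hello rsyslog" | nc -u -w1 myhost 514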

Next stop, actual logging and probably more configuration :)

Thursday, December 8, 2011

CSS3 Generator

I'm currently implementing (or rather trying to implement :P) a site with HTML5 and CSS3.  I found a very nifty site which helps in generating cross-browser CSS3 effects, e.g. shadows, border radius, etc.:

http://css3generator.com/

Give it a try and see if it helps you as much as it helped me :)

Barebones JSF2 Eclipse project

A quick look at the minimal libraries required to get a JSF2 project running in Eclipse (it should work everywhere else too):
  • com.springsource.javax.servlet.jsp-2.1.0.jar ==> Obtained from Spring framework dependencies package
  • com.springsource.javax.servlet.jsp.jstl-1.1.2.jar ==> Obtained from Spring framework dependencies package
  • javax.faces.jar ==> Obtained from http://javaserverfaces.java.net/.  Downloaded Mojarra 2.1 release.  Available in Spring framework dep package, but it's an older version (1.2).
  • jstl-impl-1.2.jar  ==> Obtained from http://jstl.java.net/.  Available in Spring framework dep package, but it's an older version (1.1.2).
You can start developing a JSF2 app with just these 4 files.  I have tried the following:
  • Facelets (xmlns:ui="http://java.sun.com/jsf/facelets")
  • HTML elements (xmlns:h="http://java.sun.com/jsf/html")
  • Core elements (xmlns:f="http://java.sun.com/jsf/core")
I used Apache Tomcat 7, but I'm sure this will work with other app servers/servlet containers as well.  

Setting up Hadoop in clustered mode in Ubuntu

Overview

This entry details the steps I took to setup Hadoop in a clustered setup in Ubuntu 11.10.  Hadoop version 0.20.205.0 was used to setup the environment.  The Hadoop cluster consists of 3 servers/nodes:
  • node616 ==> namenode, tasktracker, datanode, jobtracker, secondarynamenode
  • node617 ==> datanode, tasktracker
  • node618 ==> datanode, tasktracker
In an actual production setup, the namenode shouldn't also act as a datanode, jobtracker and secondarynamenode.  But for the purpose of this setup, things will be simplified :)


Server setup

Ensure that the /etc/hosts file on all servers is updated properly.  All my servers have the following entries:
192.168.1.1    node616
192.168.1.2    node617
192.168.1.3    node618
This is to ensure that the configuration files stay the same in all servers.

The following directories must be created beforehand on every node to store Hadoop-related data (a quick way to create them is sketched after the list):

  • /opt/hdfs/cache    ==> HDFS cache storage
  • /opt/hdfs/data    ==> HDFS data node storage
  • /opt/hdfs/name    ==> HDFS name node storage
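For example (a sketch; chown to whichever user will run the Hadoop daemons, here assumed to be a hadoop user):
# mkdir -p /opt/hdfs/cache /opt/hdfs/data /opt/hdfs/name
# chown -R hadoop:hadoop /opt/hdfs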


SSH setup

Before we proceed to the actual setup, the user running Hadoop must be able to ssh to the servers without a passphrase.  Test this by issuing the following command:
$ ssh node616
If it prompts for a password, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The public key also needs to be copied to all data nodes/slaves once they're set up in a later stage (see the sketch below).
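When the slaves are ready, one easy way to copy the key over is ssh-copy-id, run as the same user on the namenode (you'll be prompted for the remote password once per node):
$ ssh-copy-id -i ~/.ssh/id_dsa.pub node617
$ ssh-copy-id -i ~/.ssh/id_dsa.pub node618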


Namenode setup

Obtain the Hadoop binary distribution from the main site (http://hadoop.apache.org) and extract it to a location on the server.  I've used /opt/hadoop for all installations. 

The extracted directory contents should look like the output below:
node616:/opt/hadoop# ls -l
total 7144
drwxr-xr-x  2 root root    4096 2011-11-25 16:58 bin
-rw-rw-r--  1 root root  112062 2011-10-07 14:19 build.xml
drwxr-xr-x  4 root root    4096 2011-10-07 14:24 c++
-rw-rw-r--  1 root root  433928 2011-10-07 14:19 CHANGES.txt
drwxr-xr-x  2 root root    4096 2011-11-30 12:23 conf
drwxr-xr-x 11 root root    4096 2011-10-07 14:19 contrib
drwxr-xr-x  3 root root    4096 2011-10-07 14:20 etc
-rw-rw-r--  1 root root    6839 2011-10-07 14:19 hadoop-ant-0.20.205.0.jar
-rw-rw-r--  1 root root 3700955 2011-10-07 14:24 hadoop-core-0.20.205.0.jar
-rw-rw-r--  1 root root  142465 2011-10-07 14:19 hadoop-examples-0.20.205.0.jar
-rw-rw-r--  1 root root 2487116 2011-10-07 14:24 hadoop-test-0.20.205.0.jar
-rw-rw-r--  1 root root  287776 2011-10-07 14:19 hadoop-tools-0.20.205.0.jar
drwxr-xr-x  3 root root    4096 2011-10-07 14:20 include
drwxr-xr-x  2 root root    4096 2011-11-22 14:28 ivy
-rw-rw-r--  1 root root   10389 2011-10-07 14:19 ivy.xml
drwxr-xr-x  6 root root    4096 2011-11-22 14:28 lib
drwxr-xr-x  2 root root    4096 2011-11-22 14:28 libexec
-rw-rw-r--  1 root root   13366 2011-10-07 14:19 LICENSE.txt
drwxr-xr-x  4 root root    4096 2011-12-07 12:10 logs
-rw-rw-r--  1 root root     101 2011-10-07 14:19 NOTICE.txt
drwxr-xr-x  4 root root    4096 2011-11-29 10:36 out
-rw-rw-r--  1 root root    1366 2011-10-07 14:19 README.txt
drwxr-xr-x  2 root root    4096 2011-11-22 14:28 sbin
drwxr-xr-x  4 root root    4096 2011-10-07 14:20 share
drwxr-xr-x  9 root root    4096 2011-10-07 14:19 webapps
Navigate to the conf directory and edit the core-site.xml file.  The default file should look like the following:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
</configuration>
Now we'll have to add 2 properties to make this a clustered setup: 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://node616:9000</value>
     </property>

     <property>
         <name>hadoop.tmp.dir</name>
         <value>/opt/hdfs/cache</value>
     </property>
</configuration>
  • fs.default.name ==> Sets the default file system name.  Since we're setting up a clustered environment, we'll point this at the namenode's hostname and port, which in this case is the current machine.
  • hadoop.tmp.dir ==> A base for other temporary directories.  It points to /tmp by default, but the Linux /tmp mount point is usually quite small and caused problems for me.  The following exception was thrown if I did not explicitly set this property:
java.io.IOException: File /user/root/testfile could only be replicated to 0 nodes, instead of 1
For more properties, please consult the following URL: http://hadoop.apache.org/common/docs/current/core-default.html
   
Next comes the hdfs-site.xml file, which we'll customize like the following:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
        <property>
                <name>dfs.name.dir</name>
                <value>/opt/hdfs/name</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/opt/hdfs/data</value>
        </property>
</configuration>
  • dfs.replication ==> Default block replication.  The actual number of replications can be specified when the file is created; the default is used if replication is not specified at create time.  Since we only have one node for now, we'll set it to 1 for the time being.
  • dfs.name.dir ==> Determines where on the local filesystem the DFS name node should store the name table (fsimage).  If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.
  • dfs.data.dir ==> Determines where on the local filesystem a DFS data node should store its blocks.  If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.  Directories that do not exist are ignored.
More configuration parameters here: http://hadoop.apache.org/common/docs/current/hdfs-default.html

Lastly, we come to the MapReduce site configuration file, mapred-site.xml.  The output below shows the updated version:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>node616:9001</value>
     </property>
</configuration>
  • mapred.job.tracker ==> The host and port that the MapReduce JobTracker runs at; here it points at node616 on port 9001.
Edit the masters file and change the localhost value to node616.  Ditto for the slaves file.  By default, the host is set to localhost in both files; however, since we're using proper host names, it's better to update the entries so that the same files can be used on all master and slave nodes.

One last thing before starting up the service is to initialize the HDFS namenode directory.  Execute the following command:
node616:/opt/hadoop/bin$ ./hadoop namenode -format
Everything should be configured correctly :)  We can run Hadoop by going into the bin directory:
node616:/opt/hadoop/bin$ ./start-all.sh
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-node616.out
Warning: $HADOOP_HOME is deprecated.
node616: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-node616.out
node616: Warning: $HADOOP_HOME is deprecated.
node616:
node616: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-node616.out
node616: Warning: $HADOOP_HOME is deprecated.
node616:
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-node616.out
Warning: $HADOOP_HOME is deprecated.
node616: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-node616.out
node616: Warning: $HADOOP_HOME is deprecated.
node616:
A quick check via ps:
hadoop     29004 28217  0 09:31 pts/0    00:00:07 /usr/bin/java -Dproc_jar -Xmx256m -Dhadoop.log.dir=/opt/hadoop/libexec/../logs -Dhadoop.log.file=hadoop.log -Dhadoop.hom
hadoop     30630     1  1 16:07 pts/0    00:00:02 /usr/bin/java -Dproc_namenode -Xmx1000m -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.ssl=false -Dc
hadoop     30743     1  3 16:07 ?        00:00:04 /usr/bin/java -Dproc_datanode -Xmx1000m -server -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.ssl=f
hadoop     30858     1  1 16:07 ?        00:00:01 /usr/bin/java -Dproc_secondarynamenode -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.
hadoop     30940     1  2 16:07 pts/0    00:00:02 /usr/bin/java -Dproc_jobtracker -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.log.dir
hadoop     31048     1  2 16:07 ?        00:00:03 /usr/bin/java -Dproc_tasktracker -Xmx1000m -Dhadoop.log.dir=/opt/hadoop/libexec/../logs -Dhadoop.log.file=hadoop-hadoop-ta
Now that we can see all processes are running, go ahead and visit the following URLs:
  • http://node616:50030 ==> Map/Reduce admin
  • http://node616:50070 ==> NameNode admin   
Let's try copying some files over to the HDFS:
node616:/opt/hadoop/bin$ ./hadoop fs -copyFromLocal 20m.log .
And let's see if it's there:
node616:~$ hadoop fs -ls
Found 1 items
-rw-r--r--   3 hadoop supergroup 5840878894 2011-11-29 09:21 /user/hadoop/20m.log
So far so good :)

Once you're done, shut down the Hadoop processes by executing stop-all.sh:
node616:/opt/hadoop/bin# ./stop-all.sh
stopping jobtracker
node616: stopping tasktracker
stopping namenode
node616: stopping datanode
node616: stopping secondarynamenode


Data nodes/slaves

Now that the namenode is up, we can proceed to set up our slaves. 

Since we know that we'll have an additional two servers, we can add those entries into the conf/slaves file:
node616
node617
node618
If there's a need to add more in the future, slave nodes can be added dynamically.

Edit the hdfs-site.xml file and change the dfs.replication value from 1 to 3.  This ensures that the data blocks are replicated to 3 nodes (which is actually the default value).

Next, tar up the entire Hadoop directory on the namenode by executing the following command:
node616:/opt$ tar czvf hadoop.tar.gz hadoop
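Transfer the tarball to the other servers (i.e. node617 and node618) and untar it there.  For example (a sketch, assuming the same /opt layout and the same user account on every node):
node616:/opt$ scp hadoop.tar.gz node617:/opt/
node616:/opt$ scp hadoop.tar.gz node618:/opt/
node616:/opt$ ssh node617 "cd /opt && tar xzvf hadoop.tar.gz"
node616:/opt$ ssh node618 "cd /opt && tar xzvf hadoop.tar.gz"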
Make sure the /opt/hdfs directories have been created on node617 and node618 as well.  Once the package has been extracted on both slaves, go back to the namenode (node616) and execute the start-all.sh script.  It should output the following:
node616:/opt/hadoop/bin# ./start-all.sh
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-node616.out
Warning: $HADOOP_HOME is deprecated.
node617: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-node617.out
node616: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-node616.out
node618: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-node618.out
node617: Warning: $HADOOP_HOME is deprecated.
node617:
node616: Warning: $HADOOP_HOME is deprecated.
node616:
node618: Warning: $HADOOP_HOME is deprecated.
node618:
node616: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-node616.out
node616: Warning: $HADOOP_HOME is deprecated.
node616:
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-node616.out
Warning: $HADOOP_HOME is deprecated.
node618: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-node618.out
node617: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-node617.out
node616: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-node616.out
node618: Warning: $HADOOP_HOME is deprecated.
node618:
node617: Warning: $HADOOP_HOME is deprecated.
node617:
node616: Warning: $HADOOP_HOME is deprecated.
node616:
Notice that the script now remotely starts the datanode and tasktracker services on the slave nodes as well.  Visit the NameNode admin at http://node616:50070 to confirm the number of live nodes in the cluster.
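The live datanode count can also be checked from the command line with dfsadmin, which reports each datanode along with its capacity and usage:
node616:/opt/hadoop/bin$ ./hadoop dfsadmin -report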


Stopping/Starting Services in a node

To stop or start a specific service on just one node, use the bin/hadoop-daemon.sh script.  As an example, to stop the datanode and tasktracker processes on node618, I'd run the following on node618 itself:

/opt/hadoop/bin/hadoop-daemon.sh stop datanode
/opt/hadoop/bin/hadoop-daemon.sh stop tasktracker
To start them up, simply substitute "stop" with "start" in the commands above.



Tuesday, December 6, 2011

Connecting to remote MySQL via SSH in Ubuntu

I had to access my MySQL server through an SSH tunnel from my Ubuntu desktop machine.  First up, set up ssh to tunnel the server's MySQL port (default 3306) to port 13306 on the desktop:

ssh mylogin@myserver.com -p 4265 -L 13306:127.0.0.1:3306
However, when I tried accessing port 13306 on my Ubuntu desktop, it failed:

$ mysql -u root -p -P 13306
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
It turns out that when the host is localhost, the mysql client connects over the local socket instead of TCP/IP, so the tunnel was never used.  To overcome this, I had to use the --host option with 127.0.0.1 (passing --protocol=TCP is another way to force a TCP connection):
$ mysql -u root -p -P 13306 --host 127.0.0.1
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 115946
Server version: 5.1.49-3 (Debian)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> 

That took me a while to figure out :)





Friday, November 4, 2011

Commercial Use Free Fonts

I stumbled upon an awesome website offering commercial-use free fonts:


It also provides @font-face kits (CSS3) where you can upload a font file and it'll generate the necessary files for you to use in your CSS3 styles.

Awesome!!!!

Thursday, October 6, 2011

Installing Open Nebula 3.0 on Ubuntu 10.04


  • Downloaded the .deb file from Open Nebula website (http://dev.opennebula.org/packages/opennebula-3.0.0/Ubuntu-10.04/opennebula_3.0.0-1_amd64.deb)
  • Installed it, but it gave some dependency problems, hence I had to do the following steps:
dpkg -i opennebula_3.0.0-1_amd64.deb
apt-get install libmysqlclient16 libxmlrpc-c3
apt-get -f install 
  • Once that's installed, the oned daemon will be started automatically.  
  • Login as oneadmin and export the ONE_AUTH variable as follows:
export ONE_AUTH=/var/lib/one/auth
  • You should be able to access the one* commands as oneadmin.
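  • A quick sanity check is to list the hosts and VMs OpenNebula knows about (both lists will simply be empty on a fresh install):
$ onehost list
$ onevm list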