Kafka – Insights (Managing under-replicated topic alerts)

Apache Kafka is a distributed, fault-tolerant, queue-based messaging system. Its main components are the producer, consumer, topic, ZooKeeper, and the Kafka broker. In this post we will walk through the workflow of Kafka's publish-subscribe system and some tips and tweaks we can configure in order to maintain it well.

Now let's discuss how Kafka works internally (a minimal producer/consumer sketch in Java follows the list below):

  1. Producers send messages to a topic at regular intervals.
  2. Kafka broker stores all messages in the partitions configured for that particular topic. It ensures the messages are equally shared between partitions. If the producer sends two messages and there are two partitions, Kafka will store one message in the first partition and the second message in the second partition.
    We usually set the number of partitions and the replication factor while creating a Kafka topic:
    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic-name
  3. Consumer subscribes to a specific topic.
  4. Once the consumer subscribes to a topic, Kafka provides the current offset of the topic to the consumer and also saves that offset in the ZooKeeper ensemble.
  5. The consumer polls Kafka at a regular interval (e.g. every 100 ms) for new messages.
  6. Once Kafka receives the messages from producers, it forwards these messages to the consumers.
  7. Consumer will receive the message and process it.
  8. Once the messages are processed, consumer will send an acknowledgement to the Kafka broker.
  9. Once Kafka receives an acknowledgement, it changes the offset to the new value and updates it in ZooKeeper. Since offsets are maintained in ZooKeeper, the consumer can read the next message correctly even during server outages.
  10. The above flow repeats until the consumer stops requesting messages.
  11. Consumer has the option to rewind/skip to the desired offset of a topic at any time and read all the subsequent messages.
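
To make the flow above concrete, here is a minimal sketch of a Java producer and consumer. It is only an illustration, assuming the kafka-clients library (2.0+ for the Duration-based poll); the bootstrap server, topic name, and group id are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimplePubSub {
    public static void main(String[] args) {
        // Producer: publish one message to the topic
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092");
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("topic-name", "key-1", "hello kafka"));
        }

        // Consumer: subscribe to the topic and poll for new messages
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "demo-group");
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(Collections.singletonList("topic-name"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset=" + record.offset() + " value=" + record.value());
            }
        }
    }
}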

Now consider a Kafka cluster of 4 nodes and a replication factor of 3 for a topic. As we know, Kafka maintains the list of in-sync replicas (ISR) based on two configurations:

  1. replica.lag.max.messages : if a replica falls behind the leader by more than N messages, that replica is removed from the ISR and an alert is raised for the topic being under-replicated.
  2. replica.lag.time.max.ms : if a follower (replica) has not sent a fetch request to the leader within the configured time, that replica is removed from the ISR (in-sync replicas) and an under-replicated topic alert pops up.

Sometimes in production we face a lot of these alerts, which is not a good sign, and there are various reasons why a node or replica may be slow or down.
So the ideal practice to minimize noisy alerts is to set only replica.lag.time.max.ms and avoid setting replica.lag.max.messages.
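
To see which partitions are currently under-replicated, one option is the sketch below using the Kafka AdminClient from the Java client library; it is an illustration only, and the bootstrap server and topic name are placeholders.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, TopicDescription> topics =
                    admin.describeTopics(Collections.singletonList("topic-name")).all().get();

            for (TopicDescription description : topics.values()) {
                for (TopicPartitionInfo partition : description.partitions()) {
                    // A partition is under-replicated when its ISR is smaller than its replica set
                    if (partition.isr().size() < partition.replicas().size()) {
                        System.out.println("Under-replicated: " + description.name()
                                + "-" + partition.partition());
                    }
                }
            }
        }
    }
}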

Thanks to Neha Narkhede (https://www.confluent.io/blog/author/neha-narkhede/) for her awesome blog.

Reference :
https://www.confluent.io/blog/hands-free-kafka-replication-a-lesson-in-operational-simplicity/

 


HBase shell commands

  1. describe 'tablename' :
    Displays the metadata of a particular table.
    Example :

    describe 'employee'

    {NAME => 'address', BLOOMFILTER => 'ROW', VERSIONS => '5', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
    {NAME => 'personal_info', BLOOMFILTER => 'ROW', VERSIONS => '2', IN_MEMORY => 'true', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
    {NAME => 'professional_info', BLOOMFILTER => 'ROW', VERSIONS => '3', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'GZ', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
  2. disable 'employee' :
    Always disable a table before performing any DDL operation on it.
  3. alter 'employee', {NAME => 'col_fam1', COMPRESSION => 'GZ'} :
    With the alter command you can add column families on the fly as well as set different parameters on a column family, such as storing it IN_MEMORY, setting compression for a particular column family, setting the number of versions, etc.
  4. list :
    Lists all the tables stored in HBase.
    Example :
    list
    ["employee", "table1", "user_hcat_load_table"]
  5. enable_all 't.*' :
    Enables all the tables matching the regex.
  6. exists 'student' :
    Checks whether the student table exists or not.
  7. show_filters :
    Shows all the available filters in HBase.
  8. alter_status :
    Gets the status of the alter command. Indicates the number of regions of the table that have received the updated schema. Pass the table name.
  9. alter_async :
    Alters the column family schema without waiting for all regions to receive the
    schema changes. Pass the table name and a dictionary specifying the new column
    family schema. Dictionaries are described in the main help command output.
    The dictionary must include the name of the column family to alter.
    To change or add the 'f1' column family in table 't1' from defaults
    to instead keep a maximum of 5 cell VERSIONS, do:

    hbase> alter_async 't1', NAME => 'f1', VERSIONS => 5

    To delete the 'f1' column family in table 't1', do:

    hbase> alter_async 't1', NAME => 'f1', METHOD => 'delete'

    or a shorter version:

    hbase> alter_async 't1', 'delete' => 'f1'

    You can also change table-scope attributes like MAX_FILESIZE,
    MEMSTORE_FLUSHSIZE, READONLY, and DEFERRED_LOG_FLUSH.

    For example, to change the max size of a family to 128MB, do:

    hbase> alter 't1', METHOD => 'table_att', MAX_FILESIZE => '134217728'

    There could be more than one alteration in one command:

    hbase> alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}

    To check if all the regions have been updated, use alter_status <table_name>.

  10. count :
    Counts the number of rows in a table. The count interval is 1000 rows by default; you can increase the interval as well as set scan caching on the count scan.
    hbase> count 't1', INTERVAL => 100000
    hbase> count 't1', CACHE => 1000
    hbase> count 't1', INTERVAL => 10, CACHE => 1000

  11. delete :
    Put a delete cell value at specified table/row/column and optionally
    timestamp coordinates. Deletes must match the deleted cell's
    coordinates exactly. When scanning, a delete cell suppresses older
    versions. To delete a cell from 't1' at row 'r1' under column 'c1'
    marked with the time 'ts1', do:

    hbase> delete 't1', 'r1', 'c1', ts1
  12. deleteall :
    Delete all cells in a given row; pass a table name, row, and optionally
    a column and timestamp. Examples:

    hbase> deleteall 't1', 'r1'
    hbase> deleteall 't1', 'r1', 'c1'
    hbase> deleteall 't1', 'r1', 'c1', ts1

  13. get :
    Get row or cell contents; pass table name, row, and optionally
    a dictionary of column(s), timestamp, timerange and versions.
    Examples:
    hbase> get 't1', 'r1'
    hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
    hbase> get 't1', 'r1', {COLUMN => 'c1'}
    hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
    hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
    hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
    hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
    hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
    hbase> get 't1', 'r1', 'c1'
    hbase> get 't1', 'r1', 'c1', 'c2'
    hbase> get 't1', 'r1', ['c1', 'c2']
  14. put :
    Put a cell 'value' at specified table/row/column and optionally
    timestamp coordinates. To put a cell value into table 't1' at
    row 'r1' under column 'c1' marked with the time 'ts1', do:

    hbase> put 't1', 'r1', 'c1', 'value', ts1

  15. scan :
    Scan a table; pass table name and optionally a dictionary of scanner
    specifications. Scanner specifications may include one or more of:
    TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
    COLUMNS, or CACHE. If no columns are specified, all columns will be scanned.
    To scan all members of a column family, leave the qualifier empty as in
    'col_family:'. The filter can be specified in two ways:
    1. Using a filterString – more information on this is available in the
    Filter Language document attached to the HBASE-4176 JIRA
    2. Using the entire package name of the filter.

    Some examples:
    hbase> scan '.META.'
    hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
    hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
    hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
    hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND
    (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
    hbase> scan 't1', {FILTER =>
    org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
  16. truncate :
    Disables, drops and recreates the specified table.
    Example:
    hbase> truncate 't1'

    Reference :
    https://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/

    https://www.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hbase_filtering.html

 

 

Priority Queue – Data Structure

We know that a Queue follows the First-In-First-Out model, but sometimes we need to process the objects in the queue based on priority. That is when Java's PriorityQueue is used.

For example, let's say we have an application that generates stock reports for a daily trading session. This application processes a lot of data and takes time to process it. Customers send requests to the application that effectively get queued, but we want to process premium customers before standard customers. In this case the PriorityQueue implementation in Java can be really helpful.

PriorityQueue is an unbounded queue based on a priority heap and the elements of the priority queue are ordered by default in natural order. We can provide a Comparator for ordering at the time of instantiation of priority queue.

Java's PriorityQueue doesn't allow null values and we can't create a PriorityQueue of objects that are non-comparable. We use Java's Comparable and Comparator interfaces for sorting objects, and a PriorityQueue uses them for priority processing of its elements.

The simplest way to implement a priority queue data type is to keep an associative array mapping each priority to a list of elements with that priority. If association lists or hash tables are used to implement the associative array, adding an element takes constant time but removing or peeking at the element of highest priority takes linear (O(n)) time, because we must search all keys for the largest one. If a self-balancing binary search tree is used, all three operations take O(log n) time; this is a popular solution in environments that already provide balanced trees but nothing more sophisticated.

There are a number of specialized heap data structures that either supply additional operations or outperform the above approaches. The binary heap uses O(log n) time for both operations, but allows peeking at the element of highest priority without removing it in constant time. Binomial heaps add several more operations, but require O(log n) time for peeking. Fibonacci heaps can insert elements, peek at the maximum priority element, and decrease an element’s priority in amortized constant time (deletions are still O(log n)).

 

// BinaryHeap class
//
// CONSTRUCTION: empty or with initial array.
//
// ******************PUBLIC OPERATIONS*********************
// void insert( x )       –> Insert x
// Comparable deleteMin( )–> Return and remove smallest item
// Comparable findMin( )  –> Return smallest item
// boolean isEmpty( )     –> Return true if empty; else false
// void makeEmpty( )      –> Remove all items
// ******************ERRORS********************************
// Throws UnderflowException for findMin and deleteMin when empty

/**
* Implements a binary heap.
* Note that all “matching” is based on the compareTo method.
* @author Bhavesh Gadoya
*/
public class BinaryHeap implements PriorityQueue {
/**
* Construct the binary heap.
*/
public BinaryHeap( ) {
currentSize = 0;
array = new Comparable[ DEFAULT_CAPACITY + 1 ];
}

/**
* Construct the binary heap from an array.
* @param items the inital items in the binary heap.
*/
public BinaryHeap( Comparable [ ] items ) {
currentSize = items.length;
array = new Comparable[ items.length + 1 ];

for( int i = 0; i < items.length; i++ )
array[ i + 1 ] = items[ i ];
buildHeap( );
}

/**
* Insert into the priority queue.
* Duplicates are allowed.
* @param x the item to insert.
* @return null, signifying that decreaseKey cannot be used.
*/
public PriorityQueue.Position insert( Comparable x ) {
if( currentSize + 1 == array.length )
doubleArray( );

// Percolate up
int hole = ++currentSize;
array[ 0 ] = x;

for( ; x.compareTo( array[ hole / 2 ] ) < 0; hole /= 2 )
array[ hole ] = array[ hole / 2 ];
array[ hole ] = x;

return null;
}

/**
* @throws UnsupportedOperationException because no Positions are returned
* by the insert method for BinaryHeap.
*/
public void decreaseKey( PriorityQueue.Position p, Comparable newVal ) {
throw new UnsupportedOperationException( "Cannot use decreaseKey for binary heap" );
}

/**
* Find the smallest item in the priority queue.
* @return the smallest item.
* @throws UnderflowException if empty.
*/
public Comparable findMin( ) {
if( isEmpty( ) )
throw new UnderflowException( "Empty binary heap" );
return array[ 1 ];
}

/**
* Remove the smallest item from the priority queue.
* @return the smallest item.
* @throws UnderflowException if empty.
*/
public Comparable deleteMin( ) {
Comparable minItem = findMin( );
array[ 1 ] = array[ currentSize-- ];
percolateDown( 1 );

return minItem;
}

/**
* Establish heap order property from an arbitrary
* arrangement of items. Runs in linear time.
*/
private void buildHeap( ) {
for( int i = currentSize / 2; i > 0; i-- )
percolateDown( i );
}

/**
* Test if the priority queue is logically empty.
* @return true if empty, false otherwise.
*/
public boolean isEmpty( ) {
return currentSize == 0;
}

/**
* Returns size.
* @return current size.
*/
public int size( ) {
return currentSize;
}

/**
* Make the priority queue logically empty.
*/
public void makeEmpty( ) {
currentSize = 0;
}

private static final int DEFAULT_CAPACITY = 100;

private int currentSize;      // Number of elements in heap
private Comparable [ ] array; // The heap array

/**
* Internal method to percolate down in the heap.
* @param hole the index at which the percolate begins.
*/
private void percolateDown( int hole ) {
int child;
Comparable tmp = array[ hole ];

for( ; hole * 2 <= currentSize; hole = child ) {
child = hole * 2;
if( child != currentSize &&
array[ child + 1 ].compareTo( array[ child ] ) < 0 )
child++;
if( array[ child ].compareTo( tmp ) < 0 )
array[ hole ] = array[ child ];
else
break;
}
array[ hole ] = tmp;
}

/**
* Internal method to extend array.
*/
private void doubleArray( ) {
Comparable [ ] newArray;

newArray = new Comparable[ array.length * 2 ];
for( int i = 0; i < array.length; i++ )
newArray[ i ] = array[ i ];
array = newArray;
}

// Test program
public static void main( String [ ] args ) {
int numItems = 10000;
BinaryHeap h1 = new BinaryHeap( );
Integer [ ] items = new Integer[ numItems - 1 ];

int i = 37;
int j;

for( i = 37, j = 0; i != 0; i = ( i + 37 ) % numItems, j++ ) {
h1.insert( new Integer( i ) );
items[ j ] = new Integer( i );
}

for( i = 1; i < numItems; i++ )
if( ((Integer)( h1.deleteMin( ) )).intValue( ) != i )
System.out.println( "Oops! " + i );

BinaryHeap h2 = new BinaryHeap( items );
for( i = 1; i < numItems; i++ )
if( ((Integer)( h2.deleteMin( ) )).intValue( ) != i )
System.out.println( "Oops! " + i );
}
}

// PriorityQueue interface
//
// ******************PUBLIC OPERATIONS*********************
// Position insert( x )   –> Insert x
// Comparable deleteMin( )–> Return and remove smallest item
// Comparable findMin( )  –> Return smallest item
// boolean isEmpty( )     –> Return true if empty; else false
// void makeEmpty( )      –> Remove all items
// int size( )            –> Return size
// void decreaseKey( p, v)–> Decrease value in p to v
// ******************ERRORS********************************
// Throws UnderflowException for findMin and deleteMin when empty

/**
* PriorityQueue interface.
* Some priority queues may support a decreaseKey operation,
* but this is considered an advanced operation. If so,
* a Position is returned by insert.
* Note that all “matching” is based on the compareTo method.
* @author Bhavesh Gadoya
*/
public interface PriorityQueue {
/**
* The Position interface represents a type that can
* be used for the decreaseKey operation.
*/
public interface Position {
/**
* Returns the value stored at this position.
* @return the value stored at this position.
*/
Comparable getValue( );
}

/**
* Insert into the priority queue, maintaining heap order.
* Duplicates are allowed.
* @param x the item to insert.
* @return may return a Position useful for decreaseKey.
*/
Position insert( Comparable x );

/**
* Find the smallest item in the priority queue.
* @return the smallest item.
* @throws UnderflowException if empty.
*/
Comparable findMin( );

/**
* Remove the smallest item from the priority queue.
* @return the smallest item.
* @throws UnderflowException if empty.
*/
Comparable deleteMin( );

/**
* Test if the priority queue is logically empty.
* @return true if empty, false otherwise.
*/
boolean isEmpty( );

/**
* Make the priority queue logically empty.
*/
void makeEmpty( );

/**
* Returns the size.
* @return current size.
*/
int size( );

/**
* Change the value of the item stored in the pairing heap.
* This is considered an advanced operation and might not
* be supported by all priority queues. A priority queue
* will signal its intention to not support decreaseKey by
* having insert return null consistently.
* @param p any non-null Position returned by insert.
* @param newVal the new value, which must be smaller
*    than the currently stored value.
* @throws IllegalArgumentException if p invalid.
* @throws UnsupportedOperationException if appropriate.
*/
void decreaseKey( Position p, Comparable newVal );
}

/**
* Exception class for access in empty containers
* such as stacks, queues, and priority queues.
* @author Bhavesh Gadoya
*/
public class UnderflowException extends RuntimeException {
/**
* Construct this exception object.
* @param message the error message.
*/
public UnderflowException( String message ) {
super( message );
}
}

 

Built-in Java implementation of PriorityQueue:

package com.journaldev.collections;

public class Customer {
	
	private int id;
	private String name;
	
	public Customer(int i, String n){
		this.id=i;
		this.name=n;
	}

	public int getId() {
		return id;
	}

	public String getName() {
		return name;
	}
	
}

We will use java random number generation to generate random customer objects. For natural ordering, I will use Integer that is also a java wrapper class.

Here is our final test code that shows how to use priority queue in java.

package com.journaldev.collections;

import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.Random;

public class PriorityQueueExample {

	public static void main(String[] args) {
		
		//natural ordering example of priority queue
		Queue<Integer> integerPriorityQueue = new PriorityQueue<>(7);
		Random rand = new Random();
		for(int i=0;i<7;i++){
			integerPriorityQueue.add(new Integer(rand.nextInt(100)));
		}
		for(int i=0;i<7;i++){
			Integer in = integerPriorityQueue.poll();
			System.out.println("Processing Integer:"+in);
		}
		
		//PriorityQueue example with Comparator
		Queue<Customer> customerPriorityQueue = new PriorityQueue<>(7, idComparator);
		addDataToQueue(customerPriorityQueue);
		
		pollDataFromQueue(customerPriorityQueue);
		
	}
	
	//Comparator anonymous class implementation
	public static Comparator<Customer> idComparator = new Comparator<Customer>(){
		
		@Override
		public int compare(Customer c1, Customer c2) {
            return (int) (c1.getId() - c2.getId());
        }
	};

	//utility method to add random data to Queue
	private static void addDataToQueue(Queue<Customer> customerPriorityQueue) {
		Random rand = new Random();
		for(int i=0; i<7; i++){
			int id = rand.nextInt(100);
			customerPriorityQueue.add(new Customer(id, "Pankaj "+id));
		}
	}
	
	//utility method to poll data from queue
	private static void pollDataFromQueue(Queue<Customer> customerPriorityQueue) {
		while(true){
			Customer cust = customerPriorityQueue.poll();
			if(cust == null) break;
			System.out.println("Processing Customer with ID="+cust.getId());
		}
	}

}

	

Asynchronous processing in java

Asynchronous programming is very popular these days, primarily because of its ability to improve the overall throughput on a multi-core system. Asynchronous programming is a programming paradigm that facilitates fast and responsive user interfaces. The asynchronous programming model in Java provides a consistent programming model to write programs that support asynchrony.

Asynchronous programming provides a non-blocking, event-driven programming model. This programming model leverages the multiple cores in your system to provide parallelization by using multiple CPU cores to execute the tasks, thus increasing the application’s throughput. Note that throughput is a measure of the amount of work done in unit time. In this programming paradigm, a unit of work would execute separately from the main application thread and notify the calling thread about its execution state: success, in progress or failure.

An application of asynchronous processing can be a situation where we want to execute multiple things in parallel without waiting for one task to finish, so that the throughput of the system increases. Consider that we want to send email to 100k+ users and at the same time need to process other data; we don't want to wait for the email task to complete before proceeding.

Another good example of this can be logging frameworks: You typically would want to log exceptions and errors into your log targets; in other words, file, database, or something similar. There is no point for your application to wait till the logging tasks are over. In doing so, the application’s responsiveness would be affected. On the contrary, if the call to the logging framework can be made asynchronously, the application can proceed with other tasks concurrently, without having to wait. This is an example of a non-blocking mode of execution.

1. Future is a base interface that defines the abstraction of an object which promises a result to be available in the future, while FutureTask is an implementation of the Future interface.

2. Future is a parametric (generic), type-safe interface, written as Future<V>, where V denotes the value type.

3. Future provides a get() method to retrieve the result; it is a blocking method and blocks until the result is available to the Future.

4. The Future interface also defines a cancel() method to cancel the task.

5. The isDone() and isCancelled() methods are used to query the Future task's state. isDone() returns true if the task is completed and the result is available to the Future; if you call get() after isDone() has returned true, it should return immediately. On the other hand, isCancelled() returns true if the task was cancelled before its completion.

6. Future has four sub interfaces, each with additional functionality e.g. Response, RunnableFuture, RunnableScheduledFuture and ScheduledFuture. RunnableFuture also implements Runnable and successful finish of run() method cause completion of this Future.

7. FutureTask and SwingWorker are two well known implementation of Future interface. FutureTask also implements RunnableFuture interface, which means this can be used as Runnable and can be submitted to ExecutorService for execution.

8. Most of the time the ExecutorService creates the FutureTask for you, i.e. when you submit() a Callable or Runnable object, but you can also create it manually.

9. FutureTask is normally used to wrap Runnable or Callable object and submit them to ExecutorService for asynchronous execution.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Java program to show how to use Future in Java. Future allows to write
 * asynchronous code in Java, where Future promises the result to be available
 * in the future.
 *
 * @author Javin
 */
public class FutureDemo {

    private static final ExecutorService threadpool = Executors.newFixedThreadPool(2);

    public static void main(String args[]) throws InterruptedException, ExecutionException {
        FactorialCalculator task = new FactorialCalculator(1000);

        System.out.println("Submitting Task ...");
        Future<Long> future = threadpool.submit(task);
        System.out.println("Task is submitted");
        while (!future.isDone()) {
            System.out.println("Task is not completed yet....");
            Thread.sleep(1); // sleep for 1 millisecond before checking again
        }
        System.out.println("Task is completed, let's check result");
        // note: 1000! overflows a long, so the printed value is not the true factorial
        long factorial = future.get();
        System.out.println("Factorial of 1000 is : " + factorial);
        threadpool.shutdown();
    }

    private static class FactorialCalculator implements Callable<Long> {

        private final int number;

        public FactorialCalculator(int number) {
            this.number = number;
        }

        @Override
        public Long call() {
            long output = 0;
            try {
                output = factorial(number);
            } catch (InterruptedException ex) {
                Logger.getLogger(FutureDemo.class.getName()).log(Level.SEVERE, null, ex);
            }
            return output;
        }

        private long factorial(int number) throws InterruptedException {
            if (number < 0) {
                throw new IllegalArgumentException("Number must be greater than zero");
            }
            long result = 1;
            while (number > 0) {
                Thread.sleep(1); // adding delay for example
                result = result * number;
                number--;
            }
            return result;
        }
    }
}
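
Since points 8 and 9 above mention that a FutureTask can also be created manually and wrapped around a Callable, here is a minimal sketch of that; the class and variable names are illustrative only.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class ManualFutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // Wrap a Callable in a FutureTask manually instead of letting submit() do it
        FutureTask<Integer> futureTask = new FutureTask<>(() -> {
            Thread.sleep(100); // simulate some work
            return 42;
        });

        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(futureTask); // FutureTask is also a Runnable, so execute() accepts it

        // get() blocks until the result is available
        System.out.println("Result: " + futureTask.get());
        pool.shutdown();
    }
}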

Usage in the Spring framework is given in the link below:
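
As a rough illustration, asynchronous method execution in the Spring framework is commonly done with @EnableAsync and @Async; the sketch below assumes spring-context is on the classpath, and the class and method names are illustrative only.

import java.util.concurrent.CompletableFuture;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {
    // Enables Spring's asynchronous method execution capability
}

@Service
class MailService {

    // Runs on a thread from Spring's task executor; the caller is not blocked
    @Async
    public CompletableFuture<String> sendBulkMail(String campaignId) {
        // ... send the emails here ...
        return CompletableFuture.completedFuture("sent campaign " + campaignId);
    }
}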

 

Python – java integration (Jython continue…)

Friends, having knowledge of multiple languages is good, but sometimes it becomes cumbersome to use libraries written in one language from another. Jython provides a way to run Python on the JVM and hence allows integration of Java and Python: we can use Java classes and functions in Python as well as Python libraries in Java.

Below I am going to give an example of this.

  • We are going to create an interface in Java which will be implemented in Python and then called again from Java.
  1. Create a package named org.jython.book.interfaces and define the interface as given below.
     

    package org.jython.book.interfaces;

    // Java interface for a building object
    public interface BuildingType {

        public String getBuildingName();

        public String getBuildingAddress();

        public String getBuildingId();

    }

  2. Create a python module which implements the above interface.

from org.jython.book.interfaces import BuildingType

class Building(BuildingType):
    def __init__(self, name, address, id):
        self.name = name
        self.address = address
        self.id = id

    def getBuildingName(self):
        return self.name

    def getBuildingAddress(self):
        return self.address

    def getBuildingId(self):
        return self.id

3. Create another package called org.jython.book.util.

Create a class named BuildingFactory.java in it.

 

package org.jython.book.util;
import org.jython.book.interfaces.BuildingType;
import org.python.core.PyObject;
import org.python.core.PyString;
import org.python.util.PythonInterpreter;

public class BuildingFactory {

    private PyObject buildingClass;

    /**
     * Create a new PythonInterpreter object, then use it to execute some python
     * code. In this case, we want to import the python module that we will
     * coerce.
     *
     * Once the module is imported, we obtain a reference to it and assign
     * the reference to a Java variable.
     */
    public BuildingFactory() {
        PythonInterpreter interpreter = new PythonInterpreter();
        interpreter.exec("import sys\n"
                + "sys.path.append('/root/NetBeansProjects/JythonR/src/org/jython/book/interfaces/')\n"
                + "from Building import Building");
        buildingClass = interpreter.get("Building");
    }

    /**
     * The create method is responsible for performing the actual coercion of
     * the referenced python module into Java bytecode.
     * @param name
     * @param location
     * @param id
     * @return BuildingType
     */
    public BuildingType create(String name, String location, String id) {
        PyObject buildingObject = buildingClass.__call__(new PyString(name),
                new PyString(location),
                new PyString(id));

        // buildingObject.__tojava__(Object.class);
        BuildingType type = (BuildingType) buildingObject.__tojava__(BuildingType.class);
        System.out.println(type.getClass());
        return type;
    }
}

4. Now simply write a main method

package org.jython.book;

import org.jython.book.util.BuildingFactory;
import org.jython.book.interfaces.BuildingType;

public class Main {

    private static void print(BuildingType building) {
        System.out.println("Building Info: " +
                building.getBuildingId() + " " +
                building.getBuildingName() + " " +
                building.getBuildingAddress());
    }

    /**
     * Create three building objects by calling the create() method of
     * the factory.
     */
    public static void main(String[] args) {
        BuildingFactory factory = new BuildingFactory();
        print(factory.create("BUILDING-A", "100 WEST MAIN", "1"));
        print(factory.create("BUILDING-B", "110 WEST MAIN", "2"));
        print(factory.create("BUILDING-C", "120 WEST MAIN", "3"));
    }

}

 

The BuildingFactory class creates a factory object which converts the Python object into a Java object.

 

Run the program. Remember, you will need to install Jython on your system and add the Jython JAR file to your class library in order to run it.

 

 

Front Controller design pattern

The front controller pattern makes sure that there is one and only one point of entry. All requests are investigated, routed to the designated controller and then processed according to the specification. The front controller is responsible for initializing the environment and routing requests to the designated controllers.

The front controller design pattern is used to provide a centralized request handling mechanism so that all requests are handled by a single handler. This handler can do the authentication, authorization, and logging or tracking of requests and then pass the requests to the corresponding handlers. Following are the entities of this type of design pattern.

  • Front Controller – Single handler for all kinds of requests coming to the application (either web based/ desktop based).
  • Dispatcher – Front Controller may use a dispatcher object which can dispatch the request to corresponding specific handler.
  • View – Views are the objects for which the requests are made.

Implementation

We are going to create a FrontController and Dispatcher to act as Front Controller and Dispatcher correspondingly. HomeView and StudentView represent various views for which requests can come to front controller.

FrontControllerPatternDemo, our demo class, will use FrontController to demonstrate Front Controller Design Pattern.

Front Controller Pattern UML Diagram

Step 1

Create Views.

HomeView.java

public class HomeView {
   public void show(){
      System.out.println("Displaying Home Page");
   }
}

StudentView.java

public class StudentView {
   public void show(){
      System.out.println("Displaying Student Page");
   }
}

Step 2

Create Dispatcher.

Dispatcher.java

public class Dispatcher {
   private StudentView studentView;
   private HomeView homeView;
   
   public Dispatcher(){
      studentView = new StudentView();
      homeView = new HomeView();
   }

   public void dispatch(String request){
      if(request.equalsIgnoreCase("STUDENT")){
         studentView.show();
      }
      else{
         homeView.show();
      }	
   }
}

Step 3

Create FrontController

FrontController.java

public class FrontController {
	
   private Dispatcher dispatcher;

   public FrontController(){
      dispatcher = new Dispatcher();
   }

   private boolean isAuthenticUser(){
      System.out.println("User is authenticated successfully.");
      return true;
   }

   private void trackRequest(String request){
      System.out.println("Page requested: " + request);
   }

   public void dispatchRequest(String request){
      //log each request
      trackRequest(request);
      
      //authenticate the user
      if(isAuthenticUser()){
         dispatcher.dispatch(request);
      }	
   }
}

Step 4

Use the FrontController to demonstrate Front Controller Design Pattern.

FrontControllerPatternDemo.java

public class FrontControllerPatternDemo {
   public static void main(String[] args) {
   
      FrontController frontController = new FrontController();
      frontController.dispatchRequest("HOME");
      frontController.dispatchRequest("STUDENT");
   }
}

Step 5

Verify the output.

Page requested: HOME
User is authenticated successfully.
Displaying Home Page
Page requested: STUDENT
User is authenticated successfully.
Displaying Student Page

Event – Observer pattern

Imagine a situation: you are developing a custom e-commerce framework and are about to put some finishing touches to your order model. The task is to program functionality, which creates an order at the end of the checkout and saves it to the database. You realize that a confirmation email has to be sent and you add the necessary code. Next morning a product manager asks you to send a copy of the e-mail to his address, and you extend your code accordingly. At a meeting in the afternoon the warehouse guy proposes to automate inventory management by sending a message to the warehouse software; and you implement it in your code. Exporting order as an XML file to feed into the shipping system – done. Notifying the purchasing department when the inventory is too low – done too.

After all this you take a look at your order model and its placeOrder function and see it has become an unmaintainable mess. All these diverse and complex tasks just do not fit into the order model. Yet they are essential for your business and must be implemented. Situations like this are not uncommon. The growing complexity of enterprise applications often results in code that is inflexible and difficult to maintain, and prohibitively expensive to scale.

Event-driven software architecture has evolved to address such problems by decoupling services and service providers. It introduces events – notable milestones in business processes that invoke services, which observe and react to them. Events alert their subscribers about a problem, or an opportunity, or a threshold in the current process flow. An event broadcast in the system usually consists of an event header and body. The event header contains an ID that is used to locate subscribers, while the body transports the information required to process the event. In some systems event headers can also include information on the event type and creator, or a timestamp – whatever data the specification mandates.

The service providers are independent entities and can be added or removed without affecting the objects whose events they listen to. Event creators have no knowledge of the subscribed service providers and do not depend on them. Similarly, service providers are not interested in the internal mechanics of event creators. This allows for extremely flexible, loosely coupled and distributed systems. These advantages, however, come at a price – tracing events and their subscribers can be difficult.

Below is a simple example of the Event-Observer pattern:

Application : Suppose we want to print the different representations of a given number, say binary, octal, and hex; we can apply the event-observer pattern for this as given below:

Observer.java

abstract class Observer {

protected Subject subject;

public abstract void update();
}

 

Subject.java

import java.util.ArrayList;
import java.util.List;

public class Subject {

private List<Observer> observers = new ArrayList<Observer>();
private int state;

public int getState() {
return state;
}

public void setState(int state) {
this.state = state;
notifyAllObservers();
}

public void attach(Observer observer) {
observers.add(observer);
}

public void notifyAllObservers() {
for (Observer observer : observers) {
observer.update();
}
}
}

BinaryObserver.java
public class BinaryObserver extends Observer{

public BinaryObserver(Subject subject){
this.subject = subject;
this.subject.attach(this);
}

@Override
public void update() {
System.out.println( "Binary String: " + Integer.toBinaryString( subject.getState() ) );
}
}

OctalObserver.java

public class OctalObserver extends Observer{

public OctalObserver(Subject subject){
this.subject = subject;
this.subject.attach(this);
}

@Override
public void update() {
System.out.println( "Octal String: " + Integer.toOctalString( subject.getState() ) );
}
}

 

HexaObserver.java

public class HexaObserver extends Observer {

public HexaObserver(Subject subject) {
this.subject = subject;
this.subject.attach(this);
}

@Override
public void update() {
System.out.println("Hex String: " + Integer.toHexString(subject.getState()).toUpperCase());
}
}

 

ObserverPatternDemo.java

public class ObserverPatternDemo {

public static void main(String[] args) {
Subject subject = new Subject();
new HexaObserver(subject);
new OctalObserver(subject);
new BinaryObserver(subject);
System.out.println("First state change: 15");
subject.setState(15);
System.out.println("Second state change: 10");
subject.setState(10);
}
}

As soon as the event is triggered, it is propagated to all attached observers. A beautiful design pattern to code!

 

 

 

 

 

Implicit lock vs explicit lock (ReentrantLock) in Java

At the class level, ReentrantLock is a concrete implementation of the Lock interface provided in the Java concurrency package from Java 1.5 onwards. As per the Javadoc, ReentrantLock is a mutual exclusion lock, similar to the implicit locking provided by the synchronized keyword in Java, with extended features like fairness, which can be used to grant the lock to the longest-waiting thread. The lock is acquired by the lock() method and held by the thread until a call to the unlock() method. The fairness parameter is provided while creating an instance of ReentrantLock in the constructor. ReentrantLock provides the same visibility and ordering guarantees as implicit locking, which means an unlock() happens-before another thread's lock().

Difference between ReentrantLock and synchronized keyword in Java

Though ReentrantLock provides the same visibility and ordering guarantees as the implicit lock acquired by the synchronized keyword in Java, it provides more functionality and differs in certain aspects. As stated earlier, the main difference between synchronized and ReentrantLock is the ability to try for the lock interruptibly and with a timeout; a thread doesn't need to block indefinitely, which was the case with synchronized. Let's see a few more differences between synchronized and Lock in Java.

1) Another significant difference between ReentrantLock and the synchronized keyword is fairness. The synchronized keyword doesn't support fairness: any thread can acquire the lock once it is released and no preference can be specified. On the other hand, you can make a ReentrantLock fair by specifying the fairness property while creating an instance of ReentrantLock. The fairness property grants the lock to the longest-waiting thread in case of contention.

2) The second difference between synchronized and ReentrantLock is the tryLock() method. ReentrantLock provides a convenient tryLock() method, which acquires the lock only if it is available and not held by any other thread. This reduces the blocking of threads waiting for a lock in a Java application.

3) One more worth noting difference between ReentrantLock and synchronized keyword in Java is, ability to interrupt Thread while waiting for Lock. In case of synchronized keyword, a thread can be blocked waiting for lock, for an indefinite period of time and there was no way to control that. ReentrantLock provides a method called lockInterruptibly(), which can be used to interrupt thread when it is waiting for lock. Similarly tryLock() with timeout can be used to timeout if lock is not available in certain time period.

4) ReentrantLock also provides convenient method to get List of all threads waiting for lock.

So, you can see, there are a lot of significant differences between the synchronized keyword and ReentrantLock in Java. In short, the Lock interface adds a lot of power and flexibility and allows some control over the lock acquisition process, which can be leveraged to write highly scalable systems in Java.
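
As a small sketch of the tryLock-with-timeout behaviour described above (not the article's own example; the class and method names are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {

    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock

    public boolean updateWithTimeout() throws InterruptedException {
        // Wait at most 2 seconds for the lock instead of blocking forever
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        // Could not acquire the lock in time; the caller can retry or back off
        return false;
    }
}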

Benefits of ReentrantLock in Java

Most of the benefits derive from the differences between synchronized and ReentrantLock covered in the last section. Here is a summary of the benefits offered by ReentrantLock over synchronized in Java:

1) Ability to lock interruptibly.

2) Ability to timeout while waiting for lock.

3) Power to create fair lock.

4) API to get list of waiting thread for lock.

5) Flexibility to try for lock without blocking.

Disadvantages of ReentrantLock in Java

The major drawback of using ReentrantLock in Java is wrapping the method body inside a try-finally block, which makes the code less readable and hides business logic. It's really cluttered, though IDEs like Eclipse and NetBeans can add those try-finally blocks for you. Another disadvantage is that the programmer is now responsible for acquiring and releasing the lock, which is a power but also opens the gate for new subtle bugs when the programmer forgets to release the lock in the finally block.

Lock and ReentrantLock Example in Java

Here is a complete code example of how to use the Lock interface and ReentrantLock in Java. This program locks a method called getCount(), which provides a unique count to each caller. Here we will see both the synchronized and ReentrantLock versions of the same program. You can see the code with synchronized is more readable, but it's not as flexible as the locking mechanism provided by the Lock interface.

import java.util.concurrent.locks.ReentrantLock;

/**
 * Java program to show how to use ReentrantLock in Java.
 * Reentrant lock is an alternative way of locking
 * apart from the implicit locking provided by the synchronized keyword in Java.
 *
 * @author Javin Paul
 */
public class ReentrantLockHowto {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // Locking using Lock and ReentrantLock
    public int getCount() {
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " gets Count: " + count);
            return count++;
        } finally {
            lock.unlock();
        }
    }

    // Implicit locking using synchronized keyword
    public synchronized int getCountTwo() {
        return count++;
    }

    public static void main(String args[]) {
        final ReentrantLockHowto counter = new ReentrantLockHowto();

        Thread t1 = new Thread() {
            @Override
            public void run() {
                while (counter.getCount() < 6) {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException ex) {
                        ex.printStackTrace();
                    }
                }
            }
        };

        Thread t2 = new Thread() {
            @Override
            public void run() {
                while (counter.getCount() < 6) {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException ex) {
                        ex.printStackTrace();
                    }
                }
            }
        };

        t1.start();
        t2.start();
    }
}

Output:

Thread-0 gets Count: 0
Thread-1 gets Count: 1
Thread-1 gets Count: 2
Thread-0 gets Count: 3
Thread-1 gets Count: 4
Thread-0 gets Count: 5
Thread-0 gets Count: 6
Thread-1 gets Count: 7

That's all on what ReentrantLock in Java is, how to use it with a simple example, and the difference between ReentrantLock and the synchronized keyword in Java. We have also seen the significant enhancements provided by the Lock interface over synchronized, e.g. trying for the lock, timing out while waiting for the lock, and the ability to interrupt a thread while waiting for the lock. Just be careful to release the lock in a finally block.
Read more: http://javarevisited.blogspot.com/2013/03/reentrantlock-example-in-java-synchronized-difference-vs-lock.html#ixzz3q3D0ImMf

Developing RESTful service in Spring

The service will handle GET requests for /greeting, optionally with a name parameter in the query string. The GET request should return a 200 OK response with JSON in the body that represents a greeting. It should look something like this:

{
    "id": 1,
    "content": "Hello, World!"
}

The id field is a unique identifier for the greeting, and content is the textual representation of the greeting.

To model the greeting representation, you create a resource representation class. Provide a plain old java object with fields, constructors, and accessors for the id and content data:

src/main/java/hello/Greeting.java

package hello;

public class Greeting {

    private final long id;
    private final String content;

    public Greeting(long id, String content) {
        this.id = id;
        this.content = content;
    }

    public long getId() {
        return id;
    }

    public String getContent() {
        return content;
    }
}

Create a resource controller

In Spring’s approach to building RESTful web services, HTTP requests are handled by a controller. These components are easily identified by the @RestController annotation, and the GreetingController below handles GET requests for /greeting by returning a new instance of the Greeting class:

src/main/java/hello/GreetingController.java

package hello;

import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final String template = "Hello, %s!";
    private final AtomicLong counter = new AtomicLong();

    @RequestMapping("/greeting")
    public Greeting greeting(@RequestParam(value="name", defaultValue="World") String name) {
        return new Greeting(counter.incrementAndGet(),
                            String.format(template, name));
    }
}

This controller is concise and simple, but there’s plenty going on under the hood. Let’s break it down step by step.

The @RequestMapping annotation ensures that HTTP requests to /greeting are mapped to the greeting() method.

The above example does not specify GET vs. PUT, POST, and so forth, because @RequestMapping maps all HTTP operations by default. Use @RequestMapping(method=GET) to narrow this mapping.
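
For instance, narrowing the mapping to GET only might look like the sketch below; the GreetingGetOnlyController class name and the /greeting-get path are illustrative and not part of the original guide.

package hello;

import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingGetOnlyController {

    private static final String template = "Hello, %s!";
    private final AtomicLong counter = new AtomicLong();

    // Only GET requests to /greeting-get are mapped here; other HTTP methods are rejected
    @RequestMapping(value = "/greeting-get", method = RequestMethod.GET)
    public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
        return new Greeting(counter.incrementAndGet(), String.format(template, name));
    }
}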

@RequestParam binds the value of the query string parameter name into the name parameter of the greeting() method. This query string parameter is not required; if it is absent in the request, the defaultValue of “World” is used.

The implementation of the method body creates and returns a new Greeting object with id and content attributes based on the next value from the counter, and formats the given name by using the greeting template.

A key difference between a traditional MVC controller and the RESTful web service controller above is the way that the HTTP response body is created. Rather than relying on a view technology to perform server-side rendering of the greeting data to HTML, this RESTful web service controller simply populates and returns a Greeting object. The object data will be written directly to the HTTP response as JSON.

This code uses Spring 4’s new @RestController annotation, which marks the class as a controller where every method returns a domain object instead of a view. It’s shorthand for @Controller and @ResponseBody rolled together.

The Greeting object must be converted to JSON. Thanks to Spring’s HTTP message converter support, you don’t need to do this conversion manually. Because Jackson 2 is on the classpath, Spring’s MappingJackson2HttpMessageConverter is automatically chosen to convert the Greeting instance to JSON.

Make the application executable

Although it is possible to package this service as a traditional WAR file for deployment to an external application server, the simpler approach demonstrated below creates a standalone application. You package everything in a single, executable JAR file, driven by a good old Java main() method. Along the way, you use Spring’s support for embedding the Tomcat servlet container as the HTTP runtime, instead of deploying to an external instance.

src/main/java/hello/Application.java

package hello;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@SpringBootApplication is a convenience annotation that adds all of the following:

  • @Configuration tags the class as a source of bean definitions for the application context.
  • @EnableAutoConfiguration tells Spring Boot to start adding beans based on classpath settings, other beans, and various property settings.
  • Normally you would add @EnableWebMvc for a Spring MVC app, but Spring Boot adds it automatically when it sees spring-webmvc on the classpath. This flags the application as a web application and activates key behaviors such as setting up a DispatcherServlet.
  • @ComponentScan tells Spring to look for other components, configurations, and services in the hello package, allowing it to find the GreetingController.

The main() method uses Spring Boot’s SpringApplication.run() method to launch an application. Did you notice that there wasn’t a single line of XML? No web.xml file either. This web application is 100% pure Java and you didn’t have to deal with configuring any plumbing or infrastructure.

Build an executable JAR

If you are using Gradle, you can run the application using ./gradlew bootRun.

You can build a single executable JAR file that contains all the necessary dependencies, classes, and resources. This makes it easy to ship, version, and deploy the service as an application throughout the development lifecycle, across different environments, and so forth.

./gradlew build

Then you can run the JAR file:

java -jar build/libs/gs-rest-service-0.1.0.jar

If you are using Maven, you can run the application using mvn spring-boot:run. Or you can build the JAR file with mvn clean package and run the JAR by typing:

java -jar target/gs-rest-service-0.1.0.jar
 

Reading / Writing in Java

Friends, reading and writing files in Java is very easy.

To write to a file :

File f = new File("/location/of/file.txt");
if (!f.exists()) {
    f.createNewFile();
}
FileWriter fw = new FileWriter(f, true); // true means append mode
BufferedWriter wr = new BufferedWriter(fw);
wr.write(token); // token is the String to be written
wr.close();
fw.close();

To read a file:

String data = "";

String final_data = "";

File f = new File("/location/of/file.txt");
if (f.exists()) {
    FileReader fr = new FileReader(f);
    BufferedReader reader = new BufferedReader(fr);
    while ((data = reader.readLine()) != null) {
        final_data += data;
    }

    reader.close();
    fr.close();
}
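
As a side note, the same append-and-read can also be sketched with java.nio.file (Java 7+); this is only an illustration and the path below is a placeholder.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class NioReadWriteSketch {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("/location/of/file.txt"); // placeholder path

        // Append a line, creating the file if it does not exist
        Files.write(path, "some token\n".getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // Read all lines back and concatenate them, mirroring the loop above
        List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
        String finalData = String.join("", lines);
        System.out.println(finalData);
    }
}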

 

To read from keyboard

Read the input 3 times, pressing enter after each line.

BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
int i=0;
String [] data = new String[3];
while(i<3)
{
data[i] = br.readLine();
i++;
}