Reading / Writing in Java

Friends, reading and writing files in Java is very easy.

To write to a file:

File f = new File("/location/of/file.txt");
if (!f.exists()) {
    f.createNewFile();
}
FileWriter fw = new FileWriter(f, true); // true means append mode
BufferedWriter wr = new BufferedWriter(fw);
wr.write(token); // token is the String to write
wr.close(); // also closes the underlying FileWriter

To read a file:

String data = "";
String finalData = "";

File f = new File("/location/of/file.txt");
if (f.exists()) {
    FileReader fr = new FileReader(f);
    BufferedReader reader = new BufferedReader(fr);
    while ((data = reader.readLine()) != null) {
        finalData += data;
    }
    reader.close(); // also closes the underlying FileReader
}

 

To read from the keyboard:

The example below reads input three times, one line per read (press Enter after each).

BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
int i = 0;
String[] data = new String[3];
while (i < 3) {
    data[i] = br.readLine();
    i++;
}
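As an aside, from Java 7 onward the same read/write work can be done more safely with java.nio.file.Files, which opens and closes the streams for you. A minimal sketch (the file name demo.txt is just an illustrative placeholder):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class FileIoDemo {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("demo.txt"); // placeholder path

        // Write, then append, a line each; Files handles opening and closing
        Files.write(path, "first line\n".getBytes(StandardCharsets.UTF_8));
        Files.write(path, "second line\n".getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.APPEND);

        // Read all lines back in one call
        List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
        System.out.println(lines); // [first line, second line]

        Files.delete(path); // clean up the demo file
    }
}
```

This avoids the manual exists()/close() bookkeeping shown above.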

 


BoneCP connection pooling – Some useful Tips

Friends, while developing an application that requires high availability, it is often necessary to use some kind of connection pooling. BoneCP is a database connection pool with good features, which I explained in my earlier post. There is some minor fine-tuning required to work with BoneCP, listed below.

1. Always keep setPartitionCount less than or equal to 3.

In order to reduce lock contention and thus improve performance, each incoming connection request picks off a connection from a pool that has thread affinity, i.e. pool[threadId % partition_count]. The higher this number, the better your performance will be for the case when you have plenty of short-lived threads. Beyond a certain threshold, maintenance of these pools will start to have a negative effect on performance (and only for the case when connections on a partition start running out).
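As a purely illustrative sketch (the class and method names below are mine, not BoneCP API), the thread-affinity rule above boils down to a modulo pick:

```java
public class PartitionDemo {
    // Mirrors the pool[threadId % partition_count] rule described above
    static int pickPartition(long threadId, int partitionCount) {
        return (int) (threadId % partitionCount);
    }

    public static void main(String[] args) {
        int partitions = 3; // the recommended upper bound from tip 1
        for (long id = 0; id < 6; id++) {
            System.out.println("thread " + id + " -> partition "
                    + pickPartition(id, partitions));
        }
    }
}
```

With 3 partitions, threads 0..5 map to partitions 0, 1, 2, 0, 1, 2, which is why many short-lived threads spread nicely while a very high partition count just adds maintenance cost.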

 

  • maxConnectionsPerPartition – The number of connections to create per partition. Setting this to 5 with 3 partitions means you will have 15 unique connections to the database. Note that BoneCP will not create all these connections in one go but rather start off with minConnectionsPerPartition and gradually increase connections as required.
  • minConnectionsPerPartition – The number of connections to start off with per partition.
  • acquireIncrement – When the available connections are about to run out, BoneCP will dynamically create new ones in batches. This property controls how many new connections to create in one go (up to a maximum of maxConnectionsPerPartition). Note: this is a per-partition setting. Default: 10

 

2. To keep the connection alive, use:

config.setIdleConnectionTestPeriodInMinutes(10);
config.setConnectionTestStatement("/* ping */ SELECT 1");

In this way the application will use the above query to test the connection to the server whenever required.

3. Always call commit() after performing a transaction to avoid unwanted errors.
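Putting the tips together, a minimal configuration sketch might look like the following (the JDBC URL and credentials are placeholders, and the BoneCP jar must be on the classpath; this is a sketch, not a complete program):

```java
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

BoneCPConfig config = new BoneCPConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
config.setUsername("user");                            // placeholder
config.setPassword("password");                        // placeholder

// Tip 1: keep the partition count at 3 or below
config.setPartitionCount(3);
config.setMinConnectionsPerPartition(2);
config.setMaxConnectionsPerPartition(5); // 3 x 5 = 15 connections at most

// Tip 2: keep-alive test
config.setIdleConnectionTestPeriodInMinutes(10);
config.setConnectionTestStatement("/* ping */ SELECT 1");

BoneCP pool = new BoneCP(config); // may throw SQLException
```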

 

Pig – Functions

Friends, functions in Pig come in four types:
1. Eval function
– A function that takes one or more expressions and returns another expression.
– Some functions are aggregate functions, like MAX.
– Some functions are algebraic, which means that the result of the function may be calculated incrementally.
– In MapReduce terms, algebraic functions make use of the combiner and are much more efficient to calculate.
– Supports UDFs: import org.apache.pig.EvalFunc, extend EvalFunc, and override the exec method.

2. Filter function
– Returns a logical Boolean result.
– FILTER removes unwanted rows.
– Example: IsEmpty.
– Supports UDFs: import org.apache.pig.FilterFunc, extend FilterFunc, and override the exec method.

3. Load function
– Loads data into a relation from external storage.
– Supports UDFs: import org.apache.pig.LoadFunc and extend LoadFunc, but override other methods such as setLocation, getInputFormat, prepareToRead, and getNext.

4. Store function
– Specifies how to save the contents of a relation to external storage.
– Example: PigStorage, which loads data from delimited text files, can store data in the same format.

A detailed list is given below:

Pig Built-in Functions

Eval functions:
– AVG: Calculates the average (mean) value of the entries in a bag
– CONCAT: Concatenates byte arrays or character arrays together
– COUNT: Calculates the number of non-null entries in a bag
– COUNT_STAR: Calculates all entries, including nulls
– DIFF: Calculates the set difference of two bags. If the two arguments are not bags, returns a bag containing both if they are equal; otherwise, returns an empty bag
– MAX: Calculates the maximum value
– MIN: Calculates the minimum value
– SIZE: For character arrays, the number of characters; for byte arrays, the number of bytes; for containers (tuple, bag, map), the number of entries
– SUM: Calculates the sum of the values of the entries in a bag
– TOBAG: Converts one or more expressions to individual tuples, which are then put in a bag
– TOKENIZE: Tokenizes a character array into a bag of its constituent words
– TOMAP: Converts an even number of expressions to a map of key-value pairs
– TOP: Calculates the top n tuples in a bag
– TOTUPLE: Converts one or more expressions to a tuple

Filter functions:
– IsEmpty: Tests whether a bag or map is empty

Load/Store functions:
– PigStorage: Loads or stores relations using a field-delimited text format; the delimiter defaults to a tab character
– BinStorage: Loads or stores relations from or to binary files in a Pig-specific format that uses Hadoop Writable objects
– TextLoader: Loads relations from a plain-text format
– JsonLoader, JsonStorage: Loads or stores relations from or to a JSON format
– HBaseStorage: Loads or stores relations from or to HBase

Pig Latin Relational Operators

Loading & Storing:
– LOAD: Loads data from the filesystem or other storage into a relation
– STORE: Saves a relation to the filesystem or other storage
– DUMP: Prints a relation to the console

Filtering:
– FILTER: Removes unwanted rows from a relation
– DISTINCT: Removes duplicate rows from a relation
– FOREACH..GENERATE: Adds or removes fields from a relation
– MAPREDUCE: Runs a MapReduce job using a relation as input
– STREAM: Transforms a relation using an external program
– SAMPLE: Selects a random sample of a relation

Grouping & Joining:
– JOIN: Joins two or more relations
– COGROUP: Groups the data in two or more relations
– GROUP: Groups the data in a single relation
– CROSS: Creates the cross-product of two or more relations

Sorting:
– ORDER: Sorts a relation by one or more fields
– LIMIT: Limits the size of a relation to a maximum number of tuples

Combining & Splitting:
– UNION: Combines two or more relations into one
– SPLIT: Splits a relation into two or more relations

Pig – A programmer friendly MapReduce tool

Pig raises the level of abstraction for processing large datasets. MapReduce allows you to specify a map function followed by a reduce function, but working out how to fit your data processing into this pattern, which often requires multiple MapReduce stages, can be a challenge.
Pig supports richer data structures, typically multivalued and nested, and the set of transformations you can apply to the data is much more powerful.
One of the powerful features of Pig is joins, which are not for the faint of heart in MapReduce.
Pig is made up of 2 pieces:
1. The language used to express data flows, called Pig Latin
2. The execution environment to run Pig Latin programs.
For multiquery execution it is always recommended to use STORE instead of DUMP: DUMP is a diagnostic tool, so it will always trigger execution, even in batch mode, which the STORE command does not.
Consider the following example:
A = LOAD 'input/pig/multiquery/A';
B = FILTER A BY $1 == 'banana';
C = FILTER A BY $1 != 'banana';
STORE B INTO 'output/b';
STORE C INTO 'output/c';
Pig can run this script as a single MapReduce job by reading A once and writing two output files from the job, one for each of B and C. This feature is called multiquery execution.

Java 8 New Features

Friends, with the introduction of Java 8 many new features have been added that bring a revolution to the Java language. The list is given below.

Lambda Expressions:
You can find a good tutorial at the link below:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/Lambda-QuickStart/index.html#section5
– Default methods enable new functionality to be added to the interfaces of libraries and ensure binary compatibility with code written for older versions of those interfaces.
– Repeating Annotations provide the ability to apply the same annotation type more than once to the same declaration or type use.
– Type Annotations provide the ability to apply an annotation anywhere a type is used, not just on a declaration. Used with a pluggable type system, this feature enables improved type checking of your code.
– Improved type inference.
– Method parameter reflection.
– Collections:
Classes in the new java.util.stream package provide a Stream API to support functional-style operations on streams of elements. The Stream API is integrated into the Collections API, which enables bulk operations on collections, such as sequential or parallel map-reduce transformations.
– Performance Improvement for HashMaps with Key Collisions
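The lambda and Stream features above can be shown in a small, self-contained sketch (the list contents are arbitrary examples of mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("lambda", "stream", "default", "map");

        // Bulk operation on a collection: filter then transform in one pipeline
        List<String> result = words.stream()
                .filter(w -> w.length() > 4)   // lambda expression as a predicate
                .map(String::toUpperCase)      // method reference as a mapper
                .collect(Collectors.toList());

        System.out.println(result); // [LAMBDA, STREAM, DEFAULT]
    }
}
```

Swapping stream() for parallelStream() would run the same pipeline across multiple cores, which is the map-reduce style the Stream API was designed for.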

  • JavaFX
    • The new Modena theme has been implemented in this release. For more information, see the blog at fxexperience.com.
    • The new SwingNode class enables developers to embed Swing content into JavaFX applications. See the SwingNode javadoc and Embedding Swing Content in JavaFX Applications.
    • The new UI Controls include the DatePicker and the TreeTableView controls.
    • The javafx.print package provides the public classes for the JavaFX Printing API. See the javadoc for more information.
    • The 3D Graphics features now include 3D shapes, camera, lights, subscene, material, picking, and antialiasing. The new Shape3D (Box, Cylinder, MeshView, and Sphere subclasses), SubScene, Material, PickResult, LightBase (AmbientLight and PointLight subclasses) , and SceneAntialiasing API classes have been added to the JavaFX 3D Graphics library. The Camera API class has also been updated in this release. See the corresponding class javadoc for javafx.scene.shape.Shape3D, javafx.scene.SubScene, javafx.scene.paint.Material, javafx.scene.input.PickResult, javafx.scene.SceneAntialiasing, and the Getting Started with JavaFX 3D Graphics document.
    • The WebView class provides new features and improvements. Review Supported Features of HTML5 for more information about additional HTML5 features including Web Sockets, Web Workers, and Web Fonts.
    • Enhanced text support including bi-directional text and complex text scripts such as Thai and Hindi in controls, and multi-line, multi-style text in text nodes.
    • Support for Hi-DPI displays has been added in this release.
    • The CSS Styleable* classes became public API. See the javafx.css javadoc for more information.
    • The new ScheduledService class makes it possible to restart the service automatically.
    • JavaFX is now available for ARM platforms. JDK for ARM includes the base, graphics and controls components of JavaFX.
  • Tools
    • The jjs command is provided to invoke the Nashorn engine.
    • The java command launches JavaFX applications.
    • The java man page has been reworked.
    • The jdeps command-line tool is provided for analyzing class files.
    • Java Management Extensions (JMX) provide remote access to diagnostic commands.
    • The jarsigner tool has an option for requesting a signed time stamp from a Time Stamping Authority (TSA).
    • Javac tool
      • The -parameters option of the javac command can be used to store formal parameter names and enable the Reflection API to retrieve formal parameter names.
      • The type rules for equality operators in the Java Language Specification (JLS) Section 15.21 are now correctly enforced by the javac command.
      • The javac tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by the new -Xdoclint option. For more details, see the output from running “javac -X“. This feature is also available in the javadoc tool, and is enabled there by default.
      • The javac tool now provides the ability to generate native headers, as needed. This removes the need to run the javah tool as a separate step in the build pipeline. The feature is enabled in javac by using the new -h option, which is used to specify a directory in which the header files should be written. Header files will be generated for any class which has either native methods, or constant fields annotated with a new annotation of type java.lang.annotation.Native.
    • Javadoc tool
      • The javadoc tool supports the new DocTree API that enables you to traverse Javadoc comments as abstract syntax trees.
      • The javadoc tool supports the new Javadoc Access API that enables you to invoke the Javadoc tool directly from a Java application, without executing a new process. See the javadoc what’s new page for more information.
      • The javadoc tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by default, and can also be controlled by the new -Xdoclint option. For more details, see the output from running “javadoc -X“. This feature is also available in the javac tool, although it is not enabled by default there.
  • Internationalization
    • Unicode Enhancements, including support for Unicode 6.2.0
    • Adoption of Unicode CLDR Data and the java.locale.providers System Property
    • New Calendar and Locale APIs
    • Ability to Install a Custom Resource Bundle as an Extension
  • Deployment
    • For sandbox applets and Java Web Start applications, URLPermission is now used to allow connections back to the server from which they were started. SocketPermission is no longer granted.
    • The Permissions attribute is required in the JAR file manifest of the main JAR file at all security levels.
  • Date-Time Package – a new set of packages that provide a comprehensive date-time model.
  • Scripting
  • Pack200
    • Pack200 Support for Constant Pool Entries and New Bytecodes Introduced by JSR 292
    • JDK8 support for class files changes specified by JSR-292, JSR-308 and JSR-335
  • IO and NIO
    • New SelectorProvider implementation for Solaris based on the Solaris event port mechanism. To use, run with the system property java.nio.channels.spi.Selector set to the value sun.nio.ch.EventPortSelectorProvider.
    • Decrease in the size of the <JDK_HOME>/jre/lib/charsets.jar file
    • Performance improvement for the java.lang.String(byte[], *) constructor and the java.lang.String.getBytes() method.
  • java.lang and java.util Packages
    • Parallel Array Sorting
    • Standard Encoding and Decoding Base64
    • Unsigned Arithmetic Support
  • JDBC
    • The JDBC-ODBC Bridge has been removed.
    • JDBC 4.2 introduces new features.
  • Java DB
    • JDK 8 includes Java DB 10.10.
  • Networking
    • The class java.net.URLPermission has been added.
    • In the class java.net.HttpURLConnection, if a security manager is installed, calls that request to open a connection require permission.
  • Concurrency
    • Classes and interfaces have been added to the java.util.concurrent package.
    • Methods have been added to the java.util.concurrent.ConcurrentHashMap class to support aggregate operations based on the newly added streams facility and lambda expressions.
    • Classes have been added to the java.util.concurrent.atomic package to support scalable updatable variables.
    • Methods have been added to the java.util.concurrent.ForkJoinPool class to support a common pool.
    • The java.util.concurrent.locks.StampedLock class has been added to provide a capability-based lock with three modes for controlling read/write access.
  • Java XML – JAXP
  • HotSpot
    • Hardware intrinsics were added to use Advanced Encryption Standard (AES). The UseAES and UseAESIntrinsics flags are available to enable the hardware-based AES intrinsics for Intel hardware. The hardware must be 2010 or newer Westmere hardware. For example, to enable hardware AES, use the following flags:
      -XX:+UseAES -XX:+UseAESIntrinsics
      

      To disable hardware AES use the following flags:

      -XX:-UseAES -XX:-UseAESIntrinsics
      
    • Removal of PermGen.
    • Default Methods in the Java Programming Language are supported by the byte code instructions for method invocation.
  • Java Mission Control 5.3 Release Notes
    • JDK 8 includes Java Mission Control 5.3.

Database Connection Pooling

Friends, in software engineering a connection pool is a cache of database connections maintained so that the connections can be reused when future requests to the database are required.

There are various types of connection pooling, which are listed below.

1. DBCP2 Connection pooling

https://commons.apache.org/proper/commons-dbcp/

2. BoneCP

http://jolbox.com/

3. HikariCP

http://brettwooldridge.github.io/HikariCP/

4. C3P0

http://www.mchange.com/projects/c3p0/

By default Hibernate uses C3P0 connection pooling for creating connections. From a performance point of view HikariCP is best, as it can handle a large number of concurrent connections, while BoneCP is an alternative to C3P0 and provides the following good features:

  • Highly scalable, fast connection pool
  • Callback (hook interceptor) mechanisms on a change of connection state.
  • Partitioning capability to increase performance
  • Allows direct access to a connection/statements
  • Automatic resizing of pool
  • Statement caching support
  • Support for obtaining a connection asynchronously (by returning a Future<Connection>)
  • Release helper threads to release a connection/statement in an asynchronous fashion for higher performance.
  • Easy mechanism to execute a custom statement on each newly obtained connection (initSQL).
  • Support to switch to a new database at runtime without shutting down an application
  • Ability to replay any failed transaction automatically (for the case where database/network goes down etc)
  • JMX support
  • Lazy initialization capable
  • Support for XML/property configuration
  • Idle connection timeouts / max connection age support
  • Automatic validation of connections (keep-alives etc)
  • Allow obtaining of new connections via a datasource rather than via a Driver
  • Datasource/Hibernate support capable
  • Debugging hooks to highlight the exact place where a connection was obtained but not closed
  • Debugging support to show stack locations of connections that were closed twice.
  • Custom pool name support.
  • Clean organised code. 100% unit test branch code coverage (over 180 JUnit tests).
  • Free, open source and written in 100% pure Java with complete Javadocs.

Software Design Patterns

Friends, while developing software one should understand the requirement definition clearly. Designing software is an art. There is often a motivation to develop software which is loosely coupled and easily expandable, as well as following a set of standards.

There are 4 kinds of programming.

1. Machine Code Programming

2. Procedural language programming

3. Object-Oriented Programming

4. Imperative Programming

You can find the programming paradigm in detail in below wiki link.

https://en.wikipedia.org/wiki/Programming_paradigm#Further_paradigms

Object-oriented programming is widely used nowadays, with support from many languages.

There are mainly 4 categories of design patterns which you can use to build software in a more productive way.

1. Creational Pattern

– The Singleton Pattern.

– The Factory Pattern

2. Structural Pattern

– The Adapter Pattern

– The Proxy & Decorator Pattern

– The Composite Pattern.

3. Behavioral Pattern

– The Observer Pattern.

– The Strategy & Template Pattern.

4. Concurrency Pattern.

– Single Thread Execution Pattern.

I will brief each design pattern and its use cases in my next blog.
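As a small preview, here is a minimal sketch of the Singleton pattern (the class name AppConfig is my own illustrative choice), using the thread-safe initialization-on-demand holder idiom:

```java
public class AppConfig {
    private AppConfig() { } // private constructor: no outside instantiation

    // The holder class is not loaded until getInstance() is first called,
    // and class loading is thread-safe, so this is both lazy and safe.
    private static class Holder {
        static final AppConfig INSTANCE = new AppConfig();
    }

    public static AppConfig getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls return the same single instance
        System.out.println(AppConfig.getInstance() == AppConfig.getInstance()); // true
    }
}
```

The holder idiom avoids the synchronization cost of a locked getInstance() while still deferring construction until first use.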