
For the Sake of Love | Review

The new year has started, and February, the month of love, flows in soon. My new resolution is to read more books and engage myself in learning new emotions. Love stories are not really my type, so I thought I would try a romantic novel for my resolution. I chose to start with a new 2018 book, “For the Sake of Love” by Anamika Mishra. Since she is from my hometown, Kanpur, I preferred reading her thoughts.

The book talks about love and being loyal to it. The author has beautifully expressed the meaning of being in love. Nowadays people do everything in exchange for benefits, but love is an emotion to feel and flow with, with no benefit strategies. This book narrates the heart-touching love story of David and Jasmine, who lived in Shimla. The story flows between present and past, keeping a perfect connection with the reader. There is no ambiguity in the writing, which, along with its simplicity, makes it a good read.

About the Author:

Her debut novel, Too Hard to Handle, was released in September 2013. This is her third novel, and I must say her writing has improved to a large extent. She understands her readers well and writes plots that keep you attached to the book, waiting for the climax. She is also a blogger and occasionally delivers motivational talks. Writing is her first passion.

Review of this book:

For me, love is a commitment to live for and by someone who may be yours, yet you still trust and stay loyal with nothing in exchange. Falling in love is easy, but being in love with the same person your whole life is something not everyone is capable of.

The book starts with amazing yet simple love letters, which everyone can relate to and smile at. The book cover shows a couple sitting on a bench, which plays a major role in the story and is connected to their symbol of love. I thoroughly enjoyed the book.

Book Title: For the Sake of Love
Author: Anamika Mishra
Format: Paperback
Publishing Date: 17 Nov 2017
Printed Price: INR 100
No. of pages: 184
Genre: Romance

Book Cover:
The book cover is simple yet colorful. The image of a happy couple with some sweet decor around gives a soothing feeling. It is an amazing roller-coaster story which is a mixture of friendship, love, trust, sacrifice, and life.
Plot: The story revolves around many characters who are linked to each other in important ways. It is a fictional work in the love and romance genre. It starts with love letters written by David for Jasmine, his childhood love. The story keeps flowing through the letters across different years, and shows how Twisha, Booby, and Alex help them complete their love story. The story is narrated by Twisha, and she is the main character. The end of one story is the beginning of another, and this leads to the connection between Twisha and Alex. The plot is very interesting and keeps readers engaged. It is neither too slow nor too fast, so it is well paced.

Read the book to know more!! Stay tuned.
Final Verdict:

Characters: 4.5/5
Plot: 5/5
Narration: 5/5
Vocabulary: 4.5/5

 


Learn Sqoop..!! (tutorial Day 8)

Being a part of the Hadoop ecosystem, Sqoop is an important interaction tool. When the Big Data storage and analysis tools of the Hadoop ecosystem, such as MapReduce, Hive, HBase, Cassandra, Pig, etc., came into the picture, they required a tool to interact with relational database servers for importing and exporting the Big Data residing in them. Here, Sqoop provides the needed interaction between relational database servers and Hadoop’s HDFS.

What is Sqoop?

Sqoop is an open-source tool designed to transfer data between Hadoop (HDFS, Hive or HBase) and relational database servers. It is used to import data from structured data stores or relational databases such as MySQL and Oracle into Hadoop-related ecosystems like HDFS, Hive or HBase, and to export data from the Hadoop file system back to relational databases and enterprise data warehouses. Out of the box, Sqoop works with structured, relational databases such as Teradata, Netezza, Oracle, MySQL, Postgres, etc. If your database doesn’t fall into this category, you can still use Sqoop through its extension framework of connectors. You can find connectors online and modify their code, or write your own using the framework. Generally, JDBC connectivity works with most databases, so that resolves the issue.

Sqoop: “SQL to Hadoop and Hadoop to SQL”

Why is Sqoop used?

Sqoop doesn’t have any server, so it is essentially a client library. It doesn’t matter whether you run it from a data node or from anywhere else. It will find the Hadoop installation locally, or you can point it at a Hadoop installation, and it will then find the NameNode and run from there.

Sqoop uses the MapReduce framework to import and export the data, which provides parallelism as well as fault tolerance. Sqoop makes developers’ lives easy by providing a command-line interface. Developers just need to provide basic information like the source, destination and database authentication details in the sqoop command; Sqoop takes care of the remaining part.

Sqoop provides many salient features, such as the following (a couple of these are sketched just after the list):

  1. Full Load
  2. Incremental Load
  3. Parallel import/export
  4. Import results of SQL query
  5. Connectors for all major RDBMS databases
  6. Kerberos Security Integration
  7. Load data directly into Hive/HBase
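
For example, incremental load and direct Hive load are just extra options on the regular import command. A minimal sketch, assuming a numeric id column and that only rows added since the last run should be pulled (the connection details, column name and last value are placeholders):

sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp -P --table employee --incremental append --check-column id --last-value 1000

sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp -P --table employee --hive-import

The first command appends only rows whose id is greater than 1000; the second imports the table and loads the data straight into a Hive table.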

What are Connectors?

Sqoop has connectors: pluggable components that use the extension framework to let Sqoop import or export data between Hadoop and various data stores. The most basic connector that ships with Sqoop is the Generic JDBC Connector, and as the name suggests, it uses only the JDBC interface for accessing metadata and transferring data. Available connectors include Oracle, DB2, MySQL, PostgreSQL, Teradata and the generic JDBC connector.
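
If your database has no specialized connector, the fallback is the Generic JDBC Connector, selected by passing the driver class explicitly. A minimal sketch (the JDBC URL and driver class are placeholders for whatever your database vendor ships):

sqoop import --connect jdbc:somedb://dbhost/scoop_db --driver com.somedb.jdbc.Driver --username scp -P --table employee -m 1

The --driver option tells Sqoop to skip connector auto-selection and talk to the database purely through the given JDBC driver.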

Sqoop Architecture

 

How is Sqoop used? What all can we import/export?

Sqoop can be used to import/export:

  1. An entire table
  2. Part of a table, or selected rows using a WHERE clause
  3. All tables of a database

Or we can use a few of Sqoop’s other tools, like eval (evaluate a SQL query), --options-file (read the command arguments from a file), list-databases, list-tables, import-all-tables and many more.
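
For instance, the catalog-style tools are handy for exploring a source database before importing anything. A minimal sketch, reusing the placeholder connection details from the examples below:

$ sqoop list-databases --connect jdbc:mysql://localhost --username scp -P
$ sqoop list-tables --connect jdbc:mysql://localhost/scoop_db --username scp -P
$ sqoop import-all-tables --connect jdbc:mysql://localhost/scoop_db --username scp -P --warehouse-dir /user/scp/scoop_db

list-databases and list-tables just print what is available, while import-all-tables pulls every table of the database under the given HDFS warehouse directory.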

Sqoop reads the table row by row into HDFS. The output of this import process is a set of files containing a copy of the imported table. Since the import runs in parallel, the output is spread across multiple files. These files may be delimited text files, or binary Avro or sequence files. Sqoop first fetches metadata from the database table and calculates the minimum and maximum values of the table’s primary key to identify the data range (the amount of data). These values help Sqoop divide the load between the mappers. Generally it uses 4 mappers and no reducers.
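
Both the split column and the output file format can be controlled explicitly. A minimal sketch, reusing the placeholder database and table from the examples below (the employee_id column is an assumption):

sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp -P --table employee --split-by employee_id --num-mappers 4 --as-avrodatafile

Here --split-by names the column used to divide rows among the mappers (by default the primary key), and --as-avrodatafile writes binary Avro files instead of the default delimited text.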

Sqoop is built on MapReduce logic and uses the JDBC API to generate a Java class that handles this metadata, then compiles it and packages it into a JAR file. So once the import is complete you will see three generated files. For example: Employee.java, employee.class and employee.jar.
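
The same class generation can also be run on its own with Sqoop’s codegen tool, which is useful when you want the generated Java class and JAR without importing any data. A minimal sketch with the same placeholder connection details:

sqoop codegen --connect jdbc:mysql://localhost/scoop_db --username scp -P --table employee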

Let’s learn how to use Sqoop to import tables. Let’s assume you have a MySQL database (RDBMS) and you are trying to import an employee table from it into HDFS.

Command Syntax:

sqoop import --connect jdbc:mysql://localhost/databasename --username $USER_NAME --password $PASSWORD --table tablename -m 1

Example:

$ sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp --password scp123 --table employee -m 1

Here we specify the:

  • database host (localhost)
  • database name (scoop_db)
  • connection protocol (jdbc:mysql:)
  • username (scp)
  • password (there are many ways to provide the password: on the command line, stored in a file, etc.; see the sketch after this list)
  • Always use ‘--’ for tool-specific options like --connect, --username, --password and --table
  • Use a single ‘-’ for short forms like -m and for generic Hadoop arguments
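
On the password point, there are safer alternatives to typing it in clear text on the command line. A minimal sketch of two common options (the password file path is a placeholder):

$ sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp -P --table employee -m 1
$ sqoop import --connect jdbc:mysql://localhost/scoop_db --username scp --password-file /user/scp/.mysql.password --table employee -m 1

-P prompts for the password interactively, while --password-file reads it from a file, typically kept in HDFS with restricted permissions.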

To verify the imported data in HDFS, use the following command:

$ $HADOOP_HOME/bin/hadoop fs -cat /employee/part-m-*

It will show you the fields and the data, comma separated.

Now let’s see various syntaxes and examples:

  • Import an entire table:

sqoop import --connect jdbc:mysql://localhost/abc --table EMPLOYEE

  • Import a subset of the columns from a table:

sqoop import --connect jdbc:mysql://localhost/abc --table EMPLOYEES --columns "employee_id,first_name,age,designation"

  • Import only a few records by specifying them with a WHERE clause:

sqoop import --connect jdbc:mysql://localhost/abc --table EMPLOYEES --where "designation='ADVISOR'"

  • If the table has a primary key defined, we can add parallelism to the command by explicitly setting the number of mappers using --num-mappers. Sqoop evenly splits the primary key range of the source table, as mentioned above.

sqoop import --connect jdbc:mysql://localhost/abc --table EMPLOYEES --num-mappers 6

  • If there is no primary key defined in the table, the data import must be sequential. Specify a single mapper using --num-mappers 1 or the '-m 1' option for the import. Otherwise Sqoop gives an error.

sqoop import --connect jdbc:mysql://localhost/db --username $USER_NAME --password $PASSWORD --table tablename -m 1

 

  • To try a sample query without importing data, use the eval option to print the results to the command prompt:

sqoop eval --connect jdbc:mysql://localhost/abc --query "SELECT * FROM employees LIMIT 10"
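
  • The examples above focus on imports; the reverse direction uses Sqoop’s export tool. A minimal sketch, assuming the data to push back already sits in an HDFS directory (the directory path is a placeholder):

sqoop export --connect jdbc:mysql://localhost/abc --table EMPLOYEES --export-dir /user/scp/employees

Here --export-dir points at the HDFS directory whose files will be written back into the EMPLOYEES table.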

 

Follow for more… Read the next article on What is Spark?

Want to learn how HDFS works… read here.

Want to learn Hadoop installation… click here!!
