Category Archives: Uncategorized

It's AmPm Cafe time | DLF Galleria

Hello friends! So here I am with my review of the restaurant creating all the buzz these days, AmPM Cafe & Bar. It has two outlets in the north, one in DLF Galleria, Gurgaon, and the other in Rajouri Garden, Delhi, and it's part of a famous restaurant chain worldwide. Since the weekend is the time to laze around, and I wanted to spend some time with old buddies, I thought of visiting this place!!

It's located on the 3rd floor, at the back of DLF Galleria, Gurgaon. It has a capacious space with both indoor and outdoor seating. They have a separate area for live music, which not only adds liveliness to the evening but also makes it more relaxing and peaceful. They serve a lot of delectable, scrumptious food along with shakes and cocktails, so this place is a great option for hanging out with friends and family alike.

We started with the lickables, which are newly launched, like the Benaras Belle (5/5), a thick duo of gulkand and kulfi flavoured with paan syrup. It was presented with a paan leaf and tasted amazing, so it's a must try. Next we had Ms. Alphonso (4/5), the most awaited drink of the visit since it's mango season; it had mangoes blended with ice cream and was presented in a luscious way! The next one, Orchard Street (5/5), was expected to be ordinary but turned out to be toothsome with its tutti-frutti flavour.

For food, we tried many dishes like the AmPm chaat, palak patta chaat, dal makhana platter, momos, paneer tikka platter, keema pao, veggie pizza and tangy tomato pasta. Everything was lip-smacking and worth trying, delicious and authentic in flavour.

Last but not least, we ended our day with AmPm special shakes like Kitty Katty, Belgian Affair and Magnum Upside Down. The last one was the winner of the night's food visit!!

Find this at:

AMPM Café & Bar Menu, Reviews, Photos, Location and Info - Zomato

 

What the ratings stand for: 5 = Excellent, 4 = Very Good, 3 = Good, 2 = Fair, 1 = Disaster.

© Follow me on Instagram – creativeme1807
© Follow me on Twitter – @EktaSethi1807
© Follow me on Zomato – www.zomato.com/foodofy
© Follow for more content: creativeme1807.wordpress.com
© Follow me on Facebook – https://www.facebook.com/Creativeme1807.wordpress/
Don’t forget to share this post with your social media friends by clicking one of the sharing buttons below. A quick share on Facebook would be nice for all the effort we put in.

How to get YYYYMM format from calendar date without using CAST as Char

So today I am here to discuss a scenario with the CAST function that applies to both OBIEE 11g and 12c. We always find ourselves reaching for the CAST function to convert to Char/Int/Date and so on, but an issue appears when we have to get a different format out of the date functions.

For example, I had a scenario where the user wanted to see a report in pivot form, with a prompt date range such as 2017 / 01 to 2017 / 04.

I first tried getting the above values in the prompt by converting the calendar date to Char using the CAST function. But when I passed this prompt value to my report, it did not work as expected. If I selected the date range 2017 / 01 to 2017 / 04, the report showed data only for 2017 / 01, 2017 / 02 and 2017 / 03; it used to omit 2017 / 04.

To resolve this issue, we used a simple piece of logic in the report prompt, as shown below. This gives the output in a slightly different format, but no CAST is used, so there is no conversion to Char and therefore no issue passing the value.

Using the idea below, we are now able to get the output in the proper format, and it passes correctly to the report and filters it.

(Screenshot: prompt column formula and the resulting report output)
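The screenshot is not reproduced here, but a minimal sketch of the kind of prompt expression that avoids CAST, assuming a hypothetical "Time"."Calendar Date" column, is:

    YEAR("Time"."Calendar Date") * 100 + MONTH("Time"."Calendar Date")

Because YEAR() and MONTH() return numbers, April 2017 comes out as the number 201704, so a prompt range of 201701 to 201704 filters correctly and no longer drops the last month.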

Follow for more 🙂

How to navigate to external links from OBIEE reports

Sometimes we get business requirements which are meaningful but a little tricky to implement in OBIEE. Last week I got a requirement from the business to add a hyperlink on a report column which, when clicked, goes to an external link; along with it, I had to pass the CodePin and Number.

I tried many ways of passing parameters in the Go Nav / Go URL format, but it didn't work. Finally the <a href> approach worked, so I am sharing my solution with you.

Step 1: Edit the column formula and place your code in the format below. Concatenate all the parameters to be passed, set a target and give it a name.

Step 2: Modify the same column and set its data format as shown below.
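The original screenshots are not reproduced here, but a rough sketch of the Step 1 formula could look like the following; the external URL and the "Customer"."CodePin" / "Customer"."Number" columns are hypothetical placeholders:

    '<a href="http://www.example.com/lookup?pin=' || "Customer"."CodePin"
        || '&num=' || "Customer"."Number"
        || '" target="_blank">' || "Customer"."CodePin" || '</a>'

For Step 2, the usual approach is to open Column Properties → Data Format, tick Override Default Data Format and choose Treat Text As HTML, so the report renders the tag as a clickable link instead of showing the raw markup.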

Let me know if you face any issues.

Follow for more !!

 

Mughal Dine-esty…the fine Dining Restaurant !!

Mughal Dine-esty is a fine dining restaurant on the ground floor of the Time Square Building in Sushant Lok Phase 1, Gurgaon. It's an add-on venture of Dine-esty, which has been a famous Chinese restaurant in Gurgaon. This restaurant serves authentic Mughlai food. The place has a common entrance and two different set-ups inside. The interiors are exquisite and really aesthetic, and you can see a Chinese theme in the seating area.

The restaurant offers a good variety of food options in both Chinese and Mughlai cuisines. I visited for Sunday brunch with my husband and friends, and we tried both vegetarian and non-vegetarian dishes.

We started off with the starters, along with some signature drinks like Italian Smooch, Silver Lining, Blue Sea and Food Punch (5/5). Both the cocktails and mocktails were full of flavour and tasted great. In starters, we had the Methi corn ke seekh kebab (5/5), full of flavour and yum; the Kale channe ki tikki (4.5/5), nicely cooked, quite crispy, with a strong flavour of cardamom; the Dahi ke sholley (5/5), very crispy, full of hung curd and properly fried; and the Achari paneer tikka (5/5), which I liked best, as the paneer was very soft and full of flavour.

After drinks and starters, it was time to order the main course. We ordered dal makhani, Rawalpindi chole, keere ka raita, rice and naans. The food was delicious (5/5) and the portion sizes were good. For dessert we had gulab jamuns, which were quite soft and served hot.


The staff was pretty courteous and the service was prompt. Price-wise it is a little costly, but worth a visit.

Highly recommended, as the place is ideal for fine dining, especially with your family.

 

To know more about Mughal Dine-esty, click here:

Mughal Dine-Esty Menu, Reviews, Photos, Location and Info - Zomato

La Pino’z… what more can you ask from a pizza!!

La Pino’z Pizza is a delivery-only outlet serving Mexican, Italian and Chinese delicacies. It's located on Golf Course Road, Gurgaon. Their priority is to serve high-quality, freshly prepared, hot meals delivered on time, every time, to all of their customers.

They serve authentic pizzas at a very reasonable price. The menu has lots of options, both veg and non-veg. Our order was delivered on time and served hot. The packaging was proper: cardboard boxes with the required accompaniments. La Pino’z Pizza is one place where you can try the Giant Slice if you're looking to try more than one pizza or do not have the appetite to eat a whole one. You can also ask them to serve two flavours on the same pizza, and they have an option to order a Monster pizza for a large group of people.

We ordered the cheese garlic bread (5/5), which was super yummy, full of cheese and adequately flavoured; the corn tacos (4.5/5), which were excellent, full of cheese and corn, properly cooked and crispy; and one large Veg Tamer pizza and one paneer pizza (5/5). The pizzas were really good, with full-on veggie toppings like onion, tomatoes, capsicum, paneer and mushrooms. They were super yum. The crust was not too thick and quite crispy, and it was served with delicious dips and sauces.


I shared my order with my parents. Generally we don't eat out or order ready meals, so I was skeptical about how good the food would taste and feared my parents would not like it, but I was so happy when my father said he just loved the pizza. Everything was delicious and lip-smacking!!

Loved the food and would highly recommend it!!

What the ratings stand for: 5 = Excellent, 4 = Very Good, 3 = Good, 2 = Fair, 1 = Disaster.

Find it on Zomato at :

La Pino'z Pizza Menu, Reviews, Photos, Location and Info - Zomato

Buéno café…Premium bakery & Sandwich café !!

Buéno café is located in MGF Metropolis Mall, Gurgaon. It is a healthy café that serves delicious desserts and healthy sandwiches. Along with that, the thing I loved best is their presentation: food was delivered in white packages stating… Premium bakery & Sandwich café!!

This place is a newly opened delivery outlet and serves some of the best of its kind. We recently ordered a few food items from Bueno, and everything was lip-smacking.
Packaging: Loved the packing. Most of it was packed in little jars with labels on them. The sandwiches were packaged and sealed nicely with no leakage; they were easy to handle and had veg/non-veg marked properly on them. The cookies, hummus, cheesecake and banoffee were packed in airtight glass jars with lids and were vacuum packed.

Food: We ordered the assorted pita chips with hummus. They were colourful, square-shaped baked chips, and the flavour combination with hummus sounded so good. The chips have a hearty crunch and really work well with all sorts of dips. Then we tried their variety of cookies, like the double chocolate chip cookie and the tender coconut cookies: scrumptious, crispy mini cookies filled with chocolate chips, nicely packed in glass jars. They made for the best of appetisers.


Next we had the veg club sandwich, which had multigrain bread well stuffed with veggies (onion, tomatoes), mustard and slices of mozzarella cheese. The best part was that the bread was very soft and not overstuffed.

For dessert we had the banoffee and the blueberry cheesecake. They were worth binging on; I enjoyed every scoop and finished them to the end. After all, they are my favourites and tasted yummy. A must-have recommendation.
Delivery time: Food was delivered on time. The delivery person was very humble and polite.

Prices: A little overpriced, but worth every bit, as they maintain good food quality.

Follow me on Zomato

Find it on :
buéno Menu, Reviews, Photos, Location and Info - Zomato

Hadoop Ecosystem Components contd…(Tutorial Day 6)

So, continuing from the previous post, here we will discuss some more components of the Hadoop ecosystem.

Data Integration or ETL Components of Hadoop Ecosystem

Sqoop (SQL-to-Hadoop) is a big data tool that offers the capability to extract bulk data from non-Hadoop, relational databases (like MySQL, Oracle, Teradata, PostgreSQL), transform the data into a form usable by Hadoop, and then load it into HDFS, HBase or Hive. This process is similar to Extract, Transform and Load. It parallelises data transfer for fast performance, copies data quickly from external systems into Hadoop, and makes data analysis more efficient.

It is batch oriented and not suitable for low-latency interactive queries. It provides a scalable processing environment for both structured and unstructured data.

Sqoop Import

The import tool imports individual tables from RDBMS to HDFS. Each row in a table is treated as a record in HDFS. All records are stored as text data in text files or as binary data in Avro and Sequence files.
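As a rough illustration (the connection string, credentials and table name below are hypothetical), a typical import command looks like this:

    sqoop import \
      --connect jdbc:mysql://dbhost/sales \
      --username etl_user \
      --password-file /user/etl/.db-password \
      --table customers \
      --target-dir /data/raw/customers \
      --num-mappers 4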

Sqoop Export

The export tool exports a set of files from HDFS back to an RDBMS. The files given as input to Sqoop contain records, which are called rows in the table. These are read and parsed into a set of records, delimited with a user-specified delimiter.
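A matching export sketch (again with hypothetical names) pushes the HDFS files back into a relational table, stating the field delimiter explicitly:

    sqoop export \
      --connect jdbc:mysql://dbhost/sales \
      --username etl_user \
      --password-file /user/etl/.db-password \
      --table daily_summary \
      --export-dir /data/out/daily_summary \
      --input-fields-terminated-by ','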

Sqoop Use Case-

Coupons.com and Apollo Group use the Sqoop component of the Hadoop ecosystem to enable transmission of data between Hadoop and their data warehouses.
  • Flume-

Apache Flume is a distributed, reliable and available service for efficiently collecting, aggregating and moving large amounts of streaming or log data into the Hadoop Distributed File System (HDFS). It is used for collecting data from its origin and sending it on to its resting place (HDFS). Flume accomplishes this by outlining data flows that consist of three primary structures: sources, channels and sinks. The processes that run the data flow in Flume are known as agents, and the bits of data that flow through Flume are known as events.

Flume helps to collect data from a variety of sources, like logs, JMS, directories, etc. Multiple Flume agents can be configured to collect a high volume of data. It scales horizontally and is stream oriented; it provides high throughput with low latency and is fault tolerant. A sample agent configuration is sketched below.
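As a small sketch of how sources, channels and sinks fit together (all agent names and paths are hypothetical), a single-agent configuration file could look like this:

    # agent1.conf - one source, one in-memory channel, one HDFS sink
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    # watch a local directory for new log files
    agent1.sources.src1.type     = spooldir
    agent1.sources.src1.spoolDir = /var/log/incoming
    agent1.sources.src1.channels = ch1

    # buffer events in memory between source and sink
    agent1.channels.ch1.type     = memory
    agent1.channels.ch1.capacity = 10000

    # write the events into HDFS
    agent1.sinks.sink1.type          = hdfs
    agent1.sinks.sink1.hdfs.path     = /data/logs/incoming
    agent1.sinks.sink1.hdfs.fileType = DataStream
    agent1.sinks.sink1.channel       = ch1

The agent would then be started with something like flume-ng agent --conf-file agent1.conf --name agent1.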

Both Sqoop and Flume pull data from the source and push it to the sink. The main difference is that Flume is event driven, while Sqoop is not.

Flume Use Case –

A Twitter source connects through the streaming API and continuously downloads tweets (called events). These tweets are converted into JSON format and sent to downstream Flume sinks for further analysis of tweets and retweets to engage users on Twitter.
Goibibo uses Flume to transfer logs from production system to HDFS.

Data Storage Component of Hadoop Ecosystem

HBase

HBase is an open-source, distributed, sorted map data store. It is a column-store-based NoSQL database solution, similar to Google's BigTable framework. It supports random reads as well as batch computations using MapReduce. With the HBase NoSQL database, an enterprise can create large tables with millions of rows and columns on commodity hardware. The best time to use HBase is when there is a requirement for random read or write access to big datasets. HBase's important advantages are that it supports updates on larger tables and faster lookups, and the data store supports linear and modular scaling. HBase stores data as a distributed, multidimensional map, and its operations run as MapReduce tasks in a parallel manner.

It is well integrated with Pig, Hive and Sqoop. In terms of the CAP theorem, it is a consistent and partition-tolerant system.
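To make the sorted-map-of-rows-and-column-families idea concrete, here is a tiny HBase shell session; the table and column family names are made up for illustration:

    # create a table 'users' with one column family 'info'
    create 'users', 'info'

    # write two cells into the same row key
    put 'users', 'user1', 'info:name', 'Alice'
    put 'users', 'user1', 'info:city', 'Gurgaon'

    # random read of a single row, then a full table scan
    get 'users', 'user1'
    scan 'users'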

HBase Use Case-

Facebook is one of the largest users of HBase, with its messaging platform built on top of HBase in 2010.

Cassandra

Apache Cassandra is a free and open-source distributed database management system designed to handle large amounts of data across many commodity servers. This database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple data centers is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.
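As a brief sketch in CQL (the keyspace, table and replication settings below are only illustrative), replication is declared per keyspace and the partition key decides which nodes own each row:

    -- keyspace with 3 replicas of every row
    CREATE KEYSPACE IF NOT EXISTS analytics
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

    -- tweets partitioned by user, newest first within a partition
    CREATE TABLE analytics.tweets (
      user_id  text,
      tweet_id timeuuid,
      body     text,
      PRIMARY KEY (user_id, tweet_id)
    ) WITH CLUSTERING ORDER BY (tweet_id DESC);

    INSERT INTO analytics.tweets (user_id, tweet_id, body)
    VALUES ('ekta', now(), 'hello cassandra');

    SELECT body FROM analytics.tweets WHERE user_id = 'ekta';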

Use Cases:

For Cassandra, Twitter is an excellent example. We know that, like most sites, user information (screen name, password, email address, etc.) is kept for everyone, and that those entries are linked to one another to map friends and followers. And it wouldn't be Twitter if it weren't storing tweets, which, in addition to the 140 characters of text, are also associated with metadata like the timestamp and the unique ID that we see in the URLs.

Monitoring, Management and Orchestration Components of Hadoop Ecosystem- Oozie and Zookeeper

  • Oozie-

Oozie is a workflow scheduler in which workflows are expressed as Directed Acyclic Graphs. Oozie runs in a Java servlet container (Tomcat) and makes use of a database to store all the running workflow instances, their states and variables, along with the workflow definitions, to manage Hadoop jobs (MapReduce, Sqoop, Pig and Hive). The workflows in Oozie are executed based on data and time dependencies.
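A stripped-down workflow definition gives an idea of the DAG structure; the workflow name, node names and paths below are hypothetical:

    <workflow-app name="daily-load-wf" xmlns="uri:oozie:workflow:0.5">
      <start to="import-customers"/>
      <action name="import-customers">
        <sqoop xmlns="uri:oozie:sqoop-action:0.4">
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <command>import --connect ${dbUrl} --table customers --target-dir /data/raw/customers</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Import failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
      </kill>
      <end name="end"/>
    </workflow-app>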

Oozie Use Case:

The American video game publisher Riot Games uses Hadoop and the open source tool Oozie to understand  the player experience.

  • Zookeeper-

ZooKeeper is the king of coordination and provides simple, fast, reliable and ordered operational services for a Hadoop cluster. ZooKeeper is responsible for the synchronisation service, the distributed configuration service, and for providing a naming registry for distributed systems.
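For a feel of the naming-registry and shared-configuration role, here is a short session with the ZooKeeper command-line client (the znode paths and values are hypothetical):

    $ zkCli.sh -server zk1:2181
    # store a small piece of shared configuration under a znode
    create /app1 ""
    create /app1/config "batch.size=500"
    # any process in the cluster can read it back or list children
    get /app1/config
    ls /app1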

Zookeeper Use Case-

Found by Elastic uses ZooKeeper comprehensively for resource allocation, leader election, high-priority notifications and discovery. The entire Found service is built up of various systems that read from and write to ZooKeeper.

Several other common Hadoop ecosystem components include Avro, Cassandra, Chukwa, Mahout, HCatalog, Ambari and Hama. By implementing Hadoop using one or more of the Hadoop ecosystem components, users can personalise their big data experience to meet their changing business requirements. The demand for big data analytics will keep the elephant in the big data room for quite some time.

Data Serialisation (Data Interchange Protocols)

AVRO: Apache Avro is a language-neutral data serialization system developed within the Apache Hadoop project. Data serialization is a mechanism to translate data in a computer environment (like a memory buffer, data structures or object state) into a binary or textual form that can be transported over the network or stored on some persistent storage medium. Java and Hadoop provide serialization APIs, which are Java based, but Avro is not only language independent, it is also schema based. Once the data is transported over the network or retrieved from persistent storage, it needs to be deserialized again. Serialization is also termed marshalling, and deserialization is termed unmarshalling.

Avro uses the JSON format to declare its data structures. Presently, it supports languages such as Java, C, C++, C#, Python and Ruby. Avro is a schema-based system: a language-independent schema is associated with its read and write operations.

Like Avro, there are other serialization mechanisms in Hadoop, such as Sequence Files, Protocol Buffers and Thrift. Avro creates a self-describing file called an Avro Data File, in which it stores data along with its schema in the metadata section. Avro is also used in Remote Procedure Calls (RPCs); during an RPC, the client and server exchange schemas in the connection handshake.

To serialize Hadoop data, there are two ways −

  • You can use the Writable classes, provided by Hadoop’s native library.
  • You can also use Sequence Files which store the data in binary format.

The main drawback of these two mechanisms is that Writables and SequenceFiles have only a Java API and they cannot be written or read in any other language.

Therefore, files created in Hadoop with the above two mechanisms cannot be read by any other (third) language, which makes Hadoop a limited box. To address this drawback, Doug Cutting created Avro, which is a language-independent data structure.

Use Case:

Avro provides rich data structures. For example, you can create a record that contains an array, an enumerated type and a sub-record. These datatypes can be created in any language, can be processed in Hadoop, and the results can be fed to a third language.

(Content credit: http://www.tutorialspoint.com)
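A schema matching that example could be sketched as follows (the record, field and enum names are hypothetical); it declares, in plain JSON, a record containing an array, an enum and a nested sub-record:

    {
      "type": "record",
      "name": "Employee",
      "namespace": "example.avro",
      "fields": [
        {"name": "name",   "type": "string"},
        {"name": "skills", "type": {"type": "array", "items": "string"}},
        {"name": "level",  "type": {"type": "enum", "name": "Level",
                                    "symbols": ["JUNIOR", "SENIOR", "LEAD"]}},
        {"name": "address", "type": {
            "type": "record", "name": "Address",
            "fields": [
              {"name": "city", "type": "string"},
              {"name": "pin",  "type": "int"}
            ]}}
      ]
    }

Any language with an Avro library can write data against this schema, and because the schema travels with the data file, a different language can read it back without sharing any code.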

 

Thrift:

Thrift is a lightweight, language-independent software stack with an associated code generation mechanism for RPC. Thrift provides clean abstractions for data transport, data serialization, and application level processing. Thrift was originally developed by Facebook and now it is open sourced as an Apache project. Apache Thrift is a set of code-generation tools that allows developers to build RPC clients and servers by just defining the data types and service interfaces in a simple definition file. Given this file as an input, code is generated to build RPC clients and servers that communicate seamlessly across programming languages.

Thrift supports a variety of languages, including C++, Java, Python, PHP and Ruby.
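As a small illustration of such a definition file (the type and service names are hypothetical), the IDL declares the data types and the service interface, and the compiler generates the client and server stubs:

    // user.thrift - hypothetical example definition
    namespace java example.users

    struct User {
      1: i64    id,
      2: string name,
      3: bool   active = true
    }

    service UserService {
      User getUser(1: i64 id),
      void saveUser(1: User user)
    }

Running something like thrift --gen java user.thrift (or --gen py, --gen cpp, and so on) then produces the language-specific RPC code.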

To learn more about Hadoop, keep reading these tutorials; every day we try to bring something new and interesting for all my readers!!

Now we will start learning all Ecosystem components in more detail. Click here to read about how MapReduce Algorithm works with an easy example.