
Custom Binary Input - Hadoop

Question:

I am developing a demo application in Hadoop, and my input consists of .mrc image files. I want to load them into Hadoop and do some image processing on them.

These are binary files that contain a large header with metadata, followed by the data of a set of images. The information on how to read the images is also contained in the header (e.g. number_of_images, number_of_pixels_x, number_of_pixels_y, bytes_per_pixel), so after the header bytes, the first [number_of_pixels_x * number_of_pixels_y * bytes_per_pixel] bytes are the first image, then the second, and so on.
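To make the layout concrete, here is a minimal sketch of the offset arithmetic in Java (the field names, byte offsets, and little-endian layout below are placeholders, not the real MRC header specification):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    /** Parsed .mrc header; the field offsets used here are hypothetical. */
    public class MrcHeader {
        public final int numberOfImages;
        public final int pixelsX;
        public final int pixelsY;
        public final int bytesPerPixel;
        public final long headerSize;  // total header length in bytes

        public MrcHeader(int images, int x, int y, int bpp, long headerSize) {
            this.numberOfImages = images;
            this.pixelsX = x;
            this.pixelsY = y;
            this.bytesPerPixel = bpp;
            this.headerSize = headerSize;
        }

        /** Size of one image: number_of_pixels_x * number_of_pixels_y * bytes_per_pixel. */
        public long imageSize() {
            return (long) pixelsX * pixelsY * bytesPerPixel;
        }

        /** Byte offset of image i: the full header, then i complete images. */
        public long imageOffset(int i) {
            return headerSize + i * imageSize();
        }

        /** Parse the header; assumes little-endian 32-bit ints at made-up offsets. */
        public static MrcHeader read(RandomAccessFile file, int headerSize) throws IOException {
            byte[] raw = new byte[headerSize];
            file.readFully(raw);
            ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
            return new MrcHeader(buf.getInt(0), buf.getInt(4), buf.getInt(8),
                                 buf.getInt(12), headerSize);
        }
    }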

What is a good input format for these kinds of files? I have thought of two possible solutions:

1. Convert them to sequence files by placing the metadata in the sequence file header and having a key/value pair for each image. In this case, can I access the metadata from all mappers?
2. Write a custom InputFormat and RecordReader, create splits for each image, and place the metadata in the distributed cache (reading it back in a mapper is sketched below).
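For the second option, each mapper can recover the metadata from the distributed cache in its setup() method. A rough sketch, assuming the header was extracted to an HDFS file beforehand and reusing the hypothetical MrcHeader above (the path and the mapper's input types are illustrative):

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ImageMapper
            extends Mapper<IntWritable, BytesWritable, Text, BytesWritable> {

        private byte[] headerBytes;  // raw header; parse fields as needed

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // The driver must have registered the header file with:
            //   job.addCacheFile(new URI("/meta/mrc-header.bin"));  // hypothetical path
            URI[] cached = context.getCacheFiles();
            Path headerPath = new Path(cached[0]);
            FileSystem fs = headerPath.getFileSystem(context.getConfiguration());
            headerBytes = new byte[(int) fs.getFileStatus(headerPath).getLen()];
            try (FSDataInputStream in = fs.open(headerPath)) {
                in.readFully(headerBytes);
            }
        }

        @Override
        protected void map(IntWritable imageIndex, BytesWritable image, Context context)
                throws IOException, InterruptedException {
            // image.copyBytes() holds one image's pixels; process it here.
        }
    }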

I am new to Hadoop, so I may be missing something. Which approach do you think is better? Is there any other way that I am missing?

Answer1:

Without knowing more about your file format, the first option seems to be the better one. Using sequence files, you can leverage a lot of SequenceFile-related tools to get better performance. However, two things about this approach concern me:

1. How will you get your .mrc files into a .seq format? (A one-off converter is sketched at the end of this answer.)
2. You mentioned that the header is large; this may reduce the performance of SequenceFiles.

But even with those concerns, I think that representing your data as SequenceFiles is the best option.
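On the first concern, a small one-off converter is usually enough: read each image out of the .mrc file and append it to a SequenceFile, stashing the geometry in the SequenceFile's own metadata. A rough sketch, reusing the hypothetical MrcHeader from the question (the 1024-byte header size and the key/value choice are assumptions):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class MrcToSequenceFile {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            try (RandomAccessFile mrc = new RandomAccessFile(args[0], "r")) {
                MrcHeader header = MrcHeader.read(mrc, 1024);  // assumed header size

                // Store the geometry in the SequenceFile's metadata so readers
                // can recover it via SequenceFile.Reader#getMetadata().
                SequenceFile.Metadata meta = new SequenceFile.Metadata();
                meta.set(new Text("number_of_pixels_x"), new Text(Integer.toString(header.pixelsX)));
                meta.set(new Text("number_of_pixels_y"), new Text(Integer.toString(header.pixelsY)));
                meta.set(new Text("bytes_per_pixel"), new Text(Integer.toString(header.bytesPerPixel)));

                byte[] image = new byte[(int) header.imageSize()];
                try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                        SequenceFile.Writer.file(new Path(args[1])),
                        SequenceFile.Writer.keyClass(IntWritable.class),
                        SequenceFile.Writer.valueClass(BytesWritable.class),
                        SequenceFile.Writer.metadata(meta))) {
                    for (int i = 0; i < header.numberOfImages; i++) {
                        mrc.seek(header.imageOffset(i));
                        mrc.readFully(image);
                        // One record per image: key = image index, value = raw pixels.
                        writer.append(new IntWritable(i), new BytesWritable(image));
                    }
                }
            }
        }
    }

Note that, as far as I know, the stock SequenceFileInputFormat does not hand this metadata to mappers, so it is often simplest to also pass those few values through the job Configuration.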
