Thursday, October 22, 2009
OpenGL Vertex Buffer Objects
Friday, November 21, 2008
Biggest image I have ever created
I have adapted the code to create really huge images.
The largest I have computed to date is 512000 x 300000 pixels. The image is black and white, and I use a bilevel format to save space (1 bit/pixel); that still makes the uncompressed file about 20 gigabytes. Computation time on a Core 2 Duo at 2 GHz is about 4 days (with some pausing).
I then compute a multiresolution TIFF at half size (because it looks better). The final image is about 256000 x 150000 pixels.
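The storage figure above is easy to check with a few lines of Python (a sketch; the dimensions and bit depth are the ones quoted above):

```python
def raw_size_bytes(width, height, bits_per_pixel):
    """Uncompressed size of an image, in bytes."""
    return width * height * bits_per_pixel // 8

# The 512000 x 300000 bilevel (1 bit/pixel) image described above:
size = raw_size_bytes(512000, 300000, 1)
print(size)  # 19 200 000 000 bytes, i.e. roughly 20 gigabytes
```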
Here is a detail:

Tuesday, February 13, 2007
External libraries and tools used to build VLIV
Libraries are:
- libtiff for TIFF files handling
- IJG JPEG library for JPEG and JPEG-in-TIFF handling
- libpng for PNG handling
- zlib (indirect usage through libpng) for Deflate compression in TIFF images
- OWND library for IntelliMouse handling
All these tools are very well designed, and they have saved me a lot of time in getting VLIV done.
Wednesday, January 24, 2007
Printing size of very large images
The dimensions in pixels are 86400 x 43200 (remember it's the Earth at 500 m/pixel).
My screen (20 inches 16/10 DELL 2005 FPW) dimensions are about 44 cm x 27 cm (17 x 10.6 inches) for 1680 x 1050 pixels.
Simple math gives us 86400 / 1680 ≈ 51.4 and 43200 / 1050 ≈ 41.1.
This means that in order to view the complete image, we would need a matrix of 52 x 42 = 2184 monitors!
Now let's turn to printing. The maximum resolution the eye can distinguish is about 254 DPI (100 pixels/cm). This means that the printed size of the image is 86400 / 100 = 864 cm by 43200 / 100 = 432 cm (340 x 170 inches). This is huge: it would require more than 600 A4 sheets of paper!
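The monitor and print arithmetic above can be written down directly (a sketch; the screen and eye-resolution figures are the ones assumed in the text):

```python
import math

IMG_W, IMG_H = 86400, 43200        # the Earth at 500 m/pixel
SCREEN_W, SCREEN_H = 1680, 1050    # 20-inch DELL 2005 FPW
PX_PER_CM = 100                    # about 254 DPI, the eye's limit

# Monitors needed to show every pixel at once (round up per axis)
cols = math.ceil(IMG_W / SCREEN_W)   # 52
rows = math.ceil(IMG_H / SCREEN_H)   # 42
print(cols * rows)                   # 2184 monitors

# Printed size at 100 pixels/cm
print(IMG_W / PX_PER_CM, IMG_H / PX_PER_CM)  # 864.0 cm x 432.0 cm
```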
I have a poster printed from the NASA image. It's about 122 x 76 cm and was printed at 254 DPI. While it's already very nice, its width is only about 1/7 of the possible printed width at full resolution.
Here is a small version :
Tuesday, January 23, 2007
Image formats capabilities
The most important capability is a way to directly access a small subsection of the complete image. This is generally achieved by tiling, but some formats allow arbitrary access, so that tiling can be implemented on top of them.
Another capability is a way of storing multiple sub-resolutions, thus allowing zooming. Some formats have this built-in, others give a way to compute sub-resolutions using special capability of the format.
The last capability is support for very large files, since very large images produce correspondingly large files.
Here is a summary of these capabilities:
TIFF 32 bit file size limit and consequences
The existing TIFF format is limited in size to 4 gigabytes (because of 32-bit offsets). The format allows data to be compressed using various methods; the most commonly used are Deflate (Zip), JPEG and PackBits.
Deflate and PackBits are so-called lossless compression methods, while JPEG achieves high compression ratios using a lossy method.
Deflate compression ratios are about 4:1 on typical photographic images, while JPEG is more in the 10:1 range with minimal loss of perceptual quality.
The following table shows what dimensions can be achieved with different compression ratios:
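The computation behind such a table can be sketched in Python: given the 4 GB offset limit and a compression ratio, find the side of the largest square image whose data still fits. The 3 bytes/pixel (RGB) figure is an assumption; the ratios are the ones quoted above:

```python
import math

def max_square_side(file_limit_bytes, bytes_per_pixel, compression_ratio):
    """Side of the largest square image whose compressed data fits the limit."""
    max_pixels = file_limit_bytes * compression_ratio // bytes_per_pixel
    return math.isqrt(max_pixels)

LIMIT = 4 * 2**30  # classic TIFF: 32-bit offsets, 4 GiB
for name, ratio in [("uncompressed", 1), ("Deflate ~4:1", 4), ("JPEG ~10:1", 10)]:
    print(name, max_square_side(LIMIT, 3, ratio))
```

Uncompressed RGB tops out below 38000 x 38000 pixels; JPEG at 10:1 pushes the limit past 100 gigapixels of addressable data.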
There is an ongoing project called BigTIFF that will push these limits far out, as it is expected to use 64-bit offsets.
Pyramidal tiling data organization
So we do not want to load the complete gigapixel image into memory. What can we do about it?
The first idea is to use a scheme called tiling. The image is internally organized as an array of rows and columns of tiles. This organization makes it possible to retrieve a part of the image without loading the whole thing: requesting a part of the image means requesting only the tiles that intersect that part.
Imagine you have a 10000x10000 pixel image, divided into 256x256 pixel tiles. If you want to display only the top-left part on your 1280x1024 screen, then you only need to load 5x4 = 20 tiles, that is 196 608 bytes x 20 = 3 932 160 bytes, instead of 300 000 000 bytes for the entire image.
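The tile arithmetic of this example can be sketched as follows (3 bytes/pixel RGB assumed, as in the figures above):

```python
import math

def tiles_needed(view_w, view_h, tile_size=256):
    """Number of tiles intersecting a view aligned to the top-left corner."""
    return math.ceil(view_w / tile_size) * math.ceil(view_h / tile_size)

TILE_BYTES = 256 * 256 * 3           # 196 608 bytes per RGB tile
n = tiles_needed(1280, 1024)         # 5 x 4 = 20 tiles
print(n, n * TILE_BYTES)             # 20 tiles, 3 932 160 bytes
print(10000 * 10000 * 3)             # 300 000 000 bytes for the whole image
```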
This has three immediate consequences:
- Image loading is very fast, because you only load what is visible on the screen
- Image panning can also be made very fast, because as you pan around, tiles that are no longer visible can be discarded from memory and newly visible ones are loaded (this is called on-demand loading)
- Memory requirements are almost constant, roughly the memory needed for one visible screen of data regardless of image size, so you can now load your image on any PC, even one with as little as 128 megabytes of memory
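The on-demand loading described above can be sketched as a small LRU tile cache; `load_tile` here is a hypothetical callback standing in for the real tile decoder:

```python
from collections import OrderedDict

class TileCache:
    """Keep only recently used tiles in memory (LRU eviction)."""
    def __init__(self, load_tile, capacity=32):
        self.load_tile = load_tile       # hypothetical decoder callback
        self.capacity = capacity
        self.cache = OrderedDict()       # (col, row) -> tile data

    def get(self, col, row):
        key = (col, row)
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as recently used
        else:
            self.cache[key] = self.load_tile(col, row)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[key]
```

As you pan, `get` is called only for the visible tiles; off-screen tiles eventually fall out of the cache, which is what keeps memory usage roughly constant regardless of image size.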
Now that we are able to freely pan the image, we may want to be able to zoom out, up to the point where the entire image is visible. Tiling will not help here, because in order to display the complete image we need to access all pixels, which means loading the whole image to compute a reduced version of it. This is where the pyramidal organization comes in. The idea is to generate images that are reduced versions of the complete one, each being 2 times smaller than the previous one. Zooming in and out is now only a matter of switching between these resolutions. Of course these subimages are themselves tiled to allow arbitrary access.
Let’s take a small example with an original image that is 10000x8000 pixels. We would generate a pyramid of images:
- 10000x8000 (level 0)
- 5000x4000 (level 1)
- 2500x2000 (level 2)
- 1250x1000 (level 3)
We can see that at level 3, the complete image fits into our 1280x1024 screen. If we are at level 3 and want to zoom in, then we switch to level 2, and so on.
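The pyramid of this example can be generated with a few lines of Python (the 1280x1024 screen size is the one assumed above):

```python
def pyramid_levels(width, height, screen_w=1280, screen_h=1024):
    """Halve the dimensions until the whole image fits on the screen."""
    levels = [(width, height)]
    while levels[-1][0] > screen_w or levels[-1][1] > screen_h:
        w, h = levels[-1]
        levels.append((w // 2, h // 2))
    return levels

print(pyramid_levels(10000, 8000))
# [(10000, 8000), (5000, 4000), (2500, 2000), (1250, 1000)]
```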
We now have an organization of data that allows us to zoom and pan freely in our large image, with memory requirements limited by our physical screen size!
Of course you may say this comes at a cost in storage, because we have to store all these additional levels. What cost exactly?
If the image size at level 0 is 1 unit, then level 1 is 1/4 this unit (0.25), level 2 is 1/16 and so on.
This is the well-known geometric series 1 + 1/4 + 1/16 + ... whose sum is 4/3 ≈ 1.333, so the overhead is only about a third of the original size, which is not very large given the benefits.
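The overhead can be checked numerically; the partial sums of 1 + 1/4 + 1/16 + ... converge to 4/3:

```python
# Sum the pyramid storage series: each level costs 1/4 of the previous one.
total, term = 0.0, 1.0
for _ in range(20):
    total += term
    term /= 4.0
print(total)  # about 1.3333, i.e. roughly a 33% storage overhead
```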
This pyramidal tiling is used in at least two very widely known applications:

