Friday, December 14, 2007

Vliv 2.5.1 handles plugins

The new version of VLIV supports dynamic loading of plugins.

Plugins are easy to write and I provide two samples.
A plugin for BMP, TIF, PPM, PNG and JPEG images is delivered, and now vliv.exe has no knowledge of image formats.

So if you feel that VLIV should know XXX format or that your implementation of YYY format is the best, all you have to do is write a plugin.
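
For the curious, dynamic plugin loading on Windows boils down to the classic LoadLibrary / GetProcAddress pattern. The sketch below only illustrates that general pattern; the plugin file name and the exported function name are made up and are not the actual VLIV plugin interface.

#include <windows.h>
#include <stdio.h>

/* Illustration only: "tileplugin.dll" and "LoadTile" are hypothetical names. */
typedef int (*LoadTileFn)(unsigned col, unsigned row, unsigned char *rgb);

int main(void)
{
    HMODULE dll = LoadLibraryA("tileplugin.dll");
    if (dll == NULL) {
        fprintf(stderr, "plugin not found\n");
        return 1;
    }
    LoadTileFn loadtile = (LoadTileFn)GetProcAddress(dll, "LoadTile");
    if (loadtile != NULL)
        printf("plugin exports a tile loader\n");
    FreeLibrary(dll);
    return 0;
}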

Tuesday, December 11, 2007

Vliv 2.5.0 is out

This new version includes a much improved memory manager that allows really huge images to be loaded at very little memory cost.

It also includes a new sample of virtual image loading.

Also included is experimental multi-threading support for tile loading.

Saturday, December 8, 2007

The Last Supper 16 gigapixel image

The team from Haltadefinizione has once again created a very detailed image of a large painting, this time it's The Last Supper by Leonardo da Vinci.

While always impressive, I think a limit has been reached: higher resolution (and thus more gigapixels) would not reveal more detail, as they have reached the point where, at maximum zoom, we can almost see the molecules of paint...

To give an idea, the painting is 880x460 cm while the image is 172181x93611 pixels, so each pixel is only 0.05x0.05 millimeter...

Wednesday, December 5, 2007

Vliv as a fractal viewer

In a previous post, I was talking about dynamically generated tiles.
I have prototyped this and implemented a simple algorithm for dynamically generating tiles: a Newton fractal generator.

The idea is simple: given a point in the complex plane and a polynomial, apply Newton's method and see where the point ends up. Each starting point eventually converges to one of the roots of the polynomial, which gives the base color. This color is then shaded using the number of iterations needed to get close to the root (there are of course other coloring algorithms; I chose this one because it is really simple to implement). Computation is done at the single-tile level.
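
As an illustration, here is a minimal sketch of such a per-tile generator for p(z) = z^3 - 1. The mapping to the complex plane, the iteration budget and the coloring are illustrative choices, not the exact code of my implementation.

#include <math.h>

#define TILESIZE 256
#define MAXITER  64

/* Fill one 256x256 RGB tile with the Newton fractal of p(z) = z^3 - 1.
   (tx, ty) is the tile position in the virtual image; scale maps pixels
   to the complex plane. */
void newton_tile(int tx, int ty, double scale, unsigned char *rgb)
{
    /* the three cube roots of unity */
    static const double rootsre[3] = { 1.0, -0.5,              -0.5 };
    static const double rootsim[3] = { 0.0,  0.86602540378444, -0.86602540378444 };

    for (int y = 0; y < TILESIZE; ++y)
        for (int x = 0; x < TILESIZE; ++x) {
            double zr = (tx * TILESIZE + x) * scale - 2.0;
            double zi = (ty * TILESIZE + y) * scale - 2.0;
            int iter, root = 3;                  /* 3 means "did not converge" */
            for (iter = 0; iter < MAXITER && root == 3; ++iter) {
                /* one Newton step: z <- z - (z^3 - 1) / (3 z^2) */
                double zr2 = zr * zr - zi * zi, zi2 = 2.0 * zr * zi;  /* z^2       */
                double nr = zr2 * zr - zi2 * zi - 1.0;                /* re(z^3-1) */
                double ni = zr2 * zi + zi2 * zr;                      /* im(z^3-1) */
                double dr = 3.0 * zr2, di = 3.0 * zi2;                /* 3 z^2     */
                double d2 = dr * dr + di * di;
                if (d2 == 0.0)
                    break;
                zr -= (nr * dr + ni * di) / d2;
                zi -= (ni * dr - nr * di) / d2;
                for (int k = 0; k < 3; ++k)
                    if (fabs(zr - rootsre[k]) < 1e-6 && fabs(zi - rootsim[k]) < 1e-6)
                        root = k;
            }
            /* base color from the root reached, shaded by iteration count */
            unsigned char shade = (unsigned char)(255 - 255 * iter / MAXITER);
            unsigned char *p = rgb + 3 * (y * TILESIZE + x);
            p[0] = (root == 0) ? shade : 0;
            p[1] = (root == 1) ? shade : 0;
            p[2] = (root == 2) ? shade : 0;
        }
}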

The beauty of this is that, thanks to VLIV's tiling features, the image size is virtually unlimited: viewing a 256000x256000 image is possible, and no slower than a smaller one.

The complete source code for my sample implementation is fewer than 200 lines of code, most of it being my far-from-optimal Newton method implementation.

Here is the result:


Imagine if the idea were implemented in Google Earth (or Maps), using Google's storage and computation capabilities; it would make a nice feature...

This Wikipedia page has more information on Newton Fractals.
Simon Tatham also has a very detailed page on this topic.


Friday, November 16, 2007

A picture of my new fractal object

Next to my new iPod Touch for a size comparison:

3d fractal object generation process

These objects, as I said previously,
have very small details, so the polygonization grid should be very fine.

The result is that in order to get sufficient detail, a huge number of triangles is generated, even for regions where the surface is locally flat and where in theory a much coarser grid could have been used.

I have not found any way of implementing adaptive refinement, so I am left with billions (literally) of triangles. In addition to taking a lot of disk space (1 triangle equals 3 vertices, 1 vertex equals 3 floating-point values, which makes 36 bytes per triangle), this number of triangles is far beyond what can be displayed interactively (I think the current limit of graphics hardware is about 10 million tris/second).
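
A quick check of the 36 bytes figure, assuming plain triangle soup (3 single-precision floats per vertex, no vertex sharing between triangles):

#include <stdio.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex v[3]; }  Triangle;   /* triangle soup, no sharing */

int main(void)
{
    printf("%zu bytes per triangle\n", sizeof(Triangle));     /* 36 */
    printf("%.0f GB for 1 billion triangles\n",
           1e9 * sizeof(Triangle) / 1e9);                     /* 36 */
    return 0;
}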

The solution is called decimation. It simply consists of generating an object with fewer triangles, taking into account the fact that some areas are almost flat and thus require fewer triangles. Of course this usually comes with some loss of the original shape, but it may not be noticeable, depending on the quality of the implementation.

Over the years I have found many implementations of mesh decimation; here is a survey of common ones.

The real key point for me is that the decimation implementation must be out-of-core, that is, it must work on data (much) larger than available memory. Not all implementations support this, but I found a little gem, cluspartred by Heiko Lippmann.
Heiko has been kind enough to send me the Linux executable of his program, and I must say it works very well for my purposes.

Starting from a 1 billion triangle mesh, it can generate a 5 million triangle mesh that is almost perfect looking and usable (displayable) on standard machines.
It creates partitions on disk and loads these partitions on demand to decimate them in memory, handling all the details of contiguous partitions. On a recent machine (Xeon 3.2 GHz, see full specs below), this process only takes a few hours.

I then convert the resulting triangle soup to a custom format that I wrote a viewer for, and eventually convert it to the STL format suitable for 3d Printers.

The generation of the initial triangle soup is multi-threaded using Intel's TBB parallel_for and is very efficient: using a grid size of 12000x9671x8417 only takes a few hours on an 8-core 3.2 GHz Xeon machine with 2x6 MB cache and 16 GB of memory (cpuinfo x5482, not a bad machine...).

Thursday, November 15, 2007

More 3d fractal objects

I have ordered and received another object.

These 3d printed objects are the result of some heavy computation.

The basic idea is that the object is an isosurface, where a mathematical function is used to compute values. An introduction to isosurfaces can be found at Hyperfun.

You can find information on the type of 3d fractals used for my objects at Paul Bourke's site.

While quaternion fractal images have been around for a long time because they are very well suited to the Ray Tracing method, and have been available in the PovRay raytracer for a while now (Hi Skal!), it is much harder to get a polygonal representation of these objects.

A common method to obtain 3d triangles for this kind of object is called Polygonization of scalar fields.

It consists of computing values on a grid in 3d space, then determining triangles for a given unit cell. Once again Paul Bourke describes this very well.
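
The scalar field evaluated on that grid can be as simple as an escape test of the quaternion iteration q <- q^2 + c. Here is a minimal sketch; the constant c, the iteration budget and the returned values are illustrative, not my actual generation code.

#include <math.h>

typedef struct { double w, x, y, z; } Quat;

static Quat qmul(Quat a, Quat b)
{
    Quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

/* Field value at a 3d point (the real component is fixed to 0, i.e. we take
   one 3d slice of the 4d set): > 0 outside, <= 0 inside, so the isosurface
   to polygonize sits at value 0. */
double julia_field(double x, double y, double z)
{
    const Quat c = { -0.2, 0.6, 0.2, 0.2 };          /* illustrative constant */
    Quat q = { 0.0, x, y, z };
    for (int i = 0; i < 20; ++i) {
        q = qmul(q, q);
        q.w += c.w; q.x += c.x; q.y += c.y; q.z += c.z;
        double n2 = q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z;
        if (n2 > 4.0)
            return sqrt(n2) - 2.0;                   /* escaped: outside */
    }
    return -1.0;                                     /* still bounded: inside */
}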

The problem is that fractal objects have very small details, so the grid should be very fine, leading to huge memory requirements, both for computation and storage.

As an example let's look at an object that is polygonized with a coarse grid (150x120x105). The result is not very nice looking, but already has more than 160 000 triangles.



Going to a grid of 500x402x350 makes the object nicer but gives 1 834 396 triangles, so we are starting to reach the limits of what most graphics cards are capable of.



Refining the grid further would only give minimal aesthetic gains, while increasing storage and lowering the framerate to an unusable level. So this is not the way to go.

The next post will unveil the method I use to get nice-looking objects such as this one, which has fewer than 5 million triangles (this one can be printed on a 3d printer):

Sunday, October 28, 2007

3d Fractals for real

For a while, I have been interested in 3d fractals.
Recent developments in 3d printing technology have allowed me to get some of these printed for real, and relatively cheaply.

Getting a file suitable for a 3d printer is a very complicated process.

Here is a picture of me at my home desk holding a SLS object of size 15 cm, printed by 3DProd for 269 euros.

On the desk is an older smaller object (the same that is on the screen), printed on a ZCorp printer.


Sunday, September 30, 2007

Integrating your own gigapixel images in Google Earth

I have found that the processing necessary to integrate very large images in Google Earth 4.2 is, if not easy, at least possible.

All you need is to create a version of your image padded to a power of two in both dimensions, then scale it down by 2 recursively until the image fits in a single 256x256 pixel tile.
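
As an illustration of that first step, here is a small sketch that computes the padded size and the number of reduced levels, using the Blue Marble NG dimensions as an example (my scripts may organize this differently):

#include <stdio.h>

static unsigned next_pow2(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

int main(void)
{
    unsigned w = 86400, h = 43200;        /* example: Blue Marble NG */
    unsigned pw = next_pow2(w), ph = next_pow2(h);
    unsigned cw = pw, ch = ph;
    int levels = 0;
    while (cw > 256 || ch > 256) {        /* halve until one 256x256 tile is enough */
        cw /= 2;
        ch /= 2;
        ++levels;
    }
    printf("padded to %ux%u, %d reduced levels plus the original\n", pw, ph, levels);
    return 0;
}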

Then you have to subdivide each image into individual tiles.

Finally you have to create a suitable KML file.

The documentation for this feature can be found here.

I have some scripts that automate the process for a given TIFF image.

Saturday, September 22, 2007

Google Maps-like navigation of images

Viewing images on the Web is not always very user friendly, mainly because Web Browsers do not allow arbitrary zooming and panning in images.

A more and more common way to solve this problem is the use of JavaScript with custom navigation of images, using concepts such as pyramidal organization of images.

A well known implementation of this concept is Google Maps.

While not directly related to very large images (zooming does not refine the information, since the images are not what I call very large), interesting work has been done by Siva Dirisala on a variety of NASA images.

As these ideas spread, we should expect more and more very large images (and more generally all images) to be zoomable in place, a nice addition to the content of Web pages.

Sunday, August 26, 2007

Gigapxl images now available in Google Earth

Everybody has noticed that Google Earth 4.2 now allows sky view. But few have noticed that it also allows viewing all the Gigapxl images.

All you have to do is enable "selected content" and choose Gigapxl Photos.

Locations are spread all around the United States.

Wednesday, August 15, 2007

Vliv source code for sale

During the 3 or 4 years I offered Vliv as shareware, I had exactly 4 registered customers. So I decided to make Vliv freeware, not crippled in any way. Since then I have had 3 people interested in buying the source code (which is available at a reasonable price), so my revenue has actually been higher.

It should be noted that these requests are not at all regular: since the 3 I got at the beginning, I have had no further requests for the source code.

Thursday, July 12, 2007

Looking for IJG specialists

When I added JPEG support to VLIV, I tried to follow the "large images" philosophy, that is, I create virtual tiles (in fact virtual strips), so that only the visible part of the file needs to be loaded.

Unfortunately the IJG library does not, as is, allow arbitrary positioning in the image, so I have to decompress the image all the way up to the strip start position.

I subdivide the image into 256-pixel strips.

This is quite costly: the further towards the end of the image you go, the more you have to decode "for nothing".
Imagine you have a 10000x10000 pixel image and a 1000x1000 display, and you want to display the bottom of the image.

This corresponds to 4 (or 5) strips, each requiring between 9000 and 10000 lines to be decoded "for nothing", only to reach the beginning of the strip. So while the memory used is much lower than storing the complete image, the CPU used is about 5 times the CPU needed to load the complete image.

There is clearly room for improvement.

Here is my code (repeated for each virtual strip):


// Read and discard all scanlines above the strip start: the IJG library
// only decodes sequentially, so there is no way to seek directly.
for (idx = 0; idx < stripstarty; ++idx)
    jpeg_read_scanlines(&cinfo, &dstptr, 1);
// Read the stripheight scanlines that actually belong to the strip,
// stopping early if we reach the bottom of the image.
for (idx = 0; idx < stripheight && stripstarty + idx < imageheight; ++idx)
    jpeg_read_scanlines(&cinfo, &dstptr, 1);


So if you are aware of any method that would speed up the first loop, that would be incredibly useful, and very large JPEG loading would be MUCH faster.

Or of course if you know a JPEG (preferably free) library that supports arbitrary region decoding, feel free to contact me.

Wednesday, July 11, 2007

Experimental support in VLIV for BigTIFF

This morning I checked out the libtiff CVS and compiled VLIV against this version (it's 4.0 beta I think).

This should enable support for BigTIFF files.
I have tested on a few files, but did not go as far as obtaining large (huge, more than 4 gigabyte) files for more thorough testing.

Remember this is a giant step in supporting really large images; I cannot see a real limit other than disk space now, as a 1 terabyte image only scratches the surface of the file sizes possible with BigTIFF.

If you have large BigTIFF images, please test VLIV support for them and report any problem.

Sunday, July 8, 2007

A specific application for Vliv

The sources for large images seem to be more and more diverse.

I have seen satellite data, scanned images from medical devices, and stitched digital photographs.

A customer contacted me to get the Vliv source code and adapt it to a very specific usage: a kiosk mode for displaying digital art in a gallery. That means removing all of the interface and binding the functionality to a trackball and buttons.

The image he gave me is not that large (24000x12000) but try opening this in a standard viewer...

I have written a script specifically for him that creates a special pyramidal TIFF from his original painting, the difference from standard pyramids being that the zoom ratio between levels is not 1/2 but 9/10. That allows very smooth (un)zooming in the image, and classic tiling allows panning.
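
To give an idea of what the 9/10 ratio implies, here is a small sketch counting the levels needed for the 24000x12000 painting to fit a 1680x1050 screen (the screen size is just an example):

#include <stdio.h>

int main(void)
{
    double w = 24000.0, h = 12000.0;
    int levels = 0;
    while (w > 1680.0 || h > 1050.0) {   /* shrink until it fits the screen */
        w *= 0.9;                        /* 9/10 ratio between levels */
        h *= 0.9;
        ++levels;
    }
    printf("%d levels with a 9/10 ratio (a 1/2 ratio needs only 4)\n", levels);
    return 0;
}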

Wednesday, June 20, 2007

Implementing JPEG2000 in Vliv

The JPEG2000 format looks quite promising, because of its compression ratios and also because of features such as Regions Of Interest. So I am looking at implementing support in VLIV.

There are a few libraries available (I am not counting professional and expensive ones):
  1. Jasper, free
  2. OpenJPEG free, OpenSource
  3. J2K-Codec, commercial
I have registered and tested J2K-Codec. The API is very simple and well adapted to Windows usage; in fact I was able to integrate it in a few tens of lines of code. It is also very fast. However the LITE version ($49 + VAT) does not allow loading images with more than 5 resolutions. That is a problem because I cannot afford the Pro version that does ($199 + VAT). I would rather have a slower LITE version than one I cannot use at all on some images.


I have not yet understood how tiling works with resolutions in JPEG2000; all I need is a simple way to load part of the image, per resolution. Right now large images cannot be loaded because decoding a complete resolution takes too much memory.


If you wish to help get JPEG2000 support into Vliv, contact me for the source code. I am particularly interested in Very Large Image handling (aka tiling or ROI).

Another source of very large images (but not gigapixel)

Here is another source of Very Large Images (largest I found is 26267 x 20676, about 1/2 Gigapixel).

The images are taken with the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter.

They are available in the JPEG2000 format that I am experimentally adding to VLIV.
The next post will discuss the JPEG2000 library I am currently using.

HiRISE Web site

Monday, June 4, 2007

A complete movie in one image

A friend of mine had this idea: make an image containing all the frames of a movie.
So I experimented a little and found it's quite easy to do.

I am starting from an AVI that is about 700 Megabytes.

The first step is to reduce the size of the movie (to 320x176 for example), using VirtualDub and an uncompressed RGB destination.

The next step is the generation of individual frames. I use a two-line AVISynth script, run it in VirtualDub, and voila: 144499 JPEG images, each being a frame of the movie.

Next I use some custom code to recreate a single TIFF file with all the images appended in a 250x166 tiling (64000x42416 pixels). Actually I take only 1/3 of the images because the whole set would not fit within the 32-bit limits.
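
The custom code itself is not published, but here is a rough sketch of how such a frame-to-tile TIFF can be written with libtiff. get_frame() is a stand-in for the code that decodes the individual JPEG frames, and the 320x176 tile size matches the reduced frame size used above.

#include <string.h>
#include <tiffio.h>

/* Stand-in frame source: the real tool decodes JPEG frame number 'index',
   here we just return a dummy gray frame of the same size. */
static unsigned char *get_frame(int index)
{
    static unsigned char frame[320 * 176 * 3];
    memset(frame, 128 + index % 64, sizeof frame);
    return frame;
}

int write_movie_tiff(const char *path, int cols, int rows)
{
    TIFF *tif = TIFFOpen(path, "w");
    if (tif == NULL)
        return -1;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH,      cols * 320);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH,     rows * 176);
    TIFFSetField(tif, TIFFTAG_TILEWIDTH,       320);    /* one movie frame... */
    TIFFSetField(tif, TIFFTAG_TILELENGTH,      176);    /* ...per TIFF tile   */
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE,   8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC,     PHOTOMETRIC_RGB);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG,    PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_COMPRESSION,     COMPRESSION_JPEG);

    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            TIFFWriteTile(tif, get_frame(r * cols + c), c * 320, r * 176, 0, 0);

    TIFFClose(tif);
    return 0;
}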

Then I use my custom software to generate the TIFF pyramid.

The result is a 1 761 000 bytes file that allows zooming/panning in the movie.

Here is the fully unzoomed image (reduced 50% from its original size). Can you guess which movie I used?


Another Gigapixel by Scott Howard

This time it's Chicago at night. Viewable through a Zoomify Flash interface.

Friday, May 11, 2007

New multi Gigapixel image available

This one has 13 real gigapixels and the detail is impressive. It is available through a Zoomify interface.
It looks like we really need BigTIFF because of file sizes like this.

Saturday, April 28, 2007

Release of very large Hubble image

NASA has released a very large image with dimensions 29566 x 14321.
Here is the link to the page.

It opens quite well in VLIV (prefer the tiff version).

Wednesday, April 4, 2007

Fast image processing

Another side effect of using tiles for very large images is that, as you only load what is visible on the screen, you can apply special effects to a very small subset of the complete image while maintaining real-time performance.

A simple filter could be inverting the colors of the image, or displaying only its red component.
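
Such a per-tile filter is literally a few lines. A minimal sketch for inverting one RGB tile in place:

/* Invert the colors of a single RGB tile; running this only on the visible
   tiles is what keeps the operation real-time. */
void invert_tile(unsigned char *rgb, int tilewidth, int tileheight)
{
    int n = tilewidth * tileheight * 3;   /* 3 bytes per pixel */
    for (int i = 0; i < n; ++i)
        rgb[i] = 255 - rgb[i];
}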

Filters that deal with multiple pixels are not easily applicable to single tiles; for example, blurring would require data from adjacent tiles that may not be in memory.

Such filters could easily be implemented in VLIV as small plugins, just like Photoshop plugins.
It may even be possible to reuse already-written plugins, as the Photoshop plugin API is widely used and documented.

Monday, March 26, 2007

Virtual Very Large Images

While very large images typically take up a large amount of hard disk space, there is a category of very large images that are virtual, that is, computed on the fly.

The most common example is fractal images: at a given resolution (zoom level) and position, you can obtain the tile image by computation.

Others that come to mind are:
  • Algorithmic images, such as an image that displays all possible 8-bit-per-channel RGB colors (see the sketch after this list).
  • An image that displays all characters from the Unicode table. Each character can be made to fit in one tile and be rendered on the fly as needed.
  • Rasterized images from a vector description.
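
As a sketch of the first idea in the list, here is a tile generator for a 4096x4096 virtual image where pixel number n simply gets color n, so every 24-bit RGB color appears exactly once and nothing is ever stored on disk (the dimensions and layout are just one possible choice):

/* Fill the 256x256 RGB tile at (tilecol, tilerow) of a 4096x4096 virtual
   image: 4096 x 4096 = 2^24 pixels, one per possible RGB color. */
void allcolors_tile(int tilecol, int tilerow, unsigned char *rgb)
{
    for (int y = 0; y < 256; ++y)
        for (int x = 0; x < 256; ++x) {
            unsigned px = (unsigned)(tilecol * 256 + x);
            unsigned py = (unsigned)(tilerow * 256 + y);
            unsigned n  = py * 4096u + px;        /* pixel index = its color */
            unsigned char *p = rgb + 3 * (y * 256 + x);
            p[0] = (n >> 16) & 0xFF;              /* red   */
            p[1] = (n >> 8)  & 0xFF;              /* green */
            p[2] = n & 0xFF;                      /* blue  */
        }
}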

Sunday, March 18, 2007

The Virtual Microscope

I have found another source of Very Large Images (or relatively large ones): a NASA-funded project, The Virtual Microscope.

It provides a large set of images, obtained through various microscopy methods, such as SEM.
The application is written in Java, with a nice small set of features (and an ugly look). The project images are delivered as Jars (aka Zip files), with subresolutions in subfolders, and individual tiles as single files. I have no idea why they did not use pyramidal TIFFs.

I do not know the data license, so I will not post VLIV images created from theirs, but it is easy to convert the Jars to single TIFFs suitable for VLIV.

Tuesday, March 13, 2007

VLIV released as freeware

The number of registered (paying) customers of VLIV is quite low (4 people in fact).
So I have decided to make it freeware, that is, there is no need to register and the downloadable version is not limited in any way.

Registering is always possible and donations are encouraged if of course you like the program.

Wednesday, February 21, 2007

Estimating the dimension of Google Earth full resolution image

Above my home, the smallest detail one pixel covers on Google Earth is about 10 cm.
The Earth's equatorial circumference is about 40 000 kilometers, that is 10 000 pixels per kilometer.
This makes 40 000 * 10 000 = 400 000 000 pixels around the equator, and half that, 200 000 000 pixels, from pole to pole.
Assuming the Earth is spherical and that Google has all the data, this makes 400 000 000 * 200 000 000 = 80 000 000 000 000 000 pixels, far more gigapixels than I can imagine.

Of course I do not think data is available at this resolution for the whole Earth, and my estimate of the precision may be wrong, but these numbers are mind-boggling.

The most precise satellite image of the Earth I have is about 1 pixel per 500 meters, which already makes an 86400 x 43200 image.

Wednesday, February 14, 2007

Download the NASA Blue Marble NG pyramidal TIFF

I know seeing is believing. Unless you actually see a one-gigapixel pyramidal image viewed in VLIV, you will not really understand how nice it is. This is why I have made the NASA Blue Marble NG image downloadable here.

Be warned, it's a large image (427,582,365 bytes). It contains all 8 resolutions from 86400x43200 pixels down to 675x337 pixels.

In order to keep the file size reasonable, aggressive JPEG compression has been applied, so there are some artifacts you would not see in a losslessly compressed image.

Tuesday, February 13, 2007

External libraries and tools used to build VLIV

VLIV is written in pure C (about 3000 lines, not counting the resource file). I use several external libraries to load some image types, as well as other tools.

Libraries are:
  • libtiff for TIFF files handling
  • IJG JPEG library for JPEG and JPEG-in-TIFF handling
  • libpng for PNG handling
  • zlib (indirect usage through libpng) for Deflate compression in TIFF images
  • OWND library for intellimouse handling
External tools I use are:
  • UPX for compacting the VLIV executable
  • NSIS for the installer
All these tools are very well designed, and have saved me a lot of time in getting VLIV done.

Saturday, February 10, 2007

What is your largest image ?

The largest images I have on my machine, viewable with VLIV, are 86400 x 43200 = 3.7 gigapixels
(the NASA Blue Marble NG) and 96512 x 88832 = 8.6 gigapixels.

What are the largest images you have been able to view with VLIV ?
Please post them in comments.

Wednesday, January 31, 2007

Request for features

I have always tried to make VLIV as simple as possible, while keeping it as powerful as possible.
So the number of features is quite limited, compared with other Viewers such as IrfanView.

I would like to keep the initial focus on Very Large Images, but I would like to know what you, the users, have found missing in VLIV.

I have some ideas myself, but I lack the time for large improvements.

So please do not hesitate to ask for features in the comments of this article; I will consider all proposals.

Wednesday, January 24, 2007

Printing size of very large images

Imagine we have a Very Large Image (such as the one generated from NASA Blue Marble NG).
The dimensions in pixels are 86400 x 43200 (remember it's the Earth at 500 m/pixel).

My screen (a 20-inch 16/10 DELL 2005FPW) measures about 44 cm x 27 cm (17 x 10.6 inches) for 1680 x 1050 pixels.
Simple math gives us 86400 / 1680 ≈ 52 and 43200 / 1050 ≈ 41.
This means that in order to view the complete image, we would need a matrix of 52 x 41 = 2132 monitors!

Now let's go to printing. The maximum resolution the eye can distinguish is about 254 DPI (100 pixels/cm). This means that the printed size of the image is 86400 / 100 = 864 cm by
43200 / 100 = 432 cm (340 x 170 inches). This is huge; it would require more than 600 A4 sheets of paper!


I have a poster printed from the NASA image. It's about 122 x 76 cm and has been printed at 254 DPI. While it's already very nice, it's only 1/8 of the possible printed size at full resolution.
Here is a small version :

Tuesday, January 23, 2007

VLIV, The Very Large Image Viewer for Windows

In my spare time, I have coded VLIV as an exercise in programming Windows.
The idea was to create a minimal viewer for very large (tiled) TIFF files.

It uses a very simple idea: only visible tiles are loaded into memory; as soon as a tile is no longer visible, it is discarded.

This works very well, because no more than the visible tiles ever have to be loaded, so panning is fast, and zooming too.

VLIV has none of the advanced features you could think of, such as caching or loading tiles in advance, mostly because on local files performance is already very good.

TIFF has built-in support for tiles, but VLIV also creates virtual tiles for some formats that have no native tile support (such as PPM or BMP). It manages only parts of the image instead of loading it completely, even if the format does not natively support tiling.

It also uses a special capability of the JPEG format to allow instant unzooming.

Here is a screendump of VLIV in action on a 86400x43200 pixels image:



VLIV is shareware, and the price is $10 (or euros). I give instructions on the VLIV site for building the image shown.

Image formats capabilities

While most image formats are able to store very large images, not all formats are suitable for displaying these images.

The most important capability is a way to directly access a small subsection of the complete image. This is generally achieved by tiling, but some formats allow arbitrary access, so that tiling can be implemented on top of them.

Another capability is a way of storing multiple sub-resolutions, thus allowing zooming. Some formats have this built in; others provide a way to compute sub-resolutions using a special capability of the format.

The last capability is support for very large file sizes, because Very Large Images require large files.


Here is a summary of these capabilities:


TIFF 32 bit file size limit and consequences

The existing TIFF format is limited in size to 4 gigabytes (because of 32-bit offsets). This format allows data to be compressed using various methods; the most used are Deflate (Zip), JPEG and PackBits.

Deflate and PackBits are so-called lossless compression methods, while JPEG achieves high compression ratios using a lossy method.

Deflate compression ratios are about 4:1 on typical photographic images, while JPEG is more in the 10:1 range with minimal loss of perceptual quality.
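
A quick back-of-the-envelope computation, assuming 3 bytes per pixel and ignoring TIFF structural overhead, shows what these ratios mean under the 4 gigabyte limit:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double limit = 4e9;                    /* classic TIFF file size limit */
    const double ratios[] = { 1.0, 4.0, 10.0 };  /* none, Deflate, JPEG */
    for (int i = 0; i < 3; ++i) {
        double pixels = limit * ratios[i] / 3.0; /* 3 bytes per pixel before compression */
        double side   = sqrt(pixels);            /* side of an equivalent square image */
        printf("ratio %4.0f:1 -> %5.2f gigapixels (about %.0f x %.0f)\n",
               ratios[i], pixels / 1e9, side, side);
    }
    return 0;
}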

The following table shows what dimensions can be achieved with different compression ratios:



There is an ongoing project called BigTIFF that will push these limits back by a large amount, as it is expected to use 64-bit offsets.

Very Large Images on your disk

While there are quite a few images you can view on the Web (see the previous post), there are actually very few you can download to your machine.

You can find a few on my VLIV viewer page, along with instructions to create the NASA Blue Marble Next Generation image from the NASA dataset.

As most Web viewing technologies are HTTP based, it is quite easy to use a mass downloader to fetch individual tiles just like the Web viewing technology does. Rejoining them to recreate the full image is then easy. Because of copyright and intellectual property, I will not disclose how to do it, but so far I have successfully recreated images originating from the Google Maps API (easy) and Zoomify (harder), even multi-gigapixel ones.

The most common format for tiles is JPEG, and the largest image I have is about 360x350 tiles of 256x256 pixels.

Needless to say, viewing very large images stored on a local disk is impressive, because of the speed compared to Web viewing. There is no delay when panning and zooming.

Large images on the Web

Giving the public access to gigapixel images requires a way to make them accessible through the Web.

There are quite a few technologies that allow this, here are the ones I know about:

  1. The Google Maps API, written in Javascript + DHTML
  2. Google Earth, a standalone application using HTTP to retrieve tiles.
  3. Zoomify, written in Flash
  4. FSI viewer, also written in Flash I think.

With these technologies, a few very large images can be viewed on your browser:

  1. The Earth, on Google Maps, using a mix of satellite images and maps
  2. The Blue Marble Next Generation Dataset from NASA on Yawah (resolution is 500 meters/pixel, so the complete image is about 86400x43200 pixels)
  3. Digital photography, such as

Other sites provide information on very large images, but do not make them available to the public:

  • Max Lyons (who was the first to break the Gigapixel barrier for stitched digital images)
If you know other sites providing very large images, please comment.

Pyramidal tiling data organization

So we do not want to load the complete gigapixel image into memory. What can we do about it?

The first idea is to use a scheme called tiling. The image is internally organized as an array of rows and columns. This organization makes it possible to retrieve a part of the image without loading the whole thing. Requesting a part of the image now means requesting only the tiles that intersect this part.

Imagine you have a 10000x10000 pixel image, divided into 256x256 pixel tiles. If you want to display only the top-left part on your 1280x1024 screen, then you only need to load 5x4 = 20 tiles, that is 196 608 bytes x 20 = 3 932 160 bytes, instead of the 300 000 000 bytes needed to load the entire image.
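
The bookkeeping behind this is tiny. Here is a sketch that computes which tiles intersect a viewport; the numbers reproduce the example above:

#include <stdio.h>

#define TILESIZE 256

int main(void)
{
    int viewx = 0, viewy = 0, vieww = 1280, viewh = 1024;   /* visible area */

    int firstcol = viewx / TILESIZE;
    int lastcol  = (viewx + vieww - 1) / TILESIZE;
    int firstrow = viewy / TILESIZE;
    int lastrow  = (viewy + viewh - 1) / TILESIZE;

    int ntiles = (lastcol - firstcol + 1) * (lastrow - firstrow + 1);
    printf("%d tiles, %d bytes\n", ntiles, ntiles * TILESIZE * TILESIZE * 3);
    return 0;
}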


This has three immediate consequences:

  1. Image loading is very fast, because you only load what is visible on the screen
  2. Image panning can also be made very fast, because as you pan around, tiles no longer visible can be discarded from memory and newly visible ones are loaded (this is called on-demand loading)
  3. Memory requirements are almost constant, about the memory needed for one visible screen of data, regardless of image size, so you can now load your image on any PC, even PCs with as little as 128 megabytes of memory

Now that we are able to pan the image freely, we may want to zoom out, up to the point where the entire image is visible. Tiling will not help here, because in order to display the complete image we need to access all pixels, which means loading the whole image to compute a reduced version of it. This is where the pyramid comes in. The idea is to generate images that are reduced versions of the complete one, each being 2 times smaller than the previous one. (Un)zooming is now only a matter of switching between these resolutions. Of course these subimages are themselves tiled to allow arbitrary access.

Let’s take a small example with an original image that is 10000x8000 pixels. We would generate a pyramid of images:

  • 10000x8000 (level 0)
  • 5000x4000 (level 1)
  • 2500x2000 (level 2)
  • 1250x1000 (level 3)

We can see that at level 3, the complete image fits into our 1280x1024 screen. If we are at level 3 and want to zoom in, then we switch to level 2, and so on.

We now have an organization of data that allows us to zoom and pan freely in our large image, with memory requirements limited by our physical screen size!

Of course you may say this comes at a cost in storage, because we have to store all these additional levels. What cost exactly?

If the image size at level 0 is 1 unit, then level 1 is 1/4 of this unit (0.25), level 2 is 1/16 and so on.

This is a very well known geometric series (1 + 1/4 + 1/16 + ... = 4/3 ≈ 1.333), so the overhead is not very large given the benefits.

This pyramidal tiling is used in at least two very widely known applications:

  1. Google Earth
  2. Google Maps

How many people have 4 gigabytes of memory ?

While most computers can display images taken from a digital camera, special care is required to display gigapixel images. Let's see why.

Usually a pixel takes 3 bytes of memory, one each for the red, green and blue components. Standard image viewers load the image completely into memory to display it, even if the visible area of your image is only 1280x1024 pixels because of your physical screen size.

The best digital cameras have a 12 megapixel sensor, let's say 4000x3000 pixels. This makes 36 000 000 bytes needed to store the image in memory, which is possible on any machine available now.

Let’s now take a gigapixel image at 40000x30000 pixels. This makes 100 times more memory, peaking at 3 600 000 000 bytes, that is 3.6 gigabytes.

Unless you actually have 4 gigabytes of memory and your OS allows a chunk of this size to be allocated, there is no way you can display the image on your machine: standard software will either crash, refuse to load the image, or take forever to do so.


Unless you use some clever way of organizing the image data and you have a dedicated viewer that knows how to handle this specific organization.


The next post will discuss this organization, usually called pyramidal tiling.

Welcome

Hi all,

This blog is dedicated to so-called gigapixel images, i.e. images with dimensions in excess of 30000x30000 pixels.
These images are becoming more and more common. Posts will deal with topics such as tools to build these images, tools to display them (with emphasis on my own viewer http://vlivviewer.free.fr/vliv.htm), where to get them, and technical issues when dealing with such large image sizes.