This post contains 13 extremely beautiful and perfectly executed HDR pictures. Some of them may look surreal, overly colorful, even magical or fake, but they are not: keep in mind that every one of them was developed from ordinary photos, and not a single image is an illustration. These pictures really are amazing, so please enjoy!
High dynamic range imaging
In image processing, computer graphics, and photography, high dynamic range imaging (HDRI, or just HDR) is a set of techniques that allows a greater dynamic range of exposures (the range of values between light and dark areas) than normal digital imaging techniques. The intention of HDRI is to accurately represent the wide range of intensity levels found in real scenes, ranging from direct sunlight to shadows.

High dynamic range imaging was originally developed in the 1930s and 1940s by Charles Wyckoff. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1940s. The process of tone mapping together with bracketed exposures of normal digital images, giving the end result a high, often exaggerated dynamic range, was first reported in 1993, and resulted in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995. In 1997, this technique of combining several differently exposed images to produce a single HDR image was presented to the computer graphics community by Paul Debevec. This method was developed to produce a high dynamic range image from a set of photographs taken with a range of exposures.

With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now popularly used to refer to this process. This composite technique is different from (and may be of lesser or greater quality than) the production of an image from a single exposure of a sensor that has a native high dynamic range. Tone mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer screen.
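To make the merging step concrete, here is a minimal sketch of how several bracketed exposures can be combined into a single radiance map. It is a heavily simplified, hypothetical illustration of the idea behind Debevec-style merging, not his actual algorithm: each pixel's radiance is estimated as pixel value divided by exposure time, and the estimates are blended with a "hat" weight that trusts mid-tones more than clipped shadows or highlights. The function name and the weighting choice are assumptions made for this example.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed exposures into one HDR radiance map.

    `images` are linear-intensity arrays scaled to [0, 1]. For each
    exposure, radiance is estimated as pixel_value / exposure_time;
    the per-exposure estimates are averaged with a hat weight that
    peaks at mid-gray (0.5) and falls to zero at pure black/white,
    so clipped pixels contribute nothing.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peak at 0.5
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)      # per-pixel radiance estimate

# Simulate a scene whose radiance exceeds any single exposure's range:
radiance = np.array([0.1, 0.5, 2.0])
times = [1.0, 0.25, 0.0625]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # sensor clips at 1.0
hdr = merge_exposures(shots, times)
```

Note that the bright pixel (radiance 2.0) is clipped in the longest exposure, but the hat weight zeroes out that clipped sample, so the shorter exposures still recover the true value.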
The idea of using several exposures to cope with an overly extreme range of luminosity was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard techniques, the luminosity range being too extreme. Le Gray used one negative for the sky and another with a longer exposure for the sea, and combined the two into a single positive print. HDRI lighting also plays a major role in movie making, when computer-generated 3D objects are to be integrated into real-life scenes.
Comparison with traditional digital images
Information stored in high dynamic range images usually corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called "scene-referred", in contrast to traditional digital images, which are "device-referred" or "output-referred". Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called "gamma encoding" or "gamma correction". The values stored for HDR images are often linear, which means that they represent relative or absolute values of radiance or luminance. HDR images require a higher number of bits per color channel than traditional images, both because of the linear encoding and because they need to represent values from 10⁻⁴ to 10⁸ (the range of visible luminance values) or more. 16-bit ("half precision") or 32-bit floating point numbers are often used to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
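The pipeline described above can be sketched in a few lines: linear scene-referred radiance values spanning many orders of magnitude are tone-mapped into a displayable range, then gamma-encoded into output-referred 8-bit values, while a half-precision float copy keeps the full scene-referred range. The global Reinhard operator L/(1+L) and the 1/2.2 power curve are common choices used here as assumptions; the article itself names no particular tone-mapping operator or transfer function.

```python
import numpy as np

# Linear, scene-referred radiance values spanning the wide range
# mentioned in the text (far more than 8 bits can hold linearly).
radiance = np.array([1e-4, 0.5, 1e4], dtype=np.float64)

# Tone map into a displayable [0, 1) range with the global Reinhard
# operator L / (1 + L) -- one common choice, assumed for this sketch.
display_linear = radiance / (1.0 + radiance)

# Gamma-encode for an output-referred 8-bit image. A plain 1/2.2
# power curve is used as a simplification of the real sRGB transfer
# function.
encoded = np.round(255 * display_linear ** (1 / 2.2)).astype(np.uint8)

# Storing the scene-referred values as 16-bit "half" floats preserves
# the whole range, which 8-bit integers cannot.
half = radiance.astype(np.float16)
```

Because the encoding is monotonic, brighter radiances always map to larger 8-bit codes, while the half-float copy retains the original values to within half-precision rounding error.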