A method for increasing the dynamic range of original image data representing an
image comprises applying an expansion function to generate, from the original
image data, expanded data having a dynamic range greater than that of the original
image data, and obtaining an expand map comprising data indicative of the degree
of luminance of regions associated with pixels in the image. The method then combines
the original image data and the expanded data according to the expand map to yield
enhanced image data. Apparatus for boosting the dynamic range of image data comprises
a dynamic range expander that produces expanded data, a luminance analyzer that
produces an expand map, and a combiner that combines the original and expanded
data according to a variable weighting provided by the expand map.
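The combining step can be sketched as a per-pixel linear blend in which the expand map supplies the weight at each pixel. This is a minimal illustrative sketch, not the patented implementation; the function and array names are invented for the example:

```python
import numpy as np

def combine(original, expanded, expand_map):
    """Blend original and range-expanded image data.

    expand_map holds per-pixel weights in [0, 1]: regions of high
    luminance (weights near 1) take their values from the expanded
    data, while low-luminance regions keep the original values.
    """
    w = np.clip(expand_map, 0.0, 1.0)
    return (1.0 - w) * original + w * expanded

# Toy example: a 1x3 "image" whose rightmost pixel is fully expanded.
original = np.array([[0.2, 0.5, 0.9]])
expanded = np.array([[0.2, 1.0, 4.0]])    # dynamic range boosted
expand_map = np.array([[0.0, 0.5, 1.0]])
print(combine(original, expanded, expand_map))  # [[0.2  0.75 4.  ]]
```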
Banterle, Francesco
Ledda, Patrick
Debattista, Kurt
Chalmers, Alan
Bloj, Marina
In recent years many tone mapping operators (TMOs) have been presented in order to display high dynamic range images (HDRIs) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The inverse of tone mapping, inverse tone mapping, expands a low dynamic range image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. We propose a new framework that approximates a solution to this problem. Our framework uses importance sampling of light sources to find the areas considered to be of high luminance, and subsequently applies density estimation to generate an expand map, which is used to extend the range in the high-luminance areas with an inverse tone mapping operator. The majority of today's media is stored in low dynamic range. Inverse tone mapping operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image-based lighting (IBL). Moreover, we show another application that benefits from the quick capture of HDRIs for use in IBL.
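The pipeline described in this abstract can be sketched in three steps: importance-sample pixels in proportion to luminance, density-estimate the samples to obtain a smooth expand map, then boost the range most where the map is strongest. The sketch below substitutes simple stand-ins for the paper's components (a Gaussian blur for density estimation, a quadratic boost for the iTMO); all names and parameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def expand_map_itmo(ldr, n_samples=1000, sigma=3.0, boost=4.0, seed=0):
    """Hedged sketch of an expand-map-guided inverse tone mapper.

    1. Importance-sample pixel indices proportionally to luminance
       (a stand-in for the paper's light-source sampling).
    2. Density-estimate the samples with a separable Gaussian blur,
       yielding a smooth expand map that highlights bright areas.
    3. Expand the range most where the map is strong; a simple
       quadratic boost stands in for the paper's operator.
    """
    rng = np.random.default_rng(seed)
    h, w = ldr.shape
    p = ldr.ravel() / ldr.sum()                   # sampling density
    idx = rng.choice(h * w, size=n_samples, p=p)  # importance sampling
    hist = np.bincount(idx, minlength=h * w).reshape(h, w).astype(float)
    # Crude density estimation: separable Gaussian blur of the samples.
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    dens = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, hist)
    dens = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, dens)
    emap = dens / (dens.max() + 1e-12)            # normalised expand map
    # Bright, map-covered pixels are expanded; dark ones stay close to LDR.
    return ldr * (1.0 + (boost - 1.0) * emap * ldr)
```

Because the boost term is non-negative, dark regions are left essentially unchanged while high-luminance regions gain the extended range, mirroring the selective expansion the abstract describes.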
Hulusic, Vedad
Debattista, Kurt
Aggarwal, Vibhor
Chalmers, Alan
The entertainment industry, primarily the video games industry, continues to dictate the development and performance requirements of graphics hardware and computer graphics algorithms. However, despite the enormous progress in the last few years, it is still not possible to achieve some of the industry's demands, in particular high-fidelity rendering of complex scenes in real-time, on a single desktop machine. A realisation that sound/music and other senses are important to entertainment led to an investigation of alternative methods, such as cross-modal interaction, in order to try to achieve the goal of "realism in real-time". In this paper we investigate the cross-modal interaction between vision and audition for reducing the amount of computation required to compute visuals by introducing movement-related sound effects. Additionally, we look at the effect of camera movement speed on temporal visual perception. Our results indicate that slow animations are perceived as smoother than fast animations. Furthermore, introducing the sound effect of footsteps to walking animations further increased the perceived smoothness of the animation. This has the consequence that for certain conditions, the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of this reduction. The results presented are another step towards the full understanding of the auditory-visual cross-modal interaction and its importance for helping achieve "realism in real-time".
Serious games are playing an increasingly important role for training people about real world situations. A key concern is thus the level of realism that the game requires in order to have an accurate match of what the user can expect in the real world with what they perceive in the virtual one. Failure to achieve the right level of realism runs the real risk that the user may adopt a different reaction strategy in the virtual world than would be desired in reality. High-fidelity, physically based rendering has the potential to deliver the same perceptual quality of an image as if you were "there" in the real world scene being portrayed. However, our perception of an environment is not only what we see, but may be significantly influenced by other sensory input, including sound, smell, touch, and even taste. Computation and delivery of all sensory stimuli at interactive rates is a computationally complex problem. To achieve true physical accuracy for each of the senses individually for any complex scene in real-time is simply beyond the ability of current standard desktop computers. This paper discusses how human perception, and in particular any crossmodal effects in multi-sensory perception, can be exploited to selectively deliver high-fidelity virtual environments for serious games on current hardware. Selective delivery enables those parts of a scene which the user is attending to, to be computed in high quality. The remainder of the scene is delivered in lower quality, at a significantly reduced computation cost, without the user being aware of this quality difference.
Chalmers, Alan
Aggarwal, Vibhor
Debattista, Kurt
Hulusic, Vedad
The quality of real-time computer graphics has progressed enormously in the last decade due to the rapid development in graphics hardware and its utilisation of new algorithms and techniques. The computer games industry, with its substantial software and hardware requirements, has been at the forefront in pushing these developments. Despite all the advances, there is still a demand for even more computational resources. For example, sound effects are an integral part of most computer games. This paper presents a method for reducing the amount of effort required to compute the computer graphics aspects of a game by exploiting movement-related sound effects. We conducted a detailed psychophysical experiment investigating how camera movement speed and sound affect the perceived smoothness of an animation. The results show that walking (slow) animations were perceived as smoother than running (fast) animations. We also found that the addition of sound effects, such as footsteps, to a walking/running animation affects the perception of animation smoothness. This entails that for certain conditions the number of frames that need to be rendered each second can be reduced, saving valuable computation time. Our approach will enable the computed frame rate to be decreased, and thus the computational requirements to be lowered, without any perceivable visual loss of quality.