Context, position and objectives of ExaViz
While the massive increase in computational power has made it possible to routinely simulate large-scale molecular phenomena, producing huge and complex data sets and computation results, the ability of human users to deal with them has been trailing behind. As Bell and colleagues recently put it, "the demands of data-intensive science represent a challenge for diverse scientific communities" [BEL09]. Given the types of problems addressed today by scientists, engineers and other users, it is critical that human expertise and computing power complement each other through well-adapted visualization, interaction and analysis tools. Fox and Hendler have recently argued that scientists must "integrate visualization into the exploration phase of science" [FOX11]. Whether the task is testing hypotheses, detecting unexpected patterns or supporting creativity, to name but a few, neither humans nor computers alone suffice: novel tools are necessary to address these challenges. Quoting O'Donoghue et al.: "For most users, a key challenge is to benefit from the deluge of data without being overwhelmed by it. This challenge is still largely unfulfilled and will require the development of truly integrated and highly useable tools" [ODO10].
As scientific supercomputing applications in life and materials sciences move towards the exascale, the need for adapted solutions is even more pressing. This is echoed by the 2010 SciDAC report: "Even with exascale systems, however, the infrastructure needed to generate and analyze molecular data will require development of simulation management tools that encompass clustering, archiving, comparison, debugging, visualization, and communication — all of which must also address current computing bottlenecks that limit the scope of analysis" [EXA10]. Similarly, the International Exascale Software Project roadmap dedicates an entire chapter to application support, data analysis and visualization [EXA11a,b]. Indeed, classical user interfaces for studying nanoscale molecular structures and simulations have reached their limits and cannot adequately deal with the size and complexity of today's problems, which reduces our ability to explore, understand, modify and control complex data and computations. Researchers have also been exploring alternatives to the classical desktop setting for displaying and sharing large amounts of data, such as extended display areas (boards, tables and walls), immersive devices (virtual and augmented reality) and novel interaction devices and techniques, but today only a few are in active use by their potential target users. The field of Visual Analytics [WAR10] addresses these issues by combining visualization with interaction to provide users with tools and techniques "to synthesize information and derive insight from massive, dynamic, ambiguous, and often conflicting data; detect the expected and discover the unexpected; provide timely, defensible, and understandable assessments; and communicate assessment effectively for action".