Visualization Tools Illuminate Complex Phenomena

Visually studying the effects of pressure and temperature changes on fluid flow in petroleum-rich stratigraphic layers of the Louisiana Basin is an example of a leading-edge research project that, in many ways, is no different from the hundreds of others underway at the Center for Theory and Simulation in Science and Engineering at Cornell University -- popularly known as the Cornell Theory Center. But this and other selected projects employ state-of-the-art supercomputing visualization tools to help investigators comprehend complex physical phenomena and their underlying principles.

POWER System Visualization

This visualization methodology is "as applicable to industrial design and engineering as it is to scientific research," points out Mal Kalos, director of the Cornell Theory Center. At the Center, the capability is derived from a potent hardware/software visualization environment comprising the 32-processor POWER Visualization System (PVS), built by IBM to address high-end visualization requirements, and Visualization Data Explorer (DX) software, also developed by IBM, whose approach to applications programming encourages even computer-shy scientists to use the system. The PVS/DX system is now part of the formidable lineup at the Center's Cornell National Supercomputer Facility, a world-class supercomputing environment. Besides the PVS, the facility houses an IBM ES/9000 Model 900 vector mainframe and a scalable cluster of 32 IBM RISC System/6000 workstations, among other high-powered machines.

Geological Research

Take, for example, the Global Basin Research Network (GBRN), a consortium of major oil companies and universities, led by Cornell and Columbia, that is at the forefront of geological research. Its current work concerns a 30 x 6 km section of the Louisiana Basin, an oil-rich area in the Gulf of Mexico, and seeks a better understanding of the dynamic conditions necessary for the production of oil. The ultimate goal, of course, is to enable the companies to extrapolate from their findings and predict where oil can most likely be found elsewhere around the globe. Over the course of many years, individual members of the consortium had collected seismic data on this portion of the sea floor, but no one had ever seen all the data put together in a meaningful way.

Enter the PVS/DX system at the Cornell Theory Center. Representing the GBRN consortium, scientists Roger Anderson of Columbia University and Larry Cathles of Cornell put the supercomputing visualization system to work, first to obtain static views of the data, then to create simulations. The motion of fluid (oil) through the irregular fault structure, derived from computational fluid dynamics, was superimposed on the dataset so that an observer could simulate "walking around" or "diving into" the data structure to observe the flow in process. In the investigation, the PVS provided the lightning-fast calculation speed needed to create the images, and the visualization software supplied the brain power. DX orchestrated the rendering of images showing the strata and the fluid flow: it managed the large datasets, rendered the images, and generated the animation frames. Furthermore, because of the historical data available, all of this could be visualized over a time span of many years.
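To make the kind of view described above concrete, the sketch below overlays a simulated flow field on a cross-section of a synthetic pressure field. It is a minimal illustration in Python with NumPy and Matplotlib, not the PVS/DX pipeline; the grid dimensions, the pressure model, and the Darcy-like flow field are assumptions made for the example, not GBRN data.

```python
# Illustrative only: a synthetic 30 x 6 km cross-section with a toy
# pressure field and a gradient-driven flow overlaid as streamlines.
import numpy as np
import matplotlib.pyplot as plt

nx, nz = 200, 100
x = np.linspace(0.0, 30.0, nx)          # distance along the section (km)
z = np.linspace(0.0, 6.0, nz)           # depth (km)
X, Z = np.meshgrid(x, z)

# Pressure increasing with depth plus a localized anomaly standing in
# for a petroleum-rich layer (purely synthetic).
pressure = 10.0 * Z + 20.0 * np.exp(-((X - 15.0) ** 2 + (Z - 3.0) ** 2))

# A Darcy-like flow field taken as the negative pressure gradient.
dPdz, dPdx = np.gradient(pressure, z, x)
u, w = -dPdx, -dPdz

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.pcolormesh(X, Z, pressure, shading="auto", cmap="viridis")
ax.streamplot(X, Z, u, w, color="white", density=1.2, linewidth=0.6)
ax.invert_yaxis()                       # depth increases downward
ax.set_xlabel("distance (km)")
ax.set_ylabel("depth (km)")
fig.colorbar(im, ax=ax, label="pressure (arbitrary units)")
plt.show()
```

Where DX lets the researcher walk around or dive into the full 3D dataset interactively, a static script like this can only re-render from scratch; the point is simply to show what "flow superimposed on the strata" means in practice.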
DX enables the scientist to view a PVS-generated animation, then change the point of view or the lighting characteristics of the data, and repeat the scenario for further refinement. When the executing platform is fast enough, this interactive orchestration happens in fractions of a second. "Such rapid feedback makes the machine an extension of the scientist's mind," comments Siegel. "It affords insights not available otherwise."

Long-Standing Relationship

The primary reason the Center acquired the PVS was the potential it offered for dramatic advances in visualizing sophisticated science. Simply stated, "PVS is the fastest supercomputer for visualization available -- the fastest image-rendering engine focused on scientific problems," Siegel asserts. Although there are no formal plans to expand the PVS capability, additional systems are likely to be added as escalating demand saturates the existing PVS. In addition, the communication bandwidth available to remote users will eventually be upgraded to data transmission rates close to the 100Mbps that Center-based local users currently enjoy. As for Data Explorer, Cornell researchers are helping IBM develop additional program modules that add functionality to the software. They have just launched a national DX repository, allowing a large audience to contribute and use a variety of Data Explorer modules.
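The module-and-network style of programming behind those contributions -- small, single-purpose building blocks wired together into a pipeline -- can be sketched in ordinary Python. The classes and module names below are hypothetical illustrations of the composition pattern only; they are not the Data Explorer module library or its scripting language.

```python
# Hypothetical sketch of a dataflow pipeline: each module consumes the
# previous module's output, the way DX modules are chained together.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Module:
    name: str
    run: Callable[[Any], Any]

class Pipeline:
    """Executes a chain of modules, passing each output to the next."""
    def __init__(self, *modules: Module):
        self.modules = list(modules)

    def execute(self, data: Any) -> Any:
        for module in self.modules:
            data = module.run(data)
        return data

# Stand-ins for import, isosurface, and render stages (illustrative names).
import_data = Module("Import", lambda path: {"field": f"volume loaded from {path}"})
isosurface = Module("Isosurface", lambda d: {**d, "surface": "extracted level set"})
render = Module("Render", lambda d: f"image of {d['surface']}")

pipeline = Pipeline(import_data, isosurface, render)
print(pipeline.execute("basin_survey.bin"))
```

In DX itself such connections are drawn graphically in its visual program editor rather than written as code, which is a large part of what makes the system approachable for computer-shy scientists.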
