Gazing up at the sky on a clear, dark night, you can readily convince yourself that the stars are tantalizingly close—close enough, if not to touch, then at least to visit with a spacecraft. A small dose of astronomy, however, quickly dispels the hope of interstellar travel: the stars are impossibly distant, separated by trillions of miles, and the distances we humans have traveled are vanishingly small. With that in mind, you could be forgiven for revising your first reaction to the night sky and becoming convinced that any journeys our species makes to the stars will take place only in our imaginations.
But imagination thrives on ideas. Stirring people’s imaginations with an accurate and dynamic representation of our place in the universe is well worth engaging the best minds and methods. After all, the first maps of the New World, and the first reasonably accurate globes of Earth, created a powerful sense of wonder, widening our perspective of humanity’s place in the world.
Similar flat maps and globes, as elaborately decorated as fantasy could inspire, have for centuries portrayed the stars and constellations of the night sky. But unlike a globe of the Earth, a celestial globe has little practical use today. No one believes anymore, as scholars did in the Middle Ages, that the stars are lights on a uniformly distant sphere. Tracing a path from star to star on such a surface, as if it were the outline of a constellation, reveals next to nothing about the shifting perspective that a true stellar voyager might experience, or what our Sun might look like from another star. No flat map, no globe painted with stars, can accurately render the true three-dimensional spatial relations among the objects scattered across the sky.
Imagine that you could travel to the Big Dipper in a faster-than-light spaceship that could take you there in less than a minute. At the beginning of your journey, the small group of seven bright stars takes its familiar shape: three stars for the handle, four stars for the bowl of the dipper. Some of those stars, of course, are actually closer than others. So as you leave our solar system and approach the Dipper, its outlines become distorted. You pass the nearest star, then the second-nearest, the third, and now, with the seven stars all around you, it hardly makes sense to think of them as a dipper at all. They have become just a collection of stars.
Until recently, a trip to the Big Dipper could take place only in one’s imagination. But now, powerful new tools have been created that enable you to experience such an interstellar journey in a planetarium or even on a laptop computer, with an accuracy as pinpoint as modern astrophysics can provide.
The possibility of taking such a virtual tour of the universe in three dimensions has been realized by the NASA-supported Digital Universe atlas, developed by the Hayden Planetarium of the American Museum of Natural History. Depending on your taste and the time you devote to your tour, the Digital Universe can carry you anywhere—from the orbits of the innermost planets, to the stars that form the constellations, to the galactic neighborhood of our own Milky Way, to the most distant known objects in the universe.
The best way to begin is to let an astronomer take you for your first spin, while you enjoy the ride. The dome in the Hayden Planetarium, or in any other large planetarium, is an ideal theater for traveling along the simulated starways. But if you want to drive, to pilot your own spaceflight and change course whenever you like, you can download the free software and catalog of celestial objects at the Hayden Planetarium's Digital Universe Atlas. Ride or drive, either way the look and feel of our stellar and galactic neighborhoods have become accessible not only to the professional astronomer, but to virtually everyone.
The Digital Universe atlas has grown out of a convergence of two great streams of technical achievement: celestial mapmaking, the product of centuries of observation and scientific breakthrough, combined with hardware and software engineering, which enables sophisticated data visualization. In principle, the merging of those two streams is simple; in practice it is laborious but brilliantly synergistic. Together they have drawn back the curtain on the universe in all its three-dimensional glory.
Knowing the positions of celestial objects makes it straightforward to calculate their apparent relative positions from any fixed perspective. Recalculating positions from a series of perspectives along a smooth trajectory, and displaying them rapidly in sequence, creates the illusion of smooth, animated motion through space. Thus, an abstract collection of data becomes a visceral experience.
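That recipe can be sketched in a few lines of code. Everything here is illustrative: the star coordinates, the camera path, and the simple pinhole-projection model are stand-ins for the real catalogs and rendering engine.

```python
# Sketch of the idea above: recompute apparent positions from a moving
# viewpoint, and a sequence of such frames becomes an animation.
# The star coordinates and camera path are made up for illustration.

# (x, y, z) positions of a few hypothetical stars, in arbitrary units
STARS = [(10.0, 2.0, 50.0), (-5.0, 8.0, 80.0), (3.0, -4.0, 120.0)]

def project(stars, cam, focal=1.0):
    """Pinhole projection of 3-D points for a camera at `cam`
    looking down the +z axis."""
    frame = []
    for x, y, z in stars:
        dx, dy, dz = x - cam[0], y - cam[1], z - cam[2]
        frame.append((focal * dx / dz, focal * dy / dz))  # perspective divide
    return frame

# Stepping the camera along a trajectory and projecting at each step
# yields the successive frames of a smooth flight through the star field.
frames = [project(STARS, (0.0, 0.0, t)) for t in range(0, 40, 10)]
```

A real system like the planetarium's does exactly this, but for hundreds of thousands of objects, roughly thirty times per second.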
Cosmic cartography begins with astronomical measurements. Astronomers share those measurements through a network of catalogs and publications, and computers have seamlessly integrated that network into the atlas. In that way, hundreds of thousands of celestial objects from numerous catalogs have found their way into the Digital Universe.
At its base, the Digital Universe atlas is built on highly precise astrometry—the “latitude” and “longitude” of objects in the sky—combined with the best available estimates of the distances of those objects from Earth. The work of mapping those two angular coordinates, at ever-increasing precision, represents thousands of years of effort. But surprisingly, it was only 166 years ago that astronomers made the first relatively accurate distance measurement to an object outside our solar system. Until that time, nothing definite was known except that the stars were very far away.
In 1838 Friedrich Bessel, then director of the Königsberg Observatory in Prussia, calculated the distance to the star 61 Cygni. Bessel measured how the star appeared to shift relative to the surrounding stars, a result of viewing it from one side of the Earth’s orbit around the Sun, then observing it again from the other side of the orbit six months later. This shift in perspective is called parallax, and from its magnitude Bessel calculated the approximate distance to 61 Cygni with simple geometry.
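The “simple geometry” amounts to one line of arithmetic: a star’s distance in parsecs is the reciprocal of its parallax angle in arcseconds. A minimal sketch, using the modern measured parallax of 61 Cygni, roughly 0.29 arcsecond:

```python
LY_PER_PARSEC = 3.26  # light-years per parsec (approximate)

def parallax_distance_ly(parallax_arcsec):
    """Distance from annual parallax: d [parsecs] = 1 / p [arcsec],
    where p is half the star's total apparent shift over six months."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# 61 Cygni's modern measured parallax is roughly 0.29 arcsecond
print(round(parallax_distance_ly(0.29), 1))  # about 11 light-years
```

Bessel’s own figure, about 10 light-years, was remarkably close to the modern value.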
The parallax method remains the most accurate technique for measuring distances to objects outside our solar system. But the diameter of the Earth’s orbit, 186 million miles, limits the use of the method to the nearest stars, within about 500 light-years of Earth. A light-year is about 6 trillion miles, and so 500 light-years seems quite a substantial distance. Yet it constitutes only a small “bubble” of observable space, centered on Earth.
How can distances to objects be surveyed beyond our neighborhood bubble? Within our Milky Way, a star’s spectrum reveals its luminosity class—hence, its intrinsic brightness. By comparing a star’s intrinsic brightness with its apparent brightness (as seen from Earth), its distance can be estimated. The method is not as accurate as parallax, but it is the best one available for most stars that are too far away for parallax measurements.
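In the magnitude system astronomers use, that comparison is called the distance modulus: m - M = 5 log10(d / 10 parsecs), where m is the apparent magnitude and M the intrinsic (absolute) magnitude. Inverting it gives the distance; the example star below is hypothetical.

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus, m - M = 5*log10(d / 10 pc),
    to get d = 10**((m - M + 5) / 5) in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A hypothetical star whose spectrum implies M = 0 but which
# appears at m = 10 must lie at 10**((10 - 0 + 5)/5) = 1000 parsecs.
print(distance_parsecs(10.0, 0.0))
```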
Another method relies on the knowledge of variable stars, whose brightness varies periodically. The rate at which the apparent brightness of certain variable stars changes is directly related to their intrinsic brightness. If you measure the period over which a star varies, as well as its brightness as seen from Earth, you can estimate its distance. In 1918 the astronomer Harlow Shapley applied that method to find the distance to many globular star clusters in the Milky Way. In fact, he was ultimately able to locate the center of our galaxy—a giant leap forward in the spatial understanding of the universe. With the variable-star method, the bubble of known distances extends outward into extragalactic space, about 50 million light-years from Earth.
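The variable-star method follows the same pattern: a period-luminosity relation converts the measured period into an intrinsic brightness, and the distance modulus then yields the distance. The linear relation and its coefficients below are illustrative placeholders, not Shapley’s actual calibration.

```python
import math

def variable_star_absolute_mag(period_days, a=-2.43, b=-4.05):
    """Linear period-luminosity relation, M = a*(log10 P - 1) + b.
    The coefficients here are illustrative; real calibrations come
    from variable stars with independently measured distances."""
    return a * (math.log10(period_days) - 1.0) + b

def distance_ly(apparent_mag, absolute_mag):
    """Distance modulus m - M = 5*log10(d / 10 pc), in light-years."""
    parsecs = 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return parsecs * 3.26

# A hypothetical variable star with a 10-day period, seen at m = 15:
M = variable_star_absolute_mag(10.0)  # M = -4.05 with these coefficients
print(round(distance_ly(15.0, M)))
```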
Beyond that distance, other objects can serve as “standard candles”—objects whose brightness at their source is always the same, much like lightbulbs of equal wattage. For example, if one lightbulb is ten feet from an observer, and a second lightbulb is a hundred feet away, the farther lightbulb would appear to be the dimmer (a hundred times dimmer, to be precise). Similarly, astronomers infer distance by assuming, on sound independent grounds, that certain astronomical objects all have the same “wattage,” and so they can all serve as mutually corroborating standard candles. One standard candle, detectable from as far away as about 5 billion light-years, is the explosion of a certain kind of massive dying star known as a type-Ia supernova. All type-Ia explosions are assumed to be of similar luminosity.
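The lightbulb comparison is just the inverse-square law, which a few lines can verify (the wattage and distances are arbitrary):

```python
import math

def apparent_brightness(watts, distance):
    """Inverse-square law: the apparent brightness (flux) of a source
    falls off as the square of its distance from the observer."""
    return watts / (4.0 * math.pi * distance ** 2)

near = apparent_brightness(60.0, 10.0)   # a 60-watt bulb ten feet away
far = apparent_brightness(60.0, 100.0)   # an identical bulb a hundred feet away
print(near / far)  # the farther bulb appears 100 times dimmer
```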
Beyond the 5-billion-light-year boundary, yet another method is available for determining three-dimensional structure: redshift. In the 1920s the astronomer Edwin Hubble noted that the farther a galaxy is from Earth, the greater its redshift, or the amount by which its light is shifted toward longer, or redder, wavelengths of the electromagnetic spectrum. The same effect governs the pitch of a train whistle: to a listener standing on a platform, the wavelength of the sound becomes longer, and the pitch becomes lower, as the train speeds away. By measuring the redshift and applying Hubble’s relation between redshift and distance, one can estimate distances to the farthest reaches of the observable universe, many billions of light-years away.
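For modest redshifts, Hubble’s relation reduces to d ≈ cz / H0, where c is the speed of light and H0 the Hubble constant. A minimal sketch, assuming H0 is about 70 km/s per megaparsec; the redshift in the example is made up.

```python
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s per megaparsec

def hubble_distance_mpc(z):
    """For small redshift z, recession velocity v ~ c*z, and Hubble's
    relation v = H0 * d gives d ~ c*z / H0, in megaparsecs."""
    return C_KM_S * z / H0

# A hypothetical galaxy at z = 0.023 lies roughly 100 megaparsecs
# (a few hundred million light-years) away
print(round(hubble_distance_mpc(0.023)))
```

At large redshifts the simple proportionality breaks down and the full relativistic relation must be used, but the principle is the same.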
While astronomers were radically expanding their view of the universe in the early part of the twentieth century, a new invention was also fueling a rising public enthusiasm: the planetarium. The first optical-projection planetarium made its debut in 1923, when engineers at the Carl Zeiss Company in Jena, Germany, projected stars on the interior of a hemisphere. A marvel of engineering, the “Wonder of Jena” attracted tremendous attention, and similar technology soon spread across Europe and the United States. A faithful reproduction of the night sky—pinpoint, luminous stars that rose and set, along with planets that followed their proper trajectories—formed the core of planetarium technology, enabling instructors to give dramatic demonstrations of the science of celestial motions and Newtonian physics.
As astronomical knowledge about the universe has grown, new tools have been invented or acquired to keep the planetariums up to date. In the decades from the 1920s until the early 1990s, slide, film, and video projectors were added to the mix of planetarium technology to tell the story of astronomical progress. But portraying the wonders of a universe that seemed to be expanding as knowledge grew proved to be challenging. How could planetariums convey the immensity of the universe? How could they simulate the experience of a flight to the farthest reaches of time and space?
Even astronomers had to grapple with the difficulty of representing their discoveries in a two-dimensional format. A classic example is the discovery of the large-scale structure of galaxies in the 1980s: astronomers were forced to analyze the size and shape of the local universe by viewing two-dimensional “slices” through the three-dimensional data.
When smaller, more powerful computers became available, astronomers were finally able to examine data more comprehensively—and with greater insight. All the objects in a virtual space could at last be plotted in proper perspective. Around that same time, the Evans & Sutherland (E&S) Computer Corporation in Salt Lake City, a pioneering company in computer graphics, introduced a digital projector for planetariums that could, with a single lens on a video projection tube, display a black-and-white star field—as well as other monochromatic images made up of dots and lines. Unlike earlier planetarium projectors, E&S’s device presented a three-dimensional universe: the stars had depth, the orbits of the planets in the solar system could be observed from any angle, and users could input data from many sources.
For the Hayden Planetarium, the new technology offered the potential to convey contemporary astrophysics as never before. In 1995 the American Museum of Natural History was embarking on a complete redesign of the planetarium, then sixty years old. The public viewing space was to be housed within a sphere suspended in a glass cube, in what would become the Rose Center for Earth and Space. Several years earlier, in 1991, J. Richard Gott III, an astrophysicist at Princeton University, had chaired the museum’s study on the future of planetariums. The museum, which had just completed a renovation of its dinosaur displays to reflect new understanding, was receptive to Gott’s suggestions about how to revamp the Hayden Planetarium so as to represent the current state of knowledge about the universe.
Optical-mechanical projection systems, such as the Zeiss star projectors, worked extremely well at depicting stars as they appear from Earth, because stars appear “fixed” in the night sky. The Sun, Moon, and planets seem to move relative to the stars, and their motions could readily be reproduced by small, special-purpose lens systems. But to create seamless, full-color images of the varying scales of the universe, a more flexible system was needed: computer graphics covering the entire planetarium dome. A mosaic of video projections could act as an enormous computer monitor, presenting a full hemisphere of imagery.
When the Hayden Planetarium reopened in 2000, after its extensive renovation, a virtual trip through the universe required a supercomputer. Navigating databases of thousands of celestial objects and displaying them in a series of still images at the standard video rate of thirty times a second posed a tremendous computational challenge. Fortunately, the phenomenal growth and popularity of flight simulators and electronic video games spurred the field of 3-D data visualization to grow up almost overnight. Thanks in part to the video-game industry, personal computers today incorporate graphics processors that surpass the capabilities of the supercomputer the planetarium purchased only five years ago. The new technology arrived practically ready-made for transfer into industry and academia.

[Figure caption: The “edge” of the observable universe, the most distant source of data from astronomical surveys in any direction, is depicted by the Digital Universe atlas. Because light travels at a finite speed, the image portrays the distant past, not long after the big bang. Representing the cosmic horizon is the cosmic microwave background radiation, whose minute temperature variations are rendered here in false-color blues and greens.]
To view the Digital Universe atlas created at the Hayden Planetarium, you can use a program called Partiview (for “particle view”), developed by Stuart Levy, a research programmer at the National Center for Supercomputing Applications in Champaign-Urbana, Illinois. A Partiview user can explore any part of the observable universe. The technology renders a series of perspective views fast enough for you to explore the database of stars and galaxies as if you were “traveling” in real time, rather than flipping through two-dimensional snapshots, one after another.
The illusion of motion is critical to gaining an understanding of the spatial relations among celestial objects, because it gives the viewer a physical experience of the scales and positions of the objects. As you move among the stars and galaxies in the vicinity of our Sun and our Milky Way, you learn how to find your way around. But you may also find that it’s all too easy—and a humbling and disorienting experience—to get lost!
Cruising the stars and galaxies is no longer confined to facilities with supercomputers and multiple video projectors. The technological innovations that fueled advances at the Hayden Planetarium and other large institutions around the world are making their way into smaller domes as well, and even onto laptop computers. Single fish-eye projectors that cover an entire dome now display digital skies in planetariums in many schools, science centers, and public libraries.
The observable universe is immense beyond any ordinary experience, but not beyond the human ability to chart, visualize, and share. We begin to grasp its immensity by translating it into something we can see. As visual creatures, we use “immersive” technology to gain a sense of familiarity with the region around us, beginning with the Earth and moving constantly outward to expand our horizon of the familiar. By experiencing our place in ever-widening regions, we come to identify a much larger “home” than we ever imagined before. In much the way our species has, for millennia, viewed the night sky with awe, perhaps the Digital Universe can help stimulate a cosmic perspective toward our own species.