They have shown the world it can be done. Now researchers are working through the challenges created by a camera that generates images so big the equivalent of 500 high-definition displays would be needed to view a single image in its entirety.
A multidisciplinary team of 40 researchers from Duke University, the University of Arizona, and the University of California, San Diego, along with a number of industry partners, worked for three years to build a camera that takes gigantic pictures with five times as much detail as a person with 20/20 vision can see. The camera could revolutionize space surveillance, security imaging, microscopic surgery, and video broadcasting, among other areas.
The gigapixel camera, dubbed AWARE-2, has 100 times as many pixels as most point-and-shoot cameras. Pixels are the smallest components of a digital image: the more pixels an image has, the more detail it can resolve.
Before the gigapixel camera, a few enormously detailed photos had been produced by creating very large film negatives and then scanning them at extremely high resolutions, or by taking many separate digital images and stitching them together into a mosaic on a computer. Both approaches can produce stunningly detailed images, but the processes are slow and costly. A key objective for the researchers on the AWARE project was to build a camera that was fast and economical.
The gigapixel camera is made up of 98 small cameras that surround a common lens, which gathers light and sends it to the cameras. Each microcamera simultaneously captures a small part of the device's 120-by-50-degree field of view. A specially designed computer processor then stitches the smaller images together in seconds to form one gigantic picture with unprecedented detail.
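The compositing step can be sketched in miniature. This is an illustrative toy, not the AWARE-2 pipeline: the grid layout, tile sizes, and the assumption that tiles abut with no overlap are all simplifications (the real system must also blend overlapping fields of view and correct for optics).

```python
import numpy as np

TILE_H, TILE_W = 4, 4        # toy tile size (real microcamera tiles are megapixels)
GRID_ROWS, GRID_COLS = 2, 3  # toy layout (AWARE-2 uses 98 microcameras)

def stitch(tiles):
    """Paste each (row, col)-indexed microcamera tile into one composite mosaic."""
    mosaic = np.zeros((GRID_ROWS * TILE_H, GRID_COLS * TILE_W), dtype=np.uint8)
    for (r, c), tile in tiles.items():
        mosaic[r * TILE_H:(r + 1) * TILE_H,
               c * TILE_W:(c + 1) * TILE_W] = tile
    return mosaic

# Fill each tile with a distinct gray level so the layout is visible.
tiles = {(r, c): np.full((TILE_H, TILE_W), 10 * (r * GRID_COLS + c), np.uint8)
         for r in range(GRID_ROWS) for c in range(GRID_COLS)}
composite = stitch(tiles)
print(composite.shape)  # (8, 12)
```

The real processor performs this kind of assembly, plus registration and blending, fast enough to deliver a finished gigapixel frame in seconds.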
Even after researchers determined the most efficient and economical optical configuration for the camera and the quickest, most seamless way to stitch together the images, the groundbreaking gigapixel camera was not without its challenges. Data management and user interaction presented unique problems. Finding solutions to these issues was the subject of a recent special Homecoming lecture by Michael Gehm, a faculty member in the UA department of electrical and computer engineering and the College of Optical Sciences. Gehm led the team that developed software to combine the input from the microcameras.
Data Management Problems and User Interaction Solutions
The gigapixel camera has the potential to generate 10 frames per second, each containing a billion pixels, or about 1 gigabyte of data. That is far more than conventional storage and transfer technologies can handle. At 10 gigabytes per second, a one-terabyte, or 1,000-gigabyte, hard drive would fill in just 100 seconds, and eighty 1-gigabit Ethernet cables would be needed to carry the data stream. It would take the equivalent of 500 high-definition displays to show one entire gigapixel image.
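The figures above follow from simple arithmetic, assuming one byte per pixel so that a gigapixel frame is roughly one gigabyte:

```python
# Back-of-envelope check of the data rates quoted above,
# assuming 1 byte per pixel (so one gigapixel frame ~ 1 GB).
GB = 10**9
frame_bytes = 1 * GB                          # one gigapixel frame
frames_per_second = 10
rate_bytes = frame_bytes * frames_per_second  # 10 GB/s

drive_bytes = 1000 * GB                       # a 1 TB (1,000 GB) drive
seconds_to_fill = drive_bytes / rate_bytes
print(seconds_to_fill)                        # 100.0 seconds

rate_gbits = rate_bytes * 8 / GB              # bytes -> bits
print(rate_gbits)                             # 80.0, i.e. eighty 1-gigabit links

hd_pixels = 1920 * 1080                       # pixels on one HD display
displays = frame_bytes / hd_pixels            # frame pixels per display
print(round(displays))                        # 482, roughly 500 HD displays
```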
Technologies do exist to help overcome some of these challenges, Gehm said, but they are not economical solutions. Even if researchers devised ways to handle all of the data-management challenges for a one-gigapixel image, he added, the problem would not be solved, because the research team is already working on a 50-gigapixel version of the camera.
“We can make more pixels than we have technology to handle,” he said. “At this rate, we could fill the entire data stream capacity in the world in nine years.”
The solution, Gehm explained, is to rethink what imaging at high resolutions means and change how we interact with the systems and the data. Specifically, the camera becomes a live data stream with which multiple users interact simultaneously, each using an individual HD display to zoom in on a different part of a single image. This essentially allows one giant image to serve as many detailed photographs or videos. For example, each person watching coverage of a sporting event could choose which portion of the image to view. The full-resolution images could then be archived as storage and bandwidth allowed.
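The interaction model above can be sketched as independent HD-sized windows into one shared frame. This is a hedged illustration, not the AWARE software: the window size, viewer positions, and the flat array stand-in for a frame are all assumptions.

```python
import numpy as np

HD_H, HD_W = 1080, 1920  # one viewer's HD display, in pixels

def viewport(frame, top, left):
    """Return one viewer's HD-sized crop of the shared frame."""
    return frame[top:top + HD_H, left:left + HD_W]

# A small stand-in for one gigapixel frame (a real frame has ~10^9 pixels).
frame = np.zeros((4000, 6000), dtype=np.uint8)

# Two viewers watching the same live frame choose different regions.
view_a = viewport(frame, 0, 0)
view_b = viewport(frame, 2000, 3000)
print(view_a.shape, view_b.shape)  # (1080, 1920) (1080, 1920)
```

Because each viewer pulls only an HD-sized window, the per-user bandwidth stays modest even though the underlying frame is enormous.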
What’s Next for the Gigapixel Camera?
Alumni, faculty, staff, and students at the special lecture also got a glimpse into the future of the gigapixel camera. The second-generation prototype, AWARE-10, is under construction and scheduled for completion in January 2013. It is significantly more advanced than its predecessor, Gehm said. The 10-gigapixel camera will be capable of producing color images over a 100-by-60-degree field of view, with twice the resolution of the original gigapixel camera.
The original prototype was 2½ feet square, mostly taken up by electronics, and even with AWARE-10's smaller electronic components, the super camera still has a long way to go before it reaches the consumer market. Nevertheless, researchers believe the continued miniaturization of electronic components could put portable gigapixel cameras into the hands of consumers within the next five years.
The AWARE camera research was supported by the Defense Advanced Research Projects Agency (DARPA).