ECE Grad Student Balancing Power and Performance in the Cloud, in Real Time

ECE graduate student Farah Fargo (center) received the Best Research Poster award at the 2014 IEEE International Cloud and Autonomic Computing Conference in London for her research in cloud computing and load-balancing systems.

As more and more organizations realize the benefits of cloud computing -- reduced hardware costs, increased bandwidth, and anywhere, anytime access to data, for example -- engineers are tasked with developing technology to more effectively manage resources in the cloud.

One hurdle in cloud computing involves balancing power consumption and performance. To address it, UA electrical and computer engineering graduate student Farah Fargo, in collaboration with ECE professor Salim Hariri and other graduate students, has introduced a real-time monitoring system that enables server hosts to dramatically reduce power consumption while maintaining quality of service.

The “Autonomic Cloud Management System” research earned Fargo a Best Research Poster award at the recent 2014 IEEE International Cloud and Autonomic Computing Conference in London. Hariri also received an award for his leadership role in cloud computing research. 

In the simplest terms, cloud computing means sharing a network of remote servers hosted on the Internet, rather than relying on local servers or personal devices, to store and access computing resources that range from applications to data centers. For everyday computer users, Google Drive, with its online applications and storage, is one example of a cloud service.

It takes energy -- lots of it -- to power the mega-data centers that serve up cloud resources. By some accounts, cloud data centers consumed an estimated 91 billion kilowatt-hours of electricity in 2013, enough to power all the households in New York City twice over.

At peak usage times, more servers are needed to meet demand effectively. But when cloud usage is low, the extra servers still consume energy while sitting idle, waiting for the next peak period.

Fargo's system provides real-time feedback about the cloud workload -- how many users are performing which tasks -- at any given time and assigns available cloud resources on an as-needed basis. Rather than powering extra servers in anticipation of increased workload, the system provides just enough server capacity to host what is needed in any given second.

“Over-provisioning techniques are typically used for meeting the peak workloads,” Fargo explained. “In comparison, our technique dynamically matches the application requirements with just enough system resources to meet the quality of service requirements for the cloud applications, which leads to a reduction in power consumption.”
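The approach Fargo describes -- provisioning just enough resources to meet the quality-of-service target, rather than over-provisioning for the peak -- can be pictured as a simple feedback loop. The sketch below is only an illustration: the function names, the per-server capacity figure, and the scaling interface are assumptions, not details of the Autonomic Cloud Management System.

```python
# Illustrative sketch of demand-matched provisioning; all names and
# numbers here are hypothetical, not from Fargo's actual system.

def servers_needed(active_requests, capacity_per_server):
    """Smallest server count that still handles the current workload."""
    # Ceiling division: round up so quality of service is preserved.
    return -(-active_requests // capacity_per_server)

def control_loop(monitor, pool, capacity_per_server=100):
    """One iteration of a real-time monitor-and-act cycle."""
    load = monitor()                       # workload observed this second
    target = servers_needed(load, capacity_per_server)
    pool.scale_to(target)                  # power servers on or off as needed
```

An over-provisioning policy would instead keep enough servers running for the anticipated peak at all times; a loop like this pays only for the current second's demand.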

Fargo’s research shows that, compared with over-provisioning techniques, this approach can reduce power consumption by up to 87 percent.

Unlike most load-balancing systems on the market, Fargo’s system addresses power and performance management at the same time, handling several tasks at once.

“We are managing multiple resources at the same time: the number of cores, core frequency and memory amount,” Fargo said. “Other systems typically only use one of those attributes, such as core frequency, to reduce power consumption.” 
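One rough way to picture managing those three attributes together, rather than frequency alone, is as a search over candidate configurations for the lowest-power one that still meets demand. The data structure and selection rule below are assumptions made for illustration, not Fargo's algorithm.

```python
# Hypothetical illustration: attribute names, units, and the selection
# policy are assumptions, not details of Fargo's system.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    cores: int        # number of active cores
    freq_ghz: float   # operating frequency per core
    memory_gb: int    # memory allotted to the workload

def match_resources(demand, menu):
    """Pick the lowest-power configuration that still meets `demand`.

    `menu` maps each ServerConfig to a (throughput, watts) estimate.
    Varying cores, frequency, and memory together exposes lower-power
    operating points than tuning any single attribute alone.
    """
    feasible = [(watts, cfg) for cfg, (tput, watts) in menu.items()
                if tput >= demand]
    if not feasible:
        raise ValueError("no configuration meets the demand")
    return min(feasible, key=lambda pair: pair[0])[1]
```

A frequency-only controller would search along just one of these axes; widening the search space is what lets a multi-resource manager find cheaper operating points.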

Fargo was the only researcher among the 15 presenting at the conference whose project ran in real time; all the others were based on simulations.

Said Hariri, “To be selected as the best poster says two things: the quality of research being conducted is among the best of the posters that were reviewed, and our work is very promising.”

University of Arizona College of Engineering