Researchers at the Massachusetts Institute of Technology have developed an algorithm that aims to make cloud database infrastructure more efficient by consolidating similar workloads onto fewer servers, rather than spreading them out as widely as possible.
The premise is surprising, given that many database companies make a point of distributing processing work precisely to keep latency low. But if cloud providers adopt and build on the researchers’ DBSeer algorithm, it could improve cloud database performance.
Infrastructure-as-a-Service (IaaS) providers run virtual machines on servers. That might not be the most efficient approach for databases, because resources aren’t shared among the applications running on any given server, the researchers argued in a recent paper. It might be better to observe current workloads, predict the resource needs of future workloads and consolidate workloads onto servers accordingly. Cloud providers could then adjust service-level agreements to promise a certain level of latency, rather than charging customers based on the number and size of virtual machines, the researchers noted.
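To make the idea concrete, here is a minimal sketch of the consolidation step described above. This is not DBSeer’s actual algorithm; it assumes each workload’s future resource demand has already been predicted (here, as a fraction of one server’s capacity) and simply packs the workloads onto as few servers as possible with a greedy first-fit-decreasing heuristic. All names and numbers are hypothetical.

```python
# Illustrative sketch only -- not the DBSeer algorithm itself.
# Assumes each workload's future demand has already been predicted,
# expressed as a fraction of one server's capacity; the remaining
# question is how to pack workloads onto as few servers as possible.

def consolidate(predicted_demands, server_capacity=1.0):
    """Greedy first-fit-decreasing packing of workloads onto servers.

    predicted_demands: dict mapping workload name -> predicted demand.
    Returns a list of servers, each a list of workload names.
    """
    servers = []  # each entry: [remaining_capacity, [workload names]]
    for name, demand in sorted(predicted_demands.items(),
                               key=lambda kv: kv[1], reverse=True):
        for srv in servers:
            if srv[0] >= demand:        # fits on an existing server
                srv[0] -= demand
                srv[1].append(name)
                break
        else:                           # no room anywhere: add a server
            servers.append([server_capacity - demand, [name]])
    return [srv[1] for srv in servers]

# Hypothetical predicted demands for four database workloads.
demands = {"oltp_a": 0.6, "oltp_b": 0.5, "reporting": 0.3, "batch": 0.4}
print(consolidate(demands))  # the four workloads fit on two servers
```

A naive one-VM-per-server placement would use four machines here; packing against predicted demand uses two, which is the kind of hardware saving the researchers describe. The hard part, and DBSeer’s actual contribution, is making the demand predictions accurate enough that consolidated workloads still meet their latency promises.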
DBSeer might also be of interest to database appliance and server vendors. Teradata (s tdc) is incorporating the algorithm into proprietary software. Meanwhile, one of the MIT researchers, Carlo Curino, now works at Microsoft (s msft), and Taiwanese webscale server vendor Quanta funded the research.
So far, DBSeer, which is available on GitHub, has only been shown to accurately predict workload needs for transactional MySQL databases. More research would be necessary to apply the algorithm to other database management systems.
The change in thinking could make good financial sense. The more hardware a cloud provider keeps running in its data centers, the more its service ends up costing customers. If the same servers could handle more work, costs could drop.
Feature image courtesy of Shutterstock user Johann Helgason.