Computer Scientists Create New Algorithm That’s Exponentially Faster Than Any We’ve Seen Before

Algorithms are nothing new in technology. They already help us avoid traffic and identify new drug molecules. And powerful as these algorithms are in their own right, what if they could run even faster?

That’s what computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have been working on intently for the past few months. In their research, they’ve developed a whole new kind of algorithm that speeds up computation exponentially by drastically reducing the number of sequential steps needed to reach a solution.


Many optimization problems still rely on age-old algorithms that work step by step, taking a number of steps proportional to the amount of data they process. This creates a bottleneck: as datasets grow, questions in many areas of research become simply too expensive to explore. The classic greedy approach sketched below illustrates the pattern.
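To make that concrete, here is a minimal sketch of the step-by-step greedy approach such problems typically use (the names and structure are illustrative, not taken from the paper): it builds a solution one element at a time, and every one of those strictly sequential rounds scans the remaining data.

    def greedy(items, f, k):
        """Pick k items one at a time; each round scans all remaining items."""
        selected = set()
        for _ in range(k):  # k strictly sequential rounds
            remaining = items - selected
            if not remaining:
                break
            # Each round evaluates the objective once per remaining item,
            # so the work per step grows with the size of the data.
            best = max(remaining, key=lambda x: f(selected | {x}) - f(selected))
            selected.add(best)
        return selected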

“These optimization problems have a diminishing returns property,” says Yaron Singer, the study’s senior author and Assistant Professor of Computer Science at SEAS. “As an algorithm progresses, its relative gain from each step becomes smaller and smaller. This algorithm and general approach allow us to dramatically speed up computation for an enormously large class of problems across many different fields, including computer vision, information retrieval, network analysis, computational biology, auction design, and many others.”
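The diminishing-returns property Singer describes can be seen in a toy coverage objective (the items and sets below are invented for illustration): the more items already picked, the less the same new item adds.

    coverage_of = {            # hypothetical items and what each one covers
        "a": {1, 2, 3},
        "b": {3, 4},
        "c": {1, 4, 5},
    }

    def f(S):
        """f(S) = number of distinct elements covered by the chosen items."""
        return len(set().union(*(coverage_of[x] for x in S)))

    # The marginal gain of the same item "b" shrinks as the set grows:
    print(f({"b"}) - f(set()))                 # 2 -> covers {3, 4}
    print(f({"a", "b"}) - f({"a"}))            # 1 -> only 4 is new
    print(f({"a", "b", "c"}) - f({"a", "c"}))  # 0 -> nothing new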

What would previously have taken months to compute can now be done in a matter of seconds thanks to the new algorithm. “This new algorithmic work and the corresponding analysis opens the doors to new large-scale parallelization strategies that have much larger speedups than what has ever been possible before,” commented Jeff Bilmes, Professor in the Department of Electrical Engineering at the University of Washington.

Traditional algorithms for solving optimization problems work by progressively narrowing down the search space until the best solution is found. The new algorithm takes a different approach: it samples many possibilities at the same time, then chooses the one most likely to lead to the answer. “The strength of our algorithm is that in addition to adding data, it also selectively prunes data that will be ignored in future steps,” explains Eric Balkanski, a graduate student at SEAS and co-author of the paper.
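A hedged sketch of that sample-and-prune idea follows. This is not the paper’s exact algorithm, just an illustration of the structure Balkanski describes: each round evaluates many candidate blocks at once (the part that can run in parallel), keeps the best one, and prunes elements whose marginal contribution has become negligible.

    import random

    def sample_and_prune(items, f, k, rounds=10, samples=50):
        """Illustrative only: add elements in blocks, pruning the pool each round."""
        selected, pool = set(), set(items)
        block = max(1, k // rounds)  # elements added per round (may slightly overshoot k)
        for _ in range(rounds):
            if len(selected) >= k or not pool:
                break
            # Evaluate many random candidate blocks; on real hardware these
            # objective evaluations could run in parallel.
            candidates = [set(random.sample(list(pool), min(block, len(pool))))
                          for _ in range(samples)]
            best = max(candidates, key=lambda B: f(selected | B) - f(selected))
            selected |= best
            pool -= best
            # Prune: discard elements that no longer add any value.
            pool = {x for x in pool if f(selected | {x}) - f(selected) > 0}
        return selected

The key point is the round structure: the number of sequential rounds is fixed rather than growing with the size of the data, which is where the parallel speedup comes from.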


As part of the research, Singer and Balkanski ran experiments to demonstrate the algorithm’s performance. Given a dataset of 1 million movie ratings from more than 6,000 users, it produced a personalized recommendation list 20 times faster than the current state of the art. The algorithm was also tested on a taxi dispatch problem, allocating the right taxis to the right customers as efficiently as possible, where it proved 6 times faster than any other known solution.
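As a purely illustrative toy (the movie names and user sets below are invented, not the ratings dataset from the experiments), the recommendation task can be framed as a coverage problem and fed to either sketch above:

    users_who_like = {                     # hypothetical ratings data
        "Heat": {1, 2, 3}, "Alien": {3, 4}, "Up": {1, 4, 5}, "Big": {2, 5},
    }

    def rec_value(S):
        """How many distinct users a recommended set of movies covers."""
        return len(set().union(*(users_who_like[m] for m in S)))

    print(greedy(set(users_who_like), rec_value, k=2))             # e.g. {'Heat', 'Up'}
    print(sample_and_prune(set(users_who_like), rec_value, k=2))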

Balkanski is confident that these speedups would grow even larger in bigger applications such as social media analytics or sponsored search auctions. The algorithm could also find use in other areas, such as designing better drugs to treat illnesses like diabetes, Alzheimer’s, HIV, multiple sclerosis, and hepatitis C, or designing advanced sensor arrays for medical imaging.

“This research is a real breakthrough for large-scale discrete optimization,” says Andreas Krause, Professor of Computer Science at ETH Zurich. “One of the biggest challenges in machine learning is finding good, representative subsets of data from large collections of images or videos to train machine learning models. This research could identify those subsets quickly and have substantial practical impact on these large-scale data summarization problems.”

