
Summary:

Scientists have studied the effectiveness of deep learning techniques for discovering exotic particles and found significant improvements over previous methods. They believe deep learning could help analyze data from the Large Hadron Collider.

[Image: the Large Hadron Collider. Photo: CERN]

Researchers from the University of California, Irvine, have published a paper demonstrating the effectiveness of deep learning in helping discover exotic particles such as Higgs bosons and supersymmetric particles. The research, published in Nature Communications, found that modern deep neural networks can be significantly more accurate than the machine learning techniques scientists have traditionally used for particle discovery, and could also save scientists a lot of work.

For more information on advanced artificial intelligence, check out this anthology of Gigaom’s coverage of the space over the years, which also includes links to several great background sources on deep learning.

To get a sense of how challenging particle discovery is, consider that a collider can produce 100 billion collisions per hour, of which only about 300 will produce a Higgs boson. Because the particles decay almost immediately, scientists can’t detect them directly, but instead must analyze (and sometimes infer) the products of their decay.
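
To make that rarity concrete, here is a back-of-the-envelope calculation in Python using only the figures quoted above:

    # Signal rarity from the figures quoted above:
    # ~100 billion collisions per hour, of which only ~300 yield a Higgs boson.
    collisions_per_hour = 100e9
    higgs_per_hour = 300

    signal_fraction = higgs_per_hour / collisions_per_hour
    print(f"signal fraction: {signal_fraction:.1e}")  # -> 3.0e-09
    print(f"about 1 Higgs event per {1 / signal_fraction:,.0f} collisions")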

Traditionally, scientists have used machine learning models — including neural networks — to help classify decay patterns that signify the existence, however temporary, of exotic particles. However, those efforts require focusing on a relatively small number of variables from very complex datasets, and their accuracy is limited by the features scientists have trained them to look for.
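
As a rough sketch of that traditional setup, the snippet below trains a shallow neural network on a handful of hand-engineered features. Everything here is a synthetic stand-in, not the quantities physicists actually derive; it only illustrates the shape of the workflow:

    # Sketch of the traditional approach: a shallow classifier trained on a
    # small set of hand-engineered features. The data is synthetic; a real
    # analysis would use derived quantities such as invariant masses.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 10_000
    X = rng.normal(size=(n, 7))                 # 7 hypothetical engineered features
    score = X[:, 0] ** 2 + X[:, 1] * X[:, 2]    # hidden rule the model must learn
    y = (score > np.median(score)).astype(int)  # 1 = "signal", 0 = "background"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))

The limitation described above shows up here directly: the classifier can only ever be as good as the seven features it is handed.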

[Figure: results for Higgs boson detection. Higher is better. Source: Nature Communications / UC Irvine]

In the UCI experiment, which involved the analysis of data from 500,000 simulated collisions, deep learning models proved as much as 8 percent more accurate at identifying those signals than legacy approaches. As the paper explains, the key to their effectiveness appears to be that deep neural networks learn directly from raw collision data:

“The deep-learning techniques show nearly equivalent performance using the low-level features and the complete features, suggesting that they are automatically discovering the insight contained in the high-level features. Finally, the deep-learning technique finds additional separation power beyond what is contained in the high-level features, demonstrated by the superior performance of the DN with low-level features to the traditional network using high-level features.”
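
A minimal sketch of that comparison, on synthetic data, might look like the following: a deeper network sees only raw “low-level” inputs, while a shallow network sees hand-derived “high-level” combinations, and the two are scored by AUC, the metric in the figure above. The label rule deliberately includes a term the high-level features omit, mirroring the paper’s finding that low-level features carry extra separation power. (The authors’ actual simulated dataset, with 21 low-level and 7 high-level features, is published as HIGGS on the UC Irvine Machine Learning Repository.)

    # Sketch of the paper's headline comparison, on synthetic data only.
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n = 20_000
    low = rng.normal(size=(n, 21))                    # stand-ins for raw kinematics
    high = np.stack([low[:, 0] * low[:, 1],           # hand-derived combinations,
                     low[:, 2] ** 2,                  # loosely mimicking invariant
                     np.abs(low[:, 3] - low[:, 4])],  # masses and the like
                    axis=1)
    # The true rule includes low[:, 5] * low[:, 6], which no high-level
    # feature captures -- the "additional separation power" in the quote.
    truth = low[:, 0] * low[:, 1] + low[:, 2] ** 2 + low[:, 5] * low[:, 6]
    y = (truth > 1).astype(int)

    tr, te = train_test_split(np.arange(n), test_size=0.25, random_state=1)

    deep = MLPClassifier(hidden_layer_sizes=(300, 300, 300),
                         max_iter=60, random_state=1).fit(low[tr], y[tr])
    shallow = MLPClassifier(hidden_layer_sizes=(300,),
                            max_iter=60, random_state=1).fit(high[tr], y[tr])

    for name, model, X in [("deep net, low-level features", deep, low),
                           ("shallow net, high-level features", shallow, high)]:
        auc = roc_auc_score(y[te], model.predict_proba(X[te])[:, 1])
        print(f"{name}: AUC = {auc:.3f}")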

In a press release announcing the research, one of the researchers is quoted as saying the techniques could be applied to the next batch of Large Hadron Collider experiments in 2015.

It’s this type of research that helps explain why deep learning has created such a buzz over the past couple of years, at least among people interested in applying the techniques. While much of the public discussion centers on companies such as Google, Microsoft and Facebook, and the billions of dollars they’re collectively investing in deep learning research, the approaches themselves have utility far beyond commercial image search, sentiment analysis and voice recognition.

Applied to complex data in fields like science and medicine, deep learning could help us better understand our world and save a lot of lives.

For more on the sheer scale of data CERN generates with its Large Hadron Collider, check out CERN infrastructure manager Tim Bell’s talk at our Structure Europe conference in 2013.

Comments:

  1. Is it any wonder that deep learning works so well on some of these recognition tasks, given how it compares to the human IT (inferotemporal) cortex? See:

    http://arxiv.org/abs/1406.3284

    And neural nets, and machine learning more generally, are also being applied to problem-solving tasks, not just finding needles in haystacks. For example, here is some new work on applying them to producing efficient mathematical identities:

    http://arxiv.org/abs/1406.1584

    And here is a paper on applying machine learning to improve automated theorem-proving:

    http://link.springer.com/article/10.1007/s10817-014-9301-5

    http://arxiv.org/abs/1402.2359

    If one can produce a sufficiently good theorem-prover/reasoner, then one can produce a method to discover algorithms.

    1. Ahh, I see there’s now a Wired article on the comparison of deep learning to the brain (that I mentioned in that first link above):

      http://www.wired.com/2014/07/cadieu
