How to “read” the photo(s) above: the bottom layer consists of images taken by the Hubble Space Telescope, and as you move toward the top you can see the result of the technique the researchers used to decipher the formation stage of a galaxy, as explained below.
One of the challenges faced by astronomers is analyzing images of galaxies to understand how they form and evolve. Galaxies can be classified by the light their stars emit: a “blue nugget” is a gas-rich galaxy full of young, hot stars that emit blue wavelengths of light, so it signals a young galaxy with active star formation, while a “red nugget” is a galaxy of older, cooler stars that emit red light. Researchers from the Paris Observatory and Paris Diderot University used images from the Hubble Space Telescope to train a deep learning system to classify galaxies better than humans currently can. They used mock images from simulations to train the system to recognize three key phases of galaxy evolution previously identified in those simulations, and then gave the system a large set of actual Hubble images to classify.
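The train-on-simulations, classify-real-data workflow described above can be sketched at toy scale. This is not the researchers’ method (they used deep neural networks on images); it is a minimal stand-in using an invented one-number “colour index” feature and a nearest-centroid rule, just to make the flow of mock training data to real classifications concrete.

```python
# Toy illustration of the workflow: fit a classifier on *simulated*
# galaxy colours, then apply it to "observed" values. The colour-index
# feature and all numbers below are invented for illustration.

def train_centroids(labelled_mock_data):
    """Compute the mean colour index of each class in the mock data."""
    sums, counts = {}, {}
    for colour, label in labelled_mock_data:
        sums[label] = sums.get(label, 0.0) + colour
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(colour, centroids):
    """Assign the class whose centroid is nearest to the observed colour."""
    return min(centroids, key=lambda label: abs(colour - centroids[label]))

# Mock training set: (colour index, phase). Lower index = bluer galaxy.
mock = [(0.2, "blue-nugget"), (0.3, "blue-nugget"),
        (1.4, "red-nugget"), (1.6, "red-nugget")]
centroids = train_centroids(mock)

print(classify(0.25, centroids))  # a young, star-forming galaxy
print(classify(1.50, centroids))  # an older, quiescent galaxy
```

The real system learns from thousands of simulated Hubble-like images rather than a single hand-picked feature, but the shape of the pipeline is the same: label simulated data, fit a model, then turn the model loose on real observations.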
The researchers noted that their application of deep learning has the potential to reveal aspects of the observational data that humans can’t see: the machine appeared to successfully find in the real data the different stages of galaxy evolution identified in the simulations. Read more below. Source: Artificial Intelligence Brings New Tools to Astronomy
What is Deep Learning?
Deep Learning is a concept that falls under the broader machine learning discipline. The “deep” in Deep Learning refers not to how deeply we humans dig into a subject when we want to learn it, but to the many stacked layers of the network through which the data passes, each layer learning a progressively more abstract representation. In old-style, plain-vanilla automation, one takes DATA, then writes some CODE, generalized so it can be reused, to create OUTPUT; the flow was DATA -> CODE -> Automated OUTPUTS. In the age of machine learning, you pass the DATA to a platform (R, MATLAB, Python), then point the platform at the desired OUTPUT, and the platform derives the CODE (the algorithm) for you. Many machine learning implementations are task-specific (find a car, classify cats versus dogs, solve an optimization), but deep learning algorithms help find patterns, or generalized representations of the data. The downside of this approach is that the algorithm learns patterns by building a complex weighting system that grows in complexity as the data grows. But I am not sure why we fear this complexity when we do not even understand how we humans react to a stimulus without being able to explain why we reacted the way we did.
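The two flows above can be contrasted in a few lines of plain Python. As a minimal sketch (the Celsius-to-Fahrenheit example and the learning rate are my own invented illustration, not from the source): in the classic flow a programmer writes the formula; in the learned flow the “code”, here just two parameters of a line, is inferred from DATA and known OUTPUTS by gradient descent.

```python
# Classic automation: DATA -> CODE -> OUTPUT. A human supplies the rule.
def hand_coded(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: DATA + OUTPUT -> CODE. The parameters (w, b) of the
# rule are learned from example pairs rather than written by hand.
data = [float(c) for c in range(11)]          # DATA: 0..10 degrees C
output = [hand_coded(c) for c in data]        # known OUTPUTS

w, b = 0.0, 0.0
for _ in range(20000):                        # plain gradient descent
    grad_w = sum((w * x + b - y) * x for x, y in zip(data, output)) / len(data)
    grad_b = sum((w * x + b - y) for x, y in zip(data, output)) / len(data)
    w -= 0.01 * grad_w
    b -= 0.01 * grad_b

print(round(w, 2), round(b, 2))  # converges toward 1.8 and 32.0
```

Deep learning scales this same idea up: instead of two parameters it learns millions, arranged in layers, which is exactly where the complex weighting system mentioned above comes from.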
This form of machine learning emanated from watching how our brain works. A specific stimulus fires a sequence of electrical signals in the brain (yes, our brain is an electromagnetic device), which in turn creates a series of neuronal responses, and this neural activity eventuates in the task we embarked upon when we met the stimulus. Deep learning studies how to map a stimulus to an accomplished task the way the brain does, via neural activity. Deep learning is currently implemented in computer vision (autonomous driving), natural language processing (automated voice recognition), visual art processing (used by museums and art collectors), drug discovery (by pharmaceutical companies), CRM, and e-shops and e-advertisement (Amazon), among many other fields. The more we understand our brain, the better our journey to imitate it artificially will be, augmenting our ability to do more than we currently can.
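The stimulus-to-response mapping above is loosely mirrored by the basic unit of a deep network, the artificial neuron: weighted input signals are summed and squashed into a firing strength. The weights and inputs below are invented for illustration; a real network stacks many layers of such neurons and learns the weights from data.

```python
import math

def sigmoid(x):
    """Squash any real number into a firing strength between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of the stimulus, passed through the activation."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

stimulus = [0.9, 0.1]                                  # hypothetical input signals
response = neuron(stimulus, weights=[2.0, -1.0], bias=-0.5)
print(round(response, 3))                              # about 0.769
```

One neuron on its own is trivial; the power, and the complexity the previous section worries about, comes from chaining thousands of them layer after layer, which is what puts the “deep” in deep learning.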
Girish
Blogpost: http://www.girishnair.com
