Ed's Big Plans

Computing for Science and Awesome

SOM in Regression Data (animation)


The animation below shows what happens when you train a Kohonen Self-Organizing Map (SOM) using data meant for regression instead of clustering or classification. A total of 417300 training cycles are performed, where a single training cycle is the presentation of one exemplar to the SOM. The SOM has dimensions 65×65. Exemplars are chosen at random, and 242 snapshots are taken in total (one every 1738 training cycles).

SOM Melting Point Dataset (Regression)

The dataset I used is Dr. M. Karthikeyan’s 4173 molecules, expressed as the first thirty principal components of their QSAR descriptors.
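For concreteness, here is a minimal sketch of the kind of training loop described above, written in Python with NumPy. The lattice size, cycle count, snapshot interval and thirty-dimensional input match the figures quoted in this post, but the initialization and the decay schedules for the learning rate and neighbourhood radius are my own assumptions, not the code actually used for the animation.

import numpy as np

def train_som(data, size=65, cycles=417300, snapshot_every=1738, seed=0):
    """Train a size-by-size Kohonen SOM on data (n_exemplars x n_features).

    One training cycle = the presentation of one exemplar to the SOM.
    Returns the final weight lattice and the list of snapshots taken.
    """
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.uniform(data.min(), data.max(), (size, size, dim))
    yy, xx = np.mgrid[0:size, 0:size]          # grid coordinates of every node

    snapshots = []
    for t in range(cycles):
        frac = t / cycles
        lr = 0.5 * (0.01 / 0.5) ** frac        # learning rate decays 0.5 -> 0.01
        radius = (size / 2.0) ** (1.0 - frac)  # neighbourhood shrinks toward one node

        x = data[rng.integers(n)]              # exemplars are chosen at random

        # Best-matching unit: the node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(dists.argmin(), dists.shape)

        # Gaussian neighbourhood centred on the best-matching unit.
        d2 = (yy - bi) ** 2 + (xx - bj) ** 2
        h = np.exp(-d2 / (2.0 * radius ** 2))
        weights += lr * h[:, :, None] * (x - weights)

        if t % snapshot_every == 0:
            snapshots.append(weights.copy())
    return weights, snapshots

With the descriptor matrix loaded as a 4173 × 30 NumPy array, train_som(pcs) reproduces the general setup; 417300 cycles works out to exactly 100 presentations per exemplar on average.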

I think the circles that form in the early training cycles are due to the centre exemplar being very different from those found along the circumference; the centre exemplar is organized first, and the surrounding nodes are then repelled away from the newly organized region.
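The post doesn’t show how the individual frames were drawn. As one possible approach (entirely my assumption, using matplotlib), each snapshot of the 65×65×30 weight lattice can be collapsed to one value per node, for instance the norm of its weight vector, and written out as the __blog_*.png files that the commands below consume.

import numpy as np
import matplotlib.pyplot as plt

def save_frames(snapshots):
    """Write one PNG per SOM snapshot, named to match the glob __blog_*.png.

    Each 65 x 65 x 30 weight lattice is reduced to a single scalar per node
    (here, the norm of the node's weight vector) purely for visualisation.
    """
    for i, w in enumerate(snapshots):
        img = np.linalg.norm(w, axis=2)
        plt.imsave("__blog_%03d.png" % i, img, cmap="viridis")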

Compiled with ImageMagick (770kb → 221kb) and Gifsicle (221kb → 209kb).

# Assemble the snapshot PNGs into an animated GIF (12/100 s per frame, looping forever).
convert -delay 12 -loop 0 __blog_*.png blogmap.gif
# Drop redundant pixel areas between frames and quantize down to four colours.
convert blogmap.gif -deconstruct -layers Optimize -matte -colors 4 blogmap_it0.gif
# Squeeze the GIF a little further.
gifsicle -O2 blogmap_it0.gif -o blogmap_it1.gif

Thanks to AI-Junky (Mat Buckland) for clarifying a lot of the math 😀

Eddie Ma

March 17th, 2011 at 7:52 pm

Matt says...

Coooool… although I don’t really know what’s going on. I really wish I were taking a machine learning course, but the field is large enough that I probably wouldn’t run into what you’re working on anyway.

It’s Wikipedia time!

Eddie Ma says...

Oh, this was for the Neural Network course I’m taking. Ya, this post was a bit brief; the thing with this particular animation is that I’m breaking a rule of machine learning. In general, we should apply devices that are good at solving a specific kind of problem to exactly that kind of problem. Here, a device that’s good for clustering has been applied to a regression problem. The resulting map is aesthetically pleasing, and the circular arrangement of exemplars is interesting during training, but that’s about it; the solution map has no practical meaning. I had previously wanted it to draw a smooth gradient, but given the dimensionality of the problem (thirty input nodes), marbled cheddar was the best that could be done. HOWEVER! If you scan the literature for the terms "SOM" and "Regression", you’ll see some interesting adaptations, so all is not lost 😀
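One simple flavour of such an adaptation, sketched below purely as an illustration (my own sketch, not taken from any particular paper): keep the regression targets of the training exemplars, and predict a query’s value as the mean target of the exemplars that share its best-matching unit on the trained map.

import numpy as np

def som_regression_predict(weights, train_x, train_y, query):
    """Predict a target (e.g. a melting point) from a trained SOM by averaging
    the targets of the training exemplars whose best-matching unit matches the
    query's; fall back to the nearest exemplar if that unit is empty.
    """
    flat = weights.reshape(-1, weights.shape[-1])

    def bmu(x):
        return int(np.argmin(np.linalg.norm(flat - x, axis=1)))

    train_units = np.array([bmu(x) for x in train_x])
    hits = train_y[train_units == bmu(query)]
    if hits.size:
        return hits.mean()
    return train_y[np.argmin(np.linalg.norm(train_x - query, axis=1))]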