Archive for the ‘Neural Networks’ tag
For the final project in Neural Networks (CIS 6050 W11), I decided to cluster blog posts based on the difference between an author’s word choice and the word choice of the entire group of authors.
>>> Attached Course Paper: Kohonen Self-Organizing Maps in Clustering of Blog Authors (pdf) <<<
A self-organizing map (Kohonen SOM) strategy was used. The words chosen to compose a given blog post defined where in the map it should be placed. The purpose of this project was to figure out what predictive power could be harnessed from the combination of the SOM and each author's lexicon; i.e. whether or not it is possible to automatically categorize an author's latest post without any tools besides the above.
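For readers unfamiliar with the technique, here is a minimal sketch of the Kohonen update rule as it would apply to post bitvectors. The grid size, learning rate, neighbourhood width, and decay schedule are illustrative assumptions, not the settings used in the course paper.

```python
import numpy as np

def train_som(posts, grid_h=20, grid_w=20, epochs=100, lr0=0.5, sigma0=5.0):
    """Train a Kohonen SOM on post bitvectors (one row per post).

    Grid size, learning rate, and neighbourhood width are illustrative
    defaults only, not the parameters used in the course paper.
    """
    n_features = posts.shape[1]
    weights = np.random.rand(grid_h, grid_w, n_features)  # one weight vector per node
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]                  # node coordinates on the map

    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                 # decay the learning rate
        sigma = sigma0 * (1.0 - frac) + 1.0     # shrink the neighbourhood radius

        for post in posts:
            # Best matching unit: the node whose weights are closest to the post.
            dists = np.linalg.norm(weights - post, axis=2)
            bmu_y, bmu_x = np.unravel_index(np.argmin(dists), dists.shape)

            # Gaussian neighbourhood centred on the BMU.
            influence = np.exp(-((ys - bmu_y) ** 2 + (xs - bmu_x) ** 2)
                               / (2.0 * sigma ** 2))

            # Pull the neighbourhood's weight vectors toward the post.
            weights += lr * influence[..., None] * (post - weights)

    return weights
```

After training, each post is assigned to its best-matching node, and a density map is just a count of posts per node.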
Data: Thirteen authors contributing a total of fourteen blogs participated in the study (all data was retrieved on 2011 March 8th). The table below summarizes the origin of the data.
| Author | Posts | Lexicon | Blog Name | Subject Matter |
|---|---|---|---|---|
| Andre Masella | 198 | 7953 | MasellaSphere | Synth Bio, Cooking, Engineering |
| Andrew Berry | 46 | 2630 | Andrew Berry Development | Drupal, Web Dev, Gadgets |
| Arianne Villa | 41 | 1217 | …mrfmr. | Internet Culture, Life |
| Cara Ma | 12 | 854 | Cara’s Awesome Blog | Life, Pets, Health |
| Daniela Mihalciuc | 211 | 4454 | Citisen of the World† | Travel, Life, Photographs |
| Eddie Ma | 161 | 5960 | Ed’s Big Plans | Computing, Academic, Science |
| Jason Ernst | 61 | 3445 | Jason’s Computer Science Blog | Computing, Academic |
| John Heil | 4 | 712 | Dos Margaritas Por Favor | Science, Music, Photography |
| Lauren Stein | 91 | 4784 | The Most Interesting Person | Improv, Happiness, Events |
| Lauren Stein (Cooking) | 7 | 593 | The Laurentina Cookbook | Cooking, Humour |
| Liv Monck-Whipp | 30 | 398 | omniology | Academic, Biology, Science |
| Matthew Gingerich | 98 | 395 | The Majugi Blog | Academic, Synth Bio, Engineering |
| Richard Schwarting | 238 | 7538 | Kosmokaryote | Academic, Computing, Linux |
| Tony Thompson | 51 | 2346 | Tony Thompson, Geek for Hire | Circuitry, Electronic Craft, Academic |
†Daniela remarks that the spelling of Citisen is intentional.
In order to place the blog posts into a SOM, each post was converted to a bitvector. Each bit was assigned to a specific word, so that the position of each bit consistently represented the same word from post to post. An on-bit represented the presence of a word while an off-bit represented its absence. Both frequently used words like “the” and seldom used words were omitted from the bitvector.
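To make the encoding concrete, here is a minimal sketch in Python. The frequency cutoffs (`min_count`, `max_fraction`) stand in for the usage heuristic discussed later and are placeholder values, not the thresholds actually used in the study.

```python
from collections import Counter

def build_bitvectors(posts, min_count=3, max_fraction=0.5):
    """Encode each post (a list of lower-cased words) as a presence/absence bitvector.

    Words used fewer than `min_count` times overall, or appearing in more than
    `max_fraction` of all posts (e.g. "the"), are dropped; both cutoffs are
    placeholders, not the heuristic used in the original study.
    """
    totals = Counter(word for post in posts for word in post)
    doc_freq = Counter(word for post in posts for word in set(post))

    vocab = sorted(word for word in totals
                   if totals[word] >= min_count
                   and doc_freq[word] <= max_fraction * len(posts))
    index = {word: i for i, word in enumerate(vocab)}  # fixed bit position per word

    vectors = []
    for post in posts:
        bits = [0] * len(vocab)
        for word in set(post):
            if word in index:
                bits[index[word]] = 1  # on-bit: the word appears in this post
        vectors.append(bits)
    return vocab, vectors
```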
Results: The centre image (in the collection to the left) is a density map where darker pixels indicate a larger number of posts – this centre map represents all of the posts made by all of the authors pooled together.
Because of the number of posts and the number of authors, I’ve exploded the single SOM image into the remaining fourteen images.
It was found that posts were most often clustered together if they were both by the same author and on the same topic. Clusters containing more than one author generally did not show much agreement about the topic.
Regions of this SOM were dominated by particular authors and topics as below.
| Region | Author | Topic |
|---|---|---|
| Top Left | Liv | Academic Journals |
| Top Border | Lauren | Human Idiosyncrasies |
| Up & Left of Centre | Daniela | Travel |
| Centre | all | short and misfit posts |
| Just Below Centre | Matthew | Software Projects |
| Bottom Left | Andre | Language Theory |
| Bottom Right | Eddie | Web Development |
Discussion: There are some numerical results to go along with this, but they aren’t terribly exciting – the long and the short of it is that this project should be repeated. The present work points towards the following two needed improvements.
First, the way the bitvectors were cropped at the beginning and at the end was based on a usage heuristic that doesn’t really conform to information theory. I’d likely take a look at the positional density of all bits to select meaningful words to cluster.
Second, all posts were included — this results in the dense spot in the middle of the central map. Whether these posts are short or just misfit, many of them can probably be removed by analyzing their bit density too.
Appendix: Here are two figures that describe the distribution of the data in the word bitvectors.
When we sort the words from a high number of occurrences down to a low number of occurrences, we get graphs that look like the above two. A rank class contains all words that have the same number of occurrences across the entire study. The impulse graph on the left shows the trend for the number of unique words in each rank class: the number of words increases drastically as we move to classes with fewer occurrences per word. The impulse graph on the right shows the trend for the total count of uses for words in a given rank class: the number of uses decreases as words become more rare.
These graphs were made before the main body of the work to sketch out how I wanted the bitvectors to behave — they verify that there was nothing unusual about the way the words were distributed amongst the data.
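For concreteness, the rank-class counts behind those two graphs can be computed with a few lines of Python; this is a sketch of the bookkeeping only, not the script used to produce the original figures.

```python
from collections import Counter

def rank_class_summary(posts):
    """Summarize rank classes: for each occurrence count c, report how many
    distinct words were used exactly c times (left graph) and how many total
    uses those words account for (right graph)."""
    totals = Counter(word for post in posts for word in post)
    classes = Counter(totals.values())  # occurrence count -> number of distinct words
    return sorted((count, n_words, count * n_words)
                  for count, n_words in classes.items())
```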
Brief: Met with Chris last week. Chris finished with the convergence tests and some cross validation sets on his descriptors and recommended his own design for 80/20 prediction tests… Meanwhile, I’ve updated the InChI grammar used for the NGN to work with the new data, and have set up experiments to run convergence tests using the SMILES-NGN and InChI-NGN on the eight possible QSAR datasets on SharcNet (16 processes total)… Next on the list– create a script to evaluate his preliminary cross validation experiments (based on Neural Network predicted vs. target values) and provide instructions for running the convergence tests with my NGN software… Will need to pull up an old nugget.py to wrap the convergence test (current one doesn’t halt and always runs 100 trials).
Chris’ project has come back to the forefront– after I defend my thesis on Wednesday, it’ll certainly have all of my attention.
We will at least be meeting on Monday though to discuss what can be done in the interim.
Convergence Tests Went Fine
We decided that it would be good for Chris to run a few convergence tests on the datasets he put together across each of the available descriptor sets. So far, many have come back converged, meaning it would be good to proceed. There are two concerns I have. First, do we want to melt only the converged descriptors together, or do we want to melt all of the descriptors together regardless of convergence? Second, if we don’t– can we do it after the fact and argue that neural network convergence is a good determiner for which descriptors are correlated with the results we care about?
To clarify– I mean “concatenating” real-valued vectors when I say “melting”. This means that we splice together a few linear arrays of numbers and come up with a new, longer array that’s still fixed length.
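A tiny example of what I mean, with made-up descriptor values and lengths:

```python
import numpy as np

# Hypothetical fixed-width descriptor vectors for one molecule, say one set
# from JoeLib and one from CDK; the values and lengths are made up.
joelib_descriptors = np.array([1.2, 0.4, 3.3])
cdk_descriptors = np.array([0.9, 7.1])

# "Melting" = concatenating the fixed-length vectors into one longer,
# still fixed-length vector that a neural network can take as input.
melted = np.concatenate([joelib_descriptors, cdk_descriptors])
print(melted)  # [1.2 0.4 3.3 0.9 7.1]  (length 3 + 2 = 5)
```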
The second question only holds if it turns out that the selected, melted, converged descriptors have better predictive power than when all descriptors are melted together– it’s an even stronger case (and more practical) if it turns out that those descriptors behave better in concert than any particular subset does on its own.
That would be an interesting case. Running an additional eight or sixteen experiments to test that hypothesis is cheap to set up and cheap to do.
Alternatives– A Faster Solution
An alternative approach is to naïvely set aside descriptor space reduction / augmentation for now, and just go on and create training and test sets– or cross validation sets. With the strained timelines, I think this would be the wiser objective to knock down first. I’ll make a ticket for myself for both of these. I should also look up how to use the Tanimoto coefficient– that will assist in the design of “maximum dissimilarity” test sets to ensure we have good predictive / extrapolation power.
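For reference, here is a sketch of the Tanimoto coefficient on binary fingerprints, plus a greedy maximum-dissimilarity pick built on top of it; the seed and the greedy selection rule are illustrative assumptions, not a settled procedure.

```python
def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints (sequences of 0/1)."""
    both = sum(x & y for x, y in zip(a, b))    # bits on in both fingerprints
    either = sum(x | y for x, y in zip(a, b))  # bits on in either fingerprint
    return both / either if either else 1.0

def max_dissimilarity_pick(fingerprints, k):
    """Greedy selection: repeatedly add the compound whose highest Tanimoto
    similarity to anything already chosen is lowest. The seed (index 0) and
    the greedy rule are illustrative choices."""
    chosen = [0]
    while len(chosen) < min(k, len(fingerprints)):
        best, best_score = None, 2.0
        for i in range(len(fingerprints)):
            if i in chosen:
                continue
            score = max(tanimoto(fingerprints[i], fingerprints[j]) for j in chosen)
            if score < best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen
```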
And the NGN…
Finally, I need to go back and uncover a working version of the NGN to use with Chris’ data– I don’t think that InChI is possible, but we’ll try anyway. The SMILES strings are already here, so I can certainly at least run a few convergence tests of my own. That constitutes eighty runs at worst (all ten trials fail on each of the eight datasets) and eight runs at best (each dataset converges on its first trial). I am going to leave this in Unix-compatible form because there isn’t enough time to complete the Windows port of the NGN.
This should be OK though since everything will be set up for SharcNet.
Okay– I managed to finish that 3-layer neural network implementation the other day– actually, it was a while ago, but I didn’t post about it because I was busy. It’s a pretty standard network, but I’m proud to say it’s small and works on OS X and Win32. I have to put in a few #define directives to have it work with Linux as well.
I will have to document it too when I get a chance. The reason why I made a brand new executable (instead of reusing the source from my previous projects) is that I needed something that would take in launch-time parameters, so that it doesn’t need to be recompiled each time someone decides to use the binary on a new dataset with a different number of inputs. Right now, the thing has hardly any fixed parameters that can’t be adjusted at launch time.
The NNcmk (Neural Network – Cameron, Ma, Kremer) package is C compilable, uses the previously developed in-house library for the NGN, and will be available shortly after I’m satisfied that I’ve squashed all the bugs, fixed the output and documented the thing completely. I think Chris has difficulty with it right now mostly because I didn’t specify exactly which parameters do what– I did at least provide a (DOS) batch file with an example run-in-train-mode / run-in-test-mode sequence…
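To give a flavour of what “launch-time parameters” means here, the following Python-style sketch mirrors the idea; the flag names are invented for this illustration and are not NNcmk’s actual options.

```python
import argparse

# Hypothetical interface in the spirit of a launch-time-parameterized network;
# the flag names below are invented for illustration and are NOT NNcmk's real options.
parser = argparse.ArgumentParser(description="Train or test a 3-layer network.")
parser.add_argument("--mode", choices=["train", "test"], required=True)
parser.add_argument("--inputs", type=int, required=True,
                    help="number of input units; set per dataset, no recompile needed")
parser.add_argument("--hidden", type=int, default=10, help="number of hidden units")
parser.add_argument("--outputs", type=int, default=1, help="number of output units")
parser.add_argument("--data", required=True, help="path to the dataset file")
parser.add_argument("--weights", required=True,
                    help="weight file to write (train mode) or read (test mode)")
args = parser.parse_args()
print(args)
```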
Back to work on that paper right now though…
Brief: We’ve taken on a new strategy– Chris is building a novel database of LD50 values for many, many compounds. We’ll be generating descriptors with some free software (JoeLib and CDK). Eventually, the fixed-width descriptor vectors will be used, along with their SMILES and InChI counterparts, in Neural Networks and NGNs respectively; the ultimate goal is the development of one of the following: a nested neural decision tree whose subtrees are the descriptor network and the NGNs… OR the nesting of the descriptor network inside an NGN… OR an expert voting system where each decision system gets to vote on a particular molecule of interest. With the Windows NN software draft and the NGN ready for SharcNet, preliminary trials can start soon.
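Of those three options, the expert voting system is the easiest to sketch; the model names and the simple weighted-average rule below are placeholders, not a committed design.

```python
def ensemble_vote(predictions, weights=None):
    """Combine per-model LD50 predictions for one molecule by weighted averaging.

    `predictions` maps a model name to its prediction; the names used below
    (descriptor NN, SMILES-NGN, InChI-NGN) and the averaging rule are
    placeholders, not a committed design.
    """
    weights = weights or {name: 1.0 for name in predictions}
    total = sum(weights[name] for name in predictions)
    return sum(weights[name] * value for name, value in predictions.items()) / total

# Hypothetical usage: three decision systems each vote on one compound.
print(ensemble_vote({"descriptor_nn": 210.0, "smiles_ngn": 190.0, "inchi_ngn": 230.0}))
```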
Chris’ project has grown to roughly three hundred exemplars for each of the mouse and rat data sets– these are the sets that map molecules to some physiological defect, by organ or tissue. I think he’ll be onto his next phase shortly– taking the data and applying some machine learning construct to it.
I’ve recommended four papers for him to read– three of which discuss QSAR in general and compare the performance of different approaches. The last paper explicitly uses neural networks on descriptors for regression of melting points. The use of neural networks or similar technology is something he’s expressed a lot of interest in, so I think this selection fits in well. I’ve provided him with an adapted version of the melting point dataset where the domain is re-expressed as SMILES and InChI.
I think it might be good to set him up with NGNs for those items as well as NNs for the descriptor vector used in the melting point paper.