Ed's Big Plans

Computing for Science and Awesome


Archive for the ‘Neural Networks’ tag

Blog Author Clustering with SOMs


For the final project in Neural Networks (CIS 6050 W11), I decided to cluster blog posts based on the difference between an author’s word choice and the word choice of the entire group of authors.

>>> Attached Course Paper: Kohonen Self-Organizing Maps in Clustering of Blog Authors (pdf) <<<

A self-organizing map (Kohonen SOM) strategy was used. The words chosen to compose a given blog post determined where in the map it should be placed. The purpose of this project was to figure out how much predictive power could be harnessed from the combination of the SOM and each author’s lexicon; i.e. whether or not it is possible to automatically categorize an author’s latest post without any tools besides the above.
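For readers who haven't met SOMs before, here is a minimal sketch of the map training and placement steps, assuming the posts have already been encoded as fixed-length vectors (the encoding is described further below). The grid size, decay schedule and function names are illustrative choices for this sketch, not the exact settings used in the course paper.

```python
import numpy as np

def train_som(data, grid_h=20, grid_w=20, epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular Kohonen SOM on the row vectors in `data`."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]            # node coordinates on the map
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                      # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5          # decaying neighbourhood radius
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the node whose weight vector is closest to this post.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood centred on the BMU pulls nearby nodes toward the post.
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def place_post(weights, x):
    """Return the (row, col) map cell that a single post vector falls into."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)
```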

Data: Thirteen authors contributing a total of fourteen blogs participated in the study (all data was retrieved on March 8th, 2011). The table below summarizes the origin of the data.

Author | Posts | Lexicon | Blog Name | Subject Matter
Andre Masella | 198 | 7953 | MasellaSphere | Synth Bio, Cooking, Engineering
Andrew Berry | 46 | 2630 | Andrew Berry Development | Drupal, Web Dev, Gadgets
Arianne Villa | 41 | 1217 | …mrfmr. | Internet Culture, Life
Cara Ma | 12 | 854 | Cara’s Awesome Blog | Life, Pets, Health
Daniela Mihalciuc | 211 | 4454 | Citisen of the World | Travel, Life, Photographs
Eddie Ma | 161 | 5960 | Ed’s Big Plans | Computing, Academic, Science
Jason Ernst | 61 | 3445 | Jason’s Computer Science Blog | Computing, Academic
John Heil | 4 | 712 | Dos Margaritas Por Favor | Science, Music, Photography
Lauren Stein | 91 | 4784 | The Most Interesting Person | Improv, Happiness, Events
Lauren Stein (Cooking) | 7 | 593 | The Laurentina Cookbook | Cooking, Humour
Liv Monck-Whipp | 30 | 398 | omniology | Academic, Biology, Science
Matthew Gingerich | 98 | 395 | The Majugi Blog | Academic, Synth Bio, Engineering
Richard Schwarting | 238 | 7538 | Kosmokaryote | Academic, Computing, Linux
Tony Thompson | 51 | 2346 | Tony Thompson, Geek for Hire | Circuitry, Electronic Craft, Academic

Daniela remarks that the spelling of Citisen is intentional.

In order to place the blog posts into a SOM, each post was converted to a bitvector. Each bit was assigned to a specific word, so that the same position consistently represents the same word from post to post. An on-bit represented the presence of a word while an off-bit represented its absence. Frequently used words like “the” were omitted from the bitvector, as were seldom used words.
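A rough sketch of that encoding follows; the tokenizer, the stop word handling and the frequency cut-offs here are stand-ins for the heuristic actually used, and the names are only illustrative.

```python
from collections import Counter
import re

def build_vocabulary(posts, stop_words, min_posts=3, max_fraction=0.5):
    """Pick the words that get a bit position: drop stop words, words used in
    too few posts, and words used in too many posts."""
    doc_freq = Counter()
    for text in posts:
        doc_freq.update(set(re.findall(r"[a-z']+", text.lower())))
    n_posts = len(posts)
    kept = [w for w, df in doc_freq.items()
            if w not in stop_words
            and df >= min_posts                    # seldom used words are omitted
            and df / n_posts <= max_fraction]      # frequently used words are omitted
    return {w: i for i, w in enumerate(sorted(kept))}

def to_bitvector(text, vocab):
    """On-bit where a vocabulary word appears in the post, off-bit otherwise."""
    vec = [0] * len(vocab)
    for w in set(re.findall(r"[a-z']+", text.lower())):
        if w in vocab:
            vec[vocab[w]] = 1
    return vec
```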

Results: The centre image (in the collection to the left) is a density map where darker pixels indicate a larger number of posts — this centre map represents all of the posts made by all of the authors pooled together.

Because of the number of posts and the number of authors, I’ve exploded the single SOM image into the remaining fourteen images.

It was found that posts were most often clustered together if they were both by the same author and on the same topic. Clusters containing more than one author generally did not show much agreement about the topic.

Regions of this SOM were dominated by particular authors and topics as below.

Region | Authors | Topics
Top Left | Liv | Academic Journals
Top Left | Eddie | Software Projects
Top Left | Jason | Academic
Top Border | Lauren | Human Idiosyncrasies
Top Border | Richard | Linux
Top Right | Lauren | Improv
Up & Left of Centre | Daniela | Travel
Centre | | all short and misfit posts
Right Border | Andre | Cooking
Just Below Centre | Matthew | Software Projects
Bottom Left | Andre | Language Theory
Bottom Left | Andrew | Software Projects
Bottom Left | Jason | Software Projects
Bottom Border | Richard | Academic
Bottom Right | Eddie | Web Development
Bottom Right | Jason | Software Tutorials

Discussion: There are some numerical results to go along with this, but they aren’t terribly exciting — the long and the short of it is that this project should be repeated. The present work points toward the following two needed improvements.

First, the way the bitvectors were cropped at the beginning and at the end was based on a usage heuristic that doesn’t really conform to information theory. Next time, I’d likely look at the positional density of all bits to select meaningful words to cluster.

Second, all posts were included — this results in the dense spot in the middle of the central map. Whether these posts are short or just misfit, many of them can probably be removed by analyzing their bit density too.
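As a sketch only, both improvements could be expressed over the same bitvectors along these lines; the density measure and the cut-off value are illustrative, not results from this project.

```python
import numpy as np

def word_densities(bitvectors):
    """Fraction of posts in which each bit position is switched on; a more
    principled word selection would keep only informative (mid-density) positions."""
    m = np.asarray(bitvectors, dtype=float)
    return m.mean(axis=0)

def drop_sparse_posts(bitvectors, min_on_bits=10):
    """Remove short or misfit posts whose vectors carry too few on-bits."""
    m = np.asarray(bitvectors)
    keep = m.sum(axis=1) >= min_on_bits
    return m[keep], keep
```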

Appendix: Here are two figures that describe the distribution of the data in the word bitvectors.

When we sort the words from a high number of occurrences down to a low number of occurrences, we get graphs that look like the two above. A rank class contains all words that have the same number of occurrences across the entire study. The impulse graph on the left shows the trend for the number of unique words in each rank class; the number of unique words increases drastically in the classes with fewer occurrences. The impulse graph on the right shows the trend for the count of uses for words in a given rank class; the number of uses decreases as words become more rare.
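For reference, the rank classes behind the two impulse graphs can be computed along these lines (a sketch with a simplified tokenizer, not the script used to draw the figures):

```python
from collections import Counter

def rank_classes(posts):
    """Group words by their total occurrence count across the entire study."""
    occurrences = Counter()
    for text in posts:
        occurrences.update(text.lower().split())
    classes = {}                              # occurrence count -> list of words
    for word, count in occurrences.items():
        classes.setdefault(count, []).append(word)
    # Left graph: number of unique words in each rank class.
    unique_per_class = {c: len(ws) for c, ws in classes.items()}
    # Right graph: total number of uses contributed by each rank class.
    uses_per_class = {c: c * len(ws) for c, ws in classes.items()}
    return unique_per_class, uses_per_class
```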

These graphs were made before the main body of the work to sketch out how I wanted the bitvectors to behave — they verify that there was nothing unusual about the way the words were distributed amongst the data.

Eddie Ma

June 29th, 2011 at 2:10 pm

Meeting with Chris


Brief: Met with Chris last week. Chris finished with the convergence tests and some cross validation sets on his descriptors and recommended his own design for 80/20 prediction tests… Meanwhile, I’ve updated the InChI grammar used for the NGN to work with the new data, and have set up experiments to run convergence tests using the SMILES-NGN and InChI-NGN on the eight possible QSAR datasets on SharcNet (16 processes total)… Next on the list– create a script to evaluate his preliminary cross validation experiments (based on Neural Network predicted vs. target values) and provide instructions for running the convergence tests with my NGN software… Will need to pull up an old nugget.py to wrap the convergence test (current one doesn’t halt and always runs 100 trials).

Soon: Port everything to Ubuntu Linux so that we can maintain compatibility without further porting, courtesy of a Sun VirtualBox VM… Meeting again tomorrow…

NNcmk: A Neural Network (Win32 & OSX)


Okay– I managed to finish that 3-layer neural network implementation the other day– actually, it was a while ago, but I didn't post about it because I've been busy. It's a pretty standard network, but I'm proud to say it's small and works on OSX and Win32. I still have to put in a few #define directives to have it work with Linux as well.

I will have to document it too when I get a chance. The reason I made a brand new executable (instead of using the source from my previous projects) is that I needed something that would take launch-time parameters, so that it didn't need to be recompiled each time someone decides to use the binary on a new dataset with a different number of inputs. Right now, there are hardly any hard-coded parameters that can't be changed at launch time.

The NNcmk (Neural Network – Cameron, Ma, Kremer) package compiles as C, uses the previously developed in-house library for the NGN, and will be available shortly after I'm satisfied that I've squashed all the bugs, fixed the output and documented the thing completely. I think Chris has difficulty with it right now mostly because I didn't specify exactly which parameters do what– I did at least provide a (DOS) batch file with an example run-in-train-mode / run-in-test-mode sequence…

Back to work on that paper right now though…

Meeting with Chris


Brief: We've taken on a new strategy– Chris is building a novel database of LD50 values for many, many compounds. We'll be generating descriptors with some free software (JoeLib and CDK). Eventually, the fixed-width descriptor vectors will be used, as well as their SMILES and InChI counterparts, in Neural Networks and NGNs respectively; the ultimate goal is the development of either a nested neural decision tree whose subtrees are the descriptor network and NGNs… OR the nesting of the descriptor network inside an NGN… OR the creation of an expert voting system where each decision system gets to vote on a particular molecule of interest. With the Windows NN software draft and the NGN ready for SharcNet, preliminary trials can start soon.
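Of the three options, the expert voting system is the easiest to caricature in code. The weighted average below is only one of many possible voting rules and isn't something we settled on; the function name and weights are hypothetical.

```python
import numpy as np

def expert_vote(predictions, weights=None):
    """Combine per-expert predictions for one molecule (e.g. a descriptor
    network, a SMILES NGN and an InChI NGN) into a single estimate."""
    predictions = np.asarray(predictions, dtype=float)
    if weights is None:
        weights = np.ones_like(predictions)     # equal say for every expert
    return float(np.average(predictions, weights=weights))
```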

Meeting with Chris


Chris' project has grown to roughly three hundred exemplars for each of the mouse and rat data sets– these are the sets that map molecules to some physiological defect, by organ or tissue. I think he'll be onto his next phase shortly– taking the data and applying some machine learning construct to it.

I've recommended four papers for him to read– three of which discuss QSAR in general and compare the performance of different approaches. The last paper explicitly uses neural networks on descriptors in a regression of melting points. The use of neural networks or similar technology is something he's expressed a lot of interest in, so I think this selection fits well. I've provided him with an adapted version of the melting point dataset where the domain is re-expressed as SMILES and InChI.

I think it might be good to set him up with NGNs for those items as well as NNs for the descriptor vector used in the melting point paper.

NGN Software Updates


Two major items should be completed in the source code toward the next objectives: a cleanup of one legacy argument, and the addition of new training behaviour.

Cleanup Random Number “Argument”

The NGN binary as currently generated does not treat the random number seed the same way as other arguments; it takes it in through stdin via the pipe operator at program launch. This should be cleaned up so that the user may specify the random seed with a command-line switch followed by an integer.
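From a wrapping script's point of view, the change would look something like the following; the binary name, data file and the spelling of the switch are hypothetical here.

```python
import subprocess

SEED = 1234

# Current behaviour: the seed is piped into the binary on stdin at launch.
subprocess.run(["./ngn", "train.dat"], input=f"{SEED}\n", text=True, check=True)

# Proposed behaviour: the seed is given as an ordinary command-line switch
# followed by an integer.
subprocess.run(["./ngn", "-seed", str(SEED), "train.dat"], check=True)
```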

Training Dataset Balancing, Boosting and Verification Dataset

The cleanest way to enable balancing and boosting during training is by altering the binary executable rather than any wrapping script. Both of these options should philosophically be enabled only when either the diskonce or parseonce option is also enabled; this ensures that the data is already preprocessed and can be referenced in program memory during operation. Balancing requires one additional integer argument and one additional double argument so that the software understands 1: how many bins to balance against (as it is mathematically impossible to balance in the set of reals) and 2: how much tolerance to give bins when true balancing is impossible. Balancing sees the deterministic selection of the first n elements in each bin, until the bins cannot tolerate any more deviation. A training epoch occurs, and new n elements are selected for each bin; this implies bins with fewer elements will see training more often.
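One possible reading of the balancing rule, sketched out; the bin construction and the wrap-around selection are my interpretation, and the tolerance argument is left out for brevity.

```python
import numpy as np

def make_bins(targets, n_bins):
    """Assign every real-valued training target to one of n_bins equal-width
    bins (balancing directly in the set of reals is impossible)."""
    targets = np.asarray(targets, dtype=float)
    edges = np.linspace(targets.min(), targets.max(), n_bins + 1)
    ids = np.digitize(targets, edges[1:-1])
    return [np.flatnonzero(ids == b) for b in range(n_bins)]

def balanced_epoch(bins, epoch, per_bin):
    """Deterministically take the next `per_bin` examples from each bin.
    Small bins wrap around, so their examples see training more often."""
    chosen = []
    for b in bins:
        if len(b) == 0:
            continue
        start = (epoch * per_bin) % len(b)
        chosen.extend(int(b[(start + k) % len(b)]) for k in range(per_bin))
    return chosen
```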

For boosting, the software will rank each datapoint by the deviation of its NGN activation from its target, from greatest to smallest; a command-line double parameter will indicate what proportion of that ranking, sorted from greatest to least, will see repetition in training more often.
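The boosting selection might be sketched as follows; the proportion argument corresponds to the command-line double described above, while the function name is mine.

```python
import numpy as np

def boosted_indices(activations, targets, proportion):
    """Rank examples by the deviation of the NGN activation from its target
    and return the worst `proportion` of them for extra repetition."""
    deviation = np.abs(np.asarray(activations) - np.asarray(targets))
    order = np.argsort(deviation)[::-1]             # greatest deviation first
    k = max(1, int(round(proportion * len(order))))
    return order[:k]
```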

An algorithm interlacing the two retraining favours will be developed, so that selection in favour of balancing is given a turn, then selection in favour of boosting. Every ten epochs or so, a complete pass over the entire dataset in classic sequence is done to determine the overall RMSE of the system; it is only then that convergence can be determined– that is, the intermittent ten epochs cannot converge (by definition of this algorithm).
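Wired together, the schedule might look like this sketch; the four callables stand in for NGN routines that are not shown, and the epoch counts and RMSE threshold are placeholders.

```python
def interleaved_training(train_on, full_pass_rmse, balanced_epoch, boosted_epoch,
                         max_epochs=1000, check_every=10, target_rmse=0.05):
    """Alternate balanced and boosted epochs; only the periodic full pass over
    the entire dataset (in classic sequence) may declare convergence."""
    for epoch in range(max_epochs):
        indices = balanced_epoch(epoch) if epoch % 2 == 0 else boosted_epoch(epoch)
        train_on(indices)
        if (epoch + 1) % check_every == 0:
            rmse = full_pass_rmse()                 # complete pass over all data
            if rmse <= target_rmse:
                return epoch + 1, rmse              # converged
    return max_epochs, full_pass_rmse()
```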

Verification datasets can be implemented as a feature of the wrapping script or as a feature of the binary; in this case it makes sense to implement them as a feature of the binary. From the prespecified number of bins determined in the balancing argument, one example will be selected out of each bin and isolated for verification. This will allow a converged network to be internally tested prior to being used as a model for an actual external test set (which can only be determined by the wrapping script); this is useful as a means to “proofread” the model, and allows even converged cases to be rejected on suspicion of over- or under-fitting.
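The per-bin hold-out could be as simple as the sketch below, reusing the bins from the balancing argument; again, the names are illustrative.

```python
def split_verification(bins):
    """Hold out one example from every balancing bin as an internal
    verification set; the remainder stays available for training."""
    verification, training = [], []
    for b in bins:
        if len(b) == 0:
            continue
        verification.append(b[0])          # isolate one example per bin
        training.append(b[1:])
    return training, verification
```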

Eddie Ma

May 18th, 2009 at 1:03 pm
