## Archive for the ‘Featured’ Category

## Exposing MATLAB data with JFrame & JTextArea

Today, I’ll describe my first foray into MATLAB. My task is simple — there’s already a GUI developed as a toolbox. The user inputs a bunch of data, and it crunches it away and provides a graph and displays a dialogue box about the moments of the data. The issue here is that we’d like to know more about the data — particularly, we’d like to retrieve the ordered pairs corresponding to each of the plotted points in the graph.

In this exercise, I’ll show you how to use JFrame & JTextArea to display the coordinates of a graph in an already existing MATLAB GUI toolbox or plugin.

*The reason why I’ve decided on this approach rather than outputting directly to the MATLAB console is because I eventually want to add additional functions to reshape and reformat the text, and also to save the text that appears in its own window using additional Java swing components. But that’s a story for another day.*

**The Quick How-To (3 steps)**

Choose the toolbox or plugin you’d like to modify and open up its “.m” file. This is where you’ll want to add your custom code. There are three parts to this exercise — first, we need to get a few classes from Java; then, we need to store the data we want to display; finally, we make the display appear.

In this example, I’ll import the bare minimum of Java classes — you can extend this functionality by adding more Swing classes if you like. I’ve arbitrarily given my Java variables below the prefix “expose”.

```matlab
import javax.swing.JFrame;
import javax.swing.JTextArea;
import javax.swing.JScrollPane; % Scrollbars -- optional.

exposeReport = JFrame('Data Report');
exposeTa = JTextArea(24, 80);
exposeScroll = JScrollPane(exposeTa);
exposeScroll.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
exposePane = exposeReport.getContentPane();
exposePane.add(exposeScroll);
exposeReport.pack();
```

*I’m not exactly sure why MATLAB doesn’t include scrollbars automatically.*

Of the above, there are only two variables which we’ll need to refer to again later — “exposeReport” and “exposeTa”.

The next step is to use your JTextArea as an output terminal. Just append strings to it as the graph is being built — you’ll have to look at your specific plugin to figure out the logic behind the graph — in general, you’ll be looking for a for-loop and a function that alters the graph.

```matlab
% Look for a 'for' loop with a line that adds data to the graph.
% Here, I've used the variable SomeX for a point's x-coordinate,
% and SomeY for a point's y-coordinate.
exposeTa.append(sprintf('%d\t%d\n', SomeX, SomeY));
```

The loop spins around and incrementally adds all of the points. Note that I’ve used “%d” as the conversion specifier; MATLAB’s sprintf treats “%d” as a base-ten conversion and silently falls back to an exponential format when the value isn’t an integer, so it handles floating point values too. This is different from C, where “%d” accepts only integers.
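For contrast, here’s what the stricter C behaviour looks like — a minimal sketch (the helper names `format_int` and `format_double` are mine, for illustration only): C’s “%d” consumes an int, while floating point values need “%f”, “%g”, or “%e”.

```c
#include <stdio.h>

// In C, %d consumes an int -- passing a double to %d is undefined behaviour.
int format_int(char *buf, size_t n, int x) {
    return snprintf(buf, n, "%d", x);
}

// Floating point values require %f, %g, or %e in C.
int format_double(char *buf, size_t n, double x) {
    return snprintf(buf, n, "%g", x);
}
```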

We add the final code after the loop exits. This next code makes your data report window visible — just as you’d expect from Java.

```matlab
exposeReport.setVisible(1);
```

I opted to put this line after the code that reveals the toolbox’s original summary window — this causes my report to appear on top.

That’s all there is to it! Enjoy 😀

**MATLAB from a CS student perspective**

MATLAB has always been a bit of an oddball language for me. It’s dynamically typed but also exposes Java’s classes if you ask nicely. It looks like C and provides functions a C programmer would happily accept, such as *strcat* and *sprintf* — but again with a dynamic spin, returning the constructed string rather than modifying the contents of a buffer. All in all, the design choices do a good job of making MATLAB do what it’s intended to do: it gives scientists and engineers a way to succinctly express math to a machine without having to take too many computer science courses.

## Blog Author Clustering with SOMs

For the final project in Neural Networks (CIS 6050 W11), I decided to cluster blog posts based on the difference between an author’s word choice and the word choice of the entire group of authors.

>>> **Attached Course Paper:** Kohonen Self-Organizing Maps in Clustering of Blog Authors (pdf) <<<

A self-organizing map (Kohonen SOM) strategy was used. The words chosen to compose a given blog post defined where in the map it should be placed. The purpose of this project was to figure out what predictive power could be harnessed given the combination of the SOM and each author’s lexicon; i.e. whether or not it is possible to automatically categorize an author’s latest post without the use of any tools besides the above.

**Data:** Thirteen authors contributing a total of fourteen blogs participated in the study (all data was retrieved on 2011 March 8th). The table below summarizes the origin of the data.

| Author | Posts | Lexicon | Blog Name | Subject Matter |
|---|---|---|---|---|
| Andre Masella | 198 | 7953 | MasellaSphere | Synth Bio, Cooking, Engineering |
| Andrew Berry | 46 | 2630 | Andrew Berry Development | Drupal, Web Dev, Gadgets |
| Arianne Villa | 41 | 1217 | …mrfmr. | Internet Culture, Life |
| Cara Ma | 12 | 854 | Cara’s Awesome Blog | Life, Pets, Health |
| Daniela Mihalciuc | 211 | 4454 | Citisen of the World† | Travel, Life, Photographs |
| Eddie Ma | 161 | 5960 | Ed’s Big Plans | Computing, Academic, Science |
| Jason Ernst | 61 | 3445 | Jason’s Computer Science Blog | Computing, Academic |
| John Heil | 4 | 712 | Dos Margaritas Por Favor | Science, Music, Photography |
| Lauren Stein | 91 | 4784 | The Most Interesting Person | Improv, Happiness, Events |
| Lauren Stein (Cooking) | 7 | 593 | The Laurentina Cookbook | Cooking, Humour |
| Liv Monck-Whipp | 30 | 398 | omniology | Academic, Biology, Science |
| Matthew Gingerich | 98 | 395 | The Majugi Blog | Academic, Synth Bio, Engineering |
| Richard Schwarting | 238 | 7538 | Kosmokaryote | Academic, Computing, Linux |
| Tony Thompson | 51 | 2346 | Tony Thompson, Geek for Hire | Circuitry, Electronic Craft, Academic |

†*Daniela remarks that the spelling of Citisen is intentional.*

In order to place the blog posts into a SOM, each post was converted to a bitvector. Each bit is assigned to a specific word, so that the position of each bit consistently represents the same word from post to post. An on-bit represented the presence of a word, while an off-bit represented its absence. Frequently used words like “the” were omitted from the word bitvector, as were seldom used words.
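As a sketch in C (the four-word vocabulary here is hypothetical, chosen only for illustration — the real study used the lexicon that survived the cropping described above), the encoding might look like this:

```c
#include <string.h>
#include <stdint.h>

// Hypothetical 4-word vocabulary standing in for the study's real lexicon.
#define VOCAB_SIZE 4
static const char *vocab[VOCAB_SIZE] = { "neural", "cluster", "blog", "map" };

// Bit k of the result is on iff vocabulary word k appears in the post;
// position k always represents the same word from post to post.
uint32_t post_to_bitvector(const char *post) {
    uint32_t bits = 0;
    for (int k = 0; k < VOCAB_SIZE; k++)
        if (strstr(post, vocab[k]) != NULL)
            bits |= (uint32_t)1 << k;
    return bits;
}
```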

**Results:** The centre image (in the collection to the left) is a density map where darker pixels indicate a larger number of posts — this centre map represents all of the posts made by all of the authors pooled together.

Because of the number of posts and the number of authors, I’ve exploded the single SOM image into the remaining fourteen images.

It was found that posts were most often clustered together if they were both by the same author and on the same topic. Clusters containing more than one author generally did not show much agreement about the topic.

Regions of this SOM were dominated by particular authors and topics as below.

| Region | Authors | Topics |
|---|---|---|
| Top Left | Liv | Academic Journals |
| | Eddie | Software Projects |
| | Jason | Academic |
| Top Border | Lauren | Human Idiosyncrasies |
| | Richard | Linux |
| Top Right | Lauren | Improv |
| Up & Left of Centre | Daniela | Travel |
| Centre | all | short and misfit posts |
| Right Border | Andre | Cooking |
| Just Below Centre | Matthew | Software Projects |
| Bottom Left | Andre | Language Theory |
| | Andrew | Software Projects |
| | Jason | Software Projects |
| Bottom Border | Richard | Academic |
| Bottom Right | Eddie | Web Development |
| | Jason | Software Tutorials |

**Discussion:** There are some numerical results to go along with this, but they aren’t terribly exciting — the long and the short of it is that this project should be repeated. The present work points toward the following two needed improvements.

First, the way the bitvectors were cropped at the beginning and at the end was based on a usage heuristic that doesn’t really conform to information theory. I’d likely take a look at the positional density of all bits to select meaningful words to cluster.

Second, all posts were included — this results in the dense spot in the middle of the central map. Whether these posts are short or just misfit, many of them can probably be removed by analyzing their bit density too.

**Appendix:** Here are two figures that describe the distribution of the data in the word bitvectors.

When we sort the words from a high number of occurrences down to a low number of occurrences, we get graphs that look like the two above. A rank class contains all words that have the same number of occurrences across the entire study. The impulse graph on the left shows the trend for the *number of unique words* in each rank class: the number of words drastically increases as the classes’ occurrence counts shrink. The impulse graph on the right shows the trend for the *count of uses* for words in a given rank class: the number of uses decreases as words become more rare.

These graphs were made before the main body of the work to sketch out how I wanted the bitvectors to behave — they verify that there was nothing unusual about the way the words were distributed amongst the data.

## On random initial weights (neural networks)

I found an old project in my archives the other day. The long and the short of it is that — on occasion — if we use a three-layer back-propagation artificial neural network, initializing the hidden weight layers to larger values works better than initializing them to small values. The general wisdom is that one initializes weight layers to small values just so the bias of the system is broken, and weight values can move along, away and apart to different destination values. The general wisdom is backed by the thought that larger values can cause nets to become trapped in incorrect solutions sooner — but on this occasion, larger initial values allowed nets to converge more frequently.

The text is a bit small, but the below explains what’s going on.

There are two neural network weight layers (2D real-valued matrices) — the first going from input-to-hidden-layer and the second going from hidden-to-output-layer. We would normally initialize these with small random values, say in the range [-0.3, 0.3]. What we’re doing here is introducing a gap in the middle of that range that expands out and forces the weights to be a bit larger. The x-axis describes the size of this gap, increasing from zero to six. The data point in the far left of the graph corresponds to a usual initialization, when the gap is zero. The gap is increased in increments of 0.05 in both the positive and negative directions for each point on the graph as we move from left to right. The next point hence corresponds to an initialization of weights in the range [-0.35, -0.05] ∨ [0.05, 0.35]. In the far right of the graph, the weights are finally randomized in the range [-6.30, -6.00] ∨ [6.00, 6.30]. The y-axis is the probability of convergence given a particular gap size. Four-hundred trials are conducted for each gap size.
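A sketch of this gapped initialization in C (the function name and the uniform sampler are mine, not from the original project): the weight’s magnitude is drawn uniformly from [gap, gap + 0.3] and its sign is chosen at random, reproducing the ranges described above.

```c
#include <stdlib.h>

// Draw one weight from [-(gap + 0.3), -gap] or [gap, gap + 0.3].
// With gap = 0 this degenerates to the usual [-0.3, 0.3] initialization;
// with gap = 1.0 it gives the best-performing [-1.3, -1.0] or [1.0, 1.3].
double gapped_weight(double gap) {
    double magnitude = gap + 0.3 * ((double)rand() / RAND_MAX);
    return (rand() & 1) ? magnitude : -magnitude;
}
```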

*The result is that the best probability of convergence was achieved when the gap size is 1.0, corresponding to weights in the random range [-1.3, -1.0] ∨ [1.0, 1.3].*

What were the particular circumstances that made this work? I think this is largely dependent on the function to fit.

The Boolean function I chose was a bit of an oddball — take a look at its truth table …

| Input 1 | Input 2 | Input 3 | Output |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |

This comes from the function { λ : Bit_1, Bit_2, Bit_3 → (Bit_1 ∨ Bit_2) if (Bit_3) else (Bit_1 ⊕ Bit_2) }.

This function is unbalanced in the sense that it answers true more often than it answers false given the possible combinations of its inputs. This shouldn’t be too big of a problem though — we really only need any linearly inseparable function.
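The function itself is a one-liner in C — here’s a sketch (the name `oddball` is mine) that can be checked row by row against the truth table above:

```c
// (Bit_1 OR Bit_2) when Bit_3 is set; (Bit_1 XOR Bit_2) otherwise.
int oddball(int b1, int b2, int b3) {
    return b3 ? (b1 | b2) : (b1 ^ b2);
}
```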

This project was originally done for coursework during my master’s degree. The best size for your initial weights is highly function dependent — I later found that this is less helpful (or even harmful) to problems with a continuous domain or range. It seems to work well for binary and Boolean functions and also for architectures that require recursion (I often used a gap of at least 0.6 during my master’s thesis work with the Neural Grammar Network).

*Remaining parameters: training constant = 0.4; momentum = 0.4; hidden nodes = 2; convergence RMSE = 0.05 (in retrospect, using a raw count up to complete concordance would have been nicer); maximum allowed epochs = 1E5; transfer function = logistic curve.*

## SOM in Regression Data (animation)

The animation below is what happens when you train a Kohonen Self-Organizing Map (SOM) using data meant for regression instead of clustering or classification. A total of 417,300 training cycles are performed — a single training cycle is the presentation of one exemplar to the SOM. The SOM has dimensions 65×65. Exemplars are chosen at random, and 242 snapshots are taken in total (once every 1738 training cycles).

## C & Bioinformatics: ASCII Nucleotide Comparison Circuit

Here’s a function I developed for Andre about a week ago. The C function takes two arguments, both C characters. The first argument corresponds to a degenerate nucleotide as in the table below. The second argument corresponds to a non-degenerate nucleotide {‘A’, ‘C’, ‘G’, ‘T’} or any nucleotide ‘N’. The function returns zero if the logical intersection between the two arguments is empty. Otherwise, it returns a one-hot binary encoding of the intersection, so that {‘A’ = 1, ‘C’ = 2, ‘G’ = 4, ‘T’ = 8} and {‘N’ = 15}. All of this is done treating the lower 5 bits of the ASCII encoding of each character as wires of a circuit.

| Character | ASCII (Low 5 Bits) | Represents | One-Hot | Equals |
|---|---|---|---|---|
| A | 00001 | A | 0001 | 1 |
| B | 00010 | CGT | 1110 | 14 |
| C | 00011 | C | 0010 | 2 |
| D | 00100 | AGT | 1101 | 13 |
| G | 00111 | G | 0100 | 4 |
| H | 01000 | ACT | 1011 | 11 |
| K | 01011 | GT | 1100 | 12 |
| M | 01101 | AC | 0011 | 3 |
| N | 01110 | ACGT | 1111 | 15 |
| R | 10010 | AG | 0101 | 5 |
| S | 10011 | GC | 0110 | 6 |
| T | 10100 | T | 1000 | 8 |
| V | 10110 | ACG | 0111 | 7 |
| W | 10111 | AT | 1001 | 9 |
| Y | 11001 | CT | 1010 | 10 |

The premise is that removing all of the logical branching and using only binary operators would make things a bit faster — I’m actually not sure whether the following solution *is* faster, because there are *twelve variables local to the function scope* — we can be assured that at least half of these variables won’t fit in registers and will have to be fetched from cache or memory. We’d get a moderate speed boost, if any.

```c
/* f(): Bitwise comparison circuit that treats nucleotide and degenerate
 *      nucleotide ascii characters as mathematical sets.
 *      The operation performed is set i intersect j.
 *
 * arg char i: Primer   -- accepts ascii |ABCDG HKMNR STVWY| = 15.
 * arg char j: Sequence -- accepts ascii |ACGTN| = 5.
 *
 * return char (k = 0): false -- i intersect j is empty.
 * return char (k > 0):
 *      1 -- the intersection is 'A'
 *      2 -- the intersection is 'C'
 *      4 -- the intersection is 'G'
 *      8 -- the intersection is 'T'
 *     15 -- the intersection is 'N'
 * return char (undefined value): ? -- if any other characters are placed in i or j.
 */
char f(char i, char j) {
    // break apart Primer into bits ...
    char p = (i >> 4) &1;
    char q = (i >> 3) &1;
    char r = (i >> 2) &1;
    char s = (i >> 1) &1;
    char t = i &1;

    // break apart Sequence into bits ...
    char a = (j >> 4) &1;
    char b = (j >> 3) &1;
    char c = (j >> 2) &1;
    char d = (j >> 1) &1;
    char e = j &1;

    return (
        // == PRIMER CIRCUIT ==
        ( // -- A --
            ((p|q|r|s  )^1) & t     |
            ((p|q|  s|t)^1) & r     |
            ((p|  r|s|t)^1) & q     |
            ((p|    s  )^1) & q&r&t |
            ((p|      t)^1) & q&r&s |
            ((  q|    t)^1) & p&s   |
            ((  q      )^1) & p&r&s
        ) |
        ( // -- C --
            ((p|q|r    )^1) & s     |
            ((p|  r|s|t)^1) & q     |
            ((p|    s  )^1) & q&r&t |
            ((p|      t)^1) & q&r&s |
            ((  q|r    )^1) & s&t   |
            ((  q|    t)^1) & p&r&s |
            ((    r|s  )^1) & p&q&t
        ) << 1 |
        ( // -- G --
            ((  q|r|  t)^1) & s       |
            ((p|q|  s|t)^1) & r       |
            ((p|q      )^1) & r&s&t   |
            ((p|  r    )^1) & q&s&t   |
            ((p|      t)^1) & q&r&s   |
            ((  q|r    )^1) & p&s     |
            ((  q|    t)^1) & p&s
        ) << 2 |
        ( // -- T --
            ((p|q|r|  t)^1) & s       |
            ((  q|  s|t)^1) & r       |
            ((p|  r|s|t)^1) & q       |
            ((p|  r    )^1) & q&s&t   |
            ((p|      t)^1) & q&r&s   |
            ((  q      )^1) & p&r&s&t |
            ((    r|s  )^1) & p&q&t
        ) << 3
    ) & (
        // == SEQUENCE CIRCUIT ==
        ( // -- A --
            ((a|b|c|d  )^1) & e     |
            ((a|      e)^1) & b&c&d
        ) |
        ( // -- C --
            ((a|b|c    )^1) & d&e   |
            ((a|      e)^1) & b&c&d
        ) << 1 |
        ( // -- G --
            ((a|b      )^1) & c&d&e |
            ((a|      e)^1) & b&c&d
        ) << 2 |
        ( // -- T --
            ((a|      e)^1) & b&c&d |
            ((  b|  d|e)^1) & a&c
        ) << 3
    );
}
```

Andre’s eventual solution was to use a look-up table which very likely proves faster in practice. At the very least, this was a nice refresher and practical example for circuit logic using four sets of minterms (one for each one-hot output wire).
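I don’t have Andre’s actual code, but a lookup-table version might be sketched like this — map each letter to its one-hot set once, so that set intersection collapses to a single AND (the table values come straight from the table above; the function and array names are mine):

```c
// One-hot set for each letter 'A'..'Z'; letters that aren't nucleotide
// codes stay 0, so any intersection with them is empty.
static const unsigned char onehot['Z' - 'A' + 1] = {
    ['A' - 'A'] = 1,  ['B' - 'A'] = 14, ['C' - 'A'] = 2,  ['D' - 'A'] = 13,
    ['G' - 'A'] = 4,  ['H' - 'A'] = 11, ['K' - 'A'] = 12, ['M' - 'A'] = 3,
    ['N' - 'A'] = 15, ['R' - 'A'] = 5,  ['S' - 'A'] = 6,  ['T' - 'A'] = 8,
    ['V' - 'A'] = 7,  ['W' - 'A'] = 9,  ['Y' - 'A'] = 10,
};

// Nonzero iff the degenerate code i and the base j intersect;
// the value is the one-hot encoding of the intersection.
unsigned char intersect(char i, char j) {
    return onehot[i - 'A'] & onehot[j - 'A'];
}
```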

Should you need this logic to build a fast physical circuit or have some magical architecture with a dozen registers (accessible to the compiler), be my guest 😀

## C & Math: Sieve of Eratosthenes with Wheel Factorization

In the first assignment of Computer Security, we were to implement the Sieve of Eratosthenes. The instructor gives a student the failing grade of 6/13 for a naive implementation, and as we increase the efficiency of the sieve, we get more marks. There are three standard optimizations: (1) for the current prime being considered, start the indexer at the square of the current prime; (2) consider only odd numbers; (3) stop crossing out numbers at the square root of the last value of the sieve.

Since the assignment has been handed in, I’ve decided to post my solution here as I haven’t seen C code on the web which implements wheel factorization.

We can think of wheel factorization as an extension to *skipping all even numbers*. Since we know that all even numbers are multiples of two, we can just skip them all and save half the work. By the same token, if we know a pattern of repeating multiples corresponding to the first few primes, then we can skip all of those guaranteed multiples and save some work.

The wheel I implement skips all multiples of 2, 3 and 5. In Melissa O’Neill’s *The Genuine Sieve of Eratosthenes*, an implementation of the sieve with a priority queue optimization is shown in Haskell, while wheel factorization with the primes 2, 3, 5 and 7 is discussed. The implementation of that wheel (and other wheels) is left as an exercise for her readers 😛

But first, let’s take a look at the savings of implementing this wheel. Consider the block of numbers in modulo thirty below corresponding to the wheel for primes 2, 3 and 5 …

| | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | **1** | 2 | 3 | 4 | 5 | 6 | **7** | 8 | 9 |
| 10 | **11** | 12 | **13** | 14 | 15 | 16 | **17** | 18 | **19** |
| 20 | 21 | 22 | **23** | 24 | 25 | 26 | 27 | 28 | **29** |

Only the highlighted numbers need to be checked to be crossed out during sieving since the remaining values are guaranteed to be multiples of 2, 3 or 5. This pattern repeats every thirty numbers which is why I say that it is *in modulo thirty*. We hence skip 22/30 of all cells by using the wheel of thirty — a savings of 73%. If we implemented the wheel O’Neill mentioned, we would skip 77% of cells using a wheel of 210 (for primes 2, 3, 5 and 7).

(Note that the highlighted numbers in the above block also correspond to the multiplicative identity *one* and numbers which are coprime to 30.)
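That coprimality observation gives a quick way to generate the spokes of any wheel — a sketch (the helper names are mine): collect every residue in [1, wheel) whose gcd with the wheel size is one.

```c
// Greatest common divisor by Euclid's algorithm.
static int gcd(int a, int b) {
    while (b) { int t = a % b; a = b; b = t; }
    return a;
}

// Fill 'spokes' with the residues in [1, wheel) coprime to 'wheel';
// for wheel = 30 these are exactly 1, 7, 11, 13, 17, 19, 23, 29.
int wheel_spokes(int wheel, int *spokes) {
    int n = 0;
    for (int k = 1; k < wheel; k++)
        if (gcd(k, wheel) == 1)
            spokes[n++] = k;
    return n;
}
```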

Below is the final code that I used.

```c
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

const unsigned int SIEVE = 15319000;
const unsigned int PRIME = 990000;

int main(void) {
    unsigned char* sieve = calloc(SIEVE + 30, 1); // +30 gives us incr padding
    unsigned int thisprime = 7;
    unsigned int iprime = 4;
    unsigned int sieveroot = (int)sqrt(SIEVE) + 1;
    // Update: don't need to zero the sieve - using calloc() not malloc()
    sieve[7] = 1;
    for(; iprime < PRIME; iprime++) {
        // ENHANCEMENT 3: only cross off until square root of |sieve|.
        if(thisprime < sieveroot) {
            // ENHANCEMENT 1: Increment by 30 -- 4/15 the work.
            // ENHANCEMENT 2: start crossing off at prime * prime.
            int i = (thisprime * thisprime);
            switch (i % 30) { // new squared prime -- get equivalence class.
                case 1:  if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 6;
                case 7:  if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 4;
                case 11: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 2;
                case 13: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 4;
                case 17: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 2;
                case 19: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 4;
                case 23: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 6;
                case 29: if(!sieve[i] && !(i % thisprime)) {sieve[i] = 1;} i += 1; // 29 + 1 (mod 30) = 0 -- just in step
            }
            for(; i < SIEVE; i += 30) {
                if(!sieve[i+1]  && !((i+1)  % thisprime)) sieve[i+1]  = 1;
                if(!sieve[i+7]  && !((i+7)  % thisprime)) sieve[i+7]  = 1;
                if(!sieve[i+11] && !((i+11) % thisprime)) sieve[i+11] = 1;
                if(!sieve[i+13] && !((i+13) % thisprime)) sieve[i+13] = 1;
                if(!sieve[i+17] && !((i+17) % thisprime)) sieve[i+17] = 1;
                if(!sieve[i+19] && !((i+19) % thisprime)) sieve[i+19] = 1;
                if(!sieve[i+23] && !((i+23) % thisprime)) sieve[i+23] = 1;
                if(!sieve[i+29] && !((i+29) % thisprime)) sieve[i+29] = 1;
            }
        }
        {
            int i = thisprime;
            switch (i % 30) { // write down the next prime in 'thisprime'.
                case 1:  if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 6;
                case 7:  if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 4;
                case 11: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 2;
                case 13: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 4;
                case 17: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 2;
                case 19: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 4;
                case 23: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 6;
                case 29: if(!sieve[i]) {thisprime = i; sieve[i] = 1; goto done;} i += 1;
            }
            for(; i < SIEVE; i += 30) {
                if(!sieve[i+1])  {thisprime = i+1;  sieve[i+1]  = 1; goto done;}
                if(!sieve[i+7])  {thisprime = i+7;  sieve[i+7]  = 1; goto done;}
                if(!sieve[i+11]) {thisprime = i+11; sieve[i+11] = 1; goto done;}
                if(!sieve[i+13]) {thisprime = i+13; sieve[i+13] = 1; goto done;}
                if(!sieve[i+17]) {thisprime = i+17; sieve[i+17] = 1; goto done;}
                if(!sieve[i+19]) {thisprime = i+19; sieve[i+19] = 1; goto done;}
                if(!sieve[i+23]) {thisprime = i+23; sieve[i+23] = 1; goto done;}
                if(!sieve[i+29]) {thisprime = i+29; sieve[i+29] = 1; goto done;}
            }
            done: ;
        }
    }
    printf("%d\n", thisprime);
    free(sieve);
    return 0;
}
```

Notice that there is a switch construct — this is necessary because we aren’t guaranteed that the first value to sieve for a new prime (or squared prime) is going to be an even multiple of thirty. Consider sieving seven — the very first prime to consider. We start by considering 7^{2} = 49. Notice 49 (mod 30) is congruent to 19. The switch statement incrementally moves the cursor from the 19th equivalence class to the 23rd, to the 29th before pushing it one integer more to 30 — 30 (mod 30) is zero — and so we are able to continue incrementing by thirty from then on in the loop.

The code listed is rigged to find the 990 000th prime as per the assignment and uses a sieve of predetermined size. Note that if you want to use my sieve code above to find whichever prime you like, you must also change the size of the sieve. If you take a look at *How many primes are there?* written by Chris K. Caldwell, you’ll notice a few equations that allow you to highball the nth prime of your choosing, thereby letting you calculate that prime with the overshot sieve size.

Note also that this sieve is not the most efficient — a classmate of mine implemented the Sieve of Atkin, which is orders of magnitude faster than this implementation.