Since I’m heading into Reading Week, this seems a good time to take a moment and reflect on the semester. So far, it’s been very rewarding.
It’s fitting, then, that I should tell the story starting with the break before this semester. During that break, I spent a few days figuring out whether or not I was happy working on the symmetrical protein evolution problem at the University of Waterloo. As it turned out, my brain had become so trained to think like a computer scientist that there was an added cognitive tax in communicating well with my biology and biochemistry advisors and labmates. It was clear that I would serve better in a role elsewhere. The friends I made at Waterloo are the good ones — they are exactly as capable, willing to help and academically diverse as I need them to be. I’m happy to say that we’ll be picking each other’s brains for a long time. The evolution problem is a very interesting one, and I’ve promised my former advisors that I would help whoever they find to take my place, as that person will inherit a pile of my source code, data and in-house briefings.
I am here now, returned to the University of Guelph. The graduate programs at the two universities differ in many ways — the most significant to me being when graduate courses are taken. At Waterloo, the culture has it that one works primarily on their thesis at all times and takes an occasional course to fulfill degree requirements. At Guelph, the culture instead has it that one amortizes all of their courses over the first semester so that, if the course material is even ostensibly useful during thesis work, that knowledge will already be mentally installed and available.
(Of course, the cultural differences are likely influenced by the different disciplines too.)
So this semester, I’m taking Computer Security, Artificial Neural Networks and Image Processing.

Computer Security has mostly been number theory. I’ve retrained myself in matrix multiplication and inversion while learning fast exponentiation, modular powers, modular inverses, multiplicative and additive groups, the totient function, sieves, the extended Euclidean algorithm and much more, I’m sure. I’m happy that I’m continuing to save my notes digitally in triplicate so that I’ll have a backup when my soft meat brain starts to forget. The number theory culminates in nice security devices such as RSA encryption. There’s another half semester to go, so whether it will be equally math intensive — or more application intensive, built upon what we’ve already learned — remains to be seen.

Artificial Neural Networks is a nice course for me — it rounds out my repertoire of architectures and training algorithms, seeing as I had only worked with the feed-forward, recurrent and recursive (backpropagation) cases during my Master’s work.

Image Processing has been a curious class. We’ve been through one round of presentations and are heading into the math behind a number of transforms applicable to image processing. This class is very firmly grounded in symbolic math and algebra — each adjective, each concept is meticulously and correctly described mathematically. Consider the precise meaning of continuity below, as an optional property of fuzzy negations.
∀x0 ∈ [0, 1], ∀ε ∈ ]0, +∞[, ∃η ∈ ]0, +∞[, ∀x ∈ [0, 1], |x − x0| < η → |¬(x) − ¬(x0)| < ε
Assertion: No matter how small a bound we demand on the difference between the negations, we can always find a distance between the two x’s small enough to guarantee it.
There have been many items to memorize in this course — luckily, the above mathematical statement isn’t one of them.
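Back in Computer Security territory, the pieces listed earlier — fast exponentiation, the extended Euclidean algorithm and modular inverses — assemble directly into textbook RSA. A minimal sketch (my own function names, and toy primes far too small for real security):

```python
def fast_pow(base, exp, mod):
    """Square-and-multiply: base**exp % mod in O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

def extended_gcd(a, b):
    """Returns (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m, when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse exists")
    return x % m

# Toy RSA: tiny primes chosen only for illustration.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)  # phi is Euler's totient of n
e = 17                             # public exponent, coprime with phi
d = mod_inverse(e, phi)            # private exponent
message = 42
cipher = fast_pow(message, e, n)   # encrypt with the public key
assert fast_pow(cipher, d, n) == message  # decrypt with the private key
```

(Python’s built-in `pow(base, exp, mod)` does the same modular exponentiation natively; the explicit loop just shows the square-and-multiply idea.)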
I’ve chosen a thesis topic as well. As it turns out, I’ll be working in machine learning — but this time on something of an expert decision-support system that replaces the human proof-reader during nucleotide sequencing. If I do things correctly, the system I build will be able (1) to anticipate when it will make a mistake, (2) to anticipate how the human would react to that mistake and (3) to replace the erroneous token with the repaired token that a human expert would choose. I’ll have to go into more detail as I discover it for myself, but it looks to be fun. You’ll notice that step (1) is probably recursively applicable to steps (2) and (3): this device should know when it will make a mistake about a mistake. How that plays out in the decision states of my system remains to be discovered.