Ed's Big Plans

Computing for Science and Awesome

TIM Barrels and 4-Alpha Helix Bundles

without comments

Beta-Alpha TIM Barrels and 4-Alpha Helix Bundles are the first of the major folds I’ll be looking at here at Waterloo…

As with all academic projects, the probability of goal, approach and method mutation is high. Looking at the above protein folds serves as an excellent starting point as I’ll be applying some of the established methods that Andrew and Aaron have developed.

Alpha helices and beta sheets are objects that any high school biologist is acquainted with. To recapitulate, alpha helices are sequences of amino acids arranged so that the alpha carbon of each amino acid falls along the path of a helix. The number of amino acids per turn and the regularity of the helix are determined by both the sequence and the environment the peptide finds itself in. Amino acids in the beta sheet conformation are arranged so that their alpha carbons zigzag; the result is a wide, flat shape schematically drawn as a sheet. Alpha helices and beta sheets are collectively called secondary structural elements.

So, folds are giant, overarching classifications of proteins. Folds are inherently structural, so classifying them (or using them as classifications) is only relevant in structural studies and in web databases like SCOP and CATH. In both databases, classification of similarly structured proteins starts by determining whether a protein contains mostly alpha helices; mostly beta sheets; beta sheets and alpha helices alternating irregularly; or beta sheets and alpha helices in distinct regions of the protein. In SCOP, further classification is done by manually assigning proteins to smaller and smaller categories. CATH takes a similarly manual approach, using hidden Markov models only to assist; contrast this with Pfam, where HMMs actually do the majority of the work and humans verify the results afterward.

Certain folds like the beta-Trefoil and TIM Barrel benefit from containing only proteins that cleanly fit into one or a few subcategories; it is then possible to drill down to the right level of categorization and pull out all of the beta-Trefoils and TIM Barrels we want.

The 4-Alpha-Helix Bundle constitutes a fold that manages to be spread around the databases, being a very common secondary structural repeat, and a very small one compared to the two giants above. The two folds make for an interesting contrast: both machine and human intelligence pull TIM Barrels together while sprinkling alpha-helix bundles across databases and subcategories. The size difference helps too.

So, I’m starting with a structural and then sequence-based alignment for single-domain TIM Barrels and alpha-helix bundles. To stay focused, I’ll name a concrete objective: to identify where sequence repeats occur in each individual protein.

Eddie Ma

September 24th, 2009 at 2:09 pm

Knots were the wrong math

without comments

Knot math was eventually understood to be the wrong kind of math to model our problem on.

Knots take the form of a circle that has been broken and rejoined at a point on its circumference after being wrapped about itself an arbitrary number of times. What we’re working on doesn’t involve any function that twists loops of DNA this way. Knot math provides a way to represent these shapes as real-valued vectors, but no easy way to insert our own data. Two properties account for the incompatibility. First, knot math considers two knots equal if their topology with respect to the number of twists is identical; our problem does not consider such knots equal, because distance and sequence specificity (imagine each particle on the rope circle was labeled) are required. Second, what we produce overlaps arbitrarily by laying one circle segment on top of another, whereas knot math produces overlaps with twists. While I think there could be a clever way to identify our problem with knot math, I don’t think there is a feasible or time-effective way to do it.

Brain continues to storm.

I did manage to uncover some very exciting papers, however. One of them was on a piece of software called TangleSolve, which does model site-specific recombination and visualize DNA knots; reading about this software was actually instrumental in understanding why our problem was not identifiable here. Side note: topoisomerase is an enzyme involved in DNA knot formation and supercoil relaxation.

Eddie Ma

September 15th, 2009 at 1:29 pm

I’m a T.A. Now.

with 2 comments

Brief: Analytical Methods in Molecular Biology is the course that I’ll be TAing this term. It looks like it’ll be a lot of fun. I’m surprised at the amount I remember from my courses at Guelph; I’m also surprised by the amount I’m relearning.

Update: I’ll outline the course here. We discuss the reasons to use synthetic biology, and how to use it, in order to identify and characterize genes. Characterization is an intentionally broad word indicating the determination of the putative DNA sequence in question; the range of phenotypes its alleles produce; the mass, charge, solubility, and catalytic and structural activity of the protein it produces; and finally some profile generated using various bioinformatics scores for labeling homologues and related structures and sequences. Well, that’s my take on it so far, although I’ll likely revise my understanding of the nature of this course as I progress through it.

My TA partner is Ariana Marcassa who finds herself in her third year of undergrad. I think we’ll make a good team.

Eddie Ma

September 15th, 2009 at 9:20 am

DNA … Knots and Lambdas

without comments

A long time ago, one Andre led a team of students on a journey of mathematical and computational modeling; at the very least, we have reached some useful insights from our tidy trip, albeit still at a distance from the solution.

Presented here is a very jumbled, very abridged account of the activities of the modeling team this summer and the eventual realization that brings us to now.

The Problem Revisited

So we have a sequence. Actually, two sequences. Actually, we have two loops: two loops of DNA that will contain specific sequences used for cassette exchange. The problem is the design of these two loops. We want to design them so that we can predictably exchange specific objects between them. We use a recombination enzyme that is sensitive to specific sites to perform the exchanges.

The above paragraph is an abstract-abstract of the UW iGem Project.

The Top Down Approach

What I eventually labeled in my mind as the top-down approach is called that in analogy to parsing. In parsing, we build a tree. We can do this conceptually from the bottom up, or from the top down. From the bottom up, we know everything we need to know to build the tree; we know as much as we want to know, and we even know when no tree exists for this particular string of tokens. From the top down, we’d have to use some magical induction to chain tokens together by determining a structure the tokens will find pleasing.

The magical induction of the top-down approach is none other than brute force. There is no magic, just an exponential explosion: the number of choices available at each step forms the base of the power, while the length of the string and the depth of the grammar drive the exponent.

We don’t parse for the sequence problem; that is, we assume the grammar to be irrelevant, that a flat degenerate chain is a sufficient tree, and we operate on sequences with our enzyme instead.

For our sequence problem, we pick three loops. We see if the first two loops add together with respect to the enzyme to make the third loop. By hand, one is tempted to use various heuristics of deductive logic but it became complicated and soon overflowed the allowed dozen or so objects a human brain may accommodate per instant. The machine was dragged in, and the three loops were shown to it using Python.

We presented three loops of one logical suite of tokens. It ran to completion and to no surprise, this was not our solution. We did this again for all three-loops where each loop is one logical suite. That ran to completion and again, no solution– again to be expected; not yet long enough to accommodate the anticipated length of the solution.

One logical block became two, became three… and at each step, the exponent driving our magical induction grew.

Four logical blocks… we halted the experiment; the machine would’ve taken a month to finish that block.

The exponential explosion was real, and our bid that the solution may be just short enough to fit therein was proven false.
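As a toy illustration of where that month went, the search over ordered triples of loops can be sketched in Python. The block names and loop lengths below are stand-ins for illustration, not the project's actual cassettes or the original script.

```python
from itertools import product

def candidate_triples(blocks, loop_len):
    """Yield every ordered triple of loops, where each loop is a tuple
    of logical blocks (a stand-in for the real DNA cassettes)."""
    loops = list(product(blocks, repeat=loop_len))
    return product(loops, repeat=3)

# The work grows as (len(blocks) ** loop_len) ** 3: each extra block
# per loop multiplies the number of triples by len(blocks) ** 3.
blocks = ("A", "B", "C")
print(sum(1 for _ in candidate_triples(blocks, 2)))  # 9 loops -> 729 triples
```

Testing each triple against the enzyme model costs additional time per candidate, so the wall hit at four blocks is exactly what this counting predicts.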

The Bottom Up Approach

Months passed, various members went on various summer excursions… and many have returned now. We discuss many theoretical approaches. We resample the problem, sniffing for hints. Actually, it’s been Andre, Jordan and me … we haven’t discussed this with the remaining modeling team yet because of just how vague our new lines of intrigue are. I will revise my opinion if the thought that more individuals means faster solution finding crosses my mind again.

I’ve had a few conversations, one with my MSc advisor, Stefan; one with a friend Andrew Baker; and another with my undergraduate project advisor, Bettina. So far, no one’s seen this specific problem before or can allude to either an approach, technology or research that they’ve seen…

We reformalize the problem with the following constraints.

  • Must deal with the circularity of DNA, hence be circular-shift invariant
  • Must accommodate or encapsulate reverse complementation
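A minimal sketch of how both constraints might be satisfied at once: normalize every loop to a canonical string, the lexicographically least rotation of either the sequence or its reverse complement, so that two descriptions of the same physical loop compare equal. This is my own illustrative normalization, not the representation the team settled on.

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(seq))

def canonical(seq):
    """Canonical form of a circular DNA sequence: the lexicographically
    least rotation of either strand, giving an equality test that is
    invariant to circular shifts and to reverse complementation."""
    rotations = [s[i:] + s[:i] for s in (seq, revcomp(seq))
                 for i in range(len(s))]
    return min(rotations)

# Two descriptions of the same loop normalize to the same string.
assert canonical("ACGTT") == canonical("GTTAC")           # circular shift
assert canonical("ACGTT") == canonical(revcomp("ACGTT"))  # strand flip
```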


We now visit several lines of intrigue.

First, Knot Theory provides a representation of knots as real-valued vectors; distinct shapes, however, may collapse to the same (degenerate) vectors. Knots allow us to take our loop of DNA and place the putative recombinatory hotspots one on top of another. Missing from this item is precisely how to dope the vectors with our own sequence data.

Second, Lambda Math and Logical Programming provide a language and a method, respectively, to map vectors from left to right. The form of the abductive equations for this problem is yet to be discovered, however. We’re thinking about this method because we suspect that the recombinase enzyme activity can be completely expressed as a mathematical construct on our doped knot vectors. We hope that this construct can be expressed with abductive statements.

Third, Combinatory Calculus: actually, this item is in stark competition with Logical Programming as the functional crux of the model. Combinatory calculus (which is fairly distant from recombinatorics, mind) is a math that has shown all computable functions can be constructed from just two atoms. If it turns out that the final representation of a DNA loop looks more like arguments for these two atoms, then we may pursue this; but at present, it seems to be losing against Logical Programming. The allure of the two atoms subsides as we realize the complexity of even the addition function for integers.
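For the curious, the two atoms are the S and K combinators, and even reaching integer addition takes a tower of derived combinators. Here is a sketch in Python with Church numerals encoded as curried lambdas; the helper names (`ZERO`, `SUCC`, `ADD`) are my own labels, not part of the calculus.

```python
# S and K, the two atoms of combinatory logic, as curried Python lambdas.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

I    = S(K)(K)      # identity, derived from S and K alone
B    = S(K(S))(K)   # function composition, also derived
ZERO = K(I)         # Church numeral 0: applies its first argument 0 times
SUCC = S(B)         # successor: SUCC(n)(f)(x) == f(n(f)(x))

def church(n):
    """Build the Church numeral for n by repeated succession."""
    c = ZERO
    for _ in range(n):
        c = SUCC(c)
    return c

def to_int(c):
    """Read a Church numeral back out as a Python int."""
    return c(lambda k: k + 1)(0)

# Addition: apply SUCC m times to n. Already several layers deep.
ADD = lambda m: lambda n: m(SUCC)(n)

print(to_int(ADD(church(2))(church(3))))  # 5
```

Everything above `church` is built from nothing but S and K, which is precisely why even addition feels heavy in this calculus.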


Luckily, roughly a dozen papers have been recovered from various repositories that discuss knot math and how to hack it sufficiently to kindly represent DNA loops. We continue to read and discuss these papers until we feel it reasonable to raise them with the entire modeling group… that is, when the science is done and the engineering begins anew.

New Diagram for MSc-X3 (math paper)

featured post

without comments

Brief: I’m particularly happy with this diagram… I had something along these lines in my head for a while, but I never could figure out how to draw it correctly. I never thought that simplifying it to three easy steps was the smarter thing to do.

Some Assembly Required.

Eddie Ma

August 20th, 2009 at 12:08 pm


without comments

Brief: I forgot all about sqkillall.py! It’s a convenience script for killing all of the SharcNet jobs belonging to you! (More about it; Source code).
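For flavor, the heart of such a script might look like the sketch below. The `sqjobs`/`sqkill` command names and the column layout of their output are my assumptions for illustration, not a transcription of the real sqkillall.py.

```python
import getpass
import subprocess

def my_job_ids(listing, user):
    """Pick out the job ids owned by `user` from a scheduler job listing,
    assuming the id is the first column and the owner is the second."""
    ids = []
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == user:
            ids.append(fields[0])
    return ids

def kill_all_my_jobs():
    """Ask the scheduler for all jobs, then kill the current user's.
    Assumes hypothetical `sqjobs` and `sqkill` commands on the PATH."""
    me = getpass.getuser()
    listing = subprocess.run(["sqjobs"], capture_output=True,
                             text=True).stdout
    for jid in my_job_ids(listing, me):
        subprocess.run(["sqkill", jid])
```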

Eddie Ma

July 28th, 2009 at 1:02 pm