Ed's Big Plans

Computing for Science and Awesome


Archive for the ‘Mathematical Modeling’ tag

The Return of Phi C31

without comments

I’ve been so out of the loop with iGEM over the last month. I’ll need to figure out how to get back into the swing of things, probably starting with the post-mortem meeting on Tuesday. Generally, since no new maths could be put on the table that actually encompassed the problem well, the brute force approach was kicked into high gear with a few more filters to increase the probability of success.

Call these “System Filters” since they aren’t really based on biologically significant concepts; they’re just sanity checks that are conceptually consistent with the project (i.e. we’d run out of hard disk space otherwise…). Significantly, Matthew implemented “Blank Stare”, which destroys reactants that exceed a given length (thus preventing them from hogging the CPU looking for less parsimonious solutions). Less significant were Andre’s “Lone Gunman”, which deletes arbitrary chromosomes with stochastic efficiency, and my “Tag”, which prevents chromosomes from cross-reacting.

(On second thought, “Tag” IS a “Biological Filter” not a “System Filter” because it removes redundancy by implementing the rule that we only admit bacteria that have exactly one chromosome.)

I should mention that “significance” above isn’t about the triviality of the code; it’s about the anticipated efficiency gain we’d see from an item’s deployment.
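For the record, the three filters boil down to something like the sketch below; the names and the data shapes (reactants and chromosomes as token lists, bacteria as lists of chromosomes) are illustrative placeholders rather than our actual code.

    import random

    MAX_LENGTH = 50  # hypothetical cutoff for "Blank Stare"

    def blank_stare(reactants, max_length=MAX_LENGTH):
        """Destroy any reactant that exceeds the given length."""
        return [r for r in reactants if len(r) <= max_length]

    def lone_gunman(chromosomes, survival_probability=0.5):
        """Delete arbitrary chromosomes with stochastic efficiency."""
        return [c for c in chromosomes if random.random() < survival_probability]

    def tag(bacteria):
        """Admit only bacteria that carry exactly one chromosome."""
        return [b for b in bacteria if len(b) == 1]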

Tomorrow’s post-mortem will continue the work I’ve started on our iGEM 2009 Wiki Modelling page… We’ll decide what we want to mention and how close we got to our solution, and figure out how to precisely characterize the problem space uncovered by our various attempts.

Additionally, we should probably discuss the relevance of John’s attN site cloning and tests to see if the operators show any sign of degeneracy, and which ones in particular.

Finally, I should mention that Brandon has been working on a C++ port of the whole application we wrote in Python to elucidate how much the virtual machine impacted the performance of our solver. The team is quite divided on this idea, with the larger half (myself included) thinking that the exponential growth due to the algorithm is the greater factor; Brandon may have some answers for us when it’s up and running.

Knots were the wrong math

without comments

The Knot Math was eventually understood to be the wrong kind of math to model our problem on.

Knots take the form of a circle that has been broken and rejoined at a point on its circumference after being wrapped about itself an arbitrary number of times. What we’re working on doesn’t use any function that twists loops of DNA in that way. The knot maths provide a way to represent these shapes as real-valued vectors, but do not provide an easy way to insert our own data. Two properties account for the incompatibility. First, the knot maths consider two knots equal if their topology with respect to the number of twists is identical; our problem does not consider those two knots equal, because distance and sequence specificity (imagine each particle on the rope circle was labeled) are required. Second, what we produce overlaps arbitrarily by laying one circle segment on top of another, whereas the knot maths produce overlaps with twists. While I think there could be a clever way to identify our problem with the knot math, I don’t think there is a feasible or cost (time) effective way to do this.

Brain continues to storm.

I did manage to uncover some very exciting papers, however. One of them was on a piece of software called TangleSolve, which does do site-specific recombination and visualization of DNA knots; reading about this software was actually instrumental in understanding why our problem was not identifiable here. Side note: topoisomerase is an enzyme involved with DNA knot formation and supercoiling relaxation.

Eddie Ma

September 15th, 2009 at 1:29 pm

DNA … Knots and Lambdas

without comments

A long time ago, one Andre led a team of students on a journey of mathematical and computational modeling; at the very least, we have reached some useful insights from our tidy trip, albeit at a distance from the solution.

Presented here is a very jumbled, very abridged account of the activities of the modeling team this summer and the eventual realization that brings us to now.

The Problem Revisited

So we have a sequence. Actually, two sequences. Actually, we have two loops. Two loops of DNA that will contain a specific sequence used for cassette exchange. The problem is the design of these two loops. We want to design them so that we can predictably exchange specific objects between them. We use an enzyme for recombination, one that is sensitive to specific sites, to perform the exchanges.

The above paragraph is an abstract-abstract of the UW iGem Project.

The Top Down Approach

What I eventually labeled in my mind as the top-down approach is called that in analogy to parsing. In parsing, we build a tree. We can do this conceptually from the bottom-up or from the top-down. From the bottom-up, we know everything we need to know to build the tree… we know as much as we want to know; we even know whether no tree exists for this particular string of tokens. From the top-down, we’d have to use some magical induction to chain tokens together by determining a structure that the tokens will find pleasing.

The magical induction of the top-down approach is none other than brute force. There is no magic, just an exponential explosion. The base of this power is the length of the string and the exponent of the power alludes to the complexity and depth of the grammar.

We don’t parse for the sequence problem; that is, we assume the grammar to be irrelevant and a flat, degenerate chain to be a sufficient tree. We operate on sequences with our enzyme instead.

For our sequence problem, we pick three loops. We see if the first two loops add together, with respect to the enzyme, to make the third loop. By hand, one is tempted to use various heuristics of deductive logic, but it became complicated and soon overflowed the dozen or so objects a human brain may accommodate per instant. The machine was dragged in, and the three loops were shown to it using Python.

We presented three loops of one logical suite of tokens. It ran to completion and to no surprise, this was not our solution. We did this again for all three-loops where each loop is one logical suite. That ran to completion and again, no solution– again to be expected; not yet long enough to accommodate the anticipated length of the solution.
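The brute force itself is nothing clever. Assuming the enzyme step is wrapped in some react() function (a placeholder name, as is target_matches()), it amounts to a sketch like this:

    from itertools import product

    def brute_force(candidate_loops, react, target_matches):
        """Try every ordered triple of candidate loops; keep the triples where
        the first two loops react to produce the third."""
        hits = []
        for loop_a, loop_b, loop_c in product(candidate_loops, repeat=3):
            # react() may yield several products per pair of reactant loops
            if any(target_matches(p, loop_c) for p in react(loop_a, loop_b)):
                hits.append((loop_a, loop_b, loop_c))
        return hits

The triply nested enumeration over candidates is exactly where the exponential explosion described below comes from.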

One logical block became two, became three… and at each step, the base of the exponent to our magical induction grew.

Four logical blocks… we halted the experiment; the machine would’ve taken a month to finish that block.

The exponential explosion was real, and our bid that the solution may be just short enough to fit therein was proven false.

The Bottom Up Approach

Months passed, various members went on various summer excursions… and many have returned now. We discuss many theoretical approaches. We resample the problem, sniffing for hints. Actually, it’s been Andre, Jordan and me … we haven’t discussed this with the remaining modeling team yet because of just how vague our new lines of intrigue are. I will revise my opinion if the thought that more individuals means faster solution finding crosses my mind again.

I’ve had a few conversations, one with my MSc advisor, Stefan; one with a friend Andrew Baker; and another with my undergraduate project advisor, Bettina. So far, no one’s seen this specific problem before or can allude to either an approach, technology or research that they’ve seen…

We reformalize the problem with the following constraints (one way to encode them is sketched just after the list).

  • Must deal with the circularity of DNA, hence be invariant under circular shifts
  • Must accommodate or encapsulate reverse complementation
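As promised, here is a small sketch of one way to encode both constraints, assuming single-letter A/C/G/T tokens: reduce every loop to a canonical form that is the same for all rotations and for the reverse complement, so two loops compare equal exactly when they are the same circular molecule.

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def rotations(s):
        """Every circular shift of the loop, written out as a linear string."""
        return (s[i:] + s[:i] for i in range(len(s)))

    def reverse_complement(s):
        return s.translate(COMPLEMENT)[::-1]

    def canonical(loop):
        """Lexicographically smallest rotation of the loop or of its reverse complement."""
        return min(min(rotations(loop)), min(rotations(reverse_complement(loop))))

    # canonical("ATGC") == canonical("CATG") == canonical(reverse_complement("ATGC"))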

Intrigue

Several lines of intrigue we visit now.

First, Knot Theory provides a representation of knots as real-valued vectors; unique shapes, however, may produce degenerate vectors. Knots allow us to take our loop of DNA and place the putative recombinatory hotspots one on top of another. Missing from this item is precisely how to dope the vectors with our own sequence data.

Second, Lambda Math and Logical Programming provide a language and a method, respectively, to map vectors from left to right. The form of the abductive equations for this problem is yet to be discovered, however. We’re thinking about this method because we suspect that the recombinase enzyme activity can be completely expressed as a mathematical construct on our doped knot vectors. We hope that this construct can be expressed with abductive statements.

Third, Recombinatory Calculus. This item is actually in stark competition with Logical Programming as the functional crux of the model. Recombinatory Calculus (which is fairly distant from Recombinatorics, mind) is a math in which all other math functions can be constructed from just two atoms. If it turns out that the final representation of a DNA loop looks more like arguments for these two atoms, then we may pursue this, but at present it seems to be losing against Logical Programming; the allure of the two atoms subsides as we realize the complexity of even the addition function for integers.
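For the curious, and assuming the two atoms in question are the classic S and K combinators of combinatory logic (my reading, not a team decision), a toy Python encoding already hints at why integer addition gets unwieldy:

    # The two atoms, written as curried Python lambdas.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x

    I = S(K)(K)          # even the identity function takes three symbols
    assert I("dna") == "dna"

    # Church numerals are the usual route to integers in this world; the
    # addition below still leans on raw lambdas, and its pure S/K spelling
    # is longer still.
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD = lambda m: lambda n: m(SUCC)(n)
    to_int = lambda n: n(lambda k: k + 1)(0)
    assert to_int(ADD(SUCC(ZERO))(SUCC(SUCC(ZERO)))) == 3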

Direction

Luckily… roughly a dozen papers have been recovered from various repositories that discuss knot math and how to hack it sufficiently to kindly represent DNA loops. We continue to read and discuss these papers until we feel it reasonable to raise them with the entire modeling group… that is, when the science is done and the engineering begins anew.

iGEM: Freedom Unhashed

with 2 comments

An iGEM modeling meeting was held yesterday wherein Andre revealed his big plans for switching the team into enduserhood. Unfortunately, I didn’t follow along as well as I could have this time around and can really only document and comment on the bottom line.

We’ve again self-organized into two or three teams based on task. The first team is charged with creating a hashing function that produces a sequence of integrase-usable tokens from an integer. The second (and third?) team is responsible for creating a check to ensure that a given product corresponds correctly to a given pair of reactant sequences. Finally, the dangling task of creating an even bigger external harness, along with modifications to the present main.py program logic, is likely being handled by the latter team.

The Hashing Task is kind of interesting because it essentially calls for unhashing an integer into a meaningful sequence rather than hashing a meaningful sequence into a unique integer. Since the reactant strings can themselves be ordered lexicographically, the task quickly becomes an enumeration or counting problem, whereupon we find the most efficient way to count through the possible permutations of reactant tokens until we reach the integer that we want. The backward task (what we’re doing) may end up being implemented as the forward task with a sequential search.
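A minimal sketch of the unhashing idea, assuming a fixed alphabet of integrase-usable tokens and fixed-length sequences (both assumptions, and TOKENS below is a made-up alphabet): counting in base len(tokens) already gives a bijection between integers and token sequences.

    def unhash(index, tokens, length):
        """Return the index-th sequence of `length` tokens in lexicographic order."""
        base = len(tokens)
        sequence = []
        for _ in range(length):
            index, digit = divmod(index, base)
            sequence.append(tokens[digit])
        return list(reversed(sequence))

    def hash_sequence(sequence, tokens):
        """The forward task: recover the integer from a token sequence."""
        index = 0
        for token in sequence:
            index = index * len(tokens) + tokens.index(token)
        return index

    TOKENS = ["a", "b", "p", "q"]  # hypothetical integrase-site tokens
    assert hash_sequence(unhash(42, TOKENS, 4), TOKENS) == 42

If the real enumeration turns out not to be a plain cross product (variable lengths, forbidden adjacencies), the forward-task-plus-sequential-search fallback mentioned above still works, just more slowly.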

The hashing subteam is headed by Jordan, the modeling head from last year, and is joined by Wylee and me. I honestly don’t see this as a task that can’t be completed by one person in a single bout of insanity, so it’s likely that I’ll hop over to Andre’s reactant-product verification team whenever this finishes.

We’ve planned another meeting for Tuesday 5pm next week to pull whatever we have together and to tackle any nascent problems.

Reactant-Product Verification is, I think, the more straightforward item, at least to explain; it is likely more technically challenging. Basically, we make the reaction go forward, and if the product matches what we wanted, then we favour the persistence of the product. … Err, at least that’s how I understood it… I’ll probably need to pop in and ask about it on Thursday before the big oGEM Skype meeting.
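In code, as I currently understand it, the check is just a forward simulation plus a comparison; react() and same_loop() are placeholder names for the enzyme step and the loop-equality test.

    def verify(reactant_a, reactant_b, wanted_product, react, same_loop):
        """Run the reaction forward; the product persists only if it is the one we wanted."""
        return any(same_loop(p, wanted_product) for p in react(reactant_a, reactant_b))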

Side note– Oddly, both Shira and John were present at this meeting– it probably means we’re expecting progress 😀

Eddie Ma

July 22nd, 2009 at 5:36 pm

Operator Group Meeting

without comments

The Operator Group (UWiGem/Modeling/Operators) had a meeting about a week ago; it ended up being between three people: Matthew, Andre and me at the iGem office. We’ve basically figured out everything we needed to in terms of raw interfaces between our module and the remaining two modules (Filtering Group and Giant Scaffold Group). The DNAClass was updated with the needs Andre presented, one of which is the ability to iterate over a DNAObject while yielding both the token index and the token in a duplet: (index, token).

The implementation of the enumerate() built-in in Python (PEP 279) doesn’t allow this counting behaviour to be overridden; it always counts a collection from zero as it iterates over it. Ideally, the count should reflect the index on the circular DNA strand, which means it should be able to count forward or backward (to iterate as the reverse complement) and start from any arbitrary position in the loop.

Note that the reverse complement copy constructor (DNAObject.rc()) does not cause the indexes to be reversed… It actually produces a reverse complement strand and doesn’t do anything special with the indices (i.e. the new strand’s index increments positively as it iterates forward). This behaviour is being debated now: on the one hand, it’s correct because a reverse complement strand is a new strand; on the other hand, it is not a strand de novo, since it came from a positive sequence.
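Here is a sketch of the kind of enumerator being asked for, with the DNAObject internals reduced to a plain token sequence for illustration:

    def circular_enumerate(tokens, start=0, step=1):
        """Yield (index, token) duplets once around a circular sequence, starting
        at `start` and counting forward (step=+1) or backward (step=-1); the
        built-in enumerate() can only count forward from zero."""
        n = len(tokens)
        for k in range(n):
            i = (start + k * step) % n
            yield i, tokens[i]

    # list(circular_enumerate("ATGC", start=2, step=-1))
    # -> [(2, 'G'), (1, 'T'), (0, 'A'), (3, 'C')]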

I’m now waiting for Andre to let me know about the functions and data frameworks needed for the Operators module; my feeling is that the functions will be the straightforward integrase enzyme actions and that the data framework will simply be a Python list.

Eddie Ma

June 26th, 2009 at 2:42 pm

Modeling Meeting

without comments

Modeling Team Selection with Flush();

A modeling meeting occurred on Wednesday. Andre led off the discussion and revisited the entire program layout in a nice chalkboard cartoon. Unfortunately, Andre generally doesn’t push down hard enough or make wide enough lines with the chalk to produce a high-contrast image against the blackboard for photography (i.e. faint drawing => no photos, sorry).

The discussion saw the formalization and division of the programming problem into three distinct software components as follows.

  • Genetic Fragment Operators
  • Genetic Fragment Filters
  • Overall Program Logic

Genetic Fragment Operators

These are the functions that represent reverse complementation, enzyme activity, etc.

Genetic Fragment Filters

These are functions that represent removing uninteresting, ‘inert’, undesirable and fatal fragments of DNA. These definitions will become more precise once we’ve worked on the project a bit and better understand the philosophical correctness of each of these notions.

Overall Program Logic

The overall program logic will constitute producing some structure that represents a Big Bag of DNA (as opposed to a cell), the communication between this Big Bag, the Operator module and the Filter module, and of course our main program loop.
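Put together, the overall logic probably ends up looking something like the loop below; the Big Bag contents, the operator and filter callables and the is_solution() check are all placeholder names standing in for whatever we actually build.

    def run(big_bag, operators, filters, is_solution, max_generations=100):
        """Main program loop: grow the Big Bag with every operator, prune it with
        every filter, and stop when a solution fragment appears or we give up."""
        for generation in range(max_generations):
            products = []
            for operate in operators:
                products.extend(operate(big_bag))   # e.g. enzyme action, reverse complementation
            big_bag = big_bag + products
            for keep in filters:
                big_bag = keep(big_bag)             # e.g. drop inert or fatal fragments
            if any(is_solution(fragment) for fragment in big_bag):
                return generation, big_bag
        return None, big_bag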

What I’m doing…

I’ve been tasked with producing a universal representation of DNA, which includes a circular iterator on a loop of DNA with an arbitrary starting position. This is OK to do in Python with the use of the ‘yield’ keyword. I will be borrowing from Jordan’s, Brendan’s and my own previous ideas for this representation; we want an easy single-letter-token system and, for the moment, are happy with the single-byte space ASCII has to offer.
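The representation itself will likely be no more exotic than the sketch below (illustrative names, not the eventual class): a string of single-letter ASCII tokens plus a generator that walks the loop from an arbitrary start.

    class CircularDNA:
        def __init__(self, tokens):
            self.tokens = str(tokens)       # single-letter tokens, one byte each

        def iter_from(self, start=0):
            """Walk once around the loop beginning at an arbitrary position."""
            n = len(self.tokens)
            for k in range(n):
                yield self.tokens[(start + k) % n]

    # "".join(CircularDNA("ATGC").iter_from(2)) == "GCAT"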

Eddie Ma

June 12th, 2009 at 8:41 am
