Ed's Big Plans

Computing for Science and Awesome

Archive for the ‘UWiGem’ tag

BioCompiler might start life as BioCOBOL


Update: Matthew has found an even more thorough review paper that discusses computer assisted synthetic biology approaches — it can be found at doi:10.1016/j.copbio.2009.08.007.

The iGEM competition year is drawing to a close. The teams are headed into November 2010 and have roughly one month left before attending the conference. I'm personally not attending the conference this year; I think the undergraduates will get more out of the experience.

The current year sees our continued efforts to precipitate a dedicated software team: a functioning, autonomous unit that will supplement and enhance Waterloo's impact in the iGEM competition. More importantly, we're going to do some very interesting science. We've had some success talking with other student groups across campus; notably, we should probably talk with the student IEEE/CUBE chapter when we have more work completed. We had involved BIC as well, but that was early on, when we had even less ground to stand upon (the primordial software team was mostly interested in BioMortar and BrickLayer at the time, the latter project having been taken up by the iGEM co-op students in Python this year).

This whole BioCompiler business started when Matthew uncovered a nice candidate problem: the compilation of a schematic for behaviour into a fully functioning synthetic biological circuit. Let's be as precise as we can here. I mean to say that we will take a description (which could resemble formal-language source code) and have it compiled into the sequence of BioBricks that will produce the desired behaviour.
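To ground the idea, here is a minimal sketch in Python, assuming a made-up behaviour vocabulary; the part numbers are real registry entries, but the mapping and the "compiler" itself are purely illustrative, not our design:

    # Hypothetical behaviour terms mapped to BioBrick part sequences.
    BEHAVIOUR_TO_PARTS = {
        "sense(IPTG)": ["BBa_R0010"],                            # lac promoter
        "report(GFP)": ["BBa_B0034", "BBa_E0040", "BBa_B0015"],  # RBS, GFP, terminator
    }

    def compile_behaviour(spec):
        """Translate a list of behaviour terms into a flat BioBrick sequence."""
        circuit = []
        for term in spec:
            circuit.extend(BEHAVIOUR_TO_PARTS[term])
        return circuit

    print(compile_behaviour(["sense(IPTG)", "report(GFP)"]))
    # ['BBa_R0010', 'BBa_B0034', 'BBa_E0040', 'BBa_B0015']

Even this toy exposes the real work: choosing which parts realize a behaviour, and in what order.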

This idea has been approached by several groups before, but each time a different subproblem was considered (this is a reorganization of Matthew's very nice list here).

  • Synthetic biology programming language: Genetic Engineering of Living Cells (GEC) (Microsoft Research) is a project that homes in on a formal language specification.
  • Synthetic biology computer assisted design (CAD): Berkeley Software (iGEM 2009) created a suite of items: Eugene (a formal language for synthetic biology), Spectacles (a visualizer integrating parts with their behaviours), and Kepler (a dataflow broker). As well, Berkeley is responsible for the award-winning Clotho (iGEM 2008, Best Software Tool), a workbench that connects with the parts registry database (amongst other possible resources).
  • Systems biology pathway reaction simulation: Systems Biology Markup Language (SBML), the Systems Biology Workbench (SBW) and Jarnac are a set of tools that perform systems biology analysis (which we consider to be an output of synthetic biology). SBML is the formal language, Jarnac is a reaction network simulator (which utilizes JDesigner as a front end) and SBW is the dataflow broker between SBML and Jarnac.
  • BioBrick specific pathway simulation: Minnesota's Team Comparator (iGEM 2008) created SynBioSS, a tool which estimates the concentrations of reactants and products given the appropriate BioBricks on a simulated circuit.

Update — here are a few more items thanks to Matthew Gingerich, George Zarubin and Andre Masella.

  • Molecular biology and bioinformatics analysis: The European Molecular Biology Open Software Suite (EMBOSS) is a toolkit developed by the European Molecular Biology Network (EMBnet) for bioinformatics. This might not be immediately relevant, but it is interesting. EMBOSS is actually relatively complete. More distantly along this vein, there’s also Bioconductor which focuses mostly on microarray analysis and is implemented in R.
  • More synthetic biology computer aided design (CAD): TinkerCell is a GUI-driven piece of software that supports extension with C++ and Python. TinkerCell, along with the suite created by Berkeley Software, are the two most promising target systems in which to integrate BioCompiler. Finally, there's GenoCAD, which appears to be in an early phase of development; this software looks to emphasize construction with correct syntax via attribute grammars. I'll have to read more about it.
  • Formal laboratory protocol description: BioCoder is another Microsoft Research project; the language is designed to be both human readable and complete enough for automation. It reminds me of standard operating procedures (SOPs) with greater precision. While BioCoder compiles from protocol to automation, we'll be compiling to circuitry and protocol. BioCoder will give us some insights into the kinds of protocols that others are thinking of.

There is of course more software, but these are the items with which we have become most familiar, that we like, and that we consider to be standards toward which our own work should strive.

Two interesting problems arise when we think about these subproblems. First, the programming languages specified (including Eugene from Berkeley's CAD suite) are exactly what they claim to be: formal specifications. This is possible because of how concrete they are; they literally document what a synthetic biology circuit is. But this isn't too different from what humans have been doing in iGEM all along. Second, the synthetic biology items don't really seem to talk to the systems biology items, whereas we expect the two, being input and output, to be inextricably linked. I explain my thoughts on both below.

Why a programming language?

Andre enlightened me to this the other day. Humans invented programming languages to do two things. On an arrow running from the concrete toward the esoteric, we have the practical concern of compressing the amount of code we write while retaining our programs' expressiveness; this is the origin of macro systems such as COBOL. On an arrow pointing in reverse, from the cerebral abstract down to the literal, we have the theoretical concern of mathematical beauty, of completeness; this is the origin of functional languages like Lisp. For humans to have any hope of creating such a Lisp-like language for synthetic biology, we would need to understand all of the reaction nuances of the system with which we tamper, at least inasmuch as a painful heuristic approximation. This is a feat we are nowhere near completing, though footholds are managed with systems biology.

So here we are: BioCOBOL is the first step, a simple but developing macro-like system that is in the same league of abstraction as the programming-language and CAD-like projects we've seen thus far. Only, we aim to increase the efficiency of circuit design, so that the programming language is not a literal mapping of the human document but rather an explanation of behaviour. We will abstract it ever so slightly with each iteration of development, departing further and further from CAD. A subteam headed by Brandon is currently developing the syntax and search algorithms required for the job; Brandon suggests that a weighted traversal of valid circuits should form our algorithmic primitive (a sketch follows below). My subteam is attempting to characterize as many known circuits in iGEM as possible: analyzing which pieces of input (stimuli: chemical concentrations, gradients, quorum signals, oscillators) can be compressed for their common usage, which function prototypes already exist (reacting to an input: promoters) for compression, and what the standard outputs are (the analogy to printing error messages: the GFP family), again in the effort to realize what is losslessly compressible. Our software should eventually provide the correct laboratory protocol as well.
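Since the syntax and scoring are still under design, the following is only a guess at that primitive: a best-first (weighted) traversal over candidate circuits, where start, expand, score and is_complete are hypothetical stand-ins for the machinery Brandon's subteam is building:

    import heapq

    def weighted_traversal(start, expand, score, is_complete):
        """Best-first search over candidate circuits: repeatedly grow the
        cheapest partial circuit until a complete, valid one is found."""
        frontier = [(score(start), start)]     # (weight, circuit) min-heap
        seen = set()
        while frontier:
            cost, circuit = heapq.heappop(frontier)
            if is_complete(circuit):
                return circuit
            if circuit in seen:
                continue
            seen.add(circuit)
            for successor in expand(circuit):  # valid one-part extensions
                heapq.heappush(frontier, (cost + score(successor), successor))
        return None                            # no valid circuit found

Circuits would need to be hashable (say, tuples of part names) for the seen set to work.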

In this sense, we are respecting what a compiler is before we even approach more sophisticated compilation: it transforms a document in one language into another.

Incidentally, I should mention that Jordan and George are working on a modelling problem with the lab team — I’m not clear on the specifics, but I take it they want to pull out some differential equations on a set of promoters.

Why do we care that Synthetic Biology logic should talk to Systems Biology logic?

Finally, it is clear that we will eventually want to become even more abstract: even more mathematically complete, even more expressive. While we may never know enough about systems biology to create BioLISP (in our lifetime), we expect there to be sufficient research for us to discover, and perhaps research we can conduct ourselves, to come ever closer. Systems biology allows us to think about synthetic biology in terms of reaction concentrations, free energy, etc. It gives the notion of compilation its own ground; the ground we want to cover. Imagine the perfect BioCompiler: stating a problem to be compiled in terms of the input and output of the system. Let's be precise here: I mean to say, the products and reactants, or behaviour, of our circuit. Let us describe what our circuit will do instead of what our circuit is made of. This missing link, this compiler, is the logical final step of BioCompiler.

Eddie Ma

October 26th, 2010 at 10:57 am

The Return of Phi C31


I've been so out of the loop with iGEM over the last month. I'll need to figure out how to get back into the swing of things, probably starting with the post-mortem meeting on Tuesday. Generally, since no new maths could be put on the table that actually encompassed the problem well, the brute force approach was kicked into high gear with a few more filters to increase the probability of success.

Call these "System Filters", since they aren't based on biologically significant concepts; they're just sanity checks that are conceptually consistent with the project (i.e. we'd run out of hard disk space otherwise…). Significantly, Matthew implemented "Blank Stare", which destroys reactants that exceed a given length (thus preventing them from hogging the CPU looking for less parsimonious solutions). Less significant were Andre's "Lone Gunman", which deletes arbitrary chromosomes with stochastic efficiency, and my "Tag", which prevents chromosomes from cross-reacting. (Sketches of all three appear below.)

(On second thought, “Tag” IS a “Biological Filter” not a “System Filter” because it removes redundancy by implementing the rule that we only admit bacteria that have exactly one chromosome.)

I should mention that "significance" above isn't about the triviality of the code; it's about the amount of anticipated efficiency boon we'd gain from an item's deployment.
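The filters themselves are simple to state. Here is a guess at how they might look, with reactants and chromosomes as token sequences and a bacterium as a list of chromosomes; these data shapes are my assumption, not our actual solver code:

    import random

    def blank_stare(reactants, max_len):
        """Matthew's filter: drop any reactant longer than max_len tokens."""
        return [r for r in reactants if len(r) <= max_len]

    def lone_gunman(chromosomes, p):
        """Andre's filter: delete chromosomes at random with probability p."""
        return [c for c in chromosomes if random.random() > p]

    def tag(bacteria):
        """My filter: admit only bacteria carrying exactly one chromosome."""
        return [b for b in bacteria if len(b) == 1]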

Tomorrow's post-mortem will continue the work I've started on our iGEM 2009 Wiki Modelling page… We'll decide what we want to mention, how close we got to our solution, and how to precisely characterize the problem space uncovered by our various attempts.

Additionally, we should probably discuss the relevance of John’s attN site cloning and tests to see if the operators show any sign of degeneracy, and which ones in particular.

Finally, I should mention that Brandon has been working on a C++ port of the whole application we wrote in Python, to elucidate how much the virtual machine impacted the performance of our solver. The team is quite divided on this idea, with a big half (myself included) thinking that the exponential growth due to the algorithm is the greater factor. Brandon may have some answers for us when it's up and running.

Knots were the wrong math


The Knot Math was eventually understood to be the wrong kind of math to model our problem on.

Knots take the form of a circle that has been broken and rejoined at a point on its circumference after being wrapped about itself an arbitrary number of times. What we're working on doesn't utilize any function that twists loops of DNA in the same way. The knot maths provide a way to vectorize these shapes as real values, but do not provide an easy way to insert our own data. Two properties underlie the incompatibility. The first is that the knot maths consider two knots equal if their topologies, with respect to the number of twists they have, are identical; our problem does not consider such knots equal, as distance and sequence specificity (imagine each particle on the rope circle being labeled) are required. Second, what we produce overlaps arbitrarily, by laying a circle segment on top of another circle segment, whereas the knot maths produce overlaps with twists. While I think there could be a clever way to identify our problem with the knot math, I don't think there is a feasible or time-effective way to do so.

Brain continues to storm.

I did manage to uncover some very exciting papers, however. One of them was on a piece of software called TangleSolve, which does do site-specific recombination and visualization of DNA knots; reading about this software was actually instrumental in understanding why our problem was not identifiable here. Side note: topoisomerase is an enzyme involved in DNA knot formation and supercoil relaxation.

Eddie Ma

September 15th, 2009 at 1:29 pm

DNA … Knots and Lambdas


A long time ago, one Andre led a team of students on a journey of mathematical and computational modeling; at the very least, we have reached some useful insights from our tidy trip, albeit at a distance from the solution.

Presented here is a very jumbled, very abridged account of the activities of the modeling team this summer and the eventual realization that brings us to now.

The Problem Revisited

So we have a sequence. Actually, two sequences. Actually, we have two loops: two loops of DNA that will contain specific sequences used for cassette exchange. The problem is the design of these two loops. We want to design them so that we can predictably exchange specific objects between them. We use an enzyme for recombination that is sensitive to specific sites to perform the exchanges.

The above paragraph is an abstract-abstract of the UW iGEM project.

The Top Down Approach

What I eventually labeled in my mind as the top-down approach is called that in analogy to parsing. In parsing, we build a tree. We can do this conceptually from the bottom up, or from the top down. From the bottom up, we know everything we need to know to build the tree… we know as much as we want to know; we even know when no tree exists for this particular string of tokens. From the top down, we'd have to use some magical induction to chain tokens together by determining a structure that the tokens will find pleasing.

The magical induction of the top-down approach is none other than brute force. There is no magic, just an exponential explosion. The base of this power is the length of the string and the exponent of the power alludes to the complexity and depth of the grammar.

We don't parse for the sequence problem; that is, we assume the grammar to be irrelevant, that a flat, degenerate chain is a sufficient tree. We operate on sequences with our enzyme instead.

For our sequence problem, we pick three loops. We see if the first two loops add together, with respect to the enzyme, to make the third loop. By hand, one is tempted to use various heuristics of deductive logic, but it became complicated and soon overflowed the dozen or so objects a human brain may accommodate per instant. The machine was dragged in, and the three loops were shown to it using Python (a toy version of this search appears below).

We presented three loops of one logical suite of tokens. It ran to completion and, to no surprise, this was not our solution. We did this again for all three-loops where each loop is one logical suite. That ran to completion and again, no solution; again to be expected, as the loops were not yet long enough to accommodate the anticipated length of the solution.

One logical block became two, became three… and at each step, the base of the exponent to our magical induction grew.

Four logical blocks… we halted the experiment; the machine would’ve taken a month to finish that block.

The exponential explosion was real, and our bid that the solution may be just short enough to fit therein was proven false.
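For the curious, here is a toy version of that brute force, with loops as tuples of tokens, circular equality by smallest rotation, and the enzyme crudely modeled as fusing two circles at a shared site; every name here is an illustrative stand-in, not our actual solver:

    from itertools import product

    def canonical(loop):
        # circular equality: compare loops by their smallest rotation
        return min(loop[i:] + loop[:i] for i in range(len(loop)))

    def fuse(a, b, site):
        # toy site-specific recombination: a crossover at a shared site
        # fuses two circular sequences into one, joined at that site
        ia, ib = a.index(site), b.index(site)
        return a[ia:] + a[:ia] + b[ib:] + b[:ib]

    def search(tokens, length, site, target):
        # try every pair of loops `length` blocks long: k^(2*length)
        # pairs, which is exactly the explosion described above
        want = canonical(target)
        for a in product(tokens, repeat=length):
            if site not in a:
                continue
            for b in product(tokens, repeat=length):
                if site in b and canonical(fuse(a, b, site)) == want:
                    return a, b
        return None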

The Bottom Up Approach

Months passed, and various members went on various summer excursions… and many have returned now. We discuss many theoretical approaches. We resample the problem, sniffing for hints. Actually, it's been Andre, Jordan and me; we haven't discussed this with the remaining modeling team yet because of just how vague our new lines of intrigue are. I will revise my opinion if the thought that more individuals means faster solution finding crosses my mind again.

I've had a few conversations: one with my MSc advisor, Stefan; one with a friend, Andrew Baker; and another with my undergraduate project advisor, Bettina. So far, no one has seen this specific problem before or can point to an approach, technology or research that addresses it…

We reformalize the problem with the following constraints (a sketch of a canonical form that satisfies both appears after the list).

  • Must deal with the circularity of DNA, hence be circularly shift invariant
  • Must accommodate or encapsulate reverse complementation
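A minimal sketch of a representation satisfying both constraints: compare circular sequences by the lexicographically smallest rotation of the sequence or its reverse complement, so that two loops are equal exactly when one is a rotation and/or the reverse complement of the other. This is illustrative only; our real representation is still undecided:

    def revcomp(seq):
        # reverse complement of a DNA string
        comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
        return ''.join(comp[base] for base in reversed(seq))

    def canonical(seq):
        # smallest rotation of the sequence or its reverse complement:
        # shift invariant and reverse-complement invariant by construction
        candidates = [s[i:] + s[:i]
                      for s in (seq, revcomp(seq))
                      for i in range(len(s))]
        return min(candidates)

    assert canonical("ATGC") == canonical("GCAT") == canonical(revcomp("ATGC"))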

Intrigue

Several lines of intrigue we visit now.

First, Knot Theory provides a representation for knots as real-valued vectors; unique shapes, however, may produce degenerate vectors. Knots allow us to take our loop of DNA and place the putative recombinatory hotspots one on top of another. Missing from this item is precisely how to dope the vectors with our own sequence data.

Second, Lambda Math and Logical Programming provide a language and a method, respectively, to map vectors from left to right. The form of the abductive equations for this problem is yet to be discovered, however. We're thinking about this method because we suspect that the recombinase enzyme activity can be completely expressed as a mathematical construct on our doped knot vectors. We hope that this construct can be expressed with abductive statements.

Third, Combinatory Calculus. Actually, this item is in stark competition with Logical Programming as the functional crux of the model. Combinatory calculus (which is fairly distant from combinatorics, mind) is a math that has shown all other math functions can be constructed from just two atoms. If it turns out that the final representation of a DNA loop looks more like arguments for these two atoms, then we may pursue this; but at present, it seems to be losing against Logical Programming, as the allure of the two atoms subsides once we realize the complexity of even the addition function for integers. (A tiny sketch of the two atoms follows.)
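To make the two atoms concrete, here they are as Python lambdas (the atoms are the classic S and K combinators; the Python dressing is just for illustration). Everything else, even the identity function, must be assembled from them, which hints at how bulky something like integer addition becomes:

    # The two atoms: S distributes an argument, K discards one.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x

    # The identity function already costs three atoms: I = S K K.
    I = S(K)(K)     # I(v) = K(v)(K(v)) = v
    assert I(42) == 42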

Direction

Luckily… roughly a dozen papers have been recovered from various repositories that discuss knot math and how to hack it sufficiently to kindly represent DNA loops. We continue to read and discuss these papers until we feel it reasonable to raise it with the entire modeling group… that is, when the science is done and the engineering begins anew.

oGEM: An iGEM Story


Last Thursday, the Waterloo iGEM team had an online conference over Skype with the iGEM teams of Toronto and Ottawa. Also present was Andrew Hessel, the seeder of Canadian iGEM teams… It was pretty extensive, so I'll just discuss the parts that ended up being immediate goals for the Waterloo team.

(Photo: oGEM Meeting Over Skype)

The objective is to make an Ontario federation for synthetic biology under the iGEM scaffold a reality; we would eventually expand out of iGEM and cover synthetic biology across Canada, but: plan small, think big.

There must also be incentives for being part of such a federation; an obvious answer is a network of distributed services which, as oGEM, are greater than the sum of their parts.

Plan Small, Think Big

One of the first things we can experiment with is the idea of a social networking application. This feature is being investigated by the Waterloo team; Arianne has created a mock-up using Elgg. The objective is to allow individuals across oGEM to know what expertise exists in the network, and to contact appropriate users for collaboration or help based on the interests or skills listed by each user.

Sigma is for Summation

Incentive services for oGEM are being tackled by our team. Andre wants to introduce a federated database of strains and cultures (codenamed BioMortar). Whereas iGEM offers clonable BioBricks, the issue remains that cellular transformation is not deterministic: it may work some of the time, or most of the time, but for some teams certain bricks just aren't successfully cloned. This federation would allow the cataloguing of living frozen strains in freezers across oGEM and, if users are willing, all synthetic open-source strains. Eventually, someone seeking a strain they've had issues with would message someone with a working copy, as it were, and request that it be shipped.

Caveats

We expect caveats to emerge, the first of which is dedication. From our meeting, it is clear that working groups must emerge to take over tasks, working groups that are passionate about their own objectives. I suppose UWiGEM represents two working groups, each operating on one of the above steps. Ottawa has volunteered to look at the legal caveats: how oGEM identifies itself as a legal entity, as well as its level of permitted activity while still being a part of iGEM and synthetic biology across the nation, is all very vague. It's good that someone has an idea of how to investigate this feature of the problem terrain!

Group Photo

Attending the meeting on the Waterloo side…

(Photo: oGEM Chat, Waterloo Side)

Left to right in the above photo: Danielle, Andre, John, Leah.

iGEM: Freedom Unhashed


An iGEM modeling meeting was held yesterday wherein Andre revealed his big plans for switching the team into enduserhood. Unfortunately, I didn’t follow along as well as I could have this time around and can really only document and comment on the bottom line.

We've again self-organized into two or three teams based on task. The first team is charged with creating a hashing function which creates a sequence of integrase-usable tokens from an integer. The second (and third?) team is responsible for creating a check to ensure that a given product corresponds correctly to a given pair of reactant sequences. Finally, the dangling task of creating an even bigger external harness, along with modifications to the present main.py program logic, is likely being handled by the latter team.

The hashing task is kind of interesting because it essentially calls for unhashing an integer into a meaningful sequence, rather than hashing a meaningful sequence into a unique integer. Since the reactant strings can themselves be ordered lexicographically, the task quickly becomes an enumeration or counting problem, whereupon we find the most efficient way to count through the possible permutations of reactant tokens until we reach the integer that we want. The backward task (what we're doing) may end up being implemented as the forward task with a sequential search. (A sketch of one possible enumeration follows.)
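Here is a guess at what the unhashing could look like, counting through token sequences by length and then lexicographically (bijective base-k numbering); the token alphabet is hypothetical, and the team may well pick a different enumeration:

    def unhash(n, tokens):
        """Map integer n >= 0 to the n-th token sequence, counting by
        length and then lexicographically over the token alphabet."""
        k, seq = len(tokens), []
        n += 1                    # shift so n == 0 is the first sequence
        while n > 0:
            n, r = divmod(n - 1, k)
            seq.append(tokens[r])
        return tuple(reversed(seq))

    def hash_(seq, tokens):
        """The forward direction: recover the integer from a sequence."""
        n = 0
        for t in seq:
            n = n * len(tokens) + tokens.index(t) + 1
        return n - 1

    tokens = ("attB", "attP")     # hypothetical integrase-usable tokens
    assert all(hash_(unhash(n, tokens), tokens) == n for n in range(100))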

The hashing subteam is headed by Jordan, the modeling head from last year, and is joined by Wylee and me. I honestly don't see this as a task that can't be completed by one person in a single bout of insanity, so it's likely that I'll hop over to Andre's reactant-product verification team whenever this finishes.

We’ve planned another meeting for Tuesday 5pm next week to pull whatever we have together and to tackle any nascent problems.

Reactant-product verification is, I think, the more straightforward item, at least to explain; it is likely more technically challenging. Basically, we make the reaction go forward, and if the product matches what we wanted, then we favour the persistence of the product. … Err, at least that's how I understood it… I'll probably need to pop in and ask about it on Thursday before the big oGEM Skype meeting. (A guess at the shape of the check appears below.)
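Under my possibly wrong reading, the check reduces to: run the reaction forward and compare. Something like the following, where react stands in for the real reaction step, which I have not seen:

    def verify(reactant_a, reactant_b, expected, react):
        """Favour persisting the product only when the forward reaction
        of the two reactants yields exactly the product we wanted."""
        return react(reactant_a, reactant_b) == expected

    # with naive concatenation standing in for the reaction:
    assert verify(("x",), ("y",), ("x", "y"), lambda a, b: a + b)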

Side note: oddly, both Shira and John were present at this meeting; it probably means we're expecting progress 😀

Eddie Ma

July 22nd, 2009 at 5:36 pm