Cbcb:Pop-Lab:Ted-Report

Older Entries

2009

January 15, 2010

Minimus Documentation

Presently, the only relevant Google hit for "minimus" on the first page of results is the SourceForge wiki. The only example on this page is incomplete and appears to be an early draft made during development.

Ideally, it should be easy to find a complete guide with the general format:

  • Simple use case:
`toAmos -s path/to/fastaFile.seq -o path/to/fastaFile.afg`
`minimus path/to/fastaFile` (i.e., the prefix, with no extension)
  • Necessary tools for setup (toAmos)
  • Other options
  • etc.
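
A complete guide could even end with a scripted version of that use case. For example (a minimal sketch in Python; the paths are hypothetical and both tools are assumed to be on the PATH):

 # Sketch of scripting the simple use case above (hypothetical paths).
 import subprocess

 prefix = "path/to/fastaFile"

 # Convert the FASTA reads into an AMOS message file (.afg).
 subprocess.check_call(["toAmos", "-s", prefix + ".seq", "-o", prefix + ".afg"])

 # Run the minimus pipeline on the prefix (it expects prefix.afg to exist).
 subprocess.check_call(["minimus", prefix])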

The description found on the Minimus/README page (linked to from the middle of the starting page) is more appropriate, but it features use cases that may no longer be common and references another required tool (toAmos) without linking to it or describing how to obtain it. A description of that tool can be found on the Amos File Conversion Utilities page (again, linked to from the starting page), but it is less organized than what I've come to expect from a project page, and it is easy to get lost or distracted by the rest of the Amos documentation while trying to piece together the necessary steps for a basic assembly.

Comparative Network Analysis pt. 2

  • Meeting with Volker this Friday to discuss how best to apply network alignment to what he's doing
  • I'm simultaneously trying to find a way to apply my network alignment technique to predicting genes in metagenomic samples
    • I've been trying to find a way to get beyond the restriction that my current program requires genes to be annotated with an EC number. A potentially interesting next step may be to use BioPython to BLAST the sequence of each enzyme annotated in every micro-organism in KEGG against a metagenomic library (see the sketch at the end of this entry).
      • The results would be stretches of linked reactions that have been annotated in KEGG pathways.
      • This method could be applied to contigs just as easily as finished sequences. In a scenario with low coverage, it could be used to identify genes that are probably present but simply weren't sampled, by showing the presence of the rest of the pathway. In short, this could finally accomplish what Mihai asked me to work on when I showed up.
      • The major theoretical shortcoming of this approach is that it could only identify relatively well characterized pathways.
      • The practical shortcomings of this approach begin with obtaining a fairly complete copy of KEGG (which, as we've learned, is a mess to parse locally and unusably slow to call through the API), and continue to the computational challenge of such a large-scale BLAST operation.
    • Ask Bo about this when he gets back. He may have already done this.
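
To make the idea concrete, the BLAST step might look something like this (a rough sketch only: the file names and E-value cutoff are made up, and it assumes BLAST+ and Biopython are installed):

 # Sketch: BLAST every KEGG-annotated enzyme against a metagenomic library
 # and report which enzymes are hit. File names and cutoffs are hypothetical.
 import subprocess
 from Bio.Blast import NCBIXML

 # Build a nucleotide BLAST database from the metagenomic library (once):
 subprocess.check_call(["makeblastdb", "-in", "metagenome.fasta",
                        "-dbtype", "nucl", "-out", "metagenome"])

 # tblastn: protein queries (KEGG enzymes) vs. translated nucleotide reads.
 blast = subprocess.Popen(["tblastn", "-query", "kegg_enzymes.fasta",
                           "-db", "metagenome", "-evalue", "1e-10",
                           "-outfmt", "5"],          # XML output, for Biopython
                          stdout=subprocess.PIPE, text=True)

 # Count how many reads hit each enzyme; enzymes with hits mark their
 # annotated reactions as present in the sample.
 for record in NCBIXML.parse(blast.stdout):
     if record.alignments:
         print(record.query, len(record.alignments))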

January 22, 2010

  • Met with Dan and Sergey to talk about the Minimus-Bambus pipeline
    • Minimus is running fine. I've begun characterizing its run-time behavior (see next week's entry)
    • After some tweaking by Sergey, Bambus was able to finish but did not generate a scaffold. We're going to talk about this after the meeting on Monday.
    • Sergey had an interesting idea for making a better read simulator:
      • Error-free reads are cheap and easy to generate. The problem is with the error model.
      • The "best" tool (that we are aware of) which includes error models is MetaSim, but the error models are years out of date and the authors has been historically unreachable. While Mihai has now shown me how to edit the models in a reasonable way from flat files allowing to characterize base substitutions, I'm not convinced it would be faster or easier to write a program that would modify these files than it would be to just write an entirely new program; and given the amount of time I've spent trying to use MetaSim, I'm more than ready to walk away from it. Oh yeah, and MetaSim doesn't work from the command line, so no scripting.
      • Sergey has pointed out that most companies will assemble E. coli when they release a new sequencer, and conveniently, there are many high-quality assemblies of E. coli available for reference. It might therefore be possible to generate new error models for these sequencers in an automated fashion by mapping the E. coli reads to the available reference genomes, collecting the error frequencies, and then using them to mask synthesized reads (a sketch of this appears at the end of this entry).
      • I also talked with Mohammad and Mihai about this, and they also thought it was a pretty good idea. Mihai has proposed having Sergey or Mohammad add the described error-model generator to his read sampler (written in C) when they have time, but not in preparation for the oral microbiome data.
  • Met with James to discuss my work with Volker
    • Told him about my meeting with Volker and the paper he wanted me to prepare, more or less by myself. The concepts of the paper are these:
      • Most available genomic sequences of mycobacteria are of a very small subset of highly pathogenic organisms.
      • Subtractive comparative genomics can be used to identify genes that are potentially responsible for differing phenotypes (such as extreme pathogenicity), but it requires available genomic sequences for closely related organisms with differing phenotypes.
      • Volker has sequenced 2 more non-pathogenic strains of mycobacteria (gastri and kansasiiW58) with the intention of increasing the effectiveness of these subtractive comparative genomic studies.
      • The meat of the paper would be comparing the results of subtractive comparative genomic analysis using all currently available strains in RefSeq, with the results from also using the two novel sequences.
      • The other, smaller publishable portion of this project would be a comparison of gastri and kansasiiW58 to each other, because they are thought to be extremely closely related and yet have distinct phenotypes (which I've now forgotten).
      • James seemed to think this could make an okay paper, and he confirmed that he had not previously understood that Volker was looking for someone to do all of the analysis, both computational and biological, with Volker only contributing analysis of the analysis after it was all over.
    • Ended up also discussing his work on differential abundance in populations of microorganisms.
      • I'm going to start working on taking over and expanding Metastats this semester.
      • I'm also going to start talking to Bo when he gets back about exactly what he's doing, and how I might be able to include pathway prediction in my expansion of Metastats without stepping on his toes.
      • Mihai has given me his approval to focus on this.
  • Met with Mihai to discuss working with Volker
    • Explained that rather than looking for someone to do only the complex portions of the computational analysis, Volker was/is looking for someone to do the complete analysis.
    • In exchange, Volker is offering first authorship and, if need be, to split the student's funding with their primary PI.
    • I think I'm capable of doing this within 3 or 4 months but it would consume my time pretty thoroughly.
    • Mihai agreed that this is a reasonable deal, but pointed out that I have no personal interest in studying mycobacteria, and that it's therefore unwise of me to invest a bunch of time becoming an expert on an organism I have no interest in continuing to study or work with. I've therefore offered to work closely with one of Volker's graduate students, who could meet with me every week or two. I would be willing to do all of the computational analysis and explain it to them, but they would have to actually look up the potentially interesting genes and relationships I discover and help me keep the analysis biologically interesting and relevant.
  • Met with Mihai and Mohammad to discuss our impending huge-ass(embly) problem
    • Talked about strategies for iterative assembly as an approach to assembling intractably large data sets. Most have glaring shortcomings and complications.
    • Discovered that Mike Schatz has a MapReduce implementation of an assembler that uses de Bruijn graphs and is better suited to assemblies with high coverage but short read lengths.
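
For future reference, here is a rough sketch of the error-model idea from above. Everything in it is hypothetical, in particular the input, which is assumed to be (reference base, read base) pairs already extracted from the E. coli read mappings by some aligner:

 # Sketch of the proposed error-model generator: learn per-base substitution
 # frequencies from E. coli reads mapped to a finished reference, then use
 # them to corrupt error-free simulated reads. Input format is hypothetical.
 import random
 from collections import Counter, defaultdict

 def learn_substitution_model(aligned_pairs):
     """aligned_pairs: iterable of (ref_base, read_base) from mapped reads."""
     counts = defaultdict(Counter)
     for ref_base, read_base in aligned_pairs:
         counts[ref_base][read_base] += 1
     # Normalize counts into per-reference-base substitution probabilities.
     model = {}
     for ref_base, c in counts.items():
         total = sum(c.values())
         model[ref_base] = {b: n / total for b, n in c.items()}
     return model

 def corrupt(read, model):
     """Mask an error-free read with the learned substitution frequencies."""
     out = []
     for base in read:
         r, acc = random.random(), 0.0
         for b, p in sorted(model.get(base, {base: 1.0}).items()):
             acc += p
             if r <= acc:
                 out.append(b)
                 break
         else:
             out.append(base)  # floating-point safety net
     return "".join(out)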

January 29, 2010

I'm testing minimus and bambus in preparation for the oral microbiome data, and after spamming several lab members with email, it occurred to me that it would be considerably more considerate to put the information here instead.

New! Now in table form:

Minimus performance analysis on Privet

  Number of 75bp reads (millions):  1    2    4     8      16     20    Model
  Overlapper RAM (GB):              1.2  2.4  4.5   8.7    (17)   21.5  #reads in millions * 1.1 GB
  Contigger RAM (GB):               3.0  6.0  12.1  (24)   (48)   (60)  #reads in millions * 3.0 GB
  Overlapper run time (min):        3    9    34    (144)  (576)  783   (#reads in millions * 1.5) ^ 2
  Contigger run time (min):         9    66   473   -      -      -     -

Privet has 2.4GHz Opteron 850 processors and 32GB of RAM. Minimus is not parallelized.
Numbers listed in parentheses are predictions made using the listed models.
Models were generated by fitting data to the generalized versions of the equations listed and then averaging the constants.
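
Those models are simple enough to use directly when planning runs; here they are as a tiny helper (the function names are mine, and the constants come straight from the table):

 # The fitted models from the table, as functions (names are mine).
 def overlapper_ram_gb(million_reads):
     return 1.1 * million_reads            # ~1.1 GB per million 75bp reads

 def contigger_ram_gb(million_reads):
     return 3.0 * million_reads            # ~3.0 GB per million 75bp reads

 def overlapper_minutes(million_reads):
     return (1.5 * million_reads) ** 2     # (#reads in millions * 1.5) ^ 2

 # A 16 million read run: ~17.6 GB overlapper RAM, 48 GB contigger RAM,
 # 576 min of overlapping -- the contigger alone would outgrow privet's 32 GB.
 print(overlapper_ram_gb(16), contigger_ram_gb(16), overlapper_minutes(16))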

Memory Usage Analysis

Linear memory scaling of the Minimus overlapper:

  • 1 million reads => 1.2GB (3.8% of 32GB)
  • 2 million reads => 2.4GB (7.5% of 32GB)
  • 4 million reads => 4.5GB (14.1% of 32GB)
  • 8 million reads => 8.7GB (27.2% of 32GB)
  • 20 million reads => 21.5GB (~67% of 32GB)

Model:

The Minimus overlapper uses just over 1GB (~1.1GB) of RAM for every 1 million 75bp reads

Linear memory scaling of the Minimus contigger:

  • 1 million reads => 3.0GB (9.4% of 32GB)
  • 2 million reads => 6.0GB (18.8% of 32GB)
  • 4 million reads => 12.1GB (37.8% of 32GB)
  • 8 million reads => expect 24+GB
  • 20 million reads => yet to be seen, but probably about 60GB, which will presumably cause it to break
    • Update: top froze showing that tigger was using 99+% of the 32GB of RAM on privet. The AMOS log showed there was a core dump. So the 60GB estimate is probably reasonable. Will try later on walnut, which has 64GB of RAM.

Model:

The Minimus contigger uses just over 3GB of RAM for every 1 million 75bp reads

Run Time Analysis

All listed times were observed from a single run using 100% of a single core of one of the 2.4GHz Opteron 850 processors in privet.

Non-Linear run time scaling of the Minimus overlapper:

  • 1 million reads => 3 min
  • 2 million reads => 9 min
  • 4 million reads => 34 min
  • 8 million reads => expect ~144 min
  • 20 million reads => 783 min (~13 hrs)

I built the following model by fitting a simple polynomial equation to the run times of the 1, 2, 4, & 20 million read runs: solving t = (c * n)^2 for each run gives c = sqrt(t) / n, or 1.7, 1.5, 1.45, & 1.4, respectively. I then averaged those constants. (A few lines of code that redo this fit appear after the model below.)

(#reads in millions * 1.5) ^ 2 = run time in min
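
A minimal sketch of that fit (the arrays are just the observed values from above; assumes numpy):

 # Fit t = (c * n)^2 to the observed overlapper run times: c = sqrt(t) / n.
 import numpy as np

 n = np.array([1.0, 2.0, 4.0, 20.0])    # reads, in millions
 t = np.array([3.0, 9.0, 34.0, 783.0])  # observed run times, in minutes

 c = np.sqrt(t) / n                     # per-run constants: ~1.7, 1.5, 1.45, 1.4
 print(c, c.mean())                     # mean ~1.5 => t = (1.5 * n)^2

 # The same procedure on the contigger times below (9 and 66 min) gives
 # constants of ~3.0 and ~4.1, which average to the 3.5 used there.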

Non-Linear run time scaling of the Minimus contigger:

  • 1 million reads => 9 min
  • 2 million reads => 66 min
  • 4 million reads => 473 min (~7.9 hrs)
  • 8 million reads => -
  • 20 million reads => N/A (crashed on privet due to lack of memory)

Applying the same technique to these numbers (which doesn't appear to work nearly as well, because it requires averaging constants as far apart as 3 & 4, instead of 1.4-1.7), you get the following model.

(#reads in millions * 3.5) ^ 2 = run time in min

Other Observations About the Assembly

  • Because of the short read length, every million reads is only 75Mbp of sequence. That would be roughly 15x coverage of a single average bacterial genome (~5Mbp), but these test sets have reads sampled from roughly 100 bacterial genomic sequences, so the per-genome coverage is extremely low (see the math at the end of this entry).
  • Unsurprisingly, a cursory glance through the contig files shows that each contig is composed of only 2-3 reads, on average.
  • Therefore, if the complexity of the oral microbiome data is high and/or the contamination with human DNA is extreme (80-95%), the coverage may be extremely low. This may make the use of Mike's assembler impractical, or at least that's how I'm going to keep justifying this testing to myself until someone corrects me.
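
And the back-of-the-envelope coverage math, for the record (the genome count and size are the rough figures assumed above):

 # Rough per-genome coverage of the test sets (assumed figures from above).
 reads = 1_000_000
 read_len = 75                     # bp
 genomes = 100                     # distinct source genomes in the test set
 genome_size = 5_000_000           # ~5 Mbp, a typical bacterial genome

 total_bp = reads * read_len       # 75 Mbp per million reads
 print(total_bp / (genomes * genome_size))   # ~0.15x per genome, per million reads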