Imperial Mitochondriacs (comments)

Juvid Aryaman (2019-06-10):
Hi Ferdinando, I guess my point is simply that if one observes variability in the number of mutations per base pair in different regions of the mitochondrial genome, that may be because 1) mutations at certain positions of the genome have stronger functional effects, and such mutations may be selectively removed through quality-control pathways (so you see fewer of them); or 2) certain positions, for whatever reason, are more or less susceptible to mutation. (2) could arise because certain sequences or positions in the genome are more prone to copying errors, or perhaps to e.g. oxidative damage. Both hypotheses potentially explain differences in the mutational load per bp.

Ferdinando Insalata (2019-06-07):
Nice summary!
What is the difference between "stronger selective pressures against mutation on the rest of the genome" and "potentially an intrinsically lower de novo mutation rate"?
(I suppose "reset" is a typo.)

Ferdinando Insalata (2019-01-24):
I see, interesting concepts!

Juvid Aryaman (2019-01-24):
Ah, thanks for your question. Simply to say that the range of values measured is larger when they square their metric. So, if 0 < x < 2, and you compare y = x versus y = x^2, the latter has a greater "dynamic range" in y than the former. Intuitively, if a metric has a greater dynamic range, it will probably also give you greater statistical power, at the expense of being less robust. That's my guess for why they square it.

Ferdinando Insalata (2019-01-23):
Hi,
what do you mean by the dynamic range of the mitochondrial complexity index (MCI)?

Juvid Aryaman (2015-04-25):
If you can post a URL then I think that's the best way. You might be able to do this by following the steps in this article: http://www.mybloggerlab.com/2013/03/how-to-embed-pdf-and-other-documents-in-blogger-posts.html

Anonymous (2015-04-25):
Well, I took a leaf out of your book Juvid and did some simulations to explore the impact of variation in Z on ϕ(log x, log y) and on ϕ(clr x, clr y)...

...and I realised that the clr() transformation is very important because it makes ϕ independent of variation in Z. So, I think I have to revise some of my suspicions in previous posts!

I have a 2-page PDF describing this additional exploration... is there a way I could share that with readers of your blog?

Juvid Aryaman (2015-04-21):
Brilliant! Thank you for getting back to us. I've updated the post, and hope you find it accurate. I think the conversation above really underscores the subtleties in thinking about this area. Please do get in touch with any further thoughts you may have!
Juvid

Anonymous (2015-04-16):
Dear Juvid, sorry not to respond sooner: it is not lack of interest but an abundance of commitments.

Your simulations are really making me think! Let me share my thoughts, starting from your last point about the cut-off of ϕ < 0.05.

I don't believe that we should declare a hard-and-fast cut-off for statistics that measure association. I think the utility of statistics like ϕ, correlation, etc. lies in helping us to _explore_ potential relationships, rather than adjudicate as to whether the relationships exist or not. So, on that point, I would encourage analysts to look at the pairs of data that have low ϕ values as an indicator of potential relationships. I think it is useful to look at Anscombe's quartet (http://en.wikipedia.org/wiki/Anscombe%27s_quartet) every now and again, to remind ourselves of the peril of relying solely upon statistics to summarise the relationships in data.

Turning back to your simulation, my suspicion is that the ϕ statistic is no panacea for spurious correlation. As you say, the variable Z has much higher variance than X or Y. My guess is that if you increase the variance of Z further still, the ϕ statistic will get even lower. Remember too that the ϕ statistic is related to the slope and goodness of fit (correlation!) of logarithmically transformed data, and with respect to goodness of fit it will be prone to the same kinds of issues as illustrated in Anscombe's quartet.

Thanks again for your simulations. And for making me think very hard!
If I have any new insights I will look to share them with you.
Cheers, David

Juvid Aryaman (2015-04-02):
Hi David, thanks for your comment! I've removed that part of the post for now, and amended the definition of ϕ (sorry about that!).

Using my notation of A = X/Z, B = Y/Z (where we only have access to A and B and not the underlying distributions X, Y and Z), would it be fair to say that ϕ = var(log(A/B))/var(log(A))? If so, my simulation yields ϕ = 0.11 (I could get a confidence interval on that by bootstrapping, if you think that would help).

You have used the cut-off ϕ < 0.05 in your paper, so that appears to be sufficient to declare that A is not proportional to B. Is this correct?

Anonymous (2015-04-01):
Nice simulation Juvid.
Actually, ϕ is a bit different to var(log(A/B)), for precisely the reason you point out (that it does not have a meaningful scale).
If you go to the end of the section "Measuring Proportionality" you will see that
ϕ(log x, log y) = var(log(x/y))/var(log x).
(Strictly speaking, you should use clr() instead of log(), but I don't think that'll make a big difference here.)
I'd love to know what you get for ϕ now. Cheers, David
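[Editor's note] The exchange above can be made concrete with a short simulation. This is a minimal sketch, not the simulation Juvid actually ran (his parameters are not given in the thread): the lognormal distributions, variances, and variable names are assumptions for illustration. It computes ϕ(log a, log b) = var(log(a/b))/var(log a) for ratios A = X/Z, B = Y/Z with independent X, Y, and also checks David's guess that increasing the variance of Z drives ϕ lower still.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent lognormal "absolute" variables; we only ever observe
# the ratios A = X/Z and B = Y/Z, as in Juvid's notation above.
X = rng.lognormal(0.0, 0.2, n)
Y = rng.lognormal(0.0, 0.2, n)

def phi(a, b):
    """phi(log a, log b) = var(log(a/b)) / var(log a), per the thread."""
    return np.var(np.log(a / b)) / np.var(np.log(a))

results = {}
for sigma_z in (1.0, 3.0):
    Z = rng.lognormal(0.0, sigma_z, n)
    A, B = X / Z, Y / Z
    # The shared denominator Z makes log A and log B spuriously
    # correlated, and the correlation grows with var(log Z); but
    # var(log(A/B)) = var(log(X/Y)) does not involve Z at all, so
    # phi shrinks only because its denominator var(log A) inflates.
    r = np.corrcoef(np.log(A), np.log(B))[0, 1]
    results[sigma_z] = (r, phi(A, B))
    print(f"sigma_Z={sigma_z}: corr(log A, log B)={r:.3f}, phi={phi(A, B):.3f}")
```

With these (assumed) parameters, the second case illustrates David's "no panacea" worry: a large-variance shared denominator drags ϕ below the 0.05 cut-off even though X and Y are independent. As David notes above, the clr() transformation is what removes this dependence of ϕ on variation in Z.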