Does anyone believe in Evolution anymore?


Originally posted by The Barbarian View Post
Yep. So that's how mutations cause an increase in information in a population genome.
Right. And as you have learned, evolution depends on mutations producing new information that sometimes turns out to be useful for survival of an organism. This is a good thing for populations of organisms, but not so good for someone trying to accurately transmit a precise message. You're getting closer to understanding why Shannon actually set up an entire process for information in genetics.
But there is no way to measure the proper rate with Shannon because Shannon doesn't measure that. Shannon measures information as a tool for sending and receiving messages accurately, not to decide if the amount of noise added is the right amount. In the context of Shannon, any amount of noise has to be dealt with even to check if certain message degradation can be ignored. And if it were possible to avoid all noise with the same cost as removing none of it, that is what Shannon would recommend.
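To make the redundancy point concrete, here is a minimal sketch (not from the thread; function names and parameters are illustrative) of Shannon-style error correction: a 3× repetition code with majority-vote decoding over a simulated noisy channel.

```python
import random

def transmit(bits, flip_prob, rng):
    """Binary symmetric channel: flips each bit with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def encode_repetition(bits, n=3):
    """Add redundancy by repeating every bit n times."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(bits, n=3):
    """Majority vote over each group of n received bits."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(1000)]

raw = transmit(message, flip_prob=0.1, rng=rng)
coded = decode_repetition(transmit(encode_repetition(message), flip_prob=0.1, rng=rng))

print("errors without redundancy:", sum(a != b for a, b in zip(message, raw)))
print("errors with 3x repetition:", sum(a != b for a, b in zip(message, coded)))
```

The channel is equally noisy in both runs; only the added redundancy changes how reliably the message gets through. That is Shannon's concern, accuracy of transmission, not whether the amount of noise is "right."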
You could agree with this much, no?

Originally posted by Yorzhik View Post
Your claim that noise will give a population messages that work better also comes with an assertion that there is a range of mutation rates (noise rates, as it were) that work for a population.
Too fast, and individuals would devolve faster than they could wait for better-working messages; too slow, and a population could never move into a new environment.
Bull Math Biol. 2002 Nov;64(6):1033-43.
Optimal mutation rates in dynamic environments.
Nilsson M, Snoad N.
Abstract
In this paper, we study the evolution of the mutation rate for simple organisms in dynamic environments. A model based on explicit population dynamics at the gene sequence level, with multiple fitness coding loci tracking a moving fitness peak in a random fitness background, is developed and an analytical expression for the optimal mutation rate is derived. The optimal mutation rate per genome is approximately independent of genome length, something that has been observed in nature. Furthermore, the optimal mutation rate is a function of the absolute, not relative, replication rate of the superior gene sequences. Simulations confirm the theoretical predictions.
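A toy illustration of the "too fast / too slow" trade-off described above (my own sketch, not the Nilsson–Snoad model; all parameter values are arbitrary): bit-string organisms track a fitness target that drifts one bit per generation, and mean fitness tends to be highest at an intermediate per-bit mutation rate.

```python
import random

def track_moving_peak(mu, genome_len=50, pop_size=200, generations=300, seed=1):
    """Mean population fitness while tracking a drifting target, with per-bit mutation rate mu."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(genome_len)]
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    total = 0.0
    for _ in range(generations):
        target[rng.randrange(genome_len)] ^= 1  # the environment drifts: one target bit flips
        fitness = [sum(g == t for g, t in zip(ind, target)) for ind in pop]
        total += sum(fitness) / (pop_size * genome_len)
        # Strong fitness-proportional selection, then mutation of the offspring.
        parents = rng.choices(pop, weights=[2.0 ** f for f in fitness], k=pop_size)
        pop = [[b ^ (rng.random() < mu) for b in parent] for parent in parents]
    return total / generations

for mu in (0.0001, 0.001, 0.01, 0.1):
    print(f"per-bit mutation rate {mu}: mean fitness {track_moving_peak(mu):.3f}")
```

At the lowest rate the population runs out of variation and cannot follow the moving peak; at the highest rate mutation load erodes fitness; rates in between track the peak best.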
But there is no way to measure the proper rate with Shannon because Shannon doesn't measure that.
In order to represent a DNA sequence on a computer, we need to be able to represent all 4 base pair possibilities in a binary format (0 and 1). These 0 and 1 bits are usually grouped together to form a larger unit, with the smallest being a “byte” that represents 8 bits. We can denote each base pair using a minimum of 2 bits, which yields 4 different bit combinations (00, 01, 10, and 11). Each 2-bit combination would represent one DNA base pair. A single byte (or 8 bits) can represent 4 DNA base pairs. In order to represent the entire diploid human genome in terms of bytes, we can perform the following calculations:
6×10^9 base pairs/diploid genome × 1 byte/4 base pairs = 1.5×10^9 bytes or 1.5 gigabytes, about 2 CDs worth of space! Or small enough to fit 3 separate genomes on a standard DVD!
https://bitesizebio.com/8378/howmuc...humangenome/
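A small sketch of the 2-bit packing described above, assuming the arbitrary mapping A=00, C=01, G=10, T=11 (any fixed mapping works):

```python
# Pack a DNA string into bytes, 2 bits per base (4 bases per byte),
# using the assumed mapping A=00, C=01, G=10, T=11.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)  # a final partial group, if any, is stored right-aligned
    return bytes(out)

print(len(pack_dna("ACGTACGTACGTACGT")))  # 16 bases -> 4 bytes

# The whole diploid genome, as in the calculation above:
print(6e9 / 4, "bytes")  # 1.5e9 bytes, i.e. about 1.5 GB
```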
Shannon measures information as a tool for sending and receiving messages accurately, not to decide if the amount of noise added is the right amount.
In the context of Shannon, any amount of noise has to be dealt with even to check if certain message degradation can be ignored. And if it were possible to avoid all noise with the same cost as removing none of it, that is what Shannon would recommend.
Apparently, Shannon spent only a few months on the thesis. Perhaps if the work had been extended, either by him or by others, it might have led to significant discoveries. One gets the impression that he regarded this not as an end but as a beginning of a new methodology. Whether this is correct or not, Shannon went to work at the Bell Labs immediately after receiving his degree. There he found a stimulating environment with outstanding engineers, physicists, and mathematicians interested in communication. This got him started on a new career, and genetics was dropped. The thesis lay buried and unnoticed. In an interview in 1987, he said, “I set up an algebra which described this complicated process [of genetic changes in an evolving population]. One could calculate, if one wanted to (although not many people have wanted to in spite of my work), the kind of population you would have after a number of generations”
https://royalsocietypublishing.org/d...rsbm.2009.0015
You could agree with this much, no?

Originally posted by The Barbarian View Post
There is no "devolve."
News flash. Even Shannon information can decrease.
Um, no.
Actually, yes. If the supposed mutations that natural selection works on to allow a population to adapt to an environment do not arise quickly enough, the population will not remain in the new environment.
One can mathematically determine the optimum mutation rate for specific situations.
The information produced by a new mutation can be accurately measured.
Information is the problem when you want to send a reliable message, because it's a measure of the uncertainty about the message.
Information is not a problem, unless you're concerned about losing meaning. But you don't want to admit that.
Shannon's application of information to messages is that you assure an accurate reception of the message, even over a very noisy channel, so long as you use the appropriate amount of redundancy.
You've been misled.
You have very little right, and you're missing a lot as well.

Information theory as a model of genomic sequences
Chengpeng Bi and Peter K. Rogan
Encyclopedia of Genetics, Genomics, Proteomics and Bioinformatics edited by Shankar Subramaniam
Some of the most useful applications of molecular information theory have come from studies of binding sites (typically protein recognition sites) in DNA or RNA recognized by the same macromolecule, which typically contain similar but non-identical sequences. Because average information measures the choices made by the system, the theory can comprehensively model the range of sequence variation present in nucleic sequences that are recognized by individual proteins or multisubunit complexes.
Treating a discrete information source (i.e. telegraphy or DNA sequences) as a Markov process, Shannon defined entropy (H) to measure how much information is generated by such a process. The information source generates a series of symbols belonging to an alphabet with size J (e.g. 26 English letters or 4 nucleotides). If symbols are generated according to a known probability distribution p, the entropy function H(p1, p2, ..., pJ) can be evaluated. The units of H are in bits, where one bit is the amount of information necessary to select one of two possible states or choices. In this section we describe several important concepts regarding the use of entropy in genomic sequence analysis.

Entropy is a measure of the average uncertainty of symbols or outcomes. Given a random variable X with a set of possible symbols or outcomes A_X = {a1, a2, ..., aJ}, having probabilities {p1, p2, ..., pJ}, with P(x = ai) = pi, pi ≥ 0 and ∑_{x∈A_X} P(x) = 1, the Shannon entropy of X is defined by

H(X) = ∑_{x∈A_X} P(x) log₂(1/P(x))   (1)

Two important properties of the entropy function are: (a) H(X) ≥ 0, with equality when P(x) = 1 for one x; and (b) entropy is maximized if P(x) follows the uniform distribution. Here the uncertainty, or surprisal, h(x) of an outcome x is defined by

h(x) = log₂(1/P(x)) (bits)   (2)

For example, given a DNA sequence, we say each position corresponds to a random variable X with values A_X = {A, C, G, T}, having probabilities {pa, pc, pg, pt}, with P(x = A) = pa, P(x = C) = pc, and so forth. Suppose the probability distribution P(x) at a position of a DNA sequence is P(x = A) = 1/2, P(x = C) = 1/4, P(x = G) = 1/8, P(x = T) = 1/8. The uncertainties (surprisals) in this case are h(A) = 1, h(C) = 2, h(G) = h(T) = 3 (bits). The entropy is the average of the uncertainties: H(X) = E[h(x)] = 1/2(1) + 1/4(2) + 1/8(3) + 1/8(3) = 1.75 bits. In a study of genomic DNA sequences, Schmitt and Herzel (1997) found that genomic DNA sequences are closer to completely random sequences than to written text, suggesting that higher-order interdependencies between neighboring or adjacent sequence positions contribute little to the block entropy.
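The quoted example can be checked numerically; a minimal sketch, using the probabilities given above:

```python
import math

# Per-position base probabilities from the example above.
p = {"A": 1/2, "C": 1/4, "G": 1/8, "T": 1/8}

# Surprisal h(x) = log2(1/P(x)) for each outcome.
h = {x: math.log2(1 / px) for x, px in p.items()}
print(h)  # {'A': 1.0, 'C': 2.0, 'G': 3.0, 'T': 3.0}

# Entropy H(X) = sum over x of P(x) * log2(1/P(x)) -- the average surprisal.
H = sum(px * math.log2(1 / px) for px in p.values())
print(H)  # 1.75 bits
```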

Originally posted by The Barbarian View Post
Some of the most useful applications of molecular information theory have come from studies of binding sites ...
At least cite your sources when you cut and paste unresponsive clutter.

Originally posted by The Barbarian View Post
Information theory as a model of genomic sequences ...
“Liberty cannot be established without morality, nor morality without faith.”
― Alexis de Tocqueville, Democracy in America
“One and God make a majority.”
― Frederick Douglass

Here's a simpler discussion of the issue:
Natural Selection Fails to Optimize Mutation Rates for Long-Term Adaptation on Rugged Fitness Landscapes
PLoS Comput Biol. 2008 Sep 26
Jeff Clune, Dusan Misevic, Charles Ofria, Richard E. Lenski, Santiago F. Elena, Rafael Sanjuán
Author Summary
Natural selection is shortsighted and therefore does not necessarily drive populations toward improved long-term performance. Some traits may evolve because they provide immediate gains, even though they are less successful in the long run than some alternatives. Here, we use digital organisms to analyze the ability of evolving populations to optimize their mutation rate, a fundamental evolutionary parameter. We show that when the mutation rate is constrained to be high, populations adapt considerably faster over the long term than when the mutation rate is allowed to evolve. By varying the fitness landscape, we show that natural selection tends to reduce the mutation rate on rugged landscapes (but not on smooth ones) so as to avoid the production of harmful mutations, even though this short-term benefit limits adaptation over the long term.

Originally posted by The Barbarian View Post
Here's a simpler discussion of the issue.
The agenda of the Darwinist is to throw enough nonsense about so that they do not have to respond with a rational defense of their religion.

Sorry, I don't know how to make it easier to understand.
Experiments have shown that genotypes with increased mutation rates can be favored by selection if they face novel or changing environments [1], [13]–[21]. Similarly, recent work with RNA viruses has shown that certain high-fidelity genotypes have diminished fitness and virulence in mice [22],[23], which might reflect their restricted ability to create the genetic variability needed to escape from immune surveillance. However, another recent study with an RNA virus failed to observe a positive association between mutation rate and the rate of adaptation to a novel environment.
https://journals.plos.org/ploscompbi...l.pcbi.1000187

Originally posted by The Barbarian View Post
I don't know how to make it easier to understand.

Originally posted by The Barbarian View Post
Sorry, I don't know how to make it easier to understand.
Experiments have shown that genotypes with increased mutation rates can be favored by selection if they face novel or changing environments [1], [13]–[21]. Similarly, recent work with RNA viruses has shown that certain high-fidelity genotypes have diminished fitness and virulence in mice [22],[23], which might reflect their restricted ability to create the genetic variability needed to escape from immune surveillance. However, another recent study with an RNA virus failed to observe a positive association between mutation rate and the rate of adaptation to a novel environment.
https://journals.plos.org/ploscompbi...l.pcbi.1000187
You are using the tools of rational thinking against the evolution equivalents of flat-earthers.