• This is a new section being rolled out to attract people interested in exploring the origins of the universe and the earth from a biblical perspective. Debate is encouraged and opposing viewpoints are welcome, but certain rules must be followed: 1. No abusive tagging - if abusive tags are found, they will be deleted and disabled by the Admin team. 2. No calling the biblical accounts a fable, fairy tale, etc. This is a Christian site, so members who participate here must be respectful in their disagreement.

Does anyone believe in Evolution anymore?

ok doser

lifeguard at the cement pond
I am still somewhat surprised that anyone can believe that mistakes can make improvements, because that is exactly what is being claimed.

The original "message" is being corrupted in some way and yet they think that this is "helping". :kookoo:

And even better yet.... that a long line of these corruptions can turn a simple one-celled life form into a man. :rotfl:

Barbie believes that a roomful of trained chimps banging away on typewriters, tasked with copying "Dick and Jane", will occasionally accidentally produce Shakespeare's works.
 

Yorzhik

Well-known member
LIFETIME MEMBER
Hall of Fame
Yep. So that's how mutations cause an increase in information in a population genome.

Right. And as you have learned, evolution depends on mutations producing new information that sometimes turns out to be useful for survival of an organism. This is a good thing for populations of organisms, but not so good for someone trying to accurately transmit a precise message. You're getting closer to understanding why Shannon actually set up an entire process for information in genetics.
Your claim that noise will give a population messages that work better also comes with an assertion that there is a range of mutation rates (noise rates, as it were) that work for a population. Too fast and individuals would devolve faster than they could wait for better-working messages, and if too slow a population could never move into a new environment.

But there is no way to measure the proper rate with Shannon because Shannon doesn't measure that. Shannon measures information as a tool for sending and receiving messages accurately, not to decide if the amount of noise added is the right amount. In the context of Shannon, any amount of noise has to be dealt with even to check if certain message degradation can be ignored. And if it were possible to avoid all noise with the same cost as removing none of it, that is what Shannon would recommend.

You could agree with this much, no?
 

The Barbarian

BANNED
Banned
Your claim that noise will give a population messages that work better also comes with an assertion that there is a range of mutation rates (noise rates, as it were) that work for a population.

Right, so far...

Too fast and individuals would devolve faster than they could wait for better-working messages

There is no "devolve." That was a one-shot joke by a 1970s pop group.

and if too slow a population could never move into a new environment.

Um, no. One can mathematically determine the optimum mutation rate for specific situations. Learn about it here:

Bull Math Biol. 2002 Nov;64(6):1033-43.
Optimal mutation rates in dynamic environments.
Nilsson M, Snoad N.
Abstract

In this paper, we study the evolution of the mutation rate for simple organisms in dynamic environments. A model based on explicit population dynamics at the gene sequence level, with multiple fitness coding loci tracking a moving fitness peak in a random fitness background, is developed and an analytical expression for the optimal mutation rate is derived. The optimal mutation rate per genome is approximately independent of genome length, something that has been observed in nature. Furthermore, the optimal mutation rate is a function of the absolute, not relative, replication rate of the superior gene sequences. Simulations confirm the theoretical predictions.
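For a hands-on feel of the tradeoff being described, here is a toy Python sketch (an illustration only, not the model from the Nilsson and Snoad paper). It evolves a population of bit-strings toward a target that drifts every generation; the population size, genome length, drift speed and selection scheme are all arbitrary assumptions.

```python
import random

GENOME_LEN = 50      # bits per genome (arbitrary)
POP_SIZE = 100       # individuals (arbitrary)
GENERATIONS = 300    # generations to simulate (arbitrary)

def fitness(genome, target):
    """Count positions where the genome matches the current target."""
    return sum(g == t for g, t in zip(genome, target))

def evolve(mut_rate, seed=0):
    """Run the toy model and return the best final fitness (0..1)."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(GENOME_LEN)]
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        target[rng.randrange(GENOME_LEN)] ^= 1           # environment drifts: one target bit flips
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        survivors = pop[:POP_SIZE // 2]                   # truncation selection
        children = [[b ^ (rng.random() < mut_rate) for b in p] for p in survivors]
        pop = survivors + children
    return max(fitness(g, target) for g in pop) / GENOME_LEN

for rate in (0.0, 0.001, 0.01, 0.1, 0.5):
    print(f"per-bit mutation rate {rate}: best final fitness {evolve(rate):.2f}")
```

Playing with the rates should show the tradeoff the abstract is describing: too low and the population cannot track the moving target, too high and selection cannot hold on to what it has already gained.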



But there is no way to measure the proper rate with Shannon because Shannon doesn't measure that.

Turns out that the information produced by a new mutation can be accurately measured, considering 2 bits per base pair.

In order to represent a DNA sequence on a computer, we need to be able to represent all 4 base pair possibilities in a binary format (0 and 1). These 0 and 1 bits are usually grouped together to form a larger unit, with the smallest being a “byte” that represents 8 bits. We can denote each base pair using a minimum of 2 bits, which yields 4 different bit combinations (00, 01, 10, and 11). Each 2-bit combination would represent one DNA base pair. A single byte (or 8 bits) can represent 4 DNA base pairs. In order to represent the entire diploid human genome in terms of bytes, we can perform the following calculations:

6×10^9 base pairs/diploid genome x 1 byte/4 base pairs = 1.5×10^9 bytes or 1.5 Gigabytes, about 2 CDs worth of space! Or small enough to fit 3 separate genomes on a standard DVD!

https://bitesizebio.com/8378/how-much-information-is-stored-in-the-human-genome/
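A quick sanity check of that arithmetic (a minimal sketch, not from the linked article; the pack() helper is just a hypothetical illustration of packing four bases per byte):

```python
# 2 bits per base, 8 bits per byte, ~6 billion base pairs in a diploid human genome
DIPLOID_BASE_PAIRS = 6e9
BITS_PER_BASE = 2            # A, C, G, T -> 00, 01, 10, 11
BITS_PER_BYTE = 8

total_bytes = DIPLOID_BASE_PAIRS * BITS_PER_BASE / BITS_PER_BYTE
print(f"{total_bytes:.1e} bytes = {total_bytes / 1e9:.1f} GB")   # 1.5e+09 bytes = 1.5 GB

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes, four bases (2 bits each) per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

print(pack("ACGTACGT"))   # 2 bytes for 8 bases: b'\x1b\x1b'
```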

Shannon measures information as a tool for sending and receiving messages accurately, not to decide if the amount of noise added is the right amount.

Right. It turns out that information is the problem when you want to send a reliable message, because it's a measure of the uncertainty about the message. You see, Shannon's application of information to messages is that you assure an accurate reception of the message, even over a very noisy channel, so long as you use the appropriate amount of redundancy.
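A minimal sketch of that redundancy idea (a toy repetition code, not anything Shannon specifically prescribed): each bit is sent several times through a channel that flips bits with probability p, and the receiver takes a majority vote.

```python
import random

def send_with_redundancy(bits, p, reps=5, seed=1):
    """Send each bit `reps` times through a noisy channel, decode by majority vote."""
    rng = random.Random(seed)
    decoded = []
    for b in bits:
        copies = [b ^ (rng.random() < p) for _ in range(reps)]   # channel flips each copy with prob. p
        decoded.append(1 if sum(copies) > reps // 2 else 0)
    return decoded

message = [random.Random(0).randint(0, 1) for _ in range(10_000)]
for p in (0.05, 0.2, 0.4):
    decoded = send_with_redundancy(message, p)
    errors = sum(m != d for m, d in zip(message, decoded))
    print(f"channel noise p={p}: {errors} residual errors out of {len(message)} bits")
```

More noise calls for more repetitions; Shannon's noisy-channel theorem says that as long as the code rate stays below the channel capacity, the residual error rate can be pushed as low as you like.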

In the context of Shannon, any amount of noise has to be dealt with even to check if certain message degradation can be ignored. And if it were possible to avoid all noise with the same cost as removing none of it, that is what Shannon would recommend.

You've been misled about Shannon's work in biology.

Apparently, Shannon spent only a few months on the thesis. Perhaps if the work had been extended, either by him or by others, it might have led to significant discoveries. One gets the impression that he regarded this not as an end but as a beginning of a new methodology. Whether this is correct or not, Shannon went to work at the Bell Labs immediately after receiving his degree. There he found a stimulating environment with outstanding engineers, physicists, and mathematicians interested in communication. This got him started on a new career, and genetics was dropped. The thesis lay buried and unnoticed. In an interview in 1987, he said, “I set up an algebra which described this complicated process [of genetic changes in an evolving population]. One could calculate, if one wanted to (although not many people have wanted to in spite of my work), the kind of population you would have after a number of generations”
https://royalsocietypublishing.org/doi/abs/10.1098/rsbm.2009.0015

You could agree with this much, no?

See above. You have some things right, but you're missing some things as well.
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
There is no "devolve."

It's called a loss of information. You've just decided that information loss cannot happen by insisting that all information is Shannon information.

News flash. Even Shannon information can decrease. :chuckle:


It's easy to tell when Barbarian is flustered; he starts umming.

Actually, yes. If the supposed mutations that natural selection works on to allow a population to adapt to an environment do not arise quickly enough, the population will not remain in the new environment.

One can mathematically determine the optimum mutation rate for specific situations.
Which of course is just you making up something to respond to. Learn about it here.

The information produced by a new mutation can be accurately measured.
That's nice. Now try answering according to what was said. :thumb:

Information is the problem when you want to send a reliable message, because it's a measure of the uncertainty about the message.

That's entropy.

Information is not a problem, unless you're concerned about losing meaning. But you don't want to admit that.

Shannon's application of information to messages is that you assure an accurate reception of the message, even over a very noisy channel, so long as you use the appropriate amount of redundancy.

It's great that you're able to restate theory accurately sometimes, but it does nothing to show that you've understood the challenge to evolution.

You've been misled.

You have very little right and you're missing a lot as well.
 

The Barbarian

BANNED
Banned

Information theory as a model of genomic sequences
Chengpeng Bi and Peter K. Rogan
Encyclopedia of Genetics, Genomics, Proteomics and Bioinformatics edited by Shankar Subramaniam

Some of the most useful applications of molecular information theory have come from studies of binding sites (typically protein recognition sites) in DNA or RNA recognized by the same macromolecule, which typically contain similar but non-identical sequences. Because average information measures the choices made by the system, the theory can comprehensively model the range of sequence variation present in nucleic sequences that are recognized by individual proteins or multi-subunit complexes.
Treating a discrete information source (i.e. telegraphy or DNA sequences) as a Markov process, Shannon defined entropy (H) to measure how much information is generated by such a process. The information source generates a series of symbols belonging to an alphabet with size J (e.g. 26 English letters or 4 nucleotides). If symbols are generated according to a known probability distribution p, the entropy function H(p1, p2, ..., pJ) can be evaluated. The units of H are in bits, where one bit is the amount of information necessary to select one of two possible states or choices. In this section we describe several important concepts regarding the use of entropy in genomic sequence analysis.

Entropy is a measure of the average uncertainty of symbols or outcomes. Given a random variable X with a set of possible symbols or outcomes A_X = {a1, a2, ..., aJ}, having probabilities {p1, p2, ..., pJ}, with P(x = ai) = pi, pi ≥ 0 and ∑_{x ∈ A_X} P(x) = 1, the Shannon entropy of X is defined by

H(X) = ∑_{x ∈ A_X} P(x) log₂(1/P(x))   (1)

Two important properties of the entropy function are: (a) H(X) ≥ 0, with equality when, for one x, P(x) = 1; and (b) entropy is maximized if P(x) follows the uniform distribution. Here the uncertainty or surprisal, h(x), of an outcome (x) is defined by

h(x) = log₂(1/P(x)) (bits)   (2)

For example, given a DNA sequence, we say each position corresponds to a random variable X with values A_X = {A, C, G, T}, having probabilities {pa, pc, pg, pt}, with P(x = A) = pa, P(x = C) = pc and so forth. Suppose the probability distribution P(x) at a position of DNA sequence is P(x = A) = 1/2; P(x = C) = 1/4; P(x = G) = 1/8; P(x = T) = 1/8. The uncertainties (surprisals) in this case are h(A) = 1, h(C) = 2, h(G) = h(T) = 3 (bits). The entropy is the average of the uncertainties: H(X) = E[h(x)] = 1/2(1) + 1/4(2) + 1/8(3) + 1/8(3) = 1.75 bits. In a study of genomic DNA sequences, Schmitt and Herzel (1997) found that genomic DNA sequences are closer to completely random sequences than to written text, suggesting that higher-order interdependencies between neighboring or adjacent sequence positions make little contributions to the block entropy.
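To make the worked example concrete, here is a minimal Python sketch (not from the encyclopedia entry) that reproduces those surprisal and entropy numbers:

```python
from math import log2

# Per-position base probabilities from the example above
probs = {"A": 1/2, "C": 1/4, "G": 1/8, "T": 1/8}

surprisal = {base: log2(1 / p) for base, p in probs.items()}      # h(x) = log2(1/P(x))
entropy = sum(p * surprisal[base] for base, p in probs.items())   # H(X) = E[h(x)]

print(surprisal)   # {'A': 1.0, 'C': 2.0, 'G': 3.0, 'T': 3.0}
print(entropy)     # 1.75 (bits); a uniform 1/4 each would give the maximum, 2 bits
```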
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
Some of the most useful applications of molecular information theory have come from studies of binding sites (typically protein recognition sites) in DNA or RNA recognized by the same macromolecule, which typically contain similar but non-identical sequences. Because average information measures the choices made by the system, the theory can comprehensively model the range of sequence variation present in nucleic sequences that are recognized by individual proteins or multi-subunit complexes.

Treating a discrete information source (i.e. telegraphy or DNA sequences) as a Markov process, Shannon defined entropy (H) to measure how much information is generated by such a process. The information source generates a series of symbols belonging to an alphabet with size J (e.g. 26 English letters or 4 nucleotides). If symbols are generated according to a known probability distribution p, the entropy function H(p1, p2, ..., pJ) can be evaluated. The units of H are in bits, where one bit is the amount of information necessary to select one of two possible states or choices. In this section we describe several important concepts regarding the use of entropy in genomic sequence analysis.

Entropy is a measure of the average uncertainty of symbols or outcomes. Given a random variable X with a set of possible symbols or outcomes A_X = {a1, a2, ..., aJ}, having probabilities {p1, p2, ..., pJ}, with P(x = ai) = pi, pi ≥ 0 and ∑_{x ∈ A_X} P(x) = 1, the Shannon entropy of X is defined by

H(X) = ∑_{x ∈ A_X} P(x) log₂(1/P(x))   (1)

Two important properties of the entropy function [Barbarian is a troll] are: (a) H(X) ≥ 0, with equality when, for one x, P(x) = 1; and (b) entropy is maximized if P(x) follows the uniform distribution. Here the uncertainty or surprisal, h(x), of an outcome (x) is defined by

h(x) = log₂(1/P(x)) (bits)   (2)

For example, given a DNA sequence, we say each position corresponds to a random variable X with values A_X = {A, C, G, T}, having probabilities {pa, pc, pg, pt}, with P(x = A) = pa, P(x = C) = pc and so forth. Suppose the probability distribution P(x) at a position of DNA sequence is P(x = A) = 1/2; P(x = C) = 1/4; P(x = G) = 1/8; P(x = T) = 1/8. The uncertainties (surprisals) in this case are h(A) = 1, h(C) = 2, h(G) = h(T) = 3 (bits). The entropy is the average of the uncertainties: H(X) = E[h(x)] = 1/2(1) + 1/4(2) + 1/8(3) + 1/8(3) = 1.75 bits. In a study of genomic DNA sequences, Schmitt and Herzel (1997) found that genomic DNA sequences are closer to completely random sequences than to written text, suggesting that higher-order interdependencies between neighboring or adjacent sequence positions make little contributions to the block entropy.
:blabla:

At least cite your sources when you cut and paste unresponsive clutter.
 

Gary K

New member
Banned

Information theory as a model of genomic sequences
Chengpeng Bi and Peter K. Rogan
Encyclopedia of Genetics, Genomics, Proteomics and Bioinformatics edited by Shankar Subramaniam

Some of the most useful applications of molecular information theory have come from studies of binding sites (typically protein recognition sites) in DNA or RNA recognized by the same macromolecule, which typically contain similar but non-identical sequences. Because average information measures the choices made by the system, the theory can comprehensively model the range of sequence variation present in nucleic sequences that are recognized by individual proteins or multi-subunit complexes.
Treating a discrete information source (i.e. telegraphy or DNA sequences) as a Markov process, Shannon defined entropy (H) to measure how much information is generated by such a process. The information source generates a series of symbols belonging to an alphabet with size J (e.g. 26 English letters or 4 nucleotides). If symbols are generated according to a known probability distribution p, the entropy function H(p1, p2, ..., pJ) can be evaluated. The units of H are in bits, where one bit is the amount of information necessary to select one of two possible states or choices. In this section we describe several important concepts regarding the use of entropy in genomic sequence analysis.

Entropy is a measure of the average uncertainty of symbols or outcomes. Given a random variable X with a set of possible symbols or outcomes A_X = {a1, a2, ..., aJ}, having probabilities {p1, p2, ..., pJ}, with P(x = ai) = pi, pi ≥ 0 and ∑_{x ∈ A_X} P(x) = 1, the Shannon entropy of X is defined by

H(X) = ∑_{x ∈ A_X} P(x) log₂(1/P(x))   (1)

Two important properties of the entropy function are: (a) H(X) ≥ 0, with equality when, for one x, P(x) = 1; and (b) entropy is maximized if P(x) follows the uniform distribution. Here the uncertainty or surprisal, h(x), of an outcome (x) is defined by

h(x) = log₂(1/P(x)) (bits)   (2)

For example, given a DNA sequence, we say each position corresponds to a random variable X with values A_X = {A, C, G, T}, having probabilities {pa, pc, pg, pt}, with P(x = A) = pa, P(x = C) = pc and so forth. Suppose the probability distribution P(x) at a position of DNA sequence is P(x = A) = 1/2; P(x = C) = 1/4; P(x = G) = 1/8; P(x = T) = 1/8. The uncertainties (surprisals) in this case are h(A) = 1, h(C) = 2, h(G) = h(T) = 3 (bits). The entropy is the average of the uncertainties: H(X) = E[h(x)] = 1/2(1) + 1/4(2) + 1/8(3) + 1/8(3) = 1.75 bits. In a study of genomic DNA sequences, Schmitt and Herzel (1997) found that genomic DNA sequences are closer to completely random sequences than to written text, suggesting that higher-order interdependencies between neighboring or adjacent sequence positions make little contributions to the block entropy.

Wow. DNA doesn't read like a textbook? Who'd a thunk it. Just because a group of finite human beings can't see relationships between items created by an infinite God is not evidence that those relationships do not exist. This idea that humanity is smarter than God is ridiculous.
 


The Barbarian

BANNED
Banned
Here's a simpler discussion of the issue:


Natural Selection Fails to Optimize Mutation Rates for Long-Term Adaptation on Rugged Fitness Landscapes
PLoS Comput Biol. 2008 Sep 26
Jeff Clune, Dusan Misevic, Charles Ofria, Richard E. Lenski, Santiago F. Elena, Rafael Sanjuán

Author Summary
Natural selection is shortsighted and therefore does not necessarily drive populations toward improved long-term performance. Some traits may evolve because they provide immediate gains, even though they are less successful in the long run than some alternatives. Here, we use digital organisms to analyze the ability of evolving populations to optimize their mutation rate, a fundamental evolutionary parameter. We show that when the mutation rate is constrained to be high, populations adapt considerably faster over the long term than when the mutation rate is allowed to evolve. By varying the fitness landscape, we show that natural selection tends to reduce the mutation rate on rugged landscapes (but not on smooth ones) so as to avoid the production of harmful mutations, even though this short-term benefit limits adaptation over the long term.
 

The Barbarian

BANNED
Banned
Sorry, I don't know how to make it easier to understand.

Experiments have shown that genotypes with increased mutation rates can be favored by selection if they face novel or changing environments [1], [13]–[21]. Similarly, recent work with RNA viruses has shown that certain high-fidelity genotypes have diminished fitness and virulence in mice [22],[23], which might reflect their restricted ability to create the genetic variability needed to escape from immune surveillance. However, another recent study with an RNA virus failed to observe a positive association between mutation rate and the rate of adaptation to a novel environment
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000187
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
I don't know how to make it easier to understand.

That's because you're determined to pretend things cannot be understood. How about you stop, take a good think through your approach, go back and respond sensibly to the challenge presented instead of pretending that we said something else. :up:
 

chair

Well-known member
Sorry, I don't know how to make it easier to understand.

Experiments have shown that genotypes with increased mutation rates can be favored by selection if they face novel or changing environments [1], [13]–[21]. Similarly, recent work with RNA viruses has shown that certain high-fidelity genotypes have diminished fitness and virulence in mice [22],[23], which might reflect their restricted ability to create the genetic variability needed to escape from immune surveillance. However, another recent study with an RNA virus failed to observe a positive association between mutation rate and the rate of adaptation to a novel environment
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000187

If someone refuses at all costs to listen, then it makes no difference how clearly you explain.
You are using the tools of rational thinking against the evolution equivalents of flat-earthers.
 

Right Divider

Body part
If someone refuses at all costs to listen, then it makes no difference how clearly you explain.
You are using the tools of rational thinking against the evolution equivalents of flat-earthers.
Oh the irony! :rotfl:

Common descentists are far more like flat earthers. Both are true believers in a myth.
 

The Barbarian

BANNED
Banned
Oh the irony! :rotfl:

Common descentists are far more like flat earthers. Both are true believers in a myth.

The evidence for common descent is overwhelming. Linnaeus first realized that all organisms on Earth fit into a nested family tree. He assumed that God just made things that way, and was surprised to find that other things, like minerals, could not be arranged in a family tree.

Darwin realized why it appeared to be a family tree. It was a family tree. But he had only anatomical data to show it was true.

Later on, when genes were discovered, it was predicted that organisms close to each other on the tree would be genetically more alike. When the function of DNA was realized, it was possible to test that prediction. And it was repeatedly verified.

But there were holes in the diagram, where no connecting organisms were known. So scientists predicted that there must have been all sorts of transitional forms that had lived at one time, and died out. And as time went on, more and more of the predicted transitional forms were found. Which is massive confirmation of common descent. But even more convincing, there were never any transitionals that shouldn't exist. Only those that fit into the family tree first found by Linnaeus.

Common descent is so well-demonstrated that even many creationists now admit a limited amount of it. Both the Institute for Creation Research and Answers in Genesis now admit to common descent of species, genera, and even families.

If they retreat a little farther, we won't have anything to argue about.
 

Stripe

Teenage Adaptive Ninja Turtle
LIFETIME MEMBER
Hall of Fame
The evidence for common descent is overwhelming.

:rotfl:

Linnaeus first realized that all organisms on Earth fit into a nested family tree.

Question begging. A logical fallacy.

He assumed that God just made things that way, and was surprised to find that other things, like minerals, could not be arranged in a family tree.

:darwinsm:

Rocks don't reproduce.

Darwin realized why it appeared to be a family tree. It was a family tree.

Ooh. A stunning, evidence-based leap of faith, then?

But he had only anatomical data to show it was true.

So, no compelling evidence at all then. :chuckle:

Later on, when genes were discovered, it was predicted that organisms close to each other on the tree would be genetically more alike. When the function of DNA was realized, it was possible to test that prediction. And it was repeatedly verified.

Except that it wasn't.

But there were holes in the diagram, where no connecting organisms were known.

:rotfl: Exactly.

So scientists predicted that there must have been all sorts of transitional forms that had lived at one time, and died out. And as time went on, more and more of the predicted transitional forms were found. Which is massive confirmation of common descent. But even more convincing, there were never any transitionals that shouldn't exist. Only those that fit into the family tree first found by Linnaeus.

That's because you retrofit predictions to fit ideas you make up about things that do not exist.

Common descent is so well-demonstrated
And utterly bunk.

Even many creationists now admit a limited amount of it.
:rotfl:

Yeah. All people are commonly descended from Adam and Eve.

You're pathetic.

Both the Institute for Creation Research and Answers in Genesis now admit to common descent of species, genera, and even families.

Nope. "Kinds."

If they retreat a little farther, we won't have anything to argue about.
We already don't. We don't disagree with you that populations change. When you're willing to put your theory forward for rational, scientific analysis, then you'll be eligible to join a sensible discussion.
 