
Quoting C. Adami. What is complexity? BioEssays 24(12):1085–1094, 2002.

Randomness is in some ways the "flip side" of information, and is called entropy in information theory (15). Entropy is a measure of potential knowledge, or if applied to a sequence, a measure of how much information a sequence could hold, and thus quantifies our uncertainty about the genetic identity of a randomly selected individual from a pool. It is useful to think of sequence entropy as the length of a tape, while information is the length of tape containing recordings. Measurement (i.e., recording) turns empty tape into filled tape; entropy into information. As we shall see, this is what happens during adaptation, and it is the force that drives the increase of complexity.

Information is a statistical form of correlation, and thus requires, mathematically and intuitively, a reference to the system that the information is about. The sequence on your information-filled tape allows you to make predictions about the state of the system that the sequence is information about. This predictive capability implies that your sequence and the system have something in common, that they are correlated. Your sequence will most likely not make predictions about any other system (unless the systems are very similar). If you do not know which system your sequence refers to, then whatever is on it cannot be considered information. Instead, it is potential information (a.k.a. entropy). This is the fundamental difference between entropy and information, often misrepresented in the literature (16).

15. Shannon C.E., Weaver W. The Mathematical Theory of Communication. Urbana: University of Illinois Press. 1949.

16. Adami C. Information theory in molecular biology. 2002.
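
To make the quoted picture concrete, here is a minimal numerical sketch (an illustration under stated assumptions, not code from Adami's papers). It assumes an aligned pool of fixed-length sequences over a four-letter alphabet, measures the pool's entropy as the sum of per-site Shannon entropies (the "length of empty tape" still available), and estimates the information stored about the environment as the drop from the maximal entropy L to the observed entropy, i.e. how much of the tape selection has already filled in. The function names (site_entropy, pool_entropy, stored_information) and the toy data are illustrative.

    import math
    from collections import Counter

    def site_entropy(column, alphabet_size=4):
        # Shannon entropy of one alignment column, in units of log base
        # alphabet_size (a fully random site contributes 1.0).
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total, alphabet_size)
                    for c in counts.values())

    def pool_entropy(sequences, alphabet_size=4):
        # Sum of per-site entropies: near L for a pool of random sequences
        # ("empty tape"), near 0 for a fully conserved pool ("filled tape").
        length = len(sequences[0])
        return sum(site_entropy([s[i] for s in sequences], alphabet_size)
                   for i in range(length))

    def stored_information(sequences, alphabet_size=4):
        # Information the adapted pool stores about its environment,
        # estimated as maximal entropy L minus observed entropy.
        return len(sequences[0]) - pool_entropy(sequences, alphabet_size)

    # Toy pool: two sites are fully conserved (selection has "recorded"
    # something there), the other two vary almost freely.
    pool = ["ACGT", "ACAT", "ATCT", "AGTT", "AGGT", "ATGT"]
    print(f"entropy     = {pool_entropy(pool):.2f}")
    print(f"information = {stored_information(pool):.2f}")

For this toy pool the entropy comes out to roughly 1.7 (out of a maximum of 4) and the stored information to roughly 2.3, reflecting the two sites that selection has fixed; the estimate is only meaningful relative to the environment the pool adapted to, which is the reference system the second quoted paragraph insists on.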

 


 