
If you use a Kolmogorov measure of complexity, the complexity of a function and/or its output is defined as the length of the shortest description of it in some universal language. So a function cannot, by definition, generate data that is more complex than the function itself.

Pi can be described by a relatively small formula/function, yet it generates arbitrarily long sequences of seemingly random digits. The complexity of all of those infinitely many digits is still just that of the simplest formula that generates Pi.
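
To make that concrete (my own sketch, not the parent's; Python, using Gibbons' unbounded spigot algorithm): a program of a few hundred bytes will stream as many digits of Pi as you care to ask for.

    # Gibbons' unbounded spigot: streams the decimal digits of Pi forever.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < n*t:
                yield n
                # t, k, l are unchanged on this branch
                q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
            else:
                q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                    (q*(7*k + 2) + r*l) // (t*l), l + 2)
    # First 20 digits as a quick check:
    gen = pi_digits()
    print(''.join(str(next(gen)) for _ in range(20)))  # 31415926535897932384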



Poor phrasing on my part.

Pi is a great example.

The digits of Pi cannot be compressed by traditional means down to anywhere near the number of bytes that the algorithm generating them fits in.
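
A rough way to see that (my sketch; it assumes the third-party mpmath package purely as a convenient source of digits):

    import zlib
    from mpmath import mp   # third-party; any source of many Pi digits would do

    mp.dps = 100_000                   # ~100k decimal digits of Pi
    digits = str(mp.pi).replace('.', '').encode()
    packed = zlib.compress(digits, 9)
    print(len(digits), len(packed))
    # DEFLATE only gets the digits down to roughly the entropy of a random
    # decimal stream (~3.3 bits per digit, about 42% of the ASCII size).
    # It never finds the real structure: the tiny program that generates
    # Pi exactly.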

The point is that a function can generate a dataset that is complex, with no redundant information in the data: a dataset that cannot be reduced to a smaller one.

The idea that a complex function can only generate a simple dataset with 'high redundancy' is obviously nonsense.

My point is that the OP suggests the DNA that is the functional generator for the brain generates highly redundant physical structures because there is only a limited amount of information in that generator (the DNA).

While the DNA may generate redundant or repeated structures, it does so (if it does) because of the function, not because of the information density of the DNA.

As your example of Pi shows, a generator with trivially little information can produce complex datasets without repetition.


I don't think your argument about "traditional means of compression" holds.

The existence of redundant data can be proven by the ability to store the same information with less data.

Pi's information content is in fact no larger than the small function that generates its digits -- even if traditional compressors fail to exploit that.
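
Put another way (a hypothetical codec, sketched with mpmath standing in for a Pi generator): the "compressed" form of the first N digits is just N itself plus a fixed-size decoder.

    from mpmath import mp   # third-party, standing in for any Pi generator

    def decompress_pi_prefix(n: int) -> str:
        """Recover n significant digits of Pi from nothing but n."""
        mp.dps = n + 10                 # a little headroom for rounding
        return str(mp.pi)[:n + 1]       # n digits plus the "." character

    print(decompress_pi_prefix(10))     # 3.141592653
    # Total description length: a constant-size decoder plus roughly
    # log2(N) bits for N, however large N grows -- which is the bound the
    # Kolmogorov argument upthread is pointing at.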



