The surprise theory of everything

Forget quantum physics, forget relativity. Inklings of an ultimate theory might emerge from an unexpected place

AS REVOLUTIONS go, its origins were haphazard. It was, according to the ringleader Max Planck, an "act of desperation". In 1900, he proposed the idea that energy comes in discrete chunks, or quanta, simply because the smooth delineations of classical physics could not explain the spectrum of energy re-radiated by an absorbing body.

Yet rarely was a revolution so absolute. Within a decade or so, the cast-iron laws that had underpinned physics since Newton's day were swept away. Classical certainty ceded its stewardship of reality to the probabilistic rule of quantum mechanics, even as the parallel revolution of Einstein's relativity displaced our cherished, absolute notions of space and time. This was complete regime change.

Except for one thing. A single relic of the old order remained, one that neither Planck nor Einstein nor any of their contemporaries had the will or means to remove. The British astrophysicist Arthur Eddington summed up the situation in 1928. "If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation," he wrote.

In this essay, I will explore why, since their origins in the early 19th century, the laws of thermodynamics have proved so formidably robust. The journey traces the deep connections that were discovered in the 20th century between thermodynamics and information theory - connections that reveal intimate links between thermodynamics and not only quantum theory but also, more speculatively, relativity. Ultimately, I will argue, those links show us how thermodynamics in the 21st century can guide us towards a theory that will supersede them both.

In its origins, thermodynamics is a theory about heat: how it flows and what it can be made to do. The French engineer Sadi Carnot formulated the second law in 1824 to characterise the mundane fact that the steam engines then powering the industrial revolution could never be perfectly efficient. Some of the heat you pumped into them always flowed into the cooler environment, rather than staying in the engine to do useful work. That is an expression of a more general rule: unless you do something to stop it, heat will naturally flow from hotter places to cooler places to even up any temperature differences it finds. The same principle explains why keeping the refrigerator in your kitchen cold means pumping energy into it; only that will keep warmth from the surroundings at bay.
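
Carnot's limit is simple enough to compute. Here is a minimal sketch in Python; the boiler and exhaust temperatures are illustrative assumptions, not figures from any real engine.

```python
# Carnot's result: no heat engine running between a hot and a cold reservoir
# can convert more than a fraction 1 - T_cold/T_hot of its input heat to work.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible into useful work."""
    if t_cold_k >= t_hot_k:
        raise ValueError("the hot reservoir must be hotter than the cold one")
    return 1.0 - t_cold_k / t_hot_k

# An early steam engine: boiler at roughly 450 K, exhaust at roughly 300 K.
print(f"Carnot limit: {carnot_efficiency(450.0, 300.0):.0%}")  # about 33%
```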

A few decades after Carnot, the German physicist Rudolf Clausius explained such phenomena in terms of a quantity characterising disorder that he called entropy. In this picture, the universe works on the back of processes that increase entropy - for example dissipating heat from places where it is concentrated, and therefore more ordered, to cooler areas, where it is not.

That predicts a grim fate for the universe itself. Once all heat is maximally dissipated, no useful process can happen in it any more: it dies a "heat death". A perplexing question is raised at the other end of cosmic history, too. If nature always favours states of high entropy, how and why did the universe start in a state that seems to have been of comparatively low entropy? At present we have no answer, and later I will mention an intriguing alternative view.

Perhaps because of such undesirable consequences, the legitimacy of the second law was for a long time questioned. The charge was formulated with the most striking clarity by the British physicist James Clerk Maxwell in 1867. He was satisfied that inanimate matter presented no difficulty for the second law. In an isolated system, heat always passes from the hotter to the cooler, and a neat clump of dye molecules readily dissolves in water and disperses randomly, never the other way round. Disorder as embodied by entropy does always increase.

Maxwell's problem was with life. Living things have "intentionality": they deliberately do things to other things to make life easier for themselves. Conceivably, they might try to reduce the entropy of their surroundings and thereby violate the second law.

Information is power

Such a possibility is highly disturbing to physicists. Either something is a universal law or it is merely a cover for something deeper. Yet it was only in the late 1970s that Maxwell's entropy-fiddling "demon" was laid to rest. Its slayer was the US physicist Charles Bennett, who built on work by his colleague at IBM, Rolf Landauer, using the theory of information developed a few decades earlier by Claude Shannon. An intelligent being can certainly rearrange things to lower the entropy of its environment. But to do this, it must first fill up its memory, gaining information as to how things are arranged in the first place.

This acquired information must be encoded somewhere, presumably in the demon's memory. When this memory is finally full, or the being dies or otherwise expires, it must be reset. Dumping all this stored, ordered information back into the environment increases entropy - and this entropy increase, Bennett showed, will ultimately always be at least as large as the entropy reduction the demon originally achieved. Thus the status of the second law was assured, albeit anchored in a mantra of Landauer's that would have been unintelligible to the 19th-century progenitors of thermodynamics: that "information is physical".
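
Landauer's bound can be put into numbers: erasing one bit at temperature T must dissipate at least kT ln 2 of heat. The following is a minimal Python sketch; the room-temperature figure is just an illustration.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def landauer_limit_joules(temperature_k: float, bits: float = 1.0) -> float:
    """Minimum heat dissipated into the environment when erasing information."""
    return bits * K_BOLTZMANN * temperature_k * math.log(2)

# Resetting a single bit of the demon's memory at room temperature (~300 K):
print(landauer_limit_joules(300.0))  # ~2.9e-21 joules - tiny, but unavoidable
```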

But how does this explain why thermodynamics survived the quantum revolution? Classical objects behave very differently to quantum ones, so the same is presumably true of classical and quantum information. After all, quantum computers are notoriously more powerful than classical ones (or would be if realised on a large scale).

The reason is subtle, and it lies in a connection between entropy and probability contained in perhaps the most profound and beautiful formula in all of science. Engraved on the tomb of the Austrian physicist Ludwig Boltzmann in Vienna's central cemetery, it reads simply S = k log W. Here S is entropy - the macroscopic, measurable entropy of a gas, for example - while k is a constant of nature that today bears Boltzmann's name. Log W is the mathematical logarithm of a microscopic, probabilistic quantity W - in a gas, this would be the number of ways the positions and velocities of its many individual atoms can be arranged.

On a philosophical level, Boltzmann's formula embodies the spirit of reductionism: the idea that we can, at least in principle, reduce our outward knowledge of a system's activities to basic, microscopic physical laws. On a practical, physical level, it tells us that all we need to understand disorder and its increase is probabilities. Tot up the number of configurations the atoms of a system can be in and work out their probabilities, and what emerges is nothing other than the entropy that determines its thermodynamical behaviour. The equation asks no further questions about the nature of the underlying laws; we need not care if the dynamical processes that create the probabilities are classical or quantum in origin.
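
To make the counting concrete, here is a toy illustration in Python. The 100-molecule "gas" and the choice of macrostate are invented for the example; the point is only that S = k log W turns a count of arrangements into an entropy.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def boltzmann_entropy(n_microstates: int) -> float:
    """S = k log W, where W counts the microstates behind one macrostate."""
    return K_BOLTZMANN * math.log(n_microstates)

# Toy gas of 100 molecules; the macrostate is how many sit in the left half
# of the box, and W is the number of ways of choosing which ones they are.
for n_left in (0, 25, 50):
    w = math.comb(100, n_left)
    print(f"{n_left} on the left: W = {w:.3e}, S = {boltzmann_entropy(w):.3e} J/K")
# Entropy is largest for the 50:50 split - the most probable, most disordered
# macrostate - and zero when every molecule is crammed onto one side.
```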

There is an important additional point to be made here. Probabilities are fundamentally different things in classical and quantum physics. In classical physics they are "subjective" quantities that constantly change as our state of knowledge changes. The probability that a coin toss will result in heads or tails, for instance, jumps from ½ to 1 when we observe the outcome. If there were a being who knew all the positions and momenta of all the particles in the universe - known as a "Laplace demon", after the French mathematician Pierre-Simon Laplace, who first countenanced the possibility - it would be able to determine the course of all subsequent events in a classical universe, and would have no need for probabilities to describe them.
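
Shannon's information theory makes this subjectivity quantifiable. In the sketch below (a toy example of my own, not anything from the article), the entropy of our description of the coin drops from one bit to zero the moment we look, even though nothing about the coin itself has changed.

```python
import math

def shannon_entropy_bits(probabilities) -> float:
    """H = -sum(p * log2(p)): the information missing from our description."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy_bits([0.5, 0.5]))  # 1.0 bit before we look at the coin
print(shannon_entropy_bits([1.0, 0.0]))  # 0.0 bits once the outcome is seen
```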

In quantum physics, however, probabilities arise from a genuine uncertainty about how the world works. States of physical systems in quantum theory are represented in what the quantum pioneer Erwin Schrödinger called catalogues of information, but they are catalogues in which adding information on one page blurs or scrubs it out on another. Knowing the position of a particle more precisely means knowing less well how it is moving, for example. Quantum probabilities are "objective", in the sense that they cannot be entirely removed by gaining more information.
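
Schrödinger's "catalogue" can be made concrete with the simplest quantum system. In this sketch (my illustration: a qubit, with two incompatible yes/no questions standing in for position and momentum), filling in the answer to one question leaves the other maximally uncertain.

```python
import numpy as np

# A qubit prepared "up" along Z: that page of the catalogue is filled in.
psi = np.array([1.0, 0.0])

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2.0),
           np.array([1.0, -1.0]) / np.sqrt(2.0)]

def outcome_probs(state, basis):
    """Born rule: the probability of each outcome is |<b|state>|^2."""
    return [float(abs(np.vdot(b, state)) ** 2) for b in basis]

print(outcome_probs(psi, z_basis))  # [1.0, 0.0] - the Z question is settled
print(outcome_probs(psi, x_basis))  # ~[0.5, 0.5] - the X page is blurred out
```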

That casts thermodynamics, as originally and classically formulated, in an intriguing light. There, the second law is little more than impotence written down in the form of an equation. It has no deep physical origin itself, but is an empirical bolt-on to express the otherwise unaccountable fact that we cannot know, predict or bring about everything that might happen, as classical dynamical laws suggest we can. But this changes as soon as you bring quantum physics into the picture, with its attendant notion that uncertainty is seemingly hardwired into the fabric of reality. Rooted in probabilities, entropy and thermodynamics acquire a new, more fundamental physical anchor.

It is worth pointing out, too, that this deep-rooted connection seems to be much more general. Recently, together with my colleagues Markus Müller of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, and Oscar Dahlsten at the Centre for Quantum Technologies in Singapore, I have looked at what happens to thermodynamical relations in a generalised class of probabilistic theories that embrace quantum theory and much more besides. There too, the crucial relationship between information and disorder, as quantified by entropy, survives (arxiv.org/abs/1107.6029).

One theory to rule them all

As for gravity - the only one of nature's four fundamental forces not covered by quantum theory - a more speculative body of research suggests it might be little more than entropy in disguise (see "Falling into disorder"). If so, that would also bring Einstein's general theory of relativity, with which we currently describe gravity, firmly within the purview of thermodynamics.

Take all this together, and we begin to have a hint of what makes thermodynamics so successful. The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe - among other things, to construct theories to further our understanding of it. Thermodynamics is, in Einstein's term, a "meta-theory": one constructed from principles over and above the structure of any dynamical laws we devise to describe reality's workings. In that sense we can argue that it is more fundamental than either quantum physics or general relativity.

If we can accept this and, like Eddington and his ilk, put all our trust in the laws of thermodynamics, I believe it may even afford us a glimpse beyond the current physical order. It seems unlikely that quantum physics and relativity represent the last revolutions in physics. New evidence could at any time foment their overthrow. Thermodynamics might help us discern what any usurping theory would look like.

For example, earlier this year, two of my colleagues in Singapore, Esther Hänggi and Stephanie Wehner, showed that a violation of the quantum uncertainty principle - the idea that you can never fully get rid of probabilities in a quantum context - would imply a violation of the second law of thermodynamics. Beating the uncertainty limit means extracting extra information about the system, which requires the system to do more work than thermodynamics allows it to do in the relevant state of disorder. So if thermodynamics is any guide, whatever any post-quantum world might look like, we are stuck with a degree of uncertainty (arxiv.org/abs/1205.6894).

My colleague at the University of Oxford, the physicist David Deutsch, thinks we should take things much further. Not only should any future physics conform to thermodynamics, but the whole of physics should be constructed in its image. The idea is to generalise the logic of the second law as it was stringently formulated by the mathematician Constantin Carathéodory in 1909: that in the vicinity of any state of a physical system, there are other states that cannot physically be reached if we forbid any exchange of heat with the environment.

James Joule's 19th-century experiments with beer can be used to illustrate this idea. The English brewer, whose name lives on in the standard unit of energy, sealed beer in a thermally isolated tub containing a paddle wheel that was connected to weights falling under gravity outside. The wheel's rotation warmed the beer, increasing the disorder of its molecules and therefore its entropy. But hard as we might try, we simply cannot use Joule's set-up to decrease the beer's temperature, even by a fraction of a millikelvin. Cooler beer is, in this instance, a state regrettably beyond the reach of physics.
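
The arithmetic behind Joule's set-up is worth seeing. In this sketch the weight, drop height and quantity of beer are invented for illustration, and beer is treated as water.

```python
G = 9.81          # m/s^2, acceleration due to gravity
C_WATER = 4186.0  # J/(kg K), specific heat of water (beer is mostly water)

def temperature_rise_kelvin(weight_kg: float, drop_m: float,
                            liquid_kg: float, c: float = C_WATER) -> float:
    """All of the falling weight's potential energy ends up as heat."""
    return weight_kg * G * drop_m / (liquid_kg * c)

# A 10 kg weight falling 2 m while stirring 5 kg of beer:
print(f"{temperature_rise_kelvin(10.0, 2.0, 5.0) * 1000:.1f} mK")  # ~9.4 mK
# Whatever numbers you feed in, the answer is positive: the set-up can only
# ever warm the beer. Cooler beer is one of Caratheodory's unreachable states.
```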

God, the thermodynamicist

The question is whether we can express the whole of physics simply by enumerating possible and impossible processes in a given situation. This is very different from how physics is usually phrased, in both the classical and quantum regimes, in terms of states of systems and equations that describe how those states change in time. The blind alleys down which the standard approach can lead are easiest to understand in classical physics, where the dynamical equations we derive allow a whole host of processes that patently do not occur - the ones we have to conjure up the laws of thermodynamics expressly to forbid, such as dye molecules reclumping spontaneously in water.

By reversing the logic, our observations of the natural world can again take the lead in deriving our theories. We observe the prohibitions that nature puts in place, be it on decreasing entropy, getting energy from nothing, travelling faster than light or whatever. The ultimately "correct" theory of physics - the logically tightest - is the one from which the smallest deviation gives us something that breaks those taboos.

There are other advantages in recasting physics in such terms. Time is a perennially problematic concept in physical theories. In quantum theory, for example, it enters as an extraneous parameter of unclear origin that cannot itself be quantised. In thermodynamics, meanwhile, the passage of time is entropy increase by any other name. A process such as dissolved dye molecules forming themselves into a clump offends our sensibilities because it appears to amount to running time backwards as much as anything else, although the real objection is that it decreases entropy.

Apply this logic more generally, and time ceases to be an independent, fundamental entity; instead, its flow is determined purely in terms of allowed and disallowed processes. With it go problems such as the one I alluded to earlier, of why the universe started in a state of low entropy. If states and their dynamical evolution over time cease to be the question, then anything that does not break any transformational rules becomes a valid answer.

Such an approach would probably please Einstein, who once said: "What really interests me is whether God had any choice in the creation of the world." A thermodynamically inspired formulation of physics might not answer that question directly, but leaves God with no choice but to be a thermodynamicist. That would be a singular accolade for those 19th-century masters of steam: that they stumbled upon the essence of the universe, entirely by accident. The triumph of thermodynamics would then be a revolution by stealth, 200 years in the making.

Falling into disorder

While thermodynamics seems to float above the precise content of the physical world it describes, whether classical, quantum or post-quantum (see main story), its connection with the other pillar of modern physics, general relativity, might be more direct. General relativity describes the force of gravity. In 1995, Ted Jacobson of the University of Maryland in College Park claimed that gravity could be a consequence of disorder as quantified by entropy.

His mathematical argument is surprisingly simple, but rests on two disputed theoretical relationships. The first was put forward in the early 1970s by Jacob Bekenstein, who was examining the fate of the information in a body gulped by a black hole. This is a naked challenge to the universal validity of thermodynamics: any increase in disorder in the cosmos could be reversed by throwing the affected system into a black hole.

Bekenstein showed that this would be countered if the black hole simply grew in area in proportion to the entropy of the body it was swallowing. Then each tiny part of its surface would correspond to one bit of information that still counts in the universe's ledger. This relationship has since been elevated to the status of a principle, the holographic principle, that is supported by a host of other theoretical ideas – but not as yet by any experiment.
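
The area-entropy relationship Bekenstein proposed is the standard Bekenstein-Hawking formula, S = kc³A/4Għ, with A the horizon area. A quick sketch (the solar-mass value is a textbook figure) shows just how much entropy a black hole carries under this assignment.

```python
import math

K_B = 1.380649e-23      # Boltzmann's constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8        # speed of light, m/s

def black_hole_entropy(mass_kg: float) -> float:
    """Bekenstein-Hawking entropy, S = k c^3 A / (4 G hbar), in J/K."""
    r_s = 2.0 * G * mass_kg / C**2   # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2    # horizon area
    return K_B * C**3 * area / (4.0 * G * HBAR)

SOLAR_MASS = 1.989e30  # kg
print(f"{black_hole_entropy(SOLAR_MASS):.1e} J/K")  # ~1.5e54 J/K
```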

The second relationship is a suggestion by Paul Davies and William Unruh, also first made in the 1970s, that an accelerating body radiates tiny amounts of heat. A thermometer waved around in a perfect vacuum, where there are no moving atoms that can provide us with a normal conception of temperature, will record a non-zero temperature. This is an attractive yet counter-intuitive idea, but accelerations far beyond what can presently be achieved are required to generate enough radiation to test it experimentally.
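
The Davies-Unruh temperature is T = ħa/2πck, with a the body's acceleration. A quick calculation (mine, for illustration) shows why the effect has stayed out of experimental reach.

```python
import math

K_B = 1.380649e-23      # J/K
HBAR = 1.054571817e-34  # J s
C = 2.99792458e8        # m/s

def unruh_temperature_k(acceleration_m_s2: float) -> float:
    """Davies-Unruh temperature, T = hbar * a / (2 pi c k), recorded by a
    detector with the given proper acceleration."""
    return HBAR * acceleration_m_s2 / (2.0 * math.pi * C * K_B)

print(unruh_temperature_k(9.81))  # everyday 1 g: ~4e-20 K, hopelessly small
print(unruh_temperature_k(1e20))  # even ~0.4 K needs a colossal 1e20 m/s^2
```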

Put these two speculative relations together with standard, undisputed connections between entropy, temperature, kinetic energy and velocity, and it is possible to construct a quantity that mathematically looks like gravity, but is defined in terms of entropy. Others have since been tempted down the same route, most recently Erik Verlinde of the University of Amsterdam in the Netherlands.

Such theories, which are by no means universally accepted, suggest that when bodies fall together it is not the effect of a separate fundamental force called gravity, but because the heating that results best fulfils the thermodynamic diktat that entropy in the universe must always increase.

Vlatko Vedral is a professor of quantum information theory at the University of Oxford and the Centre for Quantum Technologies, Singapore. He is the author of Decoding Reality (Oxford University Press, 2010)

From issue 2886 of New Scientist magazine, pages 32-37.

Comments

Begging The Question

Thu Oct 11 13:38:32 BST 2012 by Eric Kvaalen

"...classical physics, where the dynamical equations we derive allow a whole host of processes that patently do not occur - the ones we have to conjure up the laws of thermodynamics expressly to forbid, such as dye molecules reclumping spontaneously in water."

They only apparently do not occur! If you give it enough time, the dye will "diffuse" back to where it started.

"In thermodynamics, meanwhile, the passage of time is entropy increase by any other name."

Not at all. Time can go on with no increase in entropy, as happens in any static system. And the entropy will eventually go back down once in a while -- at least if we talk about it in terms of classical physics. (In quantum physics, we can't really say what specific process will occur.)

Saying that entropy always increases because "that's the highest principle" has a false premiss and begs the question as well.

Begging The Question

Mon Oct 15 09:05:26 BST 2012 by markjb

On Entropy I see no mystery, no reason even to call it a law. It's just a fact that, if something is made of parts, then for it to dissipate or come apart, each part can go in any of infinite directions to infinite destinations, whereas for something to come together or reassemble, all the parts have to go to a particular destination.

Coming apart is thus ever more likely than coming together.

Has science learnt nothing from Humpty Dumpty!?

Begging The Question

Tue Oct 16 13:29:57 BST 2012 by Oji

"If you give it enough time, the dye will "diffuse" back to where it started."

Was it Poincare who proved that? Or someone else ...

Begging The Question

Wed Oct 17 09:01:15 BST 2012 by markjb

It's wrong of course, because the amount of time it takes will be far longer than it takes any theoretical/real external conditions to disrupt, destroy or entropically erode the tank of dye.

Begging The Question

Sat Oct 20 05:38:43 BST 2012 by Robert

Enough time, lol. The dye will decompose, the water evaporate, the sun will nova and the universe will devolve to black holes and not much else before enough time is achieved.

Begging The Question

Sat Oct 20 11:18:43 BST 2012 by markjb

Exactly... thus disproving Poincare.

Begging The Question

Sat Oct 20 05:42:21 BST 2012 by Robert

In thermodynamics there is no such thing as a static system, except locally, and only partially at that.

Begging The Question

Sat Oct 20 07:23:29 BST 2012 by Eric Kvaalen

By a static system I mean a system which has reached a state of equilibrium. Macroscopically it looks as though nothing is happening. It's at a minimum of Gibbs free energy.

This is not a local thing -- the whole system is apparently static. In fact, it's at the local level that one sees that it's not really static -- for instance, with a microscope one can see Brownian motion.

Anthropomorphism

Fri Oct 12 03:48:42 BST 2012 by Damir Ibrisimovic
http://home.pacific.net.au/~damir-dsl/

Dear Vlatko Vedral,

I might be a simpleton, but I do like to keep things simple and straight to the point. Unfortunately, you do seem to like to complicate beyond any reason --- at least regarding the use of the concept of information...

Admittedly, the confusion about what is information has a long history and it was not all of your doing --- but you are trying to add "scientific" weight to it. I will, therefore, suggest a simple check of math...

Shannon was a very practical person. During World War II he was working on encrypting techniques and methods of transmitting messages that will enable reconstruction of signals lost in the noise. In short, he was working on building into the message a sufficient redundancy --- to counter the message erosion caused by the forces of information entropy, i.e. noise...

And all of his efforts had a very practical purpose --- to enable the human receiver to get the message without any loss in signal that could alter its meaning. It is true that Shannon himself did not really differentiate between non-informative and informative messages, but the human receiver was always, at least implicitly, present in his work...

Wiener changed all of that --- and the signs in Shannon's equations. (That is the math I would like you to check.) In short, Wiener eliminated the receiver and started to calculate the "value of information" at its source...

Now, this simply beats common sense. If I were to send a message to myself --- it would tell me nothing really new; i.e. it will not be very informative. The same message could also be little informative to some and very informative to others. In other words, the informative value of a message can only be assessed in relation to receiver's knowledge...

While I can infer that other living forms also may receive messages that alter their knowledge --- I am quite reluctant to extend this capacity to inanimate matter. While archaeological layers may resemble a memory --- it is very hard to even imagine that inanimate matter possesses anything like knowledge (system of beliefs)...

Now, I do understand physicists' fondness for the picture of the world without us. Such picture gives an illusion (or delusion) that physicists are talking about "real reality". And yet, I do not know of any physicist who is not human...

Are you really sure that your mental acrobatics really reflect what is out there? Are you really sure that you are not anthropomorphising particles and the entire universe? We do seem to be falling in this trap too often lately...

Have a nice day,

Damir Ibrisimovic

Anthropomorphism

Sun Oct 14 21:33:57 BST 2012 by Damir Ibrisimovic
http://home.pacific.net.au/~damir-dsl/

Dear Vlatko Vedral,

I was hoping that you would descend from your ivory castle in the clouds --- and explain to us the deep, real reality of the universe of information without us. But then, this would probably be a waste of effort. Mere mortals are notorious for not comprehending divine truths...

We are likely to continue to live in two worlds --- entirely divorced from each other. And, frankly speaking, I do prefer illusions of us, mere mortals. I do prefer the illusion that I do have free will, for example...

There is a snag, though. The science should, in theory, be one --- with physics as one among other disciplines. And all these disciplines should, in theory, speak in one voice...

Have a nice day,

Damir Ibrisimovic

Anthropomorphism

Tue Oct 16 08:27:28 BST 2012 by Damir Ibrisimovic
http://home.pacific.net.au/~damir-dsl/

Dear James,

"...matter led me to think down the path a living being taking entropy out of a system (hence borrowing it), and upon death returning the knowledge to the system and balancing out the levels of entropy."

I would not really word it this way --- but it might be a step in the right direction. To better answer your question, we will need to visit disciplines like biology, psychology and anthropology --- disciplines physicists often ignore while trying to impress us with big words like consciousness, for example. (Needless to say that there is usually very little, if any, substance behind their big words...)

But, before we visit these, to physicists esoteric, disciplines --- we need to address the physical part of your question. In many ways, physicists still maintain the grand picture of the universe of the 17th century --- the picture of the universe as a clock, devised and put in motion by God. They did remove God from the picture, but not something out of nothing. Under the pressure of anomalies in their observations, they have also abandoned the purely deterministic picture of the universe --- publicly rather than privately. Now, their last hopes in the constancy of constants are also evaporating in the heat of the recent discovery that the value of the fine structure constant (α) increases with distance in one direction --- and decreases in the opposite...

Even the heat death of our universe --- looked nicer on paper than what we actually observed. Even if we imagine ideally evenly distributed (gaseous) molecules in space --- we cannot discount a possibility of gravitational pulls bringing a few molecules closer and pulling together other, more distant, molecules. In time, gravity may pull enough matter together to start thermonuclear processes. When we add to this picture black holes and changing α --- the concept of entropy does not seem to be well understood...

Now, let's have a brief look into the disciplines that deal with the phenomenon we call life. The phenomenon is notoriously difficult to define --- however, the huge volumes of empirical evidence do allow for some generalisations. The first one is that every living form must adapt to its environment or perish. In this, past experiences (memory) may help the living form to survive in the present --- but there is always some room for improvement. So, we can say that the primary function of our memories is rather to support us in the present and future --- than to accurately record our past. This idea is not really new --- as New Scientist's contributors and editors would like us to believe:

(long URL - click here)

Consequently, each memory we invoke in our present is adjusted to the present with almost complete disregard for accuracy. Memories that are not regularly refreshed by the present are, therefore, in danger of becoming obsolete and forgotten. In this, we may lose some "nuggets", but...

Our elusive cultures also sustain our experiences/memories long after our passing. So, even when our bodies turn to dust --- our memories continue to live on. Long ago we had only short-lived messages engraved as sounds. But we did have oral traditions, a few of which survived till the present. The classic example is the Vedas. Even if a Brahmin does not really understand what the verses mean --- he will utter them with hard-to-believe accuracy. Later, we started to encode our messages (memories) into more durable mediums, like papyrus or stone. However, the longevity of our cultures and civilisations stifled our memories on account of adaptability. This is a common denominator of all civilisations that rose and fell...

Now, in the context of the above, I can give you a short answer: Our memories can hold entropy at bay --- as long they are relevant to our present and future.

This is also of survival importance to physicists and their culture...

Have a nice day,

Damir Ibrisimovic

Anthropomorphism

Mon Oct 15 17:28:05 BST 2012 by James Buckler

Your comment and subsequently its subject matter led me to think down the path of a living being taking entropy out of a system (hence borrowing it), and upon death returning the knowledge to the system and balancing out the levels of entropy.

What made me start to speculate what I'm thinking was your remark about sending oneself a message and how that information would actually be of little help... But... If the message was a recorded message containing a vast amount of information pertaining to a relatively complicated subject (one in which you were afraid certain nuggets of knowledge would be forgotten, due to complexity) and years later you are stuck on a problem and go back to listen to the message... And there is information there you had forgotten, and henceforth is re-kindled in your memory... My question is though... Would the new information being re-kindled from the recorded message be the original entropy borrowed from the system, or would it be considered new? If it is considered new then that would mean entropy was borrowed from the system in redundancy, and therefore when the ultimate knowledge was returned the system would have an excess of entropy...?

Anthropomorphism

Tue Oct 16 10:05:41 BST 2012 by Duster

"If I were to send a message to myself --- it would tell me nothing really new; i.e. it will not be very informative. The same message could also be little informative to some and very informative to others. In other words, the informative value of a message can only be assessed in relation to receiver's knowledge..."

I think you have led yourself astray here. You treat memory and thus information stored there as an ideal. However, very few living people have absolutely perfect memories and very few have no capacity for memory at all. So, if you sent a message to yourself and read it immediately afterward, certainly any difference between the contents and your memory of what was written should be quite small. You might be surprised at a misspelling or a peculiar fault in punctuation.

However, given a year or more, and a normal memory, the accuracy of any memory you had of the contents is very likely to have drifted well apart from the actual substance of what you wrote. This would be particularly true for more substantive memories. Your mind is a system, and a thermodynamic one at that. Thus there is systemic noise, and the cumulative effect over time will be a degradation in the accuracy of the memory.

Anthropomorphism

Tue Oct 16 17:09:44 BST 2012 by Eric Kvaalen

Maybe. Have you heard about life review? http://en.wikipedia.org/wiki/Life_review

Anthropomorphism

Wed Oct 17 00:54:00 BST 2012 by Damir Ibrisimovic

Dear Duster,

"I think you have lead yourself astray here. You treat memory and thus information stored there as an ideal."

The comment you are citing from --- does not address the case of delayed messages to oneself. James Buckler raised this scenario and received a reply before you formulated your criticism. However, he did not assume and jump to a conclusion as you did...

In short, I did not speak about "perfect" memories in the original comment and will assume that you did not notice my reply to James. Under this assumption, I'll repeat...

No matter how amazingly accurate a savant's memories might be --- they are still not perfect. We, who are not savants, have much less accurate memories --- even when they are still fresh. The reason is quite simple. No matter how many billions of neurons we hide under our skulls --- resources are still limited. And that means remembering the details we assess (consciously or nonconsciously) as important --- and skipping over the rest...

When we remember, we always invoke a memory from our past in the context of our present. And this means that we are likely to add new details to our memory from the past or alter some of the old. With this, each time we invoke our old memories they are altered. Consequently, after years of altering our memories --- an accurate record made in our past may surprise us...

However, there are not only accurate records, like pictures, from our past. There are also others who were likely to remember some of the details we skipped over. So, we do have means to verify accuracy of our old memories --- up to a point...

As you can see, forgetting is an integral part of forming new memories --- and recalling old ones. Memories that are not recalled often enough --- lose their importance and are easily dropped/forgotten. But not because of our mind being a thermodynamic system, as you believe. We simply need room for new memories...

"Your mind is a system and a thermodynamic one at that. Thus there is systemic noise and the cumulative effect over time will be to see a degradation in the accuracy of the memory."

Biological processes are well guarded by redundancies --- from DNA in single-cell organisms to complete repetitions of DNA in multicellular organisms. To this we may add numerous structural redundancies between groups of specialised cells and multicellular organism as a whole. We can talk about degradation only when an organism perishes...

You have made a statement of belief --- as many physicists do --- completely disregarding empirical evidence in biology, for example...

Have a nice day,

Damir Ibrisimovic

Anthropomorphism

Sat Oct 20 21:52:24 BST 2012 by Robert

Every time you set your alarm clock you are sending yourself a message, and hopefully it will have significance when received. That also brings up the problem even for very accurate memories - the ability to recall the information when needed.

Econophysics

Sat Oct 13 14:50:52 BST 2012 by Dave Marsay
http://djmarsay.wordpress.com

Quants were blamed for the 2007/8 difficulties, due to their misapplication of some physics-based methods. Econophysics is trying to correct things using more sophisticated notions. But when Vedral says "The question is whether we can express the whole of physics simply by enumerating possible and impossible processes in a given situation. This is very different from how physics is usually phrased ... " I am reminded not only of scenario planning but of Keynes' approach to economics. This may not be altogether surprising, since he worked with Whitehead, Russell and others who helped to underpin modern physics. But it would be ironic if, economists having adopted the methods of old physicists, physics became more like old economics.

Having said that, I still find the notion of 'information' a challenge.
