Tuesday 24 May 2016

A Brief History of Time, Stephen Hawking

The Uncertainty Principle
Laplace argued that the universe is deterministic, i.e. if we know its complete state at one moment, we can in principle predict its state at any other. However, Heisenberg's uncertainty principle showed that the more accurately we measure a particle's position, the less accurately we can know its velocity, and vice versa. As a result, the exact state of the universe cannot be measured at any given point in time.
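
For reference, the relation can be stated precisely (standard textbook notation, not a quote from the book): the uncertainties in position and momentum satisfy

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

where \Delta x is the uncertainty in position, \Delta p the uncertainty in momentum, and \hbar the reduced Planck constant. Squeezing \Delta x towards zero forces \Delta p, and hence the uncertainty in velocity, to grow.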

Planck's quantum hypothesis and Heisenberg's uncertainty principle led to the theory of quantum mechanics, where the position of an object is described in terms of probabilities, i.e. an object would be at position A at time B with some probability C.
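
Two standard formulas make this concrete (my notation, not the book's). Planck's hypothesis says that light of frequency \nu is exchanged in quanta of energy

    E = h \nu

and in quantum mechanics the probability of finding a particle at position x at time t is the squared magnitude of its wave function:

    P(x, t) = |\psi(x, t)|^2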

The dual nature of light is an implication of quantum mechanics. The quantum hypothesis said that light energy, which was thought to propagate purely as waves, is emitted and absorbed in discrete packets called quanta. The uncertainty principle implies that a particle can behave as if spread over multiple positions until a measurement is made. Interference of waves as well as of particles (the double-slit experiment) has been observed.
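
As a rough illustration, the fringe pattern of an idealised two-slit experiment follows the standard intensity formula I(\theta) \propto \cos^2(\pi d \sin\theta / \lambda). A minimal Python sketch (the wavelength and slit separation are assumed values, and the single-slit diffraction envelope is ignored):

    import numpy as np

    # Idealised two-slit interference: relative intensity on a distant
    # screen as a function of viewing angle theta.
    wavelength = 500e-9   # assumed: green light, 500 nm
    d = 2e-6              # assumed: slit separation, 2 micrometres

    theta = np.linspace(-0.3, 0.3, 7)              # angles in radians
    phase = np.pi * d * np.sin(theta) / wavelength
    intensity = np.cos(phase) ** 2                 # 1.0 = bright fringe, 0.0 = dark

    for t, i in zip(theta, intensity):
        print(f"theta = {t:+.2f} rad -> relative intensity = {i:.3f}")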

The interference of particles helped physicists understand the nature of electron orbits in an atom. Only a finite set of orbits is allowed, because the electron wave must interfere constructively with itself around the nucleus; destructive interference rules out the other orbits.
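
The underlying picture is the de Broglie standing-wave condition (standard notation): an orbit of radius r is allowed only when a whole number n of electron wavelengths fits around it,

    2 \pi r = n \lambda, \qquad \lambda = \frac{h}{m v} \;\Rightarrow\; m v r = n \hbar, \quad n = 1, 2, 3, \ldots

For non-integer n the wave cancels itself out on successive trips around the nucleus, which is the destructive interference mentioned above.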

Einstein's general theory of relativity (a classical theory) does not take quantum mechanics into account. The two theories need to be combined in order to obtain a general, unified, consistent theory.

Roger Schank Blog

In the latest article on his blog, Roger Schank argues that AI is far more than just keyword matching and that an AI program should be able to exchange thoughts, hypotheses and solutions with other programs and with humans.
 
He was commenting on the current state of AI research in the context of the massive media attention given to a Georgia Tech professor announcing, at the end of his course, that one of his TAs was actually an "AI". In fact, the "AI" he was referring to was nothing more capable than programs such as MARGIE, ELIZA and PARRY, which were written in the 1970s and performed simple keyword matching.
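
To see how shallow keyword matching is, here is a toy Python sketch in the spirit of those programs; it is not the actual code or rule set of ELIZA, PARRY or MARGIE, just an assumed minimal rule table for illustration:

    # Toy ELIZA-style keyword matcher; illustrative only, not the real program.
    RULES = [
        ("mother", "Tell me more about your family."),
        ("sad", "Why do you feel sad?"),
        ("always", "Can you think of a specific example?"),
    ]

    def respond(utterance):
        lowered = utterance.lower()
        for keyword, reply in RULES:
            if keyword in lowered:   # first matching keyword wins
                return reply
        return "Please go on."       # canned fallback when nothing matches

    print(respond("I always feel sad"))  # -> "Why do you feel sad?"

The program has no understanding of the sentence; it simply fires on the first keyword it finds, which is Schank's point.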

He predicts an impending "AI Winter 2.0" due to the skewed perception the media, and in turn the public, have of the real potential of AI. He says there are important questions to consider in the field of AI, such as:
  • Can we build machines that think, wonder, remember, feel and understand?
  • Is it enough to keep building machines that do none of the above but are still useful in one way or another?