Friday, December 21, 2012

Artificial Intelligence Creates A Holiday-Themed Video Game

artificial intelligence creates video game

 Artificial Intelligence
Earlier this year, researcher Michael Cook created video games using Angelina, an artificial intelligence system. Now Angelina has created a new game for the holidays, A Puzzling Present, which broadens the features Angelina works with to include advanced mechanics like anti-gravity controls that improve gameplay.
Earlier this year, an artificial intelligence system called Angelina created a lot of buzz when it created a video game.

Created by PhD student Michael Cook, Angelina ("A Novel Game-Evolving Labrat I've Named ANGELINA") was special because she was creating games from scratch with a little help from her human counterparts. By dividing the concept of a computer game into three defined “species,” or sub-tasks—maps, layouts, and rule sets—Cook and his compatriots at Imperial College London helped their system auto-generate some simple arcade games.

Now, Cook has released a game called A Puzzling Present, which is a lot more complex than Angelina's previous games. Instead of a simple interface with maps and obstacles, the latest game introduces new mechanics for the player at each level. In the new game, users help Santa escape some fiendish and fun puzzles.

For example, in the first level of A Puzzling Present, hitting x (or a touch-screen b key in the mobile version) gives the main character, Santa, an anti-gravity power that sends him to the top of the screen; hitting x again sends him back down. These mechanics, Cook says, were created artificially by Angelina for this particular game as part of a new system he developed called Mechanic Miner.

On his blog, Cook described how Mechanic Miner works: start with a level you can't normally solve and invent a new mechanism that could help solve this level (like the ability to jump very high, or bounce off walls), and then test the level for playability.
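Cook's description amounts to a generate-and-test loop. A minimal sketch of that loop might look like the following; the candidate mechanics, level representation, and solvability check here are illustrative stand-ins, not ANGELINA's actual code:

```python
# Hypothetical sketch of Mechanic Miner's generate-and-test loop:
# start with a level the player cannot normally solve, invent a
# candidate mechanic, and keep it only if it makes the level solvable.

# Candidate mechanics the system might "invent" (illustrative only).
CANDIDATE_MECHANICS = [
    {"name": "high_jump", "effect": "jump_height += 4"},
    {"name": "wall_bounce", "effect": "reverse velocity on wall contact"},
    {"name": "anti_gravity", "effect": "gravity *= -1"},  # the Santa toggle
]

def is_solvable(level, mechanic):
    """Stand-in for a playability test, e.g. a breadth-first search over
    game states using the base moves plus the candidate mechanic."""
    return mechanic["name"] in level["solvable_with"]

def mine_mechanic(level):
    """Return the first invented mechanic that makes the level playable,
    or None if no candidate helps."""
    for mechanic in CANDIDATE_MECHANICS:
        if is_solvable(level, mechanic):
            return mechanic
    return None

# A level that only the anti-gravity mechanic can solve.
level_one = {"name": "level_1", "solvable_with": {"anti_gravity"}}
print(mine_mechanic(level_one)["name"])  # anti_gravity
```

The real system searches a space of code modifications rather than a fixed list, but the keep-what-makes-the-level-playable test is the core idea.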

Gangnam Style First YouTube Video To Reach One Billion Hits

Gangnam Style First YouTube Video To Reach One Billion Hits
 Internet Life
You've seen it and done the horse dance too; now Korean singer Psy's Gangnam Style has become the first YouTube video to reach one billion views.
In an event that may have been predicted by the Mayans, Korean sensation Psy's Gangnam Style has become the first video ever on YouTube to reach the one billion view mark.

The song, which ironically pokes fun at the rich socialites of the Gangnam district of Seoul, has also had its title added to the Collins Dictionary as one of the phrases of the year.

The K-pop (Korean pop) song, and particularly its video with its "horse-riding" dance, has made 34-year-old Psy an international star. He has since performed with Madonna and inspired flashmobs: one is scheduled for New Year's Eve at the Brandenburg Gate in Berlin, where hundreds of thousands could turn up.

The South Korean singer had released five studio albums but had never had a hit in the West until Gangnam Style was released on 15 July.

Progress was slow until 28 July, when the video was shared on the social site Reddit and remarked on by British performer Robbie Williams on his personal blog. At that point it took off, and in late November it passed Justin Bieber's Baby as the most-watched video of all time. Ranked second, Baby has a mere 813 million views.

By 33rd Square

New Kind Of Quantum Magnetism Discovered

Quantum Physics
MIT researchers have now demonstrated experimentally the existence of a fundamentally new kind of magnetic behavior, adding to the two previously known states of magnetism. The experiments demonstrate ‘quantum spin liquid,’ which could have applications in new computer memory storage.
Building on earlier theoretical predictions, MIT researchers have now demonstrated experimentally the existence of a fundamentally new kind of magnetic behavior, adding to the two previously known states of magnetism.

Ferromagnetism — the simple magnetism of a bar magnet or compass needle — has been known for centuries. In a second type of magnetism, antiferromagnetism, the magnetic fields of the ions within a metal or alloy cancel each other out. In both cases, the materials become magnetic only when cooled below a certain critical temperature. The prediction and discovery of antiferromagnetism — the basis for the read heads in today's computer hard disks — won Nobel Prizes in physics for Louis Néel in 1970 and for MIT professor emeritus Clifford Shull in 1994.

The new discovery, it is thought, could lead to new computer data-storage technologies.

"We're showing that there is a third fundamental state for magnetism," says MIT professor of physics Young Lee. The experimental work showing the existence of this new state, called a quantum spin liquid (QSL), is reported this week in the journal Nature, with Lee as the senior author and Tianheng Han, who earned his PhD in physics at MIT earlier this year, as lead author.

The QSL is a solid crystal, but its magnetic state is described as liquid: Unlike the other two kinds of magnetism, the magnetic orientations of the individual particles within it fluctuate constantly, resembling the constant motion of molecules within a true liquid.

Thursday, December 20, 2012

Rolf Pfeifer And His Team Working On Crowd-Funded, Open-Source Humanoid Robot

Roboy artificial intelligence robot
Building on the work done for ECCEROBOT, Dr. Rolf Pfeifer and his colleagues at the Artificial Intelligence Laboratory in Zurich, Switzerland have introduced Roboy, a crowd-funded robot that incorporates the latest design principles of embodied artificial intelligence.
As director of the Artificial Intelligence Lab at the University of Zurich, Dr. Rolf Pfeifer has long argued that embodiment is one of the best methods for attaining artificial general intelligence (AGI).

The embodiment hypothesis is based on the idea that human intelligence is largely derived from our motor abilities; therefore, to create artificial general intelligence, a robotic body that interacts with the physical environment is crucial.

Previously, Pfeifer worked toward this end via the humanoid robot ECCEROBOT, also referred to as Cronos.

Now Pfeifer and his team of researchers have announced the ambitious goal of building a new humanoid robot, Roboy, in a record nine months.

To speed up decisions and cut down on development time, Roboy will be financed with private funds and constructed by private companies. This means that Roboy will be financed by exclusive sponsors and through crowdfunding via the project's website. In return for a contribution, every supporter will have his or her name or logo engraved on Roboy and will receive another token of appreciation.

DARPA's Alpha Dog Can Now Follow The Leader

Boston Dynamics LS3 Alpha Dog Robot
Military Robots
Boston Dynamics and DARPA had previously announced that the Alpha Dog LS3 quadruped robot was being upgraded to follow people via voice commands and fitted with much quieter motor systems. Now new video shows how well the system is progressing.
In September, Boston Dynamics and DARPA announced that the Alpha Dog LS3 quadruped robot was being upgraded to follow people via voice commands and fitted with much quieter motor systems. Now new video shows how well the system is progressing.

LS3 is all about solving the equipment load problem: it's not uncommon for soldiers or Marines in Afghanistan to be carrying 100 pounds or more (45 kg) of gear, often over rough terrain, which is hard work and can lead to injury.

Alpha Dog's primary purpose is to act as a mule, carrying equipment (up to 400 pounds, or 180 kg) for up to 20 miles over 24 hours. Furthermore, it has to do all of this autonomously, with minimal human intervention, so that it doesn't just become another hassle that soldiers have to constantly manage.

Now Alpha Dog has been trained to play follow the leader, either walking in the leader's footsteps or choosing its own path. LS3 is now able to understand and obey about ten different kinds of voice commands that can be combined in different ways, like "follow," "stop," "sit," and "stay."

Video from the testing shows the robot negotiating diverse terrain, including ditches, streams, wooded slopes and simulated urban environments. The video also shows the map the Alpha Dog's perception system creates to determine the path it takes.

Wednesday, December 19, 2012

Astronomers Discover Habitable Earth-Like Planets Close By

Habitable Exoplanets
An international team of astronomers has discovered that Tau Ceti, the closest single star like our Sun, may host a planetary system much like our own. More importantly, one of these planets orbits in the so-called habitable zone around the star. Moreover, at only twelve light years away, Tau Ceti is relatively close to Earth.
An international team of astronomers has discovered that Tau Ceti, one of the closest and most Sun-like stars, may host five planets, including one in the star's habitable zone.

At a distance of twelve light years from Earth and visible to the naked eye in the evening sky, Tau Ceti is the closest single star that has the same spectral classification as our Sun. Its five exoplanets are estimated to have masses between two and six times the mass of the Earth, making it the lowest-mass planetary system yet detected. One of the planets lies in the habitable zone of the star and has a mass around five times that of Earth, making it the smallest planet found to be orbiting in the habitable zone of any Sun-like star.

The team presented its findings in a paper that has been accepted for publication in Astronomy & Astrophysics.

The international team of astronomers, from the United Kingdom, Chile, the United States, and Australia, combined more than six thousand observations from three different instruments and intensively modeled the data. Using new techniques, the team has found a method to detect signals half the size previously thought possible. This greatly improves the sensitivity of searches for small planets and suggests that Tau Ceti is not a lone star but has a planetary system.

"This discovery is in keeping with our emerging view that virtually every star has planets, and that the galaxy must have many such potentially habitable Earth-sized planets," said coauthor Steve Vogt, a professor of astronomy and astrophysics at UC Santa Cruz.

IBM Says Computers Will Have All Five Senses In Five Years

IBM predicts the future
Cognitive Computing
This year IBM has organized its 5 in 5 predictions around five sensory categories, highlighting innovations that will touch our lives and see us into the future.
According to this year's IBM 5 in 5, in the upcoming era of cognitive computing, systems will learn instead of passively relying on programming. As a result, emerging technologies will continue to push the boundaries of human limitations to enhance and augment our senses with machine learning, artificial intelligence (AI), advanced speech recognition and more.

This is the seventh year that IBM has produced the 5 in 5 series. Through these releases, the company points out where its researchers are focused, and what the possible outcomes of their innovations and inventions might be.

IBM's goals with cognitive computing are to get a computer to behave, think and interact the way humans do.
This year, the series explores how, in the next five years, technology will increasingly use and provide sensory feedback much as we do.  They break down the discussion into the five senses: touch, hearing,  smell, sight and taste.

Tuesday, December 18, 2012

MRI Scans Found To Reveal Brain Injuries Not Present On CT Scans

Focal Lesions - MRI
Medical Technology
Hospital MRIs may be better at predicting long-term outcomes for people with mild traumatic brain injuries than CT scans, the standard technique for evaluating such injuries in the emergency room, according to a clinical trial led by researchers at the University of California, San Francisco and the San Francisco General Hospital and Trauma Center.
Magnetic resonance imaging (MRI) scans may be better at predicting long-term outcomes for people with mild traumatic brain injuries than CT scans. Computed tomography (CT), also called X-ray computed tomography or computed axial tomography (CAT), has been the standard technique for evaluating such injuries in the emergency room, but it potentially does not work as well for diagnosis, according to a clinical trial led by researchers at the University of California, San Francisco and the San Francisco General Hospital and Trauma Center (SFGH).

Published this month in the journal Annals of Neurology, the study led by UCSF neuroradiologist Esther Yuh, MD, PhD, followed 135 people treated for mild traumatic brain injuries at one of three urban hospitals with level-one trauma centers.  The study, called TRACK-TBI (Transforming Research and Clinical Knowledge in Traumatic Brain Injury) ran for the past two years.

All 135 patients with mild traumatic brain injuries received CT scans when they were first admitted, and all were given MRIs about a week later. Most of them (99) had no detectable signs of injury on a CT scan, but more than a quarter (27 of 99) of those with a “normal” CT scan also had detectable spots on their MRI scans called “focal lesions,” which are signs of microscopic bleeding in the brain.

Spotting these focal lesions helped the doctors predict whether the patients were likely to suffer persistent neurological problems. About 15 percent of people who have mild traumatic brain injuries do suffer long-term neurological consequences, but doctors currently have no definitive way of predicting whether any one patient will or not.

John Hagel On Rethinking Race Against The Machines

John Hagel
Race Against The Machine
John Hagel says we have designed jobs in the U.S. that tend to be "tightly scripted" and "highly standardized," leaving no room for "individual initiative or creativity." In short, these are the types of jobs that machines can perform much better than human beings. That is how we have put a giant target sign on the backs of American workers, Hagel says.
Entrepreneur, management consultant and author of The Only Sustainable Edge: Why Business Strategy Depends On Productive Friction And Dynamic Specialization, John Hagel says we have designed jobs in the U.S. that tend to be "tightly scripted" and "highly standardized," leaving no room for "individual initiative or creativity." These are the types of jobs that machines can perform much better than human beings. That is how we have put a giant target sign on the backs of American workers, Hagel says.

Hagel, author of The Power of Pull, says Brynjolfsson and McAfee miss the reason why these jobs are so vulnerable to technology in the first place.
According to Hagel, Race Against The Machine by Erik Brynjolfsson and Andrew McAfee is a very interesting book. It has gained a lot of attention because it targets an issue that is front and center for a lot of people in the United States and around the world: job creation and technological unemployment. The goal of the book is to introduce technology as a key engine of the changes we are seeing in unemployment and job creation.

Technology is advancing at a very rapid pace, and it is becoming more and more able to take over activities that we, as humans, have been performing. The issue, Hagel thinks, is that by framing the challenge as a technology challenge, Brynjolfsson and McAfee miss the reason why these jobs are so vulnerable to technology in the first place.

Monday, December 17, 2012

Doug Wolens Documentary On The Singularity Now Available

The Singularity
In his documentary film, The Singularity, director Doug Wolens interviews leading futurists, computer scientists, artificial intelligence experts, and philosophers to examine the question of what humanity will look like in the future.
In the film, Wolens asks leading futurists, computer scientists, artificial intelligence experts, and philosophers what humanity will look like in the future.

The film has just been released on iTunes.

According to Wolens, he first learned about the Singularity concept in 2000 while on the road self-distributing his documentary Butterfly (about the young woman who sat in an ancient redwood tree for two years preventing it from being cut down).

Ray Kurzweil’s argument that the rate of technology advances exponentially sounded reasonable to Wolens. As he read more about the science involved, he became inspired by the arguments, especially having grown up in an age when "we were taught that science could solve all problems."

Those who insist this paradigm shift is only decades away emphasize that we’re on the cusp of creating nanotechnology that will patrol our bloodstreams and repair cellular damage, athletes with jacked-up genetic code who sprint like gazelles, an Internet that interacts directly with the mind, and medical labs with computer-replicated brains working by the thousands to cure disease.

The Singularity Documentary

In the film, Wolens interviews: Kurzweil, Leon Panetta, Richard A. Clarke, Bill McKibben, David Chalmers, Christof Koch, Aubrey De Grey, Ralph Merkle, Brad Templeton, Cynthia Breazeal, Marshall Brain, Glenn Zorpette and many others.

New Disease Insights From High Throughput Screening Of Cancer Cells

cancer stem cells (CSCs)
 Cancer Stem Cells
In this guest post, Kate Whelan looks into the history of cancer stem cells (CSCs) and addresses how, over the past decade, research into CSCs and the CSC concept has revealed new information that has enabled scientists to explore potential new cancer therapies and investigate innovative approaches to cancer treatment.
Cancer stem cells (CSCs) are sub-populations of tumor cells that can self-renew; they are thought to form tumors and contribute to metastasis. These cells have special properties, including resistance to current cancer treatments (chemotherapy and radiotherapy), so there is a critical need to find new therapies that target these cells.

Recent developments in the field include high-throughput screening assays made to test a variety of potential therapies against these cells. These studies are simultaneously providing a wealth of new insights into the molecular mechanisms of cancer and the cell biology of stem cells.

The idea that cancer originates from adult stem cells was initially put forward in the mid-nineteenth century, and as stem cell research developed during the late twentieth century, scientists realized that stem cells and cancer cells share significant properties [Lapidot et al. 1994; reviewed in Behbod & Rosen, 2004]. This led to development of the ‘cancer stem cell concept’ [Reya et al. 2001], which proposes that a specific subpopulation of highly malignant stem cells exist within a tumor [Al-hajj et al. 2003; Singh et al. 2003; Stingl & Caldas 2007].

These cancer stem cells (CSCs) share features that distinguish stem cells from other types of cells, including the ability for infinite self-renewal, through which they are responsible for the tumor growth and for driving metastasis. CSCs and stem cells also share similar signalling pathways that regulate self-renewal.

Further research over the past decade has characterized CSCs and established some important differences between these cells and stem cells. CSCs behave differently from normal stem cells, particularly with regards to their malignant phenotype, and this is largely due to changes in regulation of CSCs’ self-renewal capability. These include differences in cell division, cell cycle properties, and handling of DNA damage; changes in the activation and inactivation of cancer-specific molecular pathways are also thought to contribute to this phenotype [reviewed by Al-Hajj & Clarke 2004; Falzacappa et al. 2012].

Researchers Develop New Model Of The Cell

cell model
Researchers at the University of California, San Diego School of Medicine and colleagues have proposed a new method that creates a computational model of the cell from large networks of gene and protein interactions, discovering how genes and proteins connect to form higher-level cellular machinery.
The great challenge of bioinformatics involves distilling the vast amounts of genomic data into meaningful information about the cell, with major implications for human biology and medicine. Now, researchers at the University of California, San Diego School of Medicine and colleagues have proposed a new method that creates a computational model of the cell from the large networks of gene and protein interactions, discovering how genes and proteins connect to form higher-level cellular machinery.

The findings are published in the December 16 advance online publication of Nature Biotechnology.

"Our method creates an ontology, or a specification of all the major players in the cell and the relationships between them," said first author Janusz Dutkowski, PhD, postdoctoral researcher in the UC San Diego Department of Medicine. It uses knowledge about how genes and proteins interact with each other and automatically organizes this information to form a comprehensive catalog of gene functions, cellular components, and processes.

"What's new about our ontology is that it is created automatically from large datasets. In this way, we see not only what is already known, but also potentially new biological components and processes – the bases for new hypotheses," said Dutkowski.
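As a rough illustration of the general idea (not the authors' actual algorithm), a hierarchy of candidate "terms" can be derived automatically from pairwise interaction data with off-the-shelf agglomerative clustering; the gene names and similarity scores below are made up:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

genes = ["geneA", "geneB", "geneC", "geneD"]

# Hypothetical interaction-similarity matrix (1.0 = strongly interacting).
similarity = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])

# Convert similarity to distance and take the condensed upper triangle,
# the input form that SciPy's linkage() expects.
distance = 1.0 - similarity
condensed = distance[np.triu_indices(len(genes), k=1)]

# Each row of the linkage matrix is one merge: a nested cluster of genes
# that interact more with each other than with the rest -- a candidate
# term in an automatically built hierarchy.
merges = linkage(condensed, method="average")
print(merges.shape)  # (3, 4): n-1 merges for n genes
```

Here geneA/geneB and geneC/geneD merge first, suggesting two low-level "components" nested under one root; the published method works on far larger, richer networks, but this is the flavor of building structure from data rather than curating it by hand.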

Originally devised by philosophers attempting to explain the nature of existence, ontologies are now broadly used to encapsulate everything known about a subject in a hierarchy of terms and relationships. Some artificial intelligence systems, such as Apple's Siri, are built on ontologies to enable reasoning about the real world. Ontologies are also used by scientists to structure knowledge about subjects like taxonomy, anatomy and development, bioactive compounds, disease and clinical diagnosis.

A Gene Ontology (GO) exists as well, constructed over the last decade through a joint effort of hundreds of scientists. It is considered the gold standard for understanding cell structure and gene function, containing 34,765 terms and 64,635 hierarchical relations annotating genes from more than 80 species.

Sunday, December 16, 2012

What Will Happen, Now That Ray Kurzweil Works At Google?

Ray Kurzweil, Google's Director of Engineering

 Ray Kurzweil
Ray Kurzweil and Google announced on Friday that the inventor, futurist and author of The Singularity is Near will now head up Google's engineering in a formal role. This role will provide Kurzweil with the capital and resources to test the prescriptions of his latest book, How To Create A Mind, and shows that Google's founders are firmly working towards making Kurzweil's predictions of the Technological Singularity a reality.
Ray Kurzweil, the famed inventor and futurist, said on Friday that he would join Google, starting Monday, to work on "some of the hardest problems in computer science."

Kurzweil's title will be Director of Engineering. In a statement on his Web site, he said he would focus on machine learning and language processing:

"I've been interested in technology, and machine learning in particular, for a long time: when I was 14, I designed software that wrote original music, and later went on to invent the first print-to-speech reading machine for the blind, among other inventions. I've always worked to create practical systems that will make a difference in people's lives, which is what excites me as an inventor.

"In 1999, I said that in about a decade we would see technologies such as self-driving cars, and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast-forward a decade -- Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones. It's easy to shrug our collective shoulders as if these technologies have always been around, but we're really on a remarkable trajectory of quickening innovation, and Google is at the forefront of much of this development.

"I'm thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade's 'unrealistic' visions into reality."

Google confirmed the news and said Kurzweil's long history of invention would prove useful.

Researchers Discuss Artificial Retina Development

retinal prosthetic

 Bionic Eye
Recently, a team from Lawrence Livermore National Laboratory described how the nervous system works and how neurons communicate, then discussed the first long-term retinal prosthesis that can function for years inside the harsh biological environment of the eye.
Millions of people worldwide suffer from ocular diseases that degrade the retina, the light-processing component of the eye, causing blindness.

Now, a team from Lawrence Livermore National Laboratory describes how the nervous system works and how neurons communicate, then discusses the first long-term retinal prosthesis that can function for years inside the harsh biological environment of the eye.

For researchers such as Satinderpall Pannu of Livermore’s Engineering Directorate, integrating nanometer-size devices with biological systems and studying the interface between them is a daily reality. 

In 2009, he and his team set the stage for a new generation of neural implants with the design and fabrication of an artificial retina. The device, a fully implantable neural prosthetic, actually restores a sense of vision to people who have lost their sight because of ocular disease. Pannu and his team have continued to enhance the technology, which promises to dramatically improve the lives of patients with debilitating conditions caused by injury or neurological disease.

Saturday, December 15, 2012

3D Printing Of Electronic Sensors Now Possible With Carbomorph Material

3D Printed circuits

 3D Printing
Scientists are developing new materials which could one day allow people to print out custom-designed personal electronics such as games controllers which perfectly fit their hand shape. The University of Warwick researchers have created a simple and inexpensive conductive plastic composite that can be used in 3D printers, allowing even home-based users to print out their own devices complete with microelectronics embedded.   
Researchers at the University of Warwick have printed working electronic devices for the first time using a standard 3D printer fitted with a new type of plastic that conducts electricity.

“This technology could revolutionize the way we produce the world around us,” said Dr Simon Leigh, who led the research team at the School of Engineering at the University of Warwick.

Carbomorph is a carbon-rich composite material that can be used in existing 3D printers to print electronic circuits.  The results of the research are published in PLOS One.

With Carbomorph injected alongside a regular plastic in multi-headed 3D printers, this could allow the printing of both the physical forms and the electronic innards of objects such as mobile phones and remote controls in one operation.

Until now, the exterior form and interior workings of electronic devices have had to be manufactured and printed separately.

“It’s always great seeing the complex and intricate models of devices such as mobile phones or television remote controls that can be produced with 3D printing,” Leigh said. “But that’s it, they are invariably models that don’t really function.”

He added: “We set about trying to find a way in which we could actually print out a functioning electronic device from a 3D printer.”

Friday, December 14, 2012

Apple Ramps Up R&D Spending In A Big Way

Since last year, Apple has dramatically increased its spending on research and development. Speculation is that the company is moving to start producing its own chips as CEO Tim Cook pushes for integration across the company's value chain.
In the financial quarter ending June 2011, Apple spent less than $1 billion on property, plants, and equipment. By March 2012, the number had spiked past $2 billion, then $3 billion, and was approaching $4 billion.

Horace Dediu thinks that number will zoom past $4 billion in 2013. According to him, this represents "extraordinary evidence of an extraordinary shift in strategy." Apple is now spending along the lines of Samsung.

According to Business Insider, the interesting part about all this massive spending is that no one outside of Apple knows where it's going.

"The capital is being deployed almost silently and, though vast in scale, barely gets a mention from analysts," writes Dediu. "Not even a single question has been raised at any earnings call about this spending."

His theory is that Apple, which prefers an "integrated" approach in everything it does, will soon make more of the components inside its gadgets, like chips. This falls in line with Cook's push for integration across the company's value chain.

That would explain why Apple has been so busy hiring former Texas Instruments employees, for example.

To be honest, Apple is a very secretive company and it doesn't have to say, specifically, where it's spending that money. However, everyone is well aware that Apple is always working on products that would cannibalize its current lineup.

According to Dediu, the pattern represents "increasing commitment and engagement in parts of the value chain as part of a continuous evolution of Apple’s role. Furthermore, it’s something that should be seen as a signal of a new era in how technology companies operate. We see hints of “vertical integration” with Microsoft building hardware, and Google buying Motorola and Amazon selling devices. Apple did all these things and now it casts an eye over the next frontier: components."

SOURCE: Business Insider. Top chart: Asymco.

By 33rd Square

How To Test If We Are Living In A Computer Simulation

universe as simulation

 Simulation Argument
Physicists from the University of Washington have come up with a plan to test the idea, posed by Nick Bostrom, that the universe we are living in is actually a simulation. Using a technique called lattice quantum chromodynamics, the researchers expect that a signature of a simulation would appear in how cosmic rays behave in a simulated universe.
A decade ago, Oxford philosopher Nick Bostrom put forth the notion that the universe we live in might in fact be a computer simulation. His premise, the Simulation Argument, is based on a series of assumptions, and now a team of physicists at the University of Washington has come up with a potential test of the idea.

In his paper, Bostrom argued that at least one of three possibilities is true:

  • The human species is likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
  • We are almost certainly living in a computer simulation.

Bostrom also held that “the belief that there is a significant chance that we will one day become post-humans who run ancestor simulations is false, unless we are currently living in a simulation.”

Extrapolating from Moore's Law, it will be decades before scientists will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.

Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a University of Washington physics professor.

Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

Thursday, December 13, 2012

US National Intelligence Council Publishes Outlook For 2030

City of the Future
Political Futurism
The National Intelligence Council released a report documenting projected major geopolitical trends and technological developments for the next 20 years. The NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing issues with water, food and energy. 
In a recently released report titled Global Trends 2030: Alternative Worlds, the US National Intelligence Council (NIC) presents the fifth installment in a series aimed at providing a framework for thinking about the future. The report mentions how implants, prosthetics and powered exoskeletons will become regular fixtures of our lives and could augment our abilities.

The NIC was founded in 1979 as an intelligence body focused on long-term strategic thinking for American intelligence agencies.  The council reports to the Director of National Intelligence, James Clapper.

The report is intended to stimulate strategic thinking by identifying critical trends and potential issues for the future. It focuses on the mega-trends that are most likely to occur based on the authors' research. The introduction mentions that the diversity and complexity of various factors and disruptive technologies have increased, so the report has focused on scenario planning of potential alternative worlds of the future.
According to the authors:
We are at a critical juncture in human history, which could lead to widely contrasting futures. It is our contention that the future is not set in stone, but is malleable, the result of an interplay among megatrends, game-changers and, above all, human agency. Our effort is to encourage decisionmakers—whether in government or outside—to think and plan for the long term so that negative futures do not occur and positive ones have a better chance of unfolding.
Of the many trends considered, the report focuses on four main megatrends: individual empowerment, diffusion of power, demographics, and resource issues.

Researchers Develop Underwater Sensor That May Help Robots Swim Like Fish

Underwater robot

 Sensor Technology
Singapore's Nanyang Technological University researchers have invented a new underwater sensor array, similar to the string of ‘feelers’ found on the body of the blind cave fish, which enables the fish to sense its surroundings and navigate easily. The system has potential applications for underwater robots (AUVs), as well as other sea vessels.
Scientists at Singapore's Nanyang Technological University have invented a new underwater sensor array, similar to the string of ‘feelers’ found on the body of the blind cave fish, which enables the fish to sense its surroundings and navigate easily.

Using a combination of water pressure and computer vision technology, the sensory device is able to generate a 3D image of nearby objects and map its surroundings.

The possible applications of this fish-inspired sensor are enormous. It can potentially replace the expensive ‘eyes and ears’ on Autonomous Underwater Vehicles (AUVs), submarines and boats that currently rely on cameras and sonar to gather information about the surrounding environment.

The revolutionary, low-powered sensor is superior to a camera, which cannot see in dark or murky waters, or to sonar, whose sound waves can harm some marine animals.

The new sensors require much less power to operate than other systems, and the researchers are also working on developing a sensor version that is powered by the water moving past it, which could virtually eliminate the need for a battery altogether.

 Furthermore, at around $100 to produce, the new sensors are much cheaper than cameras or sonar systems.
Blind Cave Fish
The researchers were inspired to create the sensor by the blind cave fish.
These extremely small sensors (each sensor is 1.8mm x 1.8mm) are now being used in AUVs developed by researchers from Singapore-MIT Alliance for Research and Technology (SMART), a research centre funded by the National Research Foundation. The centre is developing a new generation of underwater ‘stingray-like’ robots and autonomous surface vessels.

The new sensors, made using Microelectromechanical Systems (MEMS) technology, will make such robots smarter and prolong their operational time as battery power is conserved.

Associate Professor Miao Jianmin of the School of Mechanical and Aerospace Engineering and his team of four have spent the last five years collaborating with SMART to develop micro-sensors that mimic the row of ‘feelers’ on both sides of the blind cave fish’s body.

Wednesday, December 12, 2012

Christopher Nolan And Johnny Depp Sign On To Film About The Singularity

Christopher Nolan
Director Christopher Nolan has signed on to executive produce a new movie about the Singularity starring Johnny Depp. Transcendence is reportedly about a scientist who uploads his brain after being assassinated by anti-technology terrorists and is rumored to include many other references to Singularity theory including nanotechnology.
The Singularity is near and now ready to hit the silver screen, with The Dark Knight Trilogy and Inception director Christopher Nolan signed on to executive produce.

The Wrap reports that the science fiction film, titled Transcendence, will be based on a story by Jack Paglen. Johnny Depp “will play a scientist whose brain is uploaded into a supercomputer” while trying to create the first ever sentient computer. Nanotechnology and the Singularity are also said to play pivotal roles in the film.

Depp will be playing the main character, Will, who is assassinated by anti-technology terrorists. When his wife Evelyn uploads his brain, Will begins to respond to her queries through the computer. By connecting his uploaded brain to the Internet, he can continue his scientific research.

According to The Wrap:

Will asks Evelyn to connect a microphone and a camera up to the computer so he can see and speak to her as well. Will creates a backup of himself to every computer in the world, and furthers his work through accessing online indexes.

The Wrap warns that the script summary it received could be out of date, but honestly, as long as the film includes uploaded brains, supercomputers and Christopher Nolan, we are optimistic. Ray Kurzweil is also rumored to be an advisor to the movie.
SOURCE  The Wrap

By 33rd Square

Federico Pistono Shares Why It Is OK For Robots To Steal Your Job

 Technological Unemployment
Federico Pistono knows automation threatens jobs.  As the author of Robots Will Steal Your Job, But That's OK, Pistono suggests that dealing with technological unemployment will require massive social change.
We have covered the work of the young Italian thinker, Federico Pistono before. He has authored the insightful, Robots Will Steal Your Job, But That's OK, and earlier this year attended Singularity University.

In his book, Pistono points to the power of exponential technology on the economy and how artificial intelligence and robots, like Rethink Robotics' Baxter, pictured above, are destined to increase unemployment. This work complements Martin Ford's The Lights in the Tunnel and Race Against the Machine by Andrew McAfee and Erik Brynjolfsson.  Unlike those books, though, Pistono suggests quite a radical way of dealing with technological unemployment.

Quoting Arthur C. Clarke, who said, "The goal of the future is full unemployment so we can play.  That’s why we have to destroy the present politico-economic system," Pistono points out that while his ideas may sound radical, they are actually possible with technology, resources and a bold vision.

According to Pistono, for hundreds of years growth and quality of life were correlated, because people needed to go from having nothing to having a good standard of living. Once a certain point is reached, however, the correlation no longer holds, because there was never any causation: one did not directly cause the other. Once certain enabling factors come into play, growth decouples from happiness and quality of life.

Tuesday, December 11, 2012

Kevin Warwick Discusses Creating Intelligence From Biological Neural Networks

Robots with biological neurons
Biological Robots
Kevin Warwick studies the relatively new area of culturing neural tissue and embodying it in robot platforms—essentially giving a robot a biological brain.  This work has potentially major societal and ethical implications.  In a lecture recorded last year, Warwick explores the initial issues of the research and looks at the potential consciousness of such a brain.
Kevin Warwick is well known for his work integrating electronics with biology.  He has even implanted devices in himself for his Project Cyborg.   His research now centers on using biological neurons to control robots.

His team at the University of Reading anticipates that the behavior of the rat neurons controlling robots will provide insight into how brains store data, which could lead to a better understanding of disorders such as Alzheimer’s Disease, Parkinson’s Disease, and strokes.

The rat neurons are housed in a small vat of nutrients and antibiotics, where they make connections and generate electrical signals. A multi-electrode array (MEA), equipped with approximately 60 electrodes, picks up the signals and transmits them to the robot via Bluetooth.

Information about the robot’s surroundings is collected from an ultrasound sensor, and communicated to the neurons via the MEA. When the robot nears an obstacle, the MEA stimulates the neurons, causing them to react. Their reaction is transmitted back to the robot, moving it left or right. By applying different signals when the robot moves into a predefined location, it is hoped the neurons will begin to manifest signs of memory creation.
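The closed loop described above (ultrasound reading in, MEA stimulation, neural response, motor command out) can be sketched as a toy simulation. This is an illustrative sketch only, not the Reading team's software: the `CulturedNeurons` stand-in, the stimulus scaling, and the 0.5 threshold are all assumptions.

```python
import random

class CulturedNeurons:
    """Illustrative stand-in for the MEA-connected neuron culture:
    stimulation produces a noisy firing response."""
    def respond(self, stimulus):
        # Firing strength loosely tracks the stimulus, plus biological noise.
        return stimulus + random.uniform(-0.1, 0.1)

def control_step(distance_to_obstacle, neurons, threshold=0.5):
    """One pass of the loop: ultrasound reading -> MEA stimulus ->
    neural response -> steering command."""
    # Closer obstacles produce stronger stimulation (distance in metres).
    stimulus = max(0.0, 1.0 - distance_to_obstacle)
    response = neurons.respond(stimulus)
    # Map the neural response back onto a motor command.
    if response > threshold:
        return "turn"      # obstacle near: steer away
    return "forward"       # path clear: keep going

neurons = CulturedNeurons()
print(control_step(2.0, neurons))  # far from any obstacle
print(control_step(0.1, neurons))  # close to an obstacle
```

In the real experiment the "reaction" is an emergent property of the living culture rather than a hand-written threshold; the sketch only shows where each signal travels in the loop.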

Kenshiro Robot Uses Biomimicry To Copy Human Form

Kenshiro Robot
Researchers at the University of Tokyo are taking bio-inspired robots to new heights with Kenshiro, their new human-like musculoskeletal robot. Kenshiro's underlying structure is the closest biomechanical approximation of the human form yet created.
Only a very few researchers, among them Rolf Pfeifer, have been able to mimic the human body down to muscles and bones in robotic form.

Now researchers at the University of Tokyo are taking bio-inspired robots to new heights with Kenshiro, their new human-like musculoskeletal robot revealed at the Humanoids conference this month.

They have added more muscles and more motors to their Kojiro robot from 2010, making Kenshiro’s underlying structure the closest to a human's form seen in robotics.

Kenshiro mimics the body of the average Japanese 12-year-old male, standing at 158 centimeters tall and weighing 50 kilograms. Kenshiro’s body mirrors almost all the major muscles in a human, with 160 pulley-like "muscles"—50 in the legs, 76 in the trunk, 12 in the shoulder, and 22 in the neck.

According to IEEE Spectrum, the robot has the most muscles of any bio-inspired humanoid.

IBM Breakthrough Will Use Light To Move Big Data

IBM Silicon nanophotonics
Computer Technology
IBM has announced a breakthrough optical communication technology which has been verified in a manufacturing environment. The technology – called “silicon nanophotonics” – uses light instead of electrical signals to transfer information for future computing systems, allowing large volumes of data to be moved quickly between computer chips in servers, large data centers, and supercomputers via pulses of light.
Announcing from San Francisco this week, IBM has claimed a major breakthrough in the ability to use light instead of electrical signals to transmit information for future computing. The technology – called “silicon nanophotonics” – allows the integration of different optical components side-by-side with electrical circuits on a single silicon chip using, for the first time, sub-100nm semiconductor technology.

Silicon nanophotonics takes advantage of pulses of light for communication and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

As seen in the image above, the IBM 90nm Silicon Integrated Nanophotonics technology is capable of integrating a photodetector (red feature on the left side of the cube) and modulator (blue feature on the right side) fabricated side-by-side with silicon transistors (red sparks on the far right). Silicon nanophotonics circuits and silicon transistors are interconnected with nine levels of yellow metal wires.

 “This technology breakthrough is a result of more than a decade of pioneering research at IBM,” said Dr. John E. Kelly, Senior Vice President and Director of IBM Research. “This allows us to move silicon nanophotonics technology into a real-world manufacturing environment that will have impact across a range of applications.”

The amount of data being created and transmitted over enterprise networks continues to grow due to an explosion of new applications and services. Silicon nanophotonics, now primed for commercial development, can enable the industry to keep pace with increasing demands in chip performance and computing power.

Researchers Create Brain Cells From Urine

 Regenerative Medicine
Chinese scientists have been able to reprogram kidney cells harvested from urine samples into neural progenitor cells, immature brain cells that can develop into various types of glial cells and neurons. This type of reprogramming has been done before, but not with cells gleaned from urine, and not with a method this direct. The technique could prove extremely helpful to those pursuing treatments for neurodegenerative disorders like Parkinson’s and Alzheimer’s.
Chinese scientists have devised a new technique for reprogramming cells from human urine into immature brain cells that can form multiple types of functioning neurons and glial cells. The technique, published in the journal Nature Methods, could prove useful for studying the cellular mechanisms of neurodegenerative conditions such as Alzheimer's and Parkinson's and for testing the effects of new drugs that are being developed to treat them.

Stem cells offer the hope of treating these debilitating diseases, but embryonic stem cells have always posed an ethical dilemma. It is now possible that cells taken from the adult human body can be made to revert to a stem cell-like pluripotent state and then transformed into virtually any other type of cell.

For instance, red blood cells have recently been used to create induced pluripotent stem cells (iPS), and researchers have been able to do the same with skin and other cells.  Furthermore, grafts of patients' own cells do not elicit an immune response, so this approach may eventually lead to effective cell transplantation therapies. At this stage, the methods for generating iPS cells are not foolproof though – it appears that the reprogramming process destabilizes the genome and causes mutations, and that iPS cells potentially contain genetic defects that render them useless.

Last year, Duanqing Pei of the Chinese Academy of Sciences and his colleagues reported that human urine contains skin-like cells from the lining of the kidney tubules which can be efficiently reprogrammed, via the pluripotent state, into neurons, glia, liver cells and heart muscle cells. Now they have improved on the approach, making it quicker, more efficient and possibly less prone to errors.

Monday, December 10, 2012

Alexander Bard On The Internet Revolution

Alexander Bard
Alexander Bard proposes that to predict the future of the information society, we first need to reassess our previous view of history and define past revolutions in terms of information.
At the NEXT Berlin 2012 conference earlier this year, Alexander Bard talked about the fourth revolution, in which the internet is dramatically changing society, culture and the economy. Even though his conclusions sometimes seem obvious, his talk (embedded below) is enlightening, if not entertaining.

In the lecture, Bard proposes that to look into the future of the information society, we first need to reassess our previous view of history and define new revolutions in terms of information.  In his view, the first revolution was the development of speech, the second was the development of writing, and the third was the rise of printed publishing after Gutenberg.

The fourth revolution was the development of the internet.  The information explosion brought about by the internet revolution means that information becomes less centralized, as Bard suggests was a consequence of the previous revolutions.

As with Jason Silva's latest video, Bard suggests that attention is the currency of the internet age.  He defines it as awareness multiplied by credibility.  He says:
We moved from the countryside to the city, now we've moved from the city to cyberspace.  We've moved from capital as the driving force to attention as the guiding force and everything is now about people having attention connecting with other people that have attention networks.
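Bard's attention equation is simple enough to write down directly. The sketch below is purely illustrative; the function name and the example numbers are assumptions, not figures Bard provides.

```python
def attention(awareness, credibility):
    """Bard's formulation: attention = awareness multiplied by credibility."""
    return awareness * credibility

# Hypothetical numbers: a loud but less credible source can earn less
# attention than a quieter, highly credible one.
print(attention(awareness=1000, credibility=0.5))  # -> 500.0
print(attention(awareness=800, credibility=0.75))  # -> 600.0
```

The multiplicative form matters: a source with zero credibility earns zero attention no matter how much awareness it generates.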