Friday, 21 December 2012

Virtual reality and robotics in neurosurgery: Promise and challenges

Dec. 20, 2012 — Robotic technologies have the potential to help neurosurgeons perform precise, technically demanding operations, while virtual reality environments help them navigate through the brain, according to researchers.

The topic is the focus of a special supplement to Neurosurgery (http://www.neurosurgery-online.com/), official journal of the Congress of Neurological Surgeons. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.

"Virtual Reality (VR) and robotics are two rapidly expanding fields with growing application within neurosurgery," according to an introductory article by Garnette Sutherland, MD. The 22 reviews, commentaries, and original studies in the special supplement provide an up-to-the-minute overview of "the benefits and ongoing challenges related to the latest incarnations of these technologies."

Robotics and VR in Neurosurgery -- What's Here and What's Next

Virtual reality and robotic technologies present exciting opportunities for training, planning, and actual performance of neurosurgical procedures. Robotic tools under development or already in use can provide mechanical assistance, such as steadying the surgeon's hand or "scaling" hand movements. "Current robots work in tandem with human operators to combine the advantages of human thinking with the capabilities of robots to provide data, to optimize localization on a moving subject, to operate in difficult positions, or to perform without muscle fatigue," writes Dr. Sutherland.

Virtual reality technologies play an important role, providing "spatial orientation" between robotic instruments and the surgeon. Virtual reality environments "recreate the surgical space" in which the surgeon works, providing 3-D visual images as well as haptic (sense of touch) feedback. The ability to plan, rehearse, and "play back" operations in the brain could be particularly valuable for training neurosurgery residents -- especially since recent work hour changes have limited opportunities for operating room experience.

The special supplement to Neurosurgery presents authoritative updates by experts working in the field of surgical robotics and VR technology, drawn from a wide range of disciplines. Topics include robotic technologies already in use, such as the "neuroArm" image-guided neurosurgical robot; reviews of progress in areas such as 3-D neurosurgical planning and virtual endoscopy; and new thinking on the best approaches to development, evaluation, and clinical uses of VR and robotic technologies.

But numerous and daunting technical challenges remain to be met before robotic and VR technologies become widely used in clinical neurosurgery. For example, VR environments require extremely fast processing times to provide the surgeon with continuously updated sensory information -- equal to or faster than the brain's ability to perceive it.

Economic challenges include the high costs of developing and implementing VR and robotic technologies, especially in terms of showing that the costs are justified by benefits to the patient. Continued progress in miniaturization will play an important role both in overcoming the technical challenges and in making the technology cost-effective.

The editors of Neurosurgery hope their supplement will stimulate interest and further progress in the development and practical implementation of VR and robotic technologies for neurosurgery. Dr. Sutherland adds, "Collaboration between the fields of medicine, engineering, science, and technology will allow innovations in these fields to converge in new products that will benefit patients with neurosurgical disease."

Story Source:

The above story is reprinted from materials provided by Wolters Kluwer Health, via Newswise.


Wednesday, 19 December 2012

The jacket that talks to Facebook in an emergency

Sep. 17, 2012 — In an emergency, rescue crews cannot be expected to do their jobs while fumbling with a tiny mobile phone to read and send messages. So scientists have created a prototype jacket that communicates with Facebook instead.

Collision. Fire. Accidents. Chaos. In a rescue operation, it's no use trying to communicate via a small mobile phone display. But a jacket -- now you're talking!

The EU Societies project is all about technology and communication in extreme situations, such as rescue operations following major accidents. ICT researchers at SINTEF have been working on this topic for a long time, and the idea of developing a physical user interface for social media came from seeing how limited a normal mobile phone is as an aid during a chaotic emergency situation.

Screen is a minus in an emergency

Most of our focus when we use computers is on the screen. We communicate via a display. But in an emergency situation, we cannot expect rescue crews to do their jobs while fumbling with a tiny mobile phone when they need to read and send messages. It doesn't just require full concentration -- it also requires two hands.

Similarly, a firefighter called out to an emergency doesn't have time to put on anything special -- he or she just grabs a jacket and helmet, and runs. "Crews therefore need devices with a much simpler user interface. That was the basic idea behind making the jackets," says researcher Babak Farshchian of SINTEF ICT.

Bluetooth

A group of students at the Norwegian University of Science and Technology's (NTNU) Department of Computer and Information Science (IDI) decided to create a prototype jacket that could communicate with Facebook, and have been working on the assignment for the last six months. They decided to use the Arduino platform to create the physical user interface with social media. Arduino is a popular system used to develop physical prototypes that integrate with ICT. The platform that supports the jacket communicates with an ordinary Android mobile phone via Bluetooth. This means that the user does not get tangled in cables.

Keyboard in the sleeve

They bought a simple lined jacket from a popular sports retailer called XXL, and inserted the cables and sensors between the inner and outer layers. Then they put a battery-operated circuit in the pocket, which controls the sensors and microphone. All the cables and electronics are concealed from the user. Instead of a telephone display, the jacket sleeve has a display sewn into it, showing a line of rolling text. The user will also feel a vibration in his or her neck, made via a small vibrator inserted in the collar. A vibration means that the person has received a message, which he or she can read by lifting an arm and looking at the display.

Rescue work is often carried out in large groups, with professionals from different units and organizations that need to communicate and coordinate their actions efficiently during a rescue operation. "By using social media technology, we can enable these groups to communicate, and this jacket with a similar, customized user interface makes it easy and practical to use more advanced ICT in demanding rescue work," says Farshchian.
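
To make the phone-to-jacket link concrete, here is a minimal Python sketch of the relay logic, assuming the paired Bluetooth module shows up as a serial port (pySerial). The one-byte command framing and the device path are hypothetical illustrations, not the project's actual protocol.

```python
import serial  # pySerial: the paired Bluetooth module appears as a serial port

# Hypothetical one-byte command framing (not the project's real protocol).
CMD_VIBRATE = b"\x01"  # pulse the vibrator sewn into the collar
CMD_TEXT = b"\x02"     # scroll a line of text on the sleeve display

def notify_jacket(port, message):
    """Alert the wearer with a vibration, then scroll the message text."""
    port.write(CMD_VIBRATE)
    port.write(CMD_TEXT + message.encode("utf-8") + b"\n")

if __name__ == "__main__":
    # /dev/rfcomm0 is a typical Linux device node for a Bluetooth serial link.
    with serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1) as jacket:
        notify_jacket(jacket, "Unit 3: move to the north entrance")
```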

Better adapted to needs

Easier access to social media is an idea that could be of interest to those with sight and hearing impairments, since these groups have problems using a screen. Being able to dictate and hear messages would not only be more user friendly, but also better adapted to their needs.

Story Source:

The above story is reprinted from materials provided by SINTEF, via AlphaGalileo.


'Liquid that thinks:' Swarm of ping-pong-ball-sized robots created

Dec. 14, 2012 — University of Colorado Boulder Assistant Professor Nikolaus Correll likes to think in multiples. If one robot can accomplish a singular task, think how much more could be accomplished if you had hundreds of them.

Correll and his computer science research team, including research associate Dustin Reishus and professional research assistant Nick Farrow, have developed a basic robotic building block, which he hopes to reproduce in large quantities to develop increasingly complex systems.

Recently the team created a swarm of 20 robots, each the size of a Ping Pong ball, which they call "droplets." When the droplets swarm together, Correll said, they form a "liquid that thinks."

To accelerate the pace of innovation, he has created a lab where students can explore and develop new applications of robotics with basic, inexpensive tools.

Similar to the fictional "nanomorphs" depicted in the "Terminator" films, large swarms of intelligent robotic devices could be used for a range of tasks. Swarms of robots could be unleashed to contain an oil spill or to self-assemble into a piece of hardware after being launched separately into space, Correll said.

Correll plans to use the droplets to demonstrate self-assembly and swarm-intelligent behaviors such as pattern recognition, sensor-based motion and adaptive shape change. These behaviors could then be transferred to large swarms for water- or air-based tasks.
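
To make "swarm-intelligent behavior" concrete, here is a minimal Python sketch of one classic aggregation rule: each droplet drifts toward the centroid of the neighbors it can sense. The sensing radius and gain are invented for illustration; this is not the droplets' actual firmware.

```python
import numpy as np

def aggregation_step(pos, radius=5.0, gain=0.1):
    """One synchronous update: each droplet drifts toward the centroid of
    the neighbors it can sense within `radius` (a cohesion-only rule)."""
    new_pos = pos.copy()
    for i, p in enumerate(pos):
        dist = np.linalg.norm(pos - p, axis=1)
        neighbors = pos[(dist > 0) & (dist < radius)]
        if len(neighbors):
            new_pos[i] = p + gain * (neighbors.mean(axis=0) - p)
    return new_pos

rng = np.random.default_rng(0)
swarm = rng.uniform(0, 20, size=(20, 2))  # 20 droplets scattered on a plane
for _ in range(200):
    swarm = aggregation_step(swarm)
print(swarm.std(axis=0))  # spread shrinks as droplets clump into groups
```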

Correll hopes to create a design methodology for aggregating the droplets into more complex behaviors such as assembling parts of a large space telescope or an aircraft.

In the fall, Correll received the National Science Foundation's Faculty Early Career Development award known as "CAREER." In addition, he has received support from NSF's Early Concept Grants for Exploratory Research program, as well as NASA and the U.S. Air Force.

He also is continuing work on robotic garden technology he developed at the Massachusetts Institute of Technology in 2009. Correll has been working with Joseph Tanner in CU-Boulder's aerospace engineering sciences department to further develop the technology, involving autonomous sensors and robots that can tend gardens, in conjunction with a model of a long-term space habitat being built by students.

Correll says there is virtually no limit to what might be created through distributed intelligence systems.

"Every living organism is made from a swarm of collaborating cells," he said. "Perhaps some day, our swarms will colonize space where they will assemble habitats and lush gardens for future space explorers."

Story Source:

The above story is reprinted from materials provided by University of Colorado at Boulder.


Biology-friendly robot programming language: Training your robot the PaR-PaR way

Oct. 23, 2012 — Teaching a robot a new trick is a challenge. You can't reward it with treats and it doesn't respond to approval or disappointment in your voice. For researchers in the biological sciences, however, the future training of robots has been made much easier thanks to a new program called "PaR-PaR."

Nathan Hillson, a biochemist at the U.S. Department of Energy (DOE)'s Joint BioEnergy Institute (JBEI), led the development of PaR-PaR, which stands for Programming a Robot. PaR-PaR is a simple high-level, biology-friendly, robot-programming language that allows researchers to make better use of liquid-handling robots and thereby make possible experiments that otherwise might not have been considered.

"The syntax and compiler for PaR-PaR are based on computer science principles and a deep understanding of biological workflows," Hillson says. "After minimal training, a biologist should be able to independently write complicated protocols for a robot within an hour. With the adoption of PaR-PaR as a standard cross-platform language, hand-written or software-generated robotic protocols could easily be shared across laboratories."

Hillson, who directs JBEI's Synthetic Biology program and also holds an appointment with the Lawrence Berkeley National Laboratory (Berkeley Lab)'s Physical Biosciences Division, is the corresponding author of a paper describing PaR-PaR that appears in the American Chemical Society journal Synthetic Biology. The paper is titled "PaR-PaR Laboratory Automation Platform." Co-authors are Gregory Linshiz, Nina Stawski, Sean Poust, Changhao Bi and Jay Keasling.

Using robots to perform labor-intensive multi-step biological tasks, such as the construction and cloning of DNA molecules, can increase research productivity and lower costs by reducing experimental error rates and providing more reliable and reproducible experimental data. To date, however, automation companies have targeted the highly repetitive industrial laboratory operations market while largely ignoring the development of flexible, easy-to-use programming tools for dynamic, non-repetitive research environments. As a consequence, researchers in the biological sciences have had to depend upon professional programmers or vendor-supplied graphical user interfaces with limited capabilities.

"Our vision was for a single protocol to be executable across different robotic platforms in different laboratories, just as a single computer software program is executable across multiple brands of computer hardware," Hillson says. "We also wanted robotics to be accessible to biologists, not just to robot specialist programmers, and for a laboratory that has a particular brand of robot to benefit from a wide variety of software and protocols."

Hillson, who earlier led the development of a unique software program called "j5" for identifying cost-effective DNA construction strategies, says that beyond enabling biologists to manually instruct robots in a time-effective manner, PaR-PaR can also amplify the utility of biological design automation software tools such as j5.

"Before PaR-PaR, j5 only outputted protocols for one single robot platform," Hillson says. "After PaR-PaR, the same protocol can now be executed on many different robot platforms."

The PaR-PaR language uses an object-oriented approach that represents physical laboratory objects -- including reagents, plastic consumables and laboratory devices -- as virtual objects. Each object has associated properties, such as a name and a physical location, and multiple objects can be grouped together to create a new composite object with its own properties.

Actions can be performed on objects and sequences of actions can be consolidated into procedures that in turn are issued as PaR-PaR commands. Collections of procedural definitions can be imported into PaR-PaR via external modules.
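
As a loose illustration of this object-oriented style, here is a Python sketch (not actual PaR-PaR syntax) of labware objects, actions, and a procedure; all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LabObject:
    """A physical object (reagent, plate, device) with a name and location."""
    name: str
    location: str

@dataclass
class CompositeObject(LabObject):
    """Several objects grouped into a new object with its own properties."""
    parts: list = field(default_factory=list)

@dataclass
class Action:
    """An action performed on objects, e.g. a liquid transfer."""
    verb: str
    source: LabObject
    target: LabObject
    volume_ul: float

def procedure(name, actions):
    """Consolidate a sequence of actions into a reusable named procedure."""
    return {"procedure": name, "actions": list(actions)}

reservoir = LabObject("buffer_reservoir", "deck_slot_1")
plate = LabObject("assay_plate", "deck_slot_3")
dilute = procedure("dilute_samples",
                   [Action("transfer", reservoir, plate, 50.0)])
```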

"A researcher, perhaps in conjunction with biological design automation software such as j5, composes a PaR-PaR script that is parsed and sent to a database," Hillson says. "The operational flow of the commands is optimized and adapted to the configuration of a specific robotic platform. Commands are then translated from the PaR-PaR meta-language into the robotic scripting language for execution."

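A toy version of that parse-optimize-translate pipeline might look like the following sketch; the mini-language, the optimization pass, and the vendor dialect are all invented for illustration.

```python
def parse(script):
    """Parse 'transfer <src> <dst> <uL>' lines into command records."""
    cmds = []
    for line in script.strip().splitlines():
        verb, src, dst, vol = line.split()
        cmds.append({"verb": verb, "src": src, "dst": dst, "vol": float(vol)})
    return cmds

def optimize(cmds):
    """Toy optimization pass: merge back-to-back transfers on one route."""
    merged = []
    for c in cmds:
        if merged and (c["verb"], c["src"], c["dst"]) == (
                merged[-1]["verb"], merged[-1]["src"], merged[-1]["dst"]):
            merged[-1]["vol"] += c["vol"]
        else:
            merged.append(dict(c))
    return merged

def translate(cmds):
    """Emit the commands in an imaginary vendor scripting dialect."""
    return [f"ASPIRATE {c['src']} {c['vol']}uL; DISPENSE {c['dst']}"
            for c in cmds]

script = "transfer reservoir plate 25\ntransfer reservoir plate 25"
print("\n".join(translate(optimize(parse(script)))))
# -> ASPIRATE reservoir 50.0uL; DISPENSE plate
```
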
Hillson and his colleagues have developed PaR-PaR as open-source software freely available through its web interface on the public PaR-PaR webserver http://parpar.jbei.org.

"Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work," Hillson says. "PaR-PaR accomplishes all of these objectives and is intended to benefit a broad segment of the biological research community, including non-profits, government agencies and commercial companies."

This work was primarily supported by the DOE Office of Science.

Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Gregory Linshiz, Nina Stawski, Sean Poust, Changhao Bi, Jay D. Keasling, Nathan J. Hillson. PaR-PaR Laboratory Automation Platform. ACS Synthetic Biology, 2012; DOI: 10.1021/sb300075t


Bioinspired robot meets fish: Robotic fish research swims into new ethorobotics waters

Nov. 20, 2012 — New research is illuminating the emerging field of ethorobotics -- the study of bioinspired robots interacting with animal counterparts. Researchers studied how real-time feedback attracted or repelled live zebrafish, and found the fish were more attracted to robots whose tail motions mimicked their own. The researchers hope that robots may eventually steer live animal or marine groups away from danger.

Researchers at the Polytechnic Institute of New York University (NYU-Poly) have published findings that further illuminate the emerging field of ethorobotics -- the study of bioinspired robots interacting with live animal counterparts.

Maurizio Porfiri, associate professor of mechanical and aerospace engineering at NYU-Poly, doctoral candidates Vladislav Kopman and Jeffrey Laut and research scholar Giovanni Polverino studied the role of real-time feedback in attracting or repelling live zebrafish in the presence of a robotic fish.

Their findings, published in the Journal of the Royal Society Interface, show that zebrafish demonstrate increased attraction to robots that are able to modulate their tail motions in accordance with the live fishes' behavior.

The researchers deployed image-based tracking software to analyze the movement of the live zebrafish and provide real-time feedback to the robot. Porfiri and his colleagues found that zebrafish were most attracted to the robotic member when its tail beating motion replicated the behavior of "informed fish" attempting to lead "naive fish." When the robotic fish increased its tail beat frequency as a live fish approached, the zebrafish were likeliest to spend time near the robot.
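
A minimal sketch of such a closed-loop rule, with invented gains: command a higher tail-beat frequency as the tracked fish approaches. The tracker and servo calls are hypothetical placeholders for the vision and actuation layers.

```python
import math

def tail_beat_hz(distance_cm, base_hz=2.0, max_hz=6.0, scale_cm=10.0):
    """Command a higher tail-beat frequency the closer a live fish gets,
    mimicking an 'informed fish' recruiting followers. Gains are invented."""
    return base_hz + (max_hz - base_hz) * math.exp(-distance_cm / scale_cm)

# Closed-loop skeleton; `tracker` and `servo` stand in for the image-based
# tracking software and the tail actuator, which are not shown here:
# while experiment_running:
#     d = tracker.distance_to_nearest_fish()
#     servo.set_beat_frequency(tail_beat_hz(d))
print(tail_beat_hz(2.0), tail_beat_hz(30.0))  # near fish -> faster beating
```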

This study shows the effectiveness of real-time visual feedback in efforts to use robots to influence live animal behavior. The findings may have particular application in wildlife conservation, where robotic members may be utilized to steer live animal or marine groups out of harm's way.

Story Source:

The above story is reprinted from materials provided by Polytechnic Institute of New York University.

Journal Reference:

V. Kopman, J. Laut, G. Polverino, M. Porfiri. Closed-loop control of zebrafish response using a bioinspired robotic-fish in a preference test. Journal of The Royal Society Interface, 2012; 10 (78): 20120540 DOI: 10.1098/rsif.2012.0540


Dragon readies for operational delivery flight

Oct. 5, 2012 — SpaceX is set to launch the first of a dozen operational missions for NASA to deliver more than 1,000 pounds of supplies to the International Space Station on Oct. 7. Launch time is 8:35 p.m. from Space Launch Complex 40 at Cape Canaveral Air Force Station in Florida, just a few miles south of the space shuttle launch pads. The spacecraft will be joined to the station three days later.

The flight, known as CRS-1, will launch and perform the same rendezvous with the station as a previous SpaceX craft.

The SpaceX Dragon capsule will ride into space on the strength of the company's Falcon 9 rocket and the booster's nine first stage kerosene- and oxygen-powered Merlin engines. The Falcon 9's second stage uses a single Merlin engine to boost the Dragon into its final orbit.

Eleven minutes after launch, when the Dragon capsule is safely in orbit, a pair of solar arrays will deploy from the sides of the Dragon and controllers on Earth will begin testing rendezvous sensors.

The mission is similar to the demonstration flight in May when a Dragon was grappled by the station's robotic arm to complete the first rendezvous and berthing by a private spacecraft at the space station.

The SpaceX craft will spend about three weeks connected to the station, then will be released to return to Earth.

A major difference for this mission is that the Dragon will be filled with an amount of cargo suitable for an operational mission. The prior flight carried just enough items to prove the capsule would do its job as a cargo hauler. This time, the manifest will include a freezer for the station's scientific samples, a powered middeck locker with an experiment inside, and a variety of materials for the astronauts living and working on the space station.

The supply flight is part of NASA's Commercial Resupply Services contract, which is paying SpaceX for 12 cargo runs to the orbiting laboratory. The station also is serviced by Russian Progress cargo capsules, European-made and launched Automated Transfer Vehicles, or ATVs, and Japanese-produced H-II Transfer Vehicles, or HTVs. All the cargo ships operate without astronauts or crew members aboard.

Once the spacecraft arrive at the station, the astronauts and cosmonauts onboard unload them and fill them with used materials or unneeded equipment before releasing them.

Here, SpaceX again does something unique. The Dragons are built with heat shields to survive the plunge through the atmosphere and splash down safely in the ocean under billowing parachutes. The other cargo craft do not carry heat shields, so they simply burn up in the atmosphere.

On its return trip, the Dragon capsule will carry more than a ton of scientific samples collected during space station research, along with the freezer the samples have been stored in. Astronauts also will load used station hardware into the capsule for return to Earth where engineers can get a firsthand look at it.

A second kind of American cargo craft is also being developed. The Orbital Sciences' Cygnus spacecraft and Antares rocket are due to make a demonstration flight later this year.

Story Source:

The above story is reprinted from materials provided by NASA.


NASA's Ironman-like exoskeleton could give astronauts, paraplegics improved mobility and strength

Oct. 12, 2012 — Marvel Comics' fictional superhero, Iron Man, wears a powered armor suit that gives him superhuman strength. While NASA's X1 robotic exoskeleton can't do what you see in the movies, this latest robotic space-technology spinoff, derived from NASA's Robonaut 2 project, may someday help astronauts stay healthier in space, with the added benefit of helping paraplegics walk here on Earth.

NASA and The Florida Institute for Human and Machine Cognition (IHMC) of Pensacola, Fla., with the help of engineers from Oceaneering Space Systems of Houston, have jointly developed a robotic exoskeleton called X1. The 57-pound device is a robot that a human could wear over his or her body either to assist or inhibit movement in leg joints.

In the inhibit mode, the robotic device would be used as an in-space exercise machine to supply resistance against leg movement. The same technology could be used in reverse on the ground, potentially helping some individuals walk for the first time.
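
A toy single-joint control law makes the two modes concrete: in inhibit mode the joint opposes motion, like an exercise machine, while in assist mode it adds torque in the direction of motion. The law and gain are illustrative assumptions, not NASA's controller.

```python
def joint_torque(mode, joint_velocity, gain=5.0):
    """Toy control law for one motorized joint in the two modes above:
    'inhibit' opposes motion (exercise resistance), 'assist' reinforces it.
    The gain and the law itself are illustrative assumptions."""
    if mode == "inhibit":
        return -gain * joint_velocity  # viscous drag against the leg's motion
    if mode == "assist":
        return +gain * joint_velocity  # extra torque along the leg's motion
    return 0.0

print(joint_torque("inhibit", 1.2))  # -6.0: resists a 1.2 rad/s knee swing
```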

"Robotics is playing a key role aboard the International Space Station and will continue to be critical as we move toward human exploration of deep space," said Michael Gazarik, director of NASA's Space Technology Program. "What's extraordinary about space technology and our work with projects like Robonaut are the unexpected possibilities space tech spinoffs may have right here on Earth. It's exciting to see a NASA-developed technology that might one day help people with serious ambulatory needs begin to walk again, or even walk for the first time. That's the sort of return on investment NASA is proud to give back to America and the world."

Worn over the legs with a harness that reaches up the back and around the shoulders, X1 has 10 degrees of freedom, or joints -- four motorized joints at the hips and the knees, and six passive joints that allow for sidestepping, turning and pointing, and flexing a foot. There also are multiple adjustment points, allowing the X1 to be used in many different ways.

X1 currently is in a research and development phase, where the primary focus is design, evaluation and improvement of the technology. NASA is examining the potential for the X1 as an exercise device to improve crew health both aboard the space station and during future long-duration missions to an asteroid or Mars. Without taking up valuable space or weight during missions, X1 could replicate common crew exercises, which are vital to keeping astronauts healthy in microgravity. In addition, the device can measure, record and stream exercise data back to flight controllers on Earth in real time, giving doctors better feedback on the impact of the crew's exercise regimen.

As the technology matures, X1 also could provide a robotic power boost to astronauts as they work on the surface of distant planetary bodies. Coupled with a spacesuit, X1 could provide additional force when needed during surface exploration, improving the ability to walk in a reduced gravity environment, providing even more bang for its small bulk.

Here on Earth, IHMC is interested in developing and using X1 as an assistive walking device. By combining NASA technology and walking algorithms developed at IHMC, X1 has the potential to produce high torques to allow for assisted walking over varied terrain, as well as stair climbing. Preliminary studies using X1 for this purpose have already started at IHMC.

"We greatly value our collaboration with NASA," said Ken Ford, IHMC's director and CEO. "The X1's high-performance capabilities will enable IHMC to continue performing cutting-edge research in mobility assistance while expanding into the field of rehabilitation."

The potential of X1 extends to other applications, including rehabilitation, gait modification and offloading large amounts of weight from the wearer. Preliminary studies by IHMC have shown X1 to be more comfortable, easier to adjust, and easier to put on than previous exoskeleton devices. Researchers plan on improving on the X1 design, adding more active joints to areas such as the ankle and hip, which will, in turn, increase the potential uses for the device.

Designed in only a few years, X1 came from technology developed for Robonaut 2 and IHMC's Mina exoskeleton.

NASA's Game Changing Development Program, part of NASA's Space Technology Program, funds the X1 work. NASA's Space Technology Program focuses on maturing advanced space technologies that may lead to entirely new approaches for space missions and solutions to significant national needs.

For additional information about IHMC, visit: http://www.ihmc.us

For information about the X1 and Robonaut, visit: http://www.nasa.gov/robonaut

Story Source:

The above story is reprinted from materials provided by NASA.


Robotic scarless gallbladder surgery

Dec. 11, 2012 — Yassar Youssef, M.D., is the first surgeon in Baltimore City to perform gallbladder surgery using just one incision and the da Vinci® Surgical System. Because the single incision of about an inch is made in the patient's navel, he or she is left without a noticeable scar.

Additional patient benefits are less pain, less blood loss and a faster recovery compared even with minimally invasive gallbladder removal that requires multiple incisions. This is good news for the one million Americans who need their gallbladders removed each year, most of whom are candidates for this single-site, robotic approach.

"Neither robotic surgery nor single-incision surgery is new, but combining the two to remove the gallbladder requires special training and equipment," says Youssef. "To be one of the first hospitals to offer this technically advanced surgery demonstrates Sinai Hospital's leadership in providing patients with the most up-to-date minimally invasive surgical options."

More than any other hospital in Maryland, Sinai Hospital has made technological investments in its da Vinci Surgical System; in addition to having da Vinci Single-Site™ instruments that enable Youssef to perform gallbladder removal, the hospital has two da Vinci units, an extra console allowing two surgeons to operate in tandem on a patient, and other advanced instruments. Sinai's sister hospital, Northwest, also has its own da Vinci Surgical System. Youssef plans to train other surgeons on the da Vinci, including those in Sinai's surgical residency program.

Sinai Hospital is a part of LifeBridge Health, one of the largest, most comprehensive providers of health services in northwest Baltimore. LifeBridge Health also includes Northwest Hospital, Levindale Hebrew Geriatric Center and Hospital, Courtland Gardens Nursing & Rehabilitation Center, and related subsidiaries and affiliates.

Story Source:

The above story is reprinted from materials provided by LifeBridge Health.


Sociable trash box: Proxemics in dynamic interactions

Sep. 24, 2012 — Toyohashi Tech researchers use 'robotic social trash boxes' to investigate interactions between humans and robots for improving robot-to-human communications.

This report is featured in the September issue of the Toyohashi Tech eNewsletter: http://www.tut.ac.jp/english/newsletter/

Humans regulate their interactions according to different contexts, the degree of the relationship, cultural factors, gender, age, and so on. These factors act as an interpersonal boundary-control mechanism that determines whether another person's interaction is encouraged or discouraged. Humans are expected to dynamically optimize this mechanism according to interpersonal distances and personal spaces (proxemics).

Michio Okada and colleagues at Toyohashi University of Technology set out to determine what distances (spheres), social cues, and behaviors a sociable trash box (STB) -- a child-dependent robot -- requires in order to convey its intention and enlist children's assistance in collecting trash from the environment.

The experiments were carried out at the Developmental Center for Children in Toyohashi City and evaluated the validity and effectiveness of the approach through different interactive scenarios. The experiments on natural interaction with the STBs involved 108 children aged between 4 and 11 years old.

The proxemics results showed that children established different spaces (in terms of distance and interaction time) depending on whether the STBs moved individually in the environment or in a swarm of three.

These extracted spaces can be fed into the STB's decision process (movement distances, dwell time and so on) to convey its intention to collect trash with children's assistance. The team plans to build on this result to develop a decision hierarchy inside the STBs.

Story Source:

The above story is reprinted from materials provided by Toyohashi University of Technology, via ResearchSEA.

Journal Reference:

Yuto Yamaji, Taisuke Miyake, Yuta Yoshiike, P. Ravindra S. Silva, Michio Okada. STB: Child-Dependent Sociable Trash Box. International Journal of Social Robotics, 2011; 3 (4): 359 DOI: 10.1007/s12369-011-0114-y


Smart as a bird: Flying rescue robot will autonomously avoid obstacles

Oct. 30, 2012 — Cornell researchers have created an autonomous flying robot that is as smart as a bird when it comes to maneuvering around obstacles.

Able to guide itself through forests, tunnels or damaged buildings, the machine could have tremendous value in search-and-rescue operations. Small flying machines are already common, and GPS technology provides guidance. Now, Ashutosh Saxena, assistant professor of computer science, and his team are tackling the hard part: how to keep the vehicle from slamming into walls and tree branches. Human controllers can't always react swiftly enough, and radio signals may not reach everywhere the robot goes.

The test vehicle is a quadrotor, a commercially available flying machine about the size of a card table with four helicopter rotors. Saxena and his team have already programmed quadrotors to navigate hallways and stairwells using 3-D cameras. But in the wild, these cameras aren't accurate enough at large distances to plan a route around obstacles. So, Saxena is building on methods he previously developed to turn a flat video camera image into a 3-D model of the environment using such cues as converging straight lines, the apparent size of familiar objects and what objects are in front of or behind each other -- the same cues humans unconsciously use to supplement their stereoscopic vision.

Graduate students Ian Lenz and Mevlana Gemici trained the robot with 3-D pictures of such obstacles as tree branches, poles, fences and buildings; the robot's computer learns the characteristics all the images have in common, such as color, shape, texture and context -- a branch, for example, is attached to a tree. The resulting set of rules for deciding what is an obstacle is burned into a chip before the robot flies.

In flight the robot breaks the current 3-D image of its environment into small chunks based on obvious boundaries, decides which ones are obstacles and computes a path through them as close as possible to the route it has been told to follow, constantly making adjustments as the view changes.

It was tested in 53 autonomous flights in obstacle-rich environments -- including Cornell's Arts Quad -- succeeding in 51 cases, failing twice because of winds. The results were presented at the International Conference on Intelligent Robots and Systems in Portugal Oct. 7-12.
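
The avoidance step can be caricatured in a few lines: given image chunks already classified as obstacle or not, pick the heading closest to the commanded route that keeps an angular margin from every obstacle. The margins and candidate headings below are invented; the actual planner is more sophisticated.

```python
import numpy as np

def steer(obstacle_bearings, goal_bearing, margin=np.radians(10)):
    """Pick the candidate heading closest to the commanded route that keeps
    an angular margin from every chunk classified as an obstacle.
    (Toy planner: small angles only, no distance weighting.)"""
    candidates = goal_bearing + np.linspace(-np.pi / 4, np.pi / 4, 31)
    obstacles = np.asarray(obstacle_bearings)

    def clearance(theta):
        return np.min(np.abs(obstacles - theta)) if obstacles.size else np.inf

    safe = [t for t in candidates if clearance(t) > margin]
    return min(safe, key=lambda t: abs(t - goal_bearing)) if safe else None

# A branch dead ahead (bearing 0): the planner nudges the heading sideways.
print(steer(obstacle_bearings=[0.0], goal_bearing=0.0))
```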

Saxena plans to improve the robot's ability to respond to environmental variations such as wind, and to enable it to detect and avoid moving objects, like real birds; for testing purposes, he suggests having people throw tennis balls at the flying vehicle.

The project is supported by a grant from the Defense Advanced Research Projects Agency.

Story Source:

The above story is reprinted from materials provided by Cornell University. The original article was written by Bill Steele.


Wireless networks: Mobile devices keep track

Nov. 21, 2012 — A more sensitive technique for determining user position could lead to improved location-based mobile services.

Many mobile-phone applications (apps) use spatial positioning technology to present their user with location-specific information such as directions to nearby amenities. Improved positioning accuracy is now available by simultaneously predicting the locations of the mobile user and the data access points, or hotspots, thanks to an international research team including Sinno Jialin Pan from the A*STAR Institute for Infocomm Research. Software developers expect that such improvements will enable a whole new class of apps that can react to small changes in position.

Traditionally, device position has been determined by the Global Positioning System (GPS), which uses satellites to triangulate an approximate location, but its accuracy falters when the mobile device is indoors. An alternative approach is to use the 'received signal strength' (RSS) from local transmitters, but attenuation of radio waves by walls can limit accuracy, and signals are difficult to predict in complex, obstacle-filled environments.

Software developers have tried to circumvent these problems by using so-called 'learning-based techniques' that identify correlations between RSS values and access-point placement. Such systems do not necessarily require prior knowledge of the hotspot locations; rather, they 'learn' from data collected on a mobile device. This approach also has drawbacks: the amount of data can be large, making calibration time-consuming, and changes in the environment can outdate the calibration.

Pan and his co-workers reduced this calibration effort in an experimental demonstration of a protocol that calculates both the positions of the device and the access points simultaneously -- a process they call colocalization. "Integrating the two location-estimation tasks into a unified mathematical model means that we can fully exploit the correlations between mobile-device and hotspot position," explains Pan.

First, the researchers trained a learning-based system with the signal-strength values received from access points at selected places in the area of interest. They used this information to calibrate a probabilistic 'location-estimation' system. Then, they approximated the location from the learned model using signal-strength samples received in real time from the access points.
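
A stripped-down, fingerprint-style version of such a probabilistic estimate can be sketched as follows (it omits the paper's colocalization model, which jointly infers hotspot positions); the survey points and noise level are made up.

```python
import numpy as np

# Calibration survey: RSS vectors (one entry per access point, in dBm)
# recorded at known places. All values here are made up for illustration.
fingerprints = {
    (0.0, 0.0): [-40, -70, -65],
    (5.0, 0.0): [-55, -60, -72],
    (0.0, 5.0): [-62, -48, -80],
}

def locate(rss, sigma=4.0):
    """Probabilistic estimate: weight each surveyed place by the Gaussian
    likelihood of seeing the live RSS vector there, then take the mean."""
    places = np.array(list(fingerprints.keys()), dtype=float)
    refs = np.array(list(fingerprints.values()), dtype=float)
    loglik = -np.sum((refs - np.asarray(rss)) ** 2, axis=1) / (2 * sigma**2)
    w = np.exp(loglik - loglik.max())       # stabilized likelihood weights
    return (w[:, None] * places).sum(axis=0) / w.sum()

print(locate([-50, -63, -70]))  # estimate falls between the survey points
```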

Experimental trials showed that this approach not only required less calibration, but it was more accurate than other state-of-the-art systems. "We next want to apply the method to a larger-scale environment," says Pan. "We also want to find ways to make use of the estimated locations to provide more useful information, such as location-based advertising." As this technique could help robots navigate by themselves, it may also have important implications for the burgeoning field of robotics.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).

Journal Reference:

Jeffrey Junfeng Pan, Sinno Jialin Pan, Jie Yin, Lionel M. Ni, Qiang Yang. Tracking Mobile Users in Wireless Networks via Semi-Supervised Colocalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012; 34 (3): 587 DOI: 10.1109/TPAMI.2011.165


'Green Brain' project to create an autonomous flying robot with a honey bee brain

Oct. 1, 2012 — Scientists at the Universities of Sheffield and Sussex are embarking on an ambitious project to produce the first accurate computer models of a honey bee brain in a bid to advance our understanding of Artificial Intelligence (AI), and how animals think.

The team will build models of the systems in the brain that govern a honey bee's vision and sense of smell. Using this information, the researchers aim to create the first flying robot able to sense and act as autonomously as a bee, rather than just carry out a pre-programmed set of instructions.

If successful, this project will meet one of the major challenges of modern science: building a robot brain that can perform complex tasks as well as the brain of an animal. Tasks the robot will be expected to perform, for example, will include finding the source of particular odours or gases in the same way that a bee can identify particular flowers.

It is anticipated that the artificial brain could eventually be used in applications such as search and rescue missions, or even mechanical pollination of crops.

Dr James Marshall, leading the £1 million EPSRC-funded project in Sheffield, said: "The development of an artificial brain is one of the greatest challenges in Artificial Intelligence. So far, researchers have typically studied brains such as those of rats, monkeys, and humans, but actually 'simpler' organisms such as social insects have surprisingly advanced cognitive abilities."

Called "Green Brain," and partially supported with hardware donated by NVIDIA Corporation, the project invites comparison with the IBM-sponsored Blue Brain initiative, which is developing brain modeling technologies using supercomputers with the ultimate goal of producing an accurate model of a human brain.

The hardware provided by NVIDIA is based on high-performance processors called "GPU accelerators" that generate the 3D graphics on home PCs and games consoles and power some of the world's highest-performance supercomputers. These accelerators provide a very efficient way of performing the massive calculations needed to simulate a brain using a standard desktop PC -- rather than on a large, expensive supercomputing cluster.

"Using NVIDIA's massively parallel GPU accelerators for brain models is an important goal of the project as they allow us to build faster models than ever before," explained Dr Thomas Nowotny, the leader of the Sussex team. "We expect that in many areas of science this technology will eventually replace the classic supercomputers we use today."

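The kind of computation these accelerators speed up is easy to illustrate: updating an entire population of simple leaky integrate-and-fire neurons with elementwise arithmetic. This NumPy sketch shows the data-parallel style only (it is not Green Brain's bee model); on a GPU the same per-neuron arithmetic runs across thousands of cores.

```python
import numpy as np

# Leaky integrate-and-fire update for a whole neuron population at once.
# Elementwise arithmetic like this is exactly what a GPU parallelizes.
N = 100_000                  # neurons
dt, tau = 0.1, 10.0          # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0

v = np.zeros(N)
rng = np.random.default_rng(1)
for _ in range(100):                        # 10 ms of simulated time
    i_in = rng.normal(0.12, 0.3, size=N)    # noisy input current
    v += dt * (-v / tau + i_in)             # leak + drive, all neurons at once
    fired = v >= v_thresh
    v[fired] = v_reset                      # reset the neurons that spiked
```
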
Green Brain's researchers anticipate that developing a model of a honey bee brain will offer a more accessible method of driving forward our knowledge of how a brain's cognitive systems work, leading to advances in understanding animal and human cognition. "Because the honey bee brain is smaller and more accessible than any vertebrate brain, we hope to eventually be able to produce an accurate and complete model that we can test within a flying robot," said Dr Marshall.

"Not only will this pave the way for many future advances in autonomous flying robots, but we also believe the computer modelling techniques we will be using will be widely useful to other brain modelling and computational neuroscience projects," added Dr Nowotny.

Alongside this, the research is expected to provide a greater understanding of the honey bee itself. Because of their role as pollinators, honey bees are vital to many ecosystems, yet their declining population in recent years has given scientists cause for concern. Green Brain's modelling could help scientists to understand why honey bee numbers are dwindling and also contribute to the development of artificial pollinators, such as those being researched by the National Science Foundation-funded Robobees project, led by Harvard University.

Story Source:

The above story is reprinted from materials provided by University of Sheffield, via EurekAlert!, a service of AAAS.


Robotic surgery through the mouth safe for removing tumors of the voice box, study shows

Sep. 25, 2012 — Robotic surgery through the mouth is a safe and effective way to remove tumors of the throat and voice box, according to a study by head and neck cancer surgeons at the Ohio State University Comprehensive Cancer Center -- Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSUCCC -- James).

This is the first report in the world literature illustrating the safety and efficacy of transoral robotic surgery for supraglottic laryngectomy, the researchers say.

The preliminary study examined the outcomes of 13 head and neck cancer patients with tumors located in the region of the throat between the base of the tongue and just above the vocal cords, an area known as the supraglottic region.

The study found that the use of robot-assisted surgery to remove these tumors through the mouth took about 25 minutes on average, and that blood loss was minimal -- a little more than three teaspoons, or 15.4 milliliters, on average, per patient. No surgical complications were encountered and 11 of the 13 patients could accept an oral diet within 24 hours.

If, on the other hand, these tumors are removed by performing open surgery on the neck, the operation can take around four hours, require 7 to 10 days of hospitalization on average, and require a tracheostomy tube and a stomach tube, the researchers say.

The findings were published recently in the journal Head and Neck.

"The transoral robotic technique means shorter surgery, less time under anesthesia, a lower risk of complications and shorter hospital stays for these patients," says first author Dr. Enver Ozer, clinical associate professor of otolaryngology at the OSUCCC -- James.

"It also means no external surgical incisions for the patient and better 3-D visualization of the tumor for the surgeon," says Ozer, a head and neck surgeon who specializes in robot-assisted techniques.

The cases examined in this study were part of a larger prospective study of 126 patients undergoing transoral robotic surgery between 2008 and 2011.

Other Ohio State researchers involved in this study were Bianca Alvarez, Kiran Kakarala, Kasim Durmus, Ted N. Teknos and Ricardo L. Carrau.

Story Source:

The above story is reprinted from materials provided by Ohio State University Medical Center.

Journal Reference:

Enver Ozer, Bianca Alvarez, Kiran Kakarala, Kasim Durmus, Theodoros N. Teknos, Ricardo L. Carrau. Clinical outcomes of transoral robotic supraglottic laryngectomy. Head & Neck, 2012; DOI: 10.1002/hed.23101


Sensor detects bombs on sea floor

Nov. 25, 2012 —  Scientists have developed a sensor to detect undetonated explosives on the sea floor, based on a technology used to find mineral deposits underground.

The sensor was developed as part of a project with the US Government's Strategic Environmental Research and Development Program (SERDP) and the US-based research organisation Sky Research.

The method for finding undetonated underwater explosives is very similar to that used to detect underground mineral deposits, says CSIRO electrical engineer Dr Keith Leslie.

“Our highly sensitive sensor – the high temperature superconducting tensor gradiometer – delivers significantly more information about the target’s magnetic field than conventional sensors used for this type of detection,” he said.

“It provides data on the location, characterisation and magnetic qualities of a target – whether it is a gold deposit or an explosive.”

Over 10 million acres of coastal waters are contaminated by undetonated explosives, according to SERDP. Typically these small explosives rust and corrode at sea, making them even more dangerous.

“The marine environment is difficult to sample due to electrical currents produced by waves, which affect underwater magnetic fields,” Dr Leslie said.

“In mineral exploration, near surface deposits are being exhausted, leading our search for minerals deeper underground, where targets are more difficult to detect with traditional surface and airborne measurements.”

“Our sensor can provide valuable geological information that discriminates between prospective and non-prospective areas or targets. It avoids unnecessary drilling and minimises the risk of overlooking valuable mineral deposits.”

“Our sensor has a critical advantage for small targets such as undetonated explosives, where only one or two measurements may be near the target,” Dr Leslie said.

“In mineral exploration, a string of measurements of the gradients of the magnetic field down a drill hole can determine the direction to the target.”
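
The extra information a tensor gradiometer captures can be seen in a small numerical example: the 3x3 gradient tensor of a magnetic dipole's field, estimated by finite differences. Physical constants are dropped, so units are arbitrary; this is an illustration of the measured quantity, not CSIRO's instrument model.

```python
import numpy as np

def dipole_field(r, m=np.array([0.0, 0.0, 1.0])):
    """Magnetic field of a point dipole (physical constants dropped)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * rhat * (m @ rhat) - m) / rn**3

def gradient_tensor(r, h=1e-4):
    """Finite-difference estimate of the 3x3 tensor dB_i/dx_j measured by a
    tensor gradiometer; it falls off as 1/r^4, faster than the field's 1/r^3,
    which is why gradients localize nearby compact targets so well."""
    G = np.zeros((3, 3))
    for j in range(3):
        step = np.zeros(3)
        step[j] = h
        G[:, j] = (dipole_field(r + step) - dipole_field(r - step)) / (2 * h)
    return G

print(gradient_tensor(np.array([0.0, 0.0, 2.0])))
```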

Eventually the technology may renew exploration efforts at abandoned sites where drilling programs were based on insufficient or inaccurate information. It also has the potential to help clear landmines.

The sensor has been proven in a stationary laboratory environment, and trials have been conducted to prove it in motion, in preparation for anticipated underwater trials.

Story Source:

The above story is reprinted from materials provided by CSIRO Australia.


New technique for minimally invasive robotic kidney cancer surgery

Dec. 13, 2012 — Urologists at Henry Ford Hospital have developed a new technique that could make minimally invasive robotic partial nephrectomy procedures the norm, rather than the exception, for kidney cancer patients. The technique spares the kidney, eliminates long hospital stays and provides better outcomes by giving the surgeon more time to perform the procedure.

Dubbed ICE for Intracorporeal Cooling and Extraction, the technique may allow more kidney cancer patients to avoid conventional open surgery -- now used in the vast majority of cases -- and its possible complications, including infection, blood loss, and extended hospital stays.

The Henry Ford study was published this week in European Urology, the official journal of the European Association of Urology.

"The study demonstrated that there's a two-pronged benefit," says Craig G. Rogers, M.D., director of Renal Surgery at Henry Ford Hospital's Vattikuti Urology Institute.

"Our goal was to protect the kidney from damage during a minimally invasive partial nephrectomy, and from a cancer standpoint, we have the added security that we've removed more of the tumor."

In the latest research, Henry Ford surgeons used robotic techniques to operate on seven kidney cancer patients between April and September 2012. In each case, they performed a partial nephrectomy, in which just the cancerous portion of the kidney is removed.

"What we've done is utilized a special type of device called a GelPoint trocar, that makes it easier to pass large things in and out of the abdomen through small incisions during minimally invasive surgery," says Dr. Rogers.

"Through the gel point, we take a syringe that's been modified so we can pack and deliver ice through the body to the kidney. So when we clamp the blood supply to the kidney, it's packed in ice just as it would be in an open surgery.

"Once the tumor is removed, instead of setting it aside in the body so you can sew up the kidney, we can remove the tumor as soon as it's excised through the gel point, look at it and decide if we're happy with what's been removed. If there's any doubt, I can go right back in and cut more out."

Dr. Rogers wrote in the study that others have tried various techniques to cool the kidney in minimally invasive surgery, but they "require specific equipment or expertise and are too complex or impractical for routine use."

"Unfortunately, the majority of people today diagnosed with kidney cancer get their entire kidney removed," Dr. Rogers says. "Not only that, they're getting it removed through an open approach, through a large incision that often requires removal of a rib, when there are minimally invasive approaches, such as robotic surgery, available."

One of the reasons more patients aren't given the option of a partial nephrectomy is that, for the surgeon, it's technically challenging and much more difficult to perform than taking the entire kidney out.

Dr. Rogers explains: "In order to safely perform a partial nephrectomy, surgeons often have to clamp off the blood supply to the kidney to allow them to see the tumor and cut it out in a bloodless field. But once the blood supply is cut off to the kidney, there's only about 30 minutes before the kidney can experience irreversible damage. That means the surgeon has to be very technically skilled to remove the tumor, and sew the kidney back together in a very short time.

"Time is a barrier for many surgeons to offer the partial nephrectomy procedure to their patients. Or, for those who do, they'll offer the open approach to a partial nephrectomy, which means a bigger operation for the patient. Up until now, there have been two things that surgeons who perform open nephrectomies could claim to do which minimally invasive surgeons could not.

"First, when the surgeon is holding the kidney in the open approach and the clamp goes on the kidney to stop the blood flow, he can pack the kidney on ice to cool it, called renal hypothermia. It allows the surgeon to extend the window of time he has to work without kidney damage," says Dr. Rogers.

The other claim is that when the tumor is cut out, the surgeon can hold it and analyze it, to make sure that he is happy with what's been removed before sewing the kidney back up.

"Before, with a minimally invasive partial nephrectomy, it was very hard to get ice in through the small incision and to get the ice to stay where you want it. And then once the tumor's cut out, you really can't take it out of the body right away and look at it," says Dr. Rogers.

"The reason why I'm so excited about what we've discovered and innovated at Henry Ford is that it cuts to the core of the real problem. We can now offer more patients partial nephrectomy through a minimally invasive approach. So any technology that allows patients to get a better surgery and a better outcome in a less invasive way, is going to be something that will benefit everyone."

Story Source:

The above story is reprinted from materials provided by Henry Ford Health System, via EurekAlert!, a service of AAAS.

Journal Reference:

Craig G. Rogers, Khurshid R. Ghani, Ramesh K. Kumar, Wooju Jeong, Mani Menon. Robotic Partial Nephrectomy with Cold Ischemia and On-clamp Tumor Extraction: Recapitulating the Open Approach. European Urology, 2012; DOI: 10.1016/j.eururo.2012.11.029


Voice prostheses can help patients regain their lost voice

Oct. 24, 2012 — Help is on the way for people who suffer from vocal cord dysfunction. Researchers are developing methods that will contribute to manufacturing voice prostheses with improved affective features. For example, for little girls who have lost their voices, the improved artificial voice devices can produce age-appropriate voices, instead of the usual voice of an adult male.

These advances in artificial voice production have been made possible by results achieved in a research project led by Professor Samuli Siltanen, results that are good news indeed for the approximately 30,000 Finns with vocal cord problems. Siltanen's project is part of the Academy of Finland's Computational Science Research Programme (LASTU).

One of the fundamental problems of speech signal analysis is to find the vocal cord excitation signal from a digitally recorded speech sound and to determine the shape of the vocal tract, i.e. the mouth and the throat. This so-called glottal inverse filtering of the speech signal requires a highly specialised form of computer calculation. With traditional techniques, inverse filtering is only possible for low-pitch male voices. Women's and children's voices are trickier cases, as the higher pitch comes too close in frequency to the lowest resonance of the vocal tract. The novel inverse calculation method developed by Siltanen and his team significantly improves glottal inverse filtering in these cases.
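The release does not describe the new method's mathematics, but the classical baseline it improves on can be sketched briefly: model the vocal tract as an all-pole filter estimated by linear prediction (LPC), then pass the recording through the inverse of that filter to recover an estimate of the vocal-fold excitation. A minimal sketch of that traditional approach, assuming a mono speech frame as a NumPy array; the LPC order is illustrative, not a value from the study:

    import numpy as np
    from scipy.signal import lfilter

    def lpc(x, order):
        # All-pole model via the autocorrelation method,
        # solved with the Levinson-Durbin recursion.
        n = len(x)
        r = np.correlate(x, x, mode="full")[n - 1 : n + order]
        a = np.zeros(order + 1)
        a[0], e = 1.0, r[0]
        for i in range(1, order + 1):
            k = -(r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])) / e
            a[1:i] = a[1:i] + k * a[i - 1 : 0 : -1]
            a[i] = k
            e *= 1.0 - k * k
        return a

    def glottal_excitation(frame, order=10):
        # Fit the vocal-tract filter on a windowed copy, then
        # inverse-filter the frame: the all-pole model 1/A(z)
        # inverts to the FIR filter A(z), removing the tract
        # resonances and leaving the excitation estimate.
        a = lpc(frame * np.hamming(len(frame)), order)
        return lfilter(a, [1.0], frame)

This simple estimate breaks down exactly where the article says it does: when the pitch harmonics of a woman's or child's voice crowd the lowest tract resonance, the LPC fit absorbs part of the excitation into the tract model.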

Inverse filtering is needed not only in speech synthesis but also in automatic speech recognition. In speech synthesis, a computer transforms text into synthetic speech. The old-fashioned way is to record individual words and play them one after the other, but this seldom produces natural-sounding speech.

"Most speech sounds are a result of a specific process. The air flowing between the vocal folds makes them vibrate. This vibration, if we could hear it, would produce a weird buzzing sound. However, as it moves through the vocal tract, that buzz is transformed into some familiar vowel," explains Siltanen.

Singing, says Siltanen, is a perfect example of this interplay between the vocal cord response and the vocal tract: "When we sing the vowel 'a' in different pitches, our vocal tracts remain unchanged but the frequency of the vocal cord excitation changes. On the other hand, we can also sing different vowels in the same pitch, whereby the shape of the tract changes and the excitation stays the same."

Speech recognition is widely used, for example, in mobile phones and automatic telephone services. High-quality glottal inverse filtering improves the success rate of speech recognition in noisy environments.

Story Source:

The above story is reprinted from materials provided by Suomen Akatemia (Academy of Finland), via AlphaGalileo.

Scotch tape finds new use as grasping 'smart material'

Nov. 20, 2012 — Scotch tape, a versatile household staple and a mainstay of holiday gift-wrapping, may have a new scientific application as a shape-changing "smart material."

Researchers used a laser to form slender half-centimeter-long fingers out of the tape. When exposed to water, the four wispy fingers morph into a tiny robotic claw that captures water droplets.

The innovation could be used to collect water samples for environmental testing, said Babak Ziaie, a Purdue University professor of electrical and computer engineering and biomedical engineering.

The Scotch tape -- made from a cellulose-acetate sheet and an adhesive -- is uniquely suited for the purpose.

"It can be micromachined into different shapes and works as an inexpensive smart material that interacts with its environment to perform specific functions," he said.

Doctoral student Manuel Ochoa came up with the idea. While using tape to collect pollen, he noticed that it curled when exposed to humidity. The cellulose-acetate absorbs water, but the adhesive film repels water.

"So, when one side absorbs water it expands, the other side stays the same, causing it to curl," Ziaie said.

A laser was used to machine the tape to a tenth of its original thickness, enhancing this curling action. The researchers coated the graspers with magnetic nanoparticles so that they could be collected with a magnet.
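The curling is the classic bilayer (bimorph) effect, and the standard Timoshenko bimetal-strip model makes the thickness dependence explicit: curvature scales inversely with total thickness, which is one way to see why thinning the tape tenfold sharply enhances the action. A back-of-the-envelope sketch; the strain, thickness, and modulus values are illustrative, not measurements from the study:

    def bilayer_curvature(strain, t1, t2, e1, e2):
        # Timoshenko's bimetal-strip curvature (1/m) for two bonded
        # layers with differential swelling strain `strain`,
        # thicknesses t1, t2 (m), and Young's moduli e1, e2 (Pa).
        m, n, h = t1 / t2, e1 / e2, t1 + t2
        return (6.0 * strain * (1.0 + m) ** 2) / (
            h * (3.0 * (1.0 + m) ** 2
                 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n))))

    # Illustrative: 1% swelling strain, equal layers, equal moduli.
    full = bilayer_curvature(0.01, 25e-6, 25e-6, 1e9, 1e9)
    thinned = bilayer_curvature(0.01, 2.5e-6, 2.5e-6, 1e9, 1e9)
    print(thinned / full)  # 10x tighter curl at a tenth the thickness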

"Say you were sampling for certain bacteria in water," Ziaie said. "You could drop a bunch of these and then come the next day and collect them."

Findings will be detailed in a presentation during a meeting of the Materials Research Society in Boston from Nov. 25 to Nov. 30. Experiments at Purdue's Birck Nanotechnology Center were conducted by Ochoa, doctoral student Girish Chitnis and Ziaie.

The grippers close underwater within minutes and can sample one-tenth of a milliliter of liquid.

"Although brittle when dry, the material becomes flexible when immersed in water and is restored to its original shape upon drying, a crucial requirement for an actuator material because you can use it over and over," Ziaie said. "Various microstructures can be carved out of the tape by using laser machining. This fabrication method offers the capabilities of rapid prototyping and batch processing without the need for complex clean-room processes."

The materials might be "functionalized" so that they attract specific biochemicals or bacteria in water.

Story Source:

The above story is reprinted from materials provided by Purdue University. The original article was written by Emil Venere.

Thought-controlled prosthesis changing lives of amputees

Nov. 28, 2012 — The world's first implantable robotic arm controlled by thoughts is being developed by Chalmers researcher Max Ortiz Catalan. The first operations on patients will take place this winter.

Every year, thousands of people across the world lose an arm or a leg.

"Our technology helps amputees to control an artificial limb, in much the same way as their own biological hand or arm, via the person's own nerves and remaining muscles. This is a huge benefit for both the individual and to society," says Max Ortiz Catalan, industrial doctoral student at Chalmers University of Technology in Sweden.

Ever since the 1960s, amputees have been able to use prostheses controlled by electrical impulses in the muscles. Unfortunately, however, the technology for controlling these prostheses has not evolved to any great extent since then. For example, very advanced electric hand prostheses are available, but their functionality is limited because they are difficult to control.

"All movements must by pre-programmed," says Max Ortiz Catalan. "It's like having a Ferrari without a steering wheel. Therefore, we have developed a new bidirectional interface with the human body, together with a natural and intuitive control system."

Today's standard socket prostheses, which are attached to the body using a socket tightly fitted on the amputated stump, are so uncomfortable and limiting that only 50 percent of arm amputees are willing to use one at all.

This research project is using the world-famous Brånemark titanium implant instead (OPRA Implant System), which anchors the prosthesis directly to the skeleton through what is known as osseointegration.

"Osseointegration is vital to our success. We are now using the technology to gain permanent access to the electrodes that we will attach directly to nerves and muscles," says Max Ortiz Catalan.

Currently, the electrical signals used to control a prosthesis are picked up by electrodes placed on the skin. The problem is that the signals change when the skin moves, since the electrodes shift to a different position. The signals are also affected by sweat, which changes the resistance at the interface.

In this project, the researchers are instead planning to implant the electrodes directly on the nerves and remaining muscles. Since the electrodes are closer to the source and the body acts as protection, the bio-electric signals become much more stable. Osseointegration is used to carry the signals from inside the body to the prosthesis: the electrical impulses from the nerves in the arm stump are captured by a neural interface, which sends them to the prosthesis through the titanium implant. They are then decoded by sophisticated algorithms that allow the patient to control the prosthesis using his or her own thoughts.
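The release does not specify the decoding algorithms, but a common baseline in myoelectric control is to slice the recorded signals into short windows, extract simple time-domain features, and train a classifier to map each window to an intended movement. A generic sketch of that pipeline, not the Chalmers implementation; the window length, sampling rate, and choice of classifier are assumptions:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def features(window):
        # Classic time-domain features per channel: mean absolute
        # value, zero-crossing count, and waveform length.
        mav = np.mean(np.abs(window), axis=0)
        zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
        return np.concatenate([mav, zc, wl])

    def train_decoder(emg, labels, win=200):
        # emg: (n_samples, n_channels) recording; labels: one intended
        # movement per 200 ms window (assuming 1 kHz sampling).
        windows = [emg[i:i + win] for i in range(0, len(emg) - win + 1, win)]
        X = np.array([features(w) for w in windows])
        return LinearDiscriminantAnalysis().fit(X, labels[:len(X)])

At run time the same feature extraction is applied to each incoming window, and the classifier's prediction drives the prosthesis motors.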

In existing prostheses, amputees use only visual or auditory feedback. This means, for example, that you have to look at or hear the motors in the prosthesis in order to estimate the grip force applied to a cup if you want to move it around. With the new method, patients receive feedback as the electrodes stimulate the neural pathways to the patient's brain, in the same way as the physiological system. This means that the patient can control his or her prosthesis in a more natural and intuitive way. This has not been possible previously.

"Many of the patients that we work with have been amputees for more than 10 years, and have almost never thought about moving their missing hand during this time," says Max Ortiz Catalan. "When they arrived here, they got to test our virtual-reality environment or our more advanced prostheses in order to evaluate the decoding algorithms. We placed electrodes on their amputation stumps, and after a few minutes, they were able to control the artificial limbs in ways that they didn't know they could, most of the times. This made the patients very excited and enthusiastic."


"By testing the method on a few patients, we can show that the technology works and then hopefully get more grants to continue clinical studies and develop the technology further. This technology can then become a reality for lots of people. We want to leave the lab and become part of the patients' everyday life. If the first operations this winter are successful, we will be the first research group in the world to make 'thought-controlled prostheses' a reality for patients to use in their daily activities, and not only inside research labs."

About osseointegration

Osseointegration (osseo=bone) is a method for anchoring prostheses directly to the skeleton, and it was developed in the 1960s by Professor Per-Ingvar Brånemark. He discovered that titanium is not rejected by the body, but is integrated into the surrounding bone tissue. In the beginning, the method was used to treat tooth loss using dental titanium implants. Since then, the method has been further developed and is also used today for leg, arm and face prostheses as well as for anchoring hearing aids. Since 1990, over 200 amputees have been treated using this method (OPRA Implant System) and have gained increased movement and enhanced quality of life.

About the artificial hand

The artificial hand can mimic a living hand. The motors in each finger can be controlled individually and simultaneously, for example, with a turning motion of the wrist. It is possible to demonstrate how the system works by using electrodes which capture myoelectric signals on the surface of the arm.

Story Source:

The above story is reprinted from materials provided by Chalmers University of Technology.

Automation systems become flexible when robots make their own decisions

Dec. 5, 2012 — Researchers at University West in Sweden have created an automation system where machines and robots make their own decisions and adapt to external circumstances. They continue to work even when something goes wrong. You can reprogram them every day and easily vary equipment and manufactured products.

Automation scientists Fredrik Danielsson and Bo Svensson have demonstrated that this works in practice. The tests are performed on an automated production line containing three robots, two metal-cutting machines, a transportation system, a material handling system and a measuring station.

Normally, automated production of this kind functions only as long as nothing goes wrong. This is because the system is hierarchical: the master control system gives orders about what should be done, and only when it is told that an order has been completed is the next order placed.

"A single error somewhere makes everything stop. For example, if a sheet metal is damaged an operator has so take it out and then reset and restart everything," says Bo Svensson.

In Fredrik Danielsson's and Bo Svensson's new model, all robots and machines work independently. Each robot, conveyor and machine is equipped with an agent, a small intelligent program that does not require signals from a master control system to act.

"The agents know what neighbours they should communicate with and make small local decisions," says Fredrik Danielsson.

An agent is triggered by what is happening next to it. The start signal for a machine may be that someone puts a metal sheet in it; then it knows that it must drill. Things do not have to happen in a fixed order. If a sheet is lost, the system continues to work with the other sheets. The operator can also insert a new part in the middle of the flow without disturbing the system.
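P-SOP's internals are not published in this release, but the behaviour described, each machine wrapped in a small program that reacts to local events and notifies its neighbours rather than waiting on a master controller, maps naturally onto message-passing agents. A toy sketch under that reading; the machine names, actions, and wiring are invented for illustration:

    import queue, threading, time

    class Agent(threading.Thread):
        # One agent per machine: it waits for events from its
        # neighbours, does its local work, then triggers the next
        # agent. There is no master control system, so a lost part
        # simply never fires the downstream trigger; everything
        # else keeps running.
        def __init__(self, name, action, downstream=None):
            super().__init__(daemon=True)
            self.name, self.action, self.downstream = name, action, downstream
            self.inbox = queue.Queue()

        def run(self):
            while True:
                part = self.inbox.get()
                self.action(part)                  # local decision + work
                if self.downstream:
                    self.downstream.inbox.put(part)

    drill = Agent("drill", lambda p: print("drilling", p))
    conveyor = Agent("conveyor", lambda p: print("moving", p), downstream=drill)
    for a in (conveyor, drill):
        a.start()
    conveyor.inbox.put("sheet-1")   # start signal: a sheet is inserted
    time.sleep(0.1)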

It may take up to a year to create a traditional automation system, and it is very difficult, time-consuming and expensive to adapt it to changing demands. In the agent-based system, however, you can easily insert and remove both equipment and operators, and it can produce an array of product variants because it is easily reprogrammed. Agents are generated automatically, in minutes, by P-SOP, software developed by Fredrik Danielsson and Bo Svensson. The operator gives P-SOP instructions, in the form of a PowerPoint sketch, of how the system should work.

"Then he presses a button and P-SOP spits out a bunch of small agents for different machines. I think this may be the next big step in automation," says Fredrik Danielsson.

Story Source:

The above story is reprinted from materials provided by University West, via AlphaGalileo.

Field geologists (finally) going digital

Nov. 5, 2012 — Not very long ago a professional geologist's field kit consisted of a Brunton compass, rock hammer, magnifying glass, and field notebook. No longer. In the field and in the labs and classrooms, studying Earth has undergone an explosive change in recent years, fueled by technological leaps in handheld digital devices, especially tablet computers and cameras.

Geologist Terry Pavlis' digital epiphany came almost 20 years ago when he was in a museum looking at a 19th-century geology exhibit that included a Brunton compass. "Holy moly!" he remembers thinking, "We're still using this tool." This is despite the fact that technological changes over the last 10 years have not only made the Brunton compass obsolete, but swept away paper field notebooks as well (the rock hammer and hand-lens magnifier remain unchallenged, however).

The key technologies replacing the 19th-century field tools are the smartphone, the PDA, the handheld GPS unit, and tablet computers such as the tablet PC and iPad. Modern tablets, in particular, can do everything a Brunton compass can, plus take pictures, act as both notebook and mapping device, and gather precise location data using GPS. They can even be equipped with open-source GIS software.

Pavlis, a geology professor at The University of Texas at El Paso, and Stephen Whitmeyer of James Madison University will be presenting the 21st-century way to do field geology on Monday, 5 Nov., at the meeting of the Geological Society of America (GSA) in Charlotte, N.C. The presentations are a part of a digital poster Pardee Keynote Symposium titled, "Digital Geology Speed-Dating: An Innovative Coupling of Interactive Presentations and Hands-On Workshop."

"I had a dream we would not be touching paper anymore," says Pavlis. "I'm now sort of an evangelist on this subject."

That's not to say that the conversion to digital field geology is anywhere near complete. The new technology has been slow to catch on in some university field courses because it is expensive and becomes obsolete quickly, says Pavlis.

"Field geology courses are expensive enough for students," he notes. As a result, the matter of teaching field geology with digital tools is actually rather controversial among professors.

Meanwhile, on the classroom side of earth science education, there are new digital tools that bring the field into the classroom. One of them is GigaPans -- gigantic panorama images.

"A GigaPan is basically a really big picture that's made of lots of full-resolution zoomed-in photos," explains geologist Callan Bentley of Northern Virginia Community College. To make a GigaPan, you need a GigaPan Robot that looks at the scene and breaks it into a grid, then shoots the grid. That can result in hundreds or even thousands of images. The GigaPan system then stitches them together. The resulting stitched image is uploaded to the GigaPan.org website where everybody can see it.

"In geology, we look at things in multiple scales," says Bentley. "A well-composed GigaPan is very useful." Bentley will be presenting GigaPans at the same GSA meeting session as Pavlis, along with others using the latest technology to study and teach geology.

GigaPans were developed by Google, NASA, and the robotics lab at Carnegie Mellon University. Bentley got involved when the "Fine Outreach for Science" program recruited him. Since then, he has been documenting the geology of the Mid-Atlantic region.

"I have used some of it in the classroom," said Bentley. "I have students look at a scene, make a hypothesis then look closer to test the hypothesis."

Story Source:

The above story is reprinted from materials provided by Geological Society of America.

Robotic-assisted radical bladder surgery potentially benefits bladder cancer patients

Dec. 19, 2012 — About 30 percent of the more than 70,000 bladder cancer cases expected in 2012 are muscle invasive. In such cases, radical cystectomy is the preferred treatment. In a pilot trial, a team of investigators assessed the efficacy of open radical cystectomy (ORC) vs. robotic-assisted laparoscopic radical cystectomy (RARC). While there were no significant differences in treatment outcomes, RARC resulted in decreased estimated blood loss and shorter hospital stay compared to ORC.

The results are published in the February 2013 issue of The Journal of Urology.

"In the last decade minimally invasive approaches including robotic-assisted approaches have emerged as viable surgical options for many urological malignancies with the promise of decreased morbidity with shorter hospital stays, faster recovery, and less narcotic analgesic requirements," says lead investigator Dipen J. Parekh, MD, Professor and Chairman of the University of Miami Miller School of Medicine's Department of Urology and Director of robotic surgery; formerly at the University of Texas Health Science Center at San Antonio.

The goal of the clinical trial was to provide preliminary data from a single institution's randomized trial that evaluated the benefits of robotic-assisted vs. open surgery in patients with invasive bladder cancer. The trial, conducted between July 2009 and June 2011, involved 47 patients and was performed at the University of Texas Health Science Center at San Antonio. Primary eligibility was based on candidacy for an open or robotic approach at the discretion of the treating surgeon. Forty patients were randomized individually and equally to either an ORC or RARC group using a computer randomization program. Each of the two study groups was similar in distribution of age, gender, race, body mass index, previous surgeries, operative time, postoperative complications, and final pathological stage.
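The study says only that a computer randomization program split the 40 patients individually and equally between the arms; a standard way to guarantee such 1:1 balance throughout enrolment is permuted-block randomization, sketched generically below (an illustration of the technique, not the trial's actual program):

    import random

    def permuted_block_randomize(n_patients, arms=("ORC", "RARC"), block=4):
        # Shuffle assignments within fixed-size blocks so each arm
        # receives exactly half of every block, and of the total.
        schedule = []
        while len(schedule) < n_patients:
            blk = list(arms) * (block // len(arms))
            random.shuffle(blk)
            schedule.extend(blk)
        return schedule[:n_patients]

    assignments = permuted_block_randomize(40)
    print(assignments.count("ORC"), assignments.count("RARC"))  # 20 20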

Investigators evaluated five surgical outcome factors: estimated blood loss, operative time from incision to closure, transfusion requirements, time to return of bowel function, and length of stay.

The robotic group experienced significantly decreased blood loss, accompanied by a trend toward faster return of bowel function, fewer hospitalizations beyond five days, and fewer transfusions.

"The strength of our study is the prospective randomized nature that eliminates selection biases that may have been present in prior retrospective analyses," says Dr. Parekh. "We also believe that our study demonstrates that a prospective randomized trial comparing traditional open and robotic approaches in bladder cancer is possible."

The investigative team has joined with several institutions nationally to build on its study, starting a larger multi-institution randomized clinical trial to further compare and assess open vs. robotic-assisted radical cystectomy in patients with invasive bladder cancer. It plans to collect intermediate and long-term survival data from these same patients, as well as data on quality of life, daily living activities, handgrip strength, and mobility.

Story Source:

The above story is reprinted from materials provided by Elsevier, via AlphaGalileo.

Journal Reference:

Dipen J. Parekh, Jamie Messer, John Fitzgerald, Barbara Ercole, Robert Svatek. Perioperative Outcomes and Oncologic Efficacy from a Pilot Prospective Randomized Clinical Trial of Open versus Robotic Assisted Radical Cystectomy. The Journal of Urology, 2012; DOI: 10.1016/j.juro.2012.09.077


Micro sensors help underwater robots swim like fish

Dec. 12, 2012 — NTU scientists have invented a 'sense-ational' device, similar to the string of 'feelers' on the body of the blind cave fish that enables the fish to sense its surroundings and navigate easily.

Using a combination of water pressure and computer vision technology, the sensory device is able to give users a 3-D image of nearby objects and map its surroundings. The possible applications of this fish-inspired sensor are enormous. The sensor can potentially replace the expensive 'eyes and ears' on Autonomous Underwater Vehicles (AUVs), submarines and boats that currently rely on cameras and sonars to gather information about the environment around them.

The revolutionary low-powered sensor has none of the drawbacks of cameras, which cannot see in dark or murky waters, or of sonar, whose sound waves can harm some marine animals.

These extremely small sensors (each sensor is 1.8mm x 1.8mm) are now being used in AUVs developed by researchers from Singapore-MIT Alliance for Research and Technology (SMART), a research centre funded by the National Research Foundation. The centre is developing a new generation of underwater 'stingray-like' robots and autonomous surface vessels.

The new sensors, made using Microelectromechanical Systems (MEMS) technology, will make such robots smarter and prolong their operational time as battery power is conserved.

Associate Professor Miao Jianmin from the School of Mechanical and Aerospace Engineering and his team of four have spent the last five years collaborating with SMART to develop micro-sensors that mimic the row of 'feelers' on both sides of the blind cave fish's body.

Associate Prof Miao said the line of sensors present on the fish's body is the reason why it can sense objects around it and still travel at high speeds without colliding with any underwater obstacles.

"To mimic nature, our team created microscopic sensory pillars wrapped in hydrogel -- a material which is similar to the natural neuromasts of the blind cave fish -- into an array of two rows of five sensors," Prof Miao said.

"This array of micro-sensors will then allow AUVs to locate, identify, and classify obstacles and objects in water through water pressure and also to optimise its movement in water by sensing the water flow."

The new sensor array, which costs under S$100 to make, is also far more affordable than sonar, which costs thousands of dollars and can detect faraway objects but not nearby ones.

Partnering Prof Miao to develop the sensors and adapt them for use on AUVs is Professor Michael Triantafyllou from SMART. Prof Triantafyllou, of SMART's Centre for Environmental Sensing and Modeling (CENSAM), is one of the world's foremost experts on creating underwater robots modelled after aquatic animals such as fish.

Current problems with AUVs include poor navigation in murky or cloudy waters such as those off the coast of Singapore, as underwater cameras can only see a short distance, Prof Triantafyllou said.

"Other methods like underwater lights and cameras, acoustic navigation, and sonars also work, but they are very expensive and require very high levels of power that drain the batteries. The new sensors are much cheaper and only require small amounts of power. Also, sensors like sonar are loud and invasive and they may harm aquatic animals that also use sound waves to navigate," the Massachusetts Institute of Technology professor added.

The AUVs are intended for environmental sensing: detecting pollution and contaminants and monitoring the overall quality of Singapore's waters. They will carry chemical sensors to assess the chemical condition of the water (dissolved oxygen, nutrients, metals, oils, and pesticides) and biological sensors to monitor hazards such as harmful bacteria and pathogens.

These MEMS sensors, which specialise in near-field detection, also have potential defence applications: they could detect nearby submarines without the use of sonar, which gives away one's own location.

This collaborative research resulted in two breakthrough papers being accepted for presentation at a MEMS conference next January in Taiwan, organised by the Institute of Electrical and Electronics Engineers (IEEE).

One paper describes the piezoelectric sensor, which requires no external energy because it generates an electric voltage when water flows past the 'feelers'. The second focuses on a low-powered biomimetic sensor that can detect underwater objects even when there is little water flowing past it.

To further improve the sensor, Prof Miao's team is now looking to develop a hybrid sensor which will combine both the zero-energy piezoelectric sensor's high accuracy with the low-powered static sensor's ability to detect objects in still water.

Story Source:

The above story is reprinted from materials provided by Nanyang Technological University.

Major breakthrough in high-precision indoor positioning

Dec. 18, 2012 — Cell phones are getting ever smarter, savvy enough to tell you where to go and what to buy in shopping centers or department stores. Although still in its nascent stages, indoor positioning and navigation using mobile phones is expected to arrive soon.

People widely rely on the Global Positioning System (GPS) for location information, but unlike outdoor environments, GPS does not work well in indoor spaces or in urban canyons, where streets cut through dense blocks of high-rise buildings. GPS requires a clear view of its satellites because the signals become attenuated or scattered by roofs, walls, and other objects. In addition, GPS is only one-third as accurate in the vertical direction as in the horizontal, making it practically impossible to tell which floor of a skyscraper a person or object is on.

For indoor positioning, location-based service providers, including mobile device makers, have mostly used a combination of GPS and wireless network systems such as WiFi, cellular connectivity, Ultra Wide Band (UWB), or Radio-Frequency Identification (RFID). For example, the WiFi Positioning System (WPS) collects both GPS and WiFi signals, and many companies, including Google and Apple, use this technology to provide clients with location information services.

Professor Dong-Soo Han from the Department of Computer Science, KAIST, explained, "WPS is helpful to a certain extent, but it is not sufficient because the technology needs GPS signals to tag the location of WiFi fingerprints collected from mobile devices. Therefore, even if you are surrounded by rich WiFi signals, they can be useless unless they are accompanied by GPS signals. Our research team tried to solve this problem, and finally we came up with a radio map that is created based on WiFi fingerprints only."

Professor Han and his research team have recently developed a new method to build a WiFi radio map that does not require GPS signals. WiFi fingerprints are a set of WiFi signals captured by a mobile device and the measurements of received WiFi signal strengths (RSSs) from surrounding access points at the device. A WiFi radio map shows RSSs of WiFi access points (APs) at different locations in a given environment. Therefore, each WiFi fingerprint on the radio map is connected to location information.
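Positioning against such a radio map is commonly done by nearest-neighbour matching in signal-strength space: compare the fingerprint the phone just measured with every stored fingerprint and average the coordinates of the closest matches. A generic sketch of that standard technique (not necessarily KAIST's algorithm); the access-point names, dBm values, and coordinates are invented:

    import numpy as np

    def locate(fingerprint, radio_map, k=3, floor=-100.0):
        # radio_map: list of (rss_dict, (lat, lon)) entries. Compare
        # RSS vectors over the union of access points, treating an
        # unheard AP as a weak floor value, then average the k
        # nearest mapped locations.
        def dist(a, b):
            aps = set(a) | set(b)
            return sum((a.get(ap, floor) - b.get(ap, floor)) ** 2
                       for ap in aps) ** 0.5
        nearest = sorted(radio_map, key=lambda e: dist(fingerprint, e[0]))[:k]
        return np.mean([loc for _, loc in nearest], axis=0)

    rmap = [({"ap1": -40, "ap2": -70}, (37.001, 127.001)),
            ({"ap1": -75, "ap2": -45}, (37.002, 127.003))]
    print(locate({"ap1": -42, "ap2": -68}, rmap, k=1))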

The KAIST research team collected fingerprints from users' smartphones every 30 minutes, through modules embedded in mobile platforms, utilities, or applications, and analyzed the characteristics of the collected fingerprints. "We discovered that mobile devices such as cell phones are not necessarily on the move all the time," Professor Dong-Soo Han said. "They have locations where they stay for a certain period of time on a regular basis. If you have a full-time job, then your phone has at least two fixed locations: home and office."

By taking smartphone users' home and office addresses as location references, Professor Han classified the fingerprints collected from the phones into two groups, home and office. He then converted each address into geographic coordinates (with the help of Google's geocoding) to obtain the locations of the collected fingerprints. The resulting WiFi radio map holds both the fingerprints and the coordinates, so the location of a phone can be identified or tracked.

For evaluation, the research team selected four areas in Korea (a mix of commercial and residential locations), collected 7,000 WiFi fingerprints from 400 access points in each area, and created a WiFi radio map for each. Tests conducted in each area showed that location accuracy hinges on the volume of data collected: once the data collection rate exceeds 50 percent, the average error distance is less than 10 meters.

Professor Han added, "Although there are many issues, such as privacy protection, that have to be cleared away before this technology can be commercialized, there is no doubt that we will face greater demand for indoor positioning systems in the near future. People will eventually want to know where they are indoors just as much as outdoors."

Once the address-based radio map is fully developed for commercial use, identifying locations at the home and office level will be possible, opening the door to applications such as emergency rescue services and indoor location-based services, from finding lost cell phones, restaurants, stores, and missing persons to providing information on sales and discounts.

Story Source:

The above story is reprinted from materials provided by The Korea Advanced Institute of Science and Technology (KAIST), via EurekAlert!, a service of AAAS.