science stuff from the wise mammoth

Where science and occasional humor meet.

jackslenderman:

WaterBOB 100-gallon bathtub water jug.

In an emergency with water shortages, people are told to fill their bathtubs so they have water on hand. But most bathtubs are not clean in an emergency, and water left open to the air will evaporate over time.

The WaterBOB solves those problems! It holds 100 gallons of water, fits any bathtub, and has a hand pump so you can draw water as needed without wasting any. It keeps the water clean and fresh, is made of FDA-approved, BPA-free material, costs less than $20, and is available on Amazon.

txchnologist:

The world’s first 3-D printed car took to the streets this weekend after being built in an amazingly short 44 hours. The vehicle, called the Strati, was designed by Michele Anoé of Italy, who won an international competition held by crowdsourcing carmaker Local Motors. It was printed and rapidly assembled by a Local Motors team during a manufacturing technology show held last week in Chicago, then went for a drive on Saturday.

Strati’s chassis and body were made in one piece out of a carbon fiber-impregnated plastic on a large-area 3-D printer. The machine put down layer after layer of the material at a rate of 40 pounds per hour.
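
As a sanity check on those numbers, a two-line estimate (assuming, which the article doesn’t confirm, that printing ran for most of the 44-hour build):

```python
# Rough upper bound on material laid down during the Strati build.
# Assumption (not stated in the article): printing ran for most of the
# 44 hours; in reality assembly and milling took a share of that time.
print_rate_lb_per_hr = 40
build_hours = 44

material_lb = print_rate_lb_per_hr * build_hours
print(f"At most ~{material_lb} lb (~{material_lb * 0.4536:.0f} kg) of printed material")
```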

mindblowingscience:

This squishy tentacle robot may haunt your dreams, but it could also help you in a disaster

Robots are metallic and hard, right? Wrong. A new robot built at MIT’s Computer Science and Artificial Intelligence Lab is rubbery and wriggly, and built to squirm around tight corners.

The creation is meant to be an arm for what are known as soft robots — machines that use compressed air to move their soft body parts, making them safe to be around humans and capable of feats with which hard robots might struggle. It’s inspired by octopus tentacles and moves by puffing up different segments of its body.
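
To make the actuation idea concrete, here is a toy sketch of segment-by-segment pneumatic bending — invented pressures and geometry, not the CSAIL team’s controller:

```python
# Toy model of a pneumatic tentacle: each segment bends away from whichever
# side is inflated more, like an octopus arm curling around an object.
# Purely illustrative; not MIT's actual control code.
from dataclasses import dataclass

@dataclass
class Segment:
    left_psi: float = 0.0
    right_psi: float = 0.0

    def bend_angle(self, gain: float = 2.0) -> float:
        """Approximate bend in degrees; positive means curling right."""
        return gain * (self.left_psi - self.right_psi)

arm = [Segment() for _ in range(4)]

# Inflate the left chambers progressively more toward the tip,
# so the arm curls into a spiral that could wrap around a corner.
for i, seg in enumerate(arm):
    seg.left_psi = 3.0 + 2.0 * i

total_curl = sum(seg.bend_angle() for seg in arm)
print(f"Total curl: {total_curl:.0f} degrees")
```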

Unlike many other soft robots, the tentacle really is made of 100 percent soft material — silicone rubber.

“Designing away all the hard components forces us to think about the more difficult questions. Is it possible to do useful manipulation with a robot that’s as soft as chewing gum?” team lead Andrew Marchese said in a press release.

Designing out all of the hard components allows the robot to move through especially tight tunnels and corners. That might make it useful in disaster or search-and-rescue operations — areas where soft robots are especially poised to make a big impact.

“I’m not saying that the world should be filled with robotic octopus tentacles on assembly lines,” Marchese said in the release. “I just want to challenge the notion that robots have to look or act a certain way.”

discoverynews:

Schizophrenia Is Actually Eight Genetic Disorders

New research published in the American Journal of Psychiatry suggests that schizophrenia is not a single disease, but rather a group of eight genetically distinct disorders, each with its own set of symptoms. The finding could result in improved diagnosis and treatment, while also shedding light on how genes work together to cause complex disorders.

mindblowingscience:

Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.
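
The failure mode is easy to reproduce in a toy simulation. The sketch below is a guess at the setup (a 1-D world, invented speeds and distances), not Winfield’s code, but it shows how re-deciding every step can cost both “humans”:

```python
# Toy 1-D version of Winfield's experiment: a rescuer robot sits between
# two "human" proxies drifting toward holes at x = -6 and x = +6.
# Illustrative guesswork, not the Bristol lab's actual software.

def run_trial(strategy: str, steps: int = 20) -> str:
    robot = 0.0
    proxies = {"A": -3.0, "B": 3.0}
    drift = {"A": -0.4, "B": 0.4}
    saved, lost = [], []
    for t in range(steps):
        for name in list(proxies):
            proxies[name] += drift[name]
            if abs(proxies[name]) >= 6.0:          # fell into the hole
                lost.append(name); del proxies[name]
            elif abs(robot - proxies[name]) <= 0.6:  # close enough to push clear
                saved.append(name); del proxies[name]
        if proxies:
            if strategy == "commit":
                # Chase the nearest proxy and stick with it.
                target = min(proxies, key=lambda n: abs(robot - proxies[n]))
            else:  # "dither": re-decide every step, alternating targets
                target = sorted(proxies)[t % len(proxies)]
            robot += 1.0 if proxies[target] > robot else -1.0
    return f"{strategy}: saved={saved} lost={lost}"

print(run_trial("commit"))   # rescues one proxy, loses the other
print(run_trial("dither"))   # oscillates in the middle; both fall in
```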

Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
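
The article doesn’t detail Arkin’s algorithms, but the core idea of an ethical governor — a final veto layer between target selection and firing — can be caricatured in a few lines. The zone names, radii and the rule itself are invented for illustration:

```python
# Caricature of an "ethical governor": a last-stage veto that checks a
# proposed strike against rules-of-war constraints before permitting it.
# Zones, radii and thresholds here are invented, not Arkin's real system.
import math

PROTECTED_ZONES = [
    {"name": "school",   "pos": (2.0, 3.0), "radius": 1.5},
    {"name": "hospital", "pos": (8.0, 1.0), "radius": 2.0},
]

def governor_permits(strike_pos, blast_radius=0.5):
    """Return (allowed, reason); veto any strike whose blast could
    touch a protected zone."""
    for zone in PROTECTED_ZONES:
        d = math.dist(strike_pos, zone["pos"])
        if d <= zone["radius"] + blast_radius:
            return False, f"vetoed: too close to {zone['name']} ({d:.1f} units)"
    return True, "permitted under encoded constraints"

print(governor_permits((2.5, 3.5)))   # near the school -> vetoed
print(governor_permits((15.0, 0.0)))  # open ground -> permitted
```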

Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

"When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces," says Wendell Wallach, author ofMoral Machines: Teaching robots right from wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

Watch a video of these ‘ethical’ robots in action here

futurescope:

“We’re bouncing now” – New version of MIT’s robotic Cheetah

MIT researchers have developed an algorithm for bounding that they’ve successfully implemented in their robotic cheetah. It enables the robot to run and jump, untethered, across grass.

In experiments on an indoor track, the robot sprinted up to 10 mph, even continuing to run after clearing a hurdle. The MIT researchers estimate that the current version of the robot may eventually reach speeds of up to 30 mph.

The key to the bounding algorithm is in programming each of the robot’s legs to exert a certain amount of force in the split second during which it hits the ground, in order to maintain a given speed: in general, the faster the desired speed, the more force must be applied to propel the robot forward. Sangbae Kim, an associate professor of mechanical engineering at MIT, hypothesizes that this force-control approach to robotic running is similar, in principle, to the way world-class sprinters race. “Many sprinters, like Usain Bolt, don’t cycle their legs really fast,” Kim says. “They actually increase their stride length by pushing downward harder and increasing their ground force, so they can fly more while keeping the same frequency.”
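
That logic can be sketched with a simple impulse balance: over one full stride, the legs’ vertical impulse must offset body weight, so a shorter ground-contact time at higher speed demands a larger force. The numbers below are rough guesses, not MIT’s parameters:

```python
# Impulse-balance sketch of "push harder to fly more": the average vertical
# ground force during stance must offset body weight over the whole stride.
# Mass, stride period and contact length are rough guesses, not MIT's values.
g = 9.81
mass_kg = 30.0
stride_period_s = 0.5      # one full bound cycle
contact_length_m = 0.2     # distance the foot sweeps while on the ground

for speed_mph in (5, 10, 20, 30):
    v = speed_mph * 0.447                 # mph -> m/s
    stance_s = contact_length_m / v       # faster run -> shorter stance
    force_n = mass_kg * g * stride_period_s / stance_s
    print(f"{speed_mph:>2} mph: stance {stance_s*1000:4.0f} ms, "
          f"avg leg force {force_n / (mass_kg * g):.1f} x body weight")
```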

futurescope:

DARPA: Jetpack developed to help every soldier run 4-minute miles, but it currently provides only a 5-12% assist

A nice what-if project from researchers at Arizona State, who developed a small wearable personal jetpack for DARPA to help soldiers run faster.

Combine this with exoskeletons and fire-resistant synthetic spider-silk body armour and you get the soldier of the future.

What if every soldier could run a four-minute mile? That’s the goal behind 4MM, or 4 Minute Mile, a student project to create a wearable jetpack that enhances speed and agility. Working with the Defense Advanced Research Projects Agency and a faculty mentor, Jason Kerestes is the mastermind behind 4MM. He built a prototype of the jetpack and is now testing and refining his design to be as effective as possible.
The 4MM project is part of an ASU program called iProjects, which brings students and industry together to find innovative solutions to real-world problems. See an overview of this year’s iProjects here: researchmatters.asu.edu/videos/students-solve-industry-challenges-through-iprojects
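
For context on that 5-12% figure, a quick back-of-the-envelope, under the naive assumption (not stated in the post) that an X% assist cuts mile time by X%:

```python
# What a 5-12% assist would mean for mile times, assuming (naively) that
# "X% assist" maps directly to an X% reduction in time. The 5:20 baseline
# is an invented example, not a figure from the project.
def assisted_time(base_seconds: float, assist_pct: float) -> float:
    return base_seconds * (1 - assist_pct / 100)

base = 5 * 60 + 20                      # say, a fit soldier's 5:20 mile
for pct in (5, 12):
    t = assisted_time(base, pct)
    print(f"{pct:>2}% assist: {int(t // 60)}:{t % 60:04.1f} mile")
# Even 12% leaves a 5:20 runner well short of 4:00 -- hence "what if?".
```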

scienceisbeauty:

Light Printing

We are exploring new modalities of creative photography through robotics and long-exposure photography. Using a robotic arm, a light source is carried through precise movements in front of a camera. Photographic compositions are recorded as images of volumetric light. Robotic light “painting” can also be inverted: the camera is moved via the arm to create an image “painted” with environmental light. Finally, adding real-time sensor input to the moving arm and programming it to explore the physical space around objects can reveal immaterial fields like radio waves, magnetic fields, and heat flows.
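
In code, the technique reduces to: open the shutter, drive a point light along a programmed path, close the shutter. Here is a sketch of generating such a path — the waypoint format is invented, since a real arm would consume poses through its own motion API:

```python
# Generate waypoints for a long-exposure "light print": a helix traced by
# a light on the robot's end effector while the camera shutter stays open.
# The (x, y, z) waypoint format is illustrative, not a specific arm's API.
import math

def helix_waypoints(turns=3, points_per_turn=60, radius=0.15, height=0.4):
    """Yield (x, y, z) positions in metres for the light source to follow."""
    n = turns * points_per_turn
    for i in range(n + 1):
        theta = 2 * math.pi * i / points_per_turn
        yield (radius * math.cos(theta),
               radius * math.sin(theta),
               height * i / n)

path = list(helix_waypoints())
print(f"{len(path)} waypoints; first {path[0]}, "
      f"last {tuple(round(c, 2) for c in path[-1])}")
```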

Via Mediated Matter (MIT)