
Sunday, March 22, 2009

UK Plans to Release Robotic Fish into the Thames to Monitor Water Quality (with photo)

Editor's note: Has robotic fish technology really matured to this level? That remains to be verified. The idea is good, but there still seem to be plenty of technical hurdles.


The robotic fish resembles a carp in appearance

Sina Tech, Beijing time March 23 — According to Britain's Daily Mail, researchers at the University of Essex say they are preparing to release specially designed biomimetic robotic fish into the Thames to detect pollutants in the water and to build a 3D map of the river's pollution. The fish, shaped like carp, carry detection sensors that can pick up a variety of pollutants, such as fuel leaking from ships or other chemicals.

The University of Essex researchers say the European Union has funded £2.5 million of research and design work to find new ways of monitoring water pollution. The robotic fish to be released into the Thames are designed entirely on biomimetic principles, measuring about 50 cm long, 15 cm high and 12 cm wide. Each fish carries sensors that automatically monitor a range of pollutants in the river, plus a GPS unit that transmits the data to researchers in real time. All of the fish released into the Thames will be able to work together: even without scientists controlling them, they can cooperate according to pre-set programs. When one fish "sniffs out" hazardous material in a stretch of water, the fish exchange data with one another over Wi-Fi links and promptly alert the researchers and environmental authorities.
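The cooperative behavior described above — one fish detecting a contaminant, sharing the reading with its peers over Wi-Fi, and the group raising an alarm — could be sketched roughly as follows. All class names, thresholds, and data shapes here are hypothetical illustrations; the actual Essex control software is not public.

```python
# Sketch of the cooperative detection loop described in the article.
# All names and thresholds are hypothetical, not the Essex team's code.

ALERT_THRESHOLD = 0.8  # normalized contaminant level that triggers an alarm

class RoboticFish:
    def __init__(self, fish_id):
        self.fish_id = fish_id
        self.peer_readings = {}  # data shared by other fish over the Wi-Fi link

    def sense(self, water_sample):
        """Return a normalized contaminant reading at the current position."""
        return water_sample.get("contaminant_level", 0.0)

    def share(self, peers, reading, position):
        """Broadcast a reading to nearby fish (stand-in for the Wi-Fi link)."""
        for peer in peers:
            peer.peer_readings[self.fish_id] = (position, reading)

    def should_alert(self, reading):
        """Alert once a local or peer-shared reading crosses the threshold."""
        shared = [r for _, r in self.peer_readings.values()]
        return reading >= ALERT_THRESHOLD or any(r >= ALERT_THRESHOLD for r in shared)

# One fish "smells" an oil leak and shares it; its peer then alerts too.
a, b = RoboticFish("fish-a"), RoboticFish("fish-b")
reading = a.sense({"contaminant_level": 0.9})
a.share([b], reading, position=(51.5, -0.1))
print(a.should_alert(reading), b.should_alert(b.sense({})))  # True True
```

The point of the shared-readings dictionary is that the fish need no central controller: each one decides to alert from its own reading plus whatever its neighbors have broadcast, matching the article's "even without scientists controlling them" description.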

Built on biomimetic principles, the robots swim much like real carp: a motor drives the body's side-to-side undulation, while the fins and tail steer, for an expected speed of about half a meter per second. This advance guard of robotic fish is expected to begin real pollution surveys in the water within 18 months, at first monitoring leaks and discharges from large ships in ports, and possibly also surveying pollution in the Thames itself. The fish will carry different sensors to detect different pollutants, and scientists will then use the data to draw real-time 3D maps of the water pollution, so that environmental agencies can choose the best way to remove the contaminants. The scientists say each fish will be able to swim continuously for 24 hours on a single charge.

Researchers at the University of Washington in the United States have already built three robotic fish that can communicate with one another while swimming. Like real fish, they swim using fins, and they can even chase prey such as drifting objects or small fish. At the rear, two tail planes parallel to the water surface let the fish rise and dive as they rotate, while a vertical tail fin keeps it stable. The fish's only propulsion comes from its tail, driven by a mechanical arm extending from the rear. The motion imitates a salmon's: a salmon's stroke looks simple, but the scientists had to study its trajectory with dedicated biomimetic analysis and derive an algorithm to drive the mechanical tail so that its motion stays as smooth as possible.
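The Washington group's actual tail-drive algorithm is not given in the article, but a common way to get the smooth, salmon-like stroke it describes is to command the tail joint with a pure sinusoid, so the angle and its derivative never jump. The amplitude and frequency below are illustrative, not measured values.

```python
import math

# Illustrative smooth tail-joint command for a single-joint robotic tail.
# A pure sine beat is C-infinity smooth: no discontinuities in angle or
# angular velocity, which is the property the article's algorithm aims for.

def tail_angle(t, amplitude_rad=0.5, frequency_hz=1.0):
    """Tail-joint angle (radians) at time t (seconds): one smooth sine beat."""
    return amplitude_rad * math.sin(2 * math.pi * frequency_hz * t)

# Sample one full beat cycle: the command starts and ends at zero and
# peaks near the chosen amplitude, with no sudden jumps in between.
samples = [tail_angle(t / 10) for t in range(11)]
print(round(max(samples), 3))  # ≈ 0.476, just under the 0.5 rad amplitude
```

A real controller would shape the amplitude and frequency over time (for speed and turning), but the key design choice — driving the joint with a smooth periodic function rather than piecewise commands — is the same.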

The University of Essex researchers said: "Pollution in the Thames is steadily worsening. If we cannot locate the pollutants in the river, leaks will only grow more serious over time. We hope that doing this (releasing the robotic fish) can prevent potentially dangerous substances from being discharged into the sea." If the experiment succeeds, the scientists hope the robotic fish will be used around the world to stop pollution from spreading. (Liu Yan)

link: http://tech.sina.com.cn/d/2009-03-23/07142932198.shtml

Source: http://www.sina.com.cn, March 23, 2009, 07:14, Sina Tech

Friday, May 02, 2008

Saving Energy in Data Centers


A group at Microsoft Research attacks the problem on two fronts.


Monitoring the conditions: This sensor, a prototype developed by the Networked Embedded Computing group at Microsoft Research, is sensitive to heat and humidity. The group envisions using sensors like these to monitor servers in data centers, enabling significant energy savings. The sensors could also be used in homes to manage the energy use of appliances.
Credit: Microsoft Research

P.S.: You could call this a small application of wireless sensor networks (WSN).

Data centers are an increasingly significant source of energy consumption. A recent EPA report to Congress estimated that U.S. servers and data centers used about 61 billion kilowatt-hours of electricity in 2006, or 1.5 percent of the total electricity used in the country that year. (See also "Data Centers' Growing Power Demands.") Concern about the amount of energy eaten up by data centers has led to a slew of research in the area, including new work from Microsoft Research's Networked Embedded Computing group, which was showcased last week in Redmond, WA, at Microsoft's TechFest 2008. The work attacks the energy-consumption problem in two ways: new algorithms make it possible to free up servers and put them into sleep mode, and sensors identify which servers would be best to shut down based on the environmental conditions in different parts of the server room. By eliminating hot spots and minimizing the number of active servers, Microsoft researchers say that the system could produce as much as 30 percent in energy savings in data centers.

The sensors, says Feng Zhao, principal researcher and manager of the group, are sensitive to both heat and humidity. They're Web-enabled and can be networked and made compatible with Web services. Zhao says that he envisions the sensors, which are still in prototype form, as "a new kind of scientific instrument" that could be used in a variety of projects. In a data center, the idiosyncrasies of a building and individual servers can have a big effect on how the cooling system functions, and therefore on energy consumption. Cooling, Zhao notes, accounts for about half the energy used in data centers. (He believes that the sensors, which he says could sell for $5 to $10 apiece, could be used in homes as well as in data centers, where they could work in tandem with a Web-based energy-savings application.)

Another aspect of the research, explains Lin Xiao, a researcher with the group, is new algorithms designed to manage loads on the servers in a more energy-efficient way. Traditionally, load-balancing algorithms are used to keep traffic evenly distributed over a set of servers. The Microsoft system, in contrast, distributes the load to free up servers during off-peak times so that those servers can be put into sleep mode. The algorithms are currently designed for connection servers, which are employed with services for which users may log in for sessions of several hours, such as IM services or massively multiplayer online games. Because long sessions are common, shifting loads requires complex planning in order to avoid disconnecting users and other problems with quality of service. Xiao says that the group has developed two types of algorithms: load-forecasting algorithms, which predict a few hours ahead of time how many servers will need to be working, and load-skewing algorithms, which distribute traffic according to the predictions and power down relatively empty servers.
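The two algorithm families described above could be sketched as follows: a forecaster that predicts how many connection servers will be needed, and a skewer that concentrates new logins onto already-busy servers so that lightly loaded ones can drain and be put to sleep. Microsoft's actual algorithms are not public, so the capacity figure, headroom factor, and forecasting rule here are all illustrative assumptions.

```python
# Illustrative load-forecasting and load-skewing sketch; not Microsoft's code.

CAPACITY = 1000  # assumed max concurrent sessions per connection server

def forecast_servers_needed(recent_loads, headroom=1.2):
    """Naive forecast: scale the recent peak session count by a safety
    headroom, then take the ceiling over per-server capacity."""
    peak = max(recent_loads)
    needed = -(-int(peak * headroom) // CAPACITY)  # ceiling division
    return max(needed, 1)

def skew_new_session(servers):
    """Route a new login to the busiest server that still has room,
    leaving lightly loaded servers free to drain and go to sleep.
    Assumes at least one server has spare capacity."""
    candidates = [s for s in servers if s["sessions"] < CAPACITY]
    target = max(candidates, key=lambda s: s["sessions"])
    target["sessions"] += 1
    return target["name"]

loads = [1800, 2400, 2100]  # concurrent sessions over recent hours
servers = [{"name": "s1", "sessions": 900},
           {"name": "s2", "sessions": 300},
           {"name": "s3", "sessions": 0}]
print(forecast_servers_needed(loads))  # 3: enough to cover the scaled peak
print(skew_new_session(servers))       # s1: the busiest server with room
```

Skewing is the opposite of classic load balancing: instead of spreading sessions evenly, it deliberately packs them, which matters for connection servers precisely because multi-hour sessions cannot simply be moved off a server you want to power down.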

The beauty of the system, Xiao says, emerges when the two parts work in tandem. The sensors monitor the servers to make sure they're not being overcooled (a common problem in data centers, he says, since people often set the cooling system conservatively to protect the equipment). The sensor system also watches for hot spots, which can make the air-conditioning system work inefficiently. The load-skewing algorithms then use this information: knowing that you want to shut down 400 servers is one thing; the sensors help determine which ones to shut down.
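The selection step at the end of that paragraph — given a target count of servers to power down, using sensor data to pick which ones — might look something like this. Preferring the hottest locations removes the most heat load and helps break up hot spots; the server names and temperatures are invented for illustration.

```python
# Illustrative sensor-guided server selection; not the actual Microsoft system.

def pick_servers_to_sleep(server_temps, count):
    """Given a map of server name -> measured temperature (Celsius) and a
    target count from the load forecaster, return the `count` servers in
    the hottest spots as candidates to power down."""
    ranked = sorted(server_temps.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:count]]

temps_c = {"rack1-s1": 34.0, "rack1-s2": 27.5,
           "rack2-s1": 31.0, "rack2-s2": 25.0}
print(pick_servers_to_sleep(temps_c, 2))  # ['rack1-s1', 'rack2-s1']
```

A production version would also have to respect which servers are actually drainable (few remaining sessions), which is exactly why the sensor data and the load-skewing algorithms need to work together rather than separately.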

Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and the author of several reports on data-center energy consumption, says that he sees this type of research as one step toward a big-picture vision for data centers. "There's a focus by the big players in the data-center area to try to get to a point where they can shift computing loads around, dependent on not just electricity prices, but also weather and other variations." Ultimately, Koomey says, this could mean shifting loads not only within a data center, but also from region to region.

The group ran simulations using data from the IM service Windows Live Messenger and found that the system could produce about 30 percent in energy savings, depending on the physical structure of the data center and on how the system is configured. Zhao says that the savings produced by the group's system do depend on how the user chooses to handle some inherent trade-offs. For example, he says, Microsoft is working on several areas of research that will help in modeling the unexpected, such as load spikes. A user might choose to keep more servers powered on than strictly necessary as a reserve in case of a spike, at a corresponding loss in energy savings. "Our research shows the trade-off between energy saving and performance hit, and lets users choose the right balance," Zhao says.
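The reserve-versus-savings trade-off Zhao describes reduces to simple arithmetic, which a small sketch makes concrete. The server counts below are invented; only the shape of the trade-off comes from the article.

```python
# Illustrative arithmetic for the reserve-vs-savings trade-off.

def energy_savings_fraction(total_servers, needed, reserve):
    """Fraction of server energy saved when only the needed servers plus a
    spike reserve stay powered on (assuming idle savings scale with the
    number of sleeping servers)."""
    awake = min(total_servers, needed + reserve)
    return (total_servers - awake) / total_servers

# With 1000 servers and 600 needed: no reserve saves 40% of server energy;
# holding a 100-server reserve cuts that to 30% but absorbs a sudden spike.
print(energy_savings_fraction(1000, 600, 0))    # 0.4
print(energy_savings_fraction(1000, 600, 100))  # 0.3
```

Each extra reserve server buys spike tolerance at a fixed cost in savings, which is the balance Zhao says users get to choose.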

Other researchers are working on developing techniques for shutting down servers at optimal times. Xiao says that the Microsoft group's work is distinguished by its focus on connection servers and the problems that come with shifting loads when users typically stay logged in for many hours.

"Servers are only being used [about] 15 percent of their maximum computing ability, on average," Koomey says, "so that means a lot of capital sitting around." He expects companies to be very motivated to implement the research that they do in this area, since "they want to make better use of their capital," he says. Wasting energy and computing power doesn't make good business sense.

link: http://www.technologyreview.com/Biztech/20388/page1/
