
Design machine


New cache design.

Transistors keep getting smaller and smaller, enabling computer chip designers to develop faster chips. But no matter how fast the chip gets, moving data from one part of the machine to another still takes time. To date, chip designers have addressed this problem by placing small caches of local memory on or near the processor itself. Caches store the most frequently used data for easy access. But the days of a cache serving a single processor (or core) are over, making cache management a nontrivial challenge. Additionally, cores typically have to share data, so the physical layout of the communication network connecting the cores needs to be considered, too. Researchers at MIT and the University of Connecticut have now developed a set of new "rules" for cache management on multicore chips.
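To make the "most frequently/recently used data" idea concrete, here is a minimal sketch of a least-recently-used (LRU) eviction policy, one of the classic cache-management rules. This is an illustrative software analogy only; real hardware caches use set-associative lookup and, on multicore chips, coherence protocols that this toy omits.

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache with least-recently-used eviction (software analogy,
    not a model of any specific hardware cache)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

For example, with capacity 2, inserting "a" and "b", touching "a", then inserting "c" evicts "b": the entry that went longest without being used.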

So how are these caches typically managed, and what is this group doing differently? Caches on multicore chips are arranged in a hierarchy of sorts.

Smart machines.

Forthcoming from Columbia Business School Publishing in fall 2013, Smart Machines: IBM's Watson and the Era of Cognitive Computing, by John E. Kelly III, Director of IBM Research, and Steve Hamm, a writer at IBM and former business and technology journalist, introduces readers to the fascinating world of "cognitive systems," offering a glimpse into the possible future of computing.

Today, the world is on the cusp of a new phase in the evolution of computing: the era of cognitive systems. The victory of IBM's Watson on the TV game show Jeopardy! signaled the dawn of this new era. Read an excerpt from the book. About the authors: John E. Kelly III is Director of IBM Research; Steve Hamm is a writer at IBM.

Back to analog.

Our recently developed ability to structure materials at nanometer scales has led to a variety of applications, but few of them have made geeks as excited as cloaking devices. Although still fairly limited, these cloaking devices rely on what are termed metamaterials: materials structured so that they can manipulate light, bending it in unusual directions. Now, researchers have determined that it should be possible to create metamaterials that take light waves and perform calculus with them. Although the paper is entirely theoretical (no actual devices were constructed, nor were any light waves bent), simulations using the properties of materials we know how to work with, like silicon, indicate that real-world devices should perform almost as well as virtual ones.
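The "calculus with light" idea amounts to applying a mathematical transfer function to a wavefront. As a purely numerical analogy (not the paper's actual device design), spatial differentiation can be expressed as multiplying a signal's Fourier transform by ik and transforming back, which is the kind of operation a suitably structured slab could perform on an optical field:

```python
import numpy as np

def spatial_derivative(f, dx):
    """Differentiate a sampled signal via its Fourier transform:
    d/dx corresponds to multiplication by i*k in Fourier space."""
    k = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Test signal: a Gaussian, whose derivative is known analytically.
x = np.linspace(-10, 10, 1024, endpoint=False)
dx = x[1] - x[0]
gaussian = np.exp(-x**2)
numeric = spatial_derivative(gaussian, dx)
analytic = -2 * x * np.exp(-x**2)
```

For a smooth signal like this Gaussian, the Fourier-space derivative matches the analytic one to near machine precision, which is why an analog element implementing such a transfer function can be so effective.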

The authors of the paper describing these metamaterials say they were inspired by analog computers. What can you do with these?

Old fire controller.

The USS Zumwalt, the latest destroyer now undergoing acceptance trials, comes with a new type of naval artillery: the Advanced Gun System (AGS). The automated AGS can fire 10 rocket-assisted, precision-guided projectiles per minute at targets over 100 miles away. Those projectiles use GPS and inertial guidance to improve the gun's accuracy to a 50-meter (164-foot) circle of probable error, meaning that half of its GPS-guided shells will fall within that distance of the target. But take away the fancy GPS shells, and the AGS and its digital fire control system are no more accurate than mechanical analog technology that is nearly a century old. We're talking about electromechanical analog fire control computers like the Ford Instruments Mark 1A Fire Control Computer and Mark 8 Rangekeeper.
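To get a feel for the kind of problem a rangekeeper works on continuously, here is a hypothetical, stripped-down lead-angle calculation: where to aim so a constant-speed shell meets a constant-velocity target. All names and numbers are illustrative; a real fire control solution also folds in ballistics, own-ship motion, wind, and many more variables.

```python
import math

def intercept_time(px, py, vx, vy, shell_speed):
    """Time t at which a shell of constant speed can meet a target at
    relative position (px, py) moving with velocity (vx, vy).
    Solves |p + v*t| = shell_speed * t, a quadratic in t."""
    a = vx * vx + vy * vy - shell_speed ** 2
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    disc = b * b - 4 * a * c
    if disc < 0 or a == 0:
        return None  # no intercept (target too fast, or degenerate case)
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    valid = [t for t in roots if t > 0]
    return min(valid) if valid else None

# Illustrative scenario: target 10 km due north, steaming east at 15 m/s;
# shell averages 500 m/s. Aim point leads the target along its track.
t = intercept_time(0.0, 10_000.0, 15.0, 0.0, 500.0)
aim = (15.0 * t, 10_000.0)
```

Even this two-variable toy needs a quadratic solved fresh for every new range and bearing; the analog machines tracked dozens of such continuously changing inputs with gears and cams.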

These machines solved calculus problems with 20-plus variables in real time, constantly, long before digital computers got their sea legs. Shooting things with a gun from a ship is not exactly easy.

Data on the move.

Over the past year, we've covered a number of the challenges facing the supercomputing industry in its efforts to hit exascale compute levels by the end of the decade. The problem has been widely discussed at supercomputing conferences, so we're not surprised that Horst Simon, the Deputy Director at Lawrence Berkeley National Laboratory's NERSC (National Energy Research Scientific Computing Center), has spent a significant amount of time talking about the problems with reaching exascale speeds. But putting up $2,000 of his own money on a bet that we won't hit exascale by 2020?

That caught us off guard.

The exascale rethink.

Simon lays out, in a 60-plus-page slideshow, why he doesn't think we'll hit the exascale threshold within seven years. One slide compares power consumption per FLOP now versus 2018: power efficiency, measured on a per-core basis, is expected to continue improving for multi-core and many-core architectures, but interconnect power consumption hasn't scaled nearly as well.
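A back-of-envelope calculation shows why the power argument bites. Using publicly cited figures for Titan, the top-ranked system of 2012 (roughly 17.6 petaflops on about 8.2 MW, treated here as assumed values), holding efficiency constant while scaling to an exaflop lands far beyond the ~20 MW facility budget commonly cited as practical:

```python
# Illustrative scaling, not a prediction: assumed Titan-class figures.
titan_flops = 17.6e15   # ~17.6 petaflops sustained (assumed)
titan_watts = 8.2e6     # ~8.2 MW power draw (assumed)
exaflop = 1e18          # target: 10**18 FLOP/s

flops_per_watt = titan_flops / titan_watts   # roughly 2 GFLOPS per watt
power_needed = exaflop / flops_per_watt      # watts at constant efficiency
print(f"{power_needed / 1e6:.0f} MW")        # hundreds of MW vs a ~20 MW budget
```

Closing that gap of more than an order of magnitude in efficiency, with interconnects improving slowly, is exactly the shortfall Simon is betting on.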
