
Just last week, a manufacturer rushed a brand-new product over to me for a “super quick” turnaround, and indeed they got it back even quicker than anticipated. As soon as I saw the courier label, I realised it had been forwarded from a reviewer at another website, so I checked it immediately. Sure enough, it was broken and wouldn’t even boot. If I didn’t know better, I’d say the buggers did it on purpose.

I fantasise that these magazines and websites are secretly shooting-star sellers on eBay, specialising in cables, adapters, power supplies, driver CDs, user manuals and randomly shaped blocks of polystyrene.

These experiences taught me to be more considerate when treated to a pristine, never-before-opened product to test... to a measure that some might describe as anally retentive. I keep every strip of sticky tape and cable tie. I cut plastic bags open with a pair of scissors. When it comes to repacking, I refer to the sequence of photographs I shot while unpacking.

Once, when returning one particularly expensive item, the inventory manager looked inside the neatly repackaged box and asked, perhaps only half-jokingly, whether I’d actually taken the product out to test it. Being careful can be bad for your reputation.

It also raises another possibility that worries me: maybe my colleagues are simply testing these products properly. Doing it properly, I guess, involves kicking the package up and down the street for a few hours before unpacking it, handing the box to a starving Rottweiler to play with, ripping open every plastic bag from the centre outwards with frantic glee, slamming the largest clump of hardware on a table and dumping everything else out of the window.

My first encounter with the notorious drop-test was in the late 1980s on the long-since defunct PC Business World weekly tabloid magazine, at which mild-mannered Reviews Editor Jonathan Angel revealed his devilish side by dropping portable hard drives down the stairwell of the office building and reporting on the resulting carnage. The manufacturers went ballistic and levelled all sorts of legal and financial threats, but the readers loved it.


Frankly, if a product is designed to be handheld or jostled around in a backpack, drop-testing should be mandatory – a bit like the way Ikea uses robotic children to jump up and down on its beds for a virtual ten years.

Certainly, today’s portable hard disks are ruggedised like never before, with 2.5in drives built to withstand at least three times as many Gs as your average 3.5in drive. And unless you’re an easily impressed tit who finds it shocking that a Samsung Galaxy S III screen risks cracking every time you throw a bottle of beer at it, even delicate smartphones can survive an extraordinary amount of rough treatment.

There are environmental testing businesses that specialise in this kind of thing, from engineering design to packaging. These guys, for instance, can quote you the relevant ISO testing standards and have the blue-stained sanitary pads to prove it.

So perhaps I’m not growing less clumsy so much as manufacturers, aware that modern users are tactile and impatient, are simply building their products to be tougher.

Why other reviewers feel the need to drop-test 22in displays and A3 laser printers is beyond me. Inappropriate testing must be the new order of the day, so the next time I test a notebook, let’s see how it fends off fireworks, or fares on the rugby field or at the bottom of a lake.

To hear Intel Fellow Matt Adiletta tell it, Chipzilla not only invented the term microserver but saw the trend towards wimpy computing coming way ahead of all this fawning over the ARM architecture and a half-dozen upstarts wanting to take big bites out of the Xeon server processor cash cow.

When El Reg says “fawn”, that’s an intentional pun that harkens back to FAWN: A Fast Array of Wimpy Nodes, a paper published in May 2008 by a bunch of server geeks at Carnegie Mellon University.

That paper compared the energy profile and performance of x86 and ARM architectures, specifically for server nodes equipped with flash storage. It demonstrated how the combination of low-powered (in terms of both performance and electrical consumption) processors and flash could yield a 50X improvement over then-current x86 and hard-disk clusters fielding requests from a key-value store, and on the order of 4X compared to low-power x86 chips mated to flash.
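The FAWN idea – a fast array of wimpy nodes fronting a key-value store – can be sketched in a few lines: hash-partition the keyspace across many low-power nodes, each answering gets from its own flash-backed store. The sketch below is purely illustrative (the class names and the simple modulo-hash partitioning are this article's invention, not the paper's design, which used consistent hashing and log-structured flash storage):

```python
import hashlib

class WimpyNode:
    """One low-power node; a plain dict stands in for its flash-backed store."""
    def __init__(self):
        self.store = {}

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class FawnCluster:
    """Hash-partitions keys across an array of wimpy nodes."""
    def __init__(self, n_nodes):
        self.nodes = [WimpyNode() for _ in range(n_nodes)]

    def _node_for(self, key):
        # Deterministic hash so the same key always lands on the same node
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key).put(key, value)

    def get(self, key):
        return self._node_for(key).get(key)

cluster = FawnCluster(n_nodes=8)
cluster.put("user:42", "alice")
print(cluster.get("user:42"))  # -> alice
```

The point of the architecture is that each get touches exactly one cheap node, so aggregate queries-per-joule scales with node count rather than with per-core grunt.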


Subsequent papers published by the CMU researchers were done in conjunction with Intel Research, as you can see at the FAWN project page.

Adiletta, as it turns out, caught the microserver bug back in 2006, when it wasn’t even called that yet. In 2007 his team at Intel created what he calls a “CPU DIMM” – a board about the size of a folded wallet carrying either Atom or dual-core Core desktop/laptop processors, as he explained in a conference call with the press today. It had a lot of pin and signal connectors and plugged into a memory slot, and Adiletta explained that in 2008 he had shown it to none other than Sun cofounder and serial capitalist Andy Bechtolsheim to get his opinion.

Bechtolsheim asked a lot of questions about thermals, performance, reliability, and other feeds and speeds, then was quiet for a while, holding his head in his hands and rocking a little. “Geez, it just hurts my head to think about all of the opportunities this could provide if we can realize it,” Adiletta recalls Bechtolsheim finally saying as he came out of his trance.

Intel’s point in hosting Thursday’s meeting with journos and in telling this story about the meeting with Bechtolsheim is that the company wants to demonstrate that it has not been caught by surprise by either the advent of microservers or the movement of the ARM architecture from the smartphone and embedded spaces into the data center. In fact, the second generation of FAWN research at CMU compared x86 to ARM because Intel knew where the real competition would come from.

El Reg notes that at this time Bechtolsheim was the CTO for servers at Sun Microsystems, which was nearly three years away from being thrown into the arms of Oracle after a catch-and-release by Big Blue. What Bechtolsheim did not do was launch microservers at Sun; rather, he invested his money in Arista Networks, where he became chairman and chief development officer in late 2008. Bechtolsheim likes to flip back and forth between systems and networking, and has his own kind of ticking and tocking going on.

Adiletta, as the godfather of microservers at Intel, was trotted out to establish this creation myth in our psyches and also to remind everyone that Intel is expected to launch its dual-core “Centerton” processor before the end of the year – that means soon, obviously – as the first server-class Atom processor.


“Having more chefs in the kitchen helps, up to a certain point, depending on what is being served,” Adiletta explained by way of metaphor to illustrate why Intel was enthusiastic about its impending Atom S Series of server chips and the possibilities they present. “We’re very bullish on this segment.”

Maybe Intel’s researchers were indeed enthusiastic about microservers, but their business managers were not so sure and absolutely did not want to upset the Xeon cash cow, particularly during the Great Recession. As Jason Waxman, general manager of the Cloud Computing Group at Intel, put it when microserver upstart SeaMicro launched a Xeon-based SM10000 cluster in January of this year: “SeaMicro is pretty modest. They were really the first company to push us hard on the Atom, and they are the first to develop a system that supports both Atom and Xeon.”

A little more than a month later, floundering AMD, looking for some kind of salvation after having big handfuls of server market share ripped from it by Chipzilla, snapped up SeaMicro for $334m and is now working on an Opteron ARM processor due in 2014, with SeaMicro’s interconnect fabric gluing them together into what amounts to a data center in a box.
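Adiletta’s kitchen metaphor is, in effect, Amdahl’s law: extra wimpy cores help only as far as the workload actually parallelises. A back-of-the-envelope sketch makes the point – note that the serial fractions and the relative core speeds below are made-up numbers for illustration, not Intel benchmarks:

```python
def speedup(serial_frac, n_cores, core_speed=1.0):
    """Amdahl's law: relative throughput of n_cores cores of a given
    per-core speed on a workload with the stated serial fraction."""
    return core_speed / (serial_frac + (1 - serial_frac) / n_cores)

# One brawny Xeon-class core vs eight wimpy Atom-class cores at
# (hypothetically) a third of the per-core speed.
for f in (0.0, 0.05, 0.5):
    brawny = speedup(f, n_cores=1, core_speed=1.0)
    wimpy = speedup(f, n_cores=8, core_speed=1.0 / 3)
    print(f"serial={f:.2f}  brawny={brawny:.2f}  wimpy={wimpy:.2f}")
```

With a perfectly parallel workload the eight wimpy cores win handily; once half the work is serial, the single brawny core pulls ahead – which is exactly the “depending on what is being served” caveat.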
