Tilera Chips Beat Intel And AMD, Claims Facebook

Facebook engineers say Tilera’s multi-core processors are more efficient than low-power Intel and AMD chips

By Jeffrey Burt, eWEEK USA 2013. Ziff Davis Enterprise Inc. All Rights Reserved.

Tilera, which builds many-core processors to compete with x86-based chips from Intel and Advanced Micro Devices (AMD), got a boost from Facebook when engineers from the social media giant said Tilera’s processors out-performed those of its larger rivals.

The Facebook engineers released a white paper at the International Green Computing Conference in Orlando, Florida, reporting that tests showed a Quanta Computer system running on Tilera’s low-power 64-core TilePro64 processors offered more than three times the performance-per-watt of systems running Intel’s quad-core Xeon chips, and more than four times that of servers running AMD’s eight-core Opteron chips.

The report comes at a time when Intel and AMD, the dominant server chip makers, are being challenged in the market for low-power systems that are increasingly being used by Internet-based companies like Facebook, Google, Amazon and Twitter, which have huge, densely populated data centres designed to process large numbers of small, Web-based workloads. In such environments, power and cooling are factors as important as performance.

Important Role In Web 2.0

In the white paper, the Facebook engineers said key-value, or KV, stores play an important role in such Web 2.0 environments. They also noted that applications found in such data centres, such as Memcached, have hundreds of thousands of independent simple transactions that need to be processed in parallel. Facebook ran the tests using Memcached, and engineers found that the Tilera-based system was more efficient than those running x86 chips.
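The Memcached workload described above boils down to a very simple interface: independent get/set operations on individual keys, each of which can be served in parallel. The toy in-process stand-in below (a sketch only; real Memcached is a networked C server, and the class name here is invented for illustration) shows the access pattern the engineers were benchmarking:

```python
# Toy stand-in for the get/set interface Memcached exposes.
# Illustrative sketch only -- real Memcached is a networked C server;
# the class and method names here are invented for illustration.

class ToyKVStore:
    """Minimal in-process key-value cache with Memcached-like semantics."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value
        return True

    def get(self, key):
        # Returns None on a cache miss, as a Memcached client would.
        return self._data.get(key)

    def delete(self, key):
        return self._data.pop(key, None) is not None

# Each request touches one key independently, which is why the workload
# parallelises so well across many small cores.
cache = ToyKVStore()
cache.set("user:42:name", "alice")
print(cache.get("user:42:name"))  # alice
print(cache.get("user:99:name"))  # None (cache miss)
```

Because each transaction is tiny and independent of the others, throughput scales with the number of cores rather than the speed of any single core, which is the property the many-core TilePro64 exploits.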

“Low-power many-core processors are well suited to KV store workloads with large amounts of data,” the engineers wrote in their paper. “Despite their low clock speeds, these architectures can perform on-par or better than comparably powered low-core-count x86 server processors.”

In this case, Facebook found the 64-core Tilera chip had 67 percent higher throughput than the low-power x86 chips at the same latency. At the same time, the engineers wrote that “when taking power and node integration into account as well, a TILEPro64-based S2Q server with eight processors handles at least three times as many transactions per second per watt as the x86-based servers with the same memory footprint”.
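The efficiency metric behind these claims is simply throughput divided by power draw. The sketch below illustrates the arithmetic; the absolute throughput and wattage figures are hypothetical, invented only so that the ratios match those the article reports (67 percent higher throughput, roughly three times the transactions per second per watt), since the white paper excerpts here give only the ratios:

```python
# Illustrative arithmetic for the performance-per-watt comparison.
# The absolute figures below are hypothetical -- the article reports
# only the ratios (67% higher throughput, ~3x perf/watt).

def perf_per_watt(transactions_per_sec, watts):
    """Server efficiency metric used in the comparison: transactions/sec per watt."""
    return transactions_per_sec / watts

x86_tps, x86_watts = 200_000, 300              # hypothetical x86 baseline
tilera_tps = int(x86_tps * 1.67)               # 67% higher throughput (per article)
tilera_watts = 167                             # hypothetical, chosen for a ~3x ratio

x86_eff = perf_per_watt(x86_tps, x86_watts)
tilera_eff = perf_per_watt(tilera_tps, tilera_watts)
print(f"Tilera advantage: {tilera_eff / x86_eff:.1f}x per watt")  # 3.0x
```

The point of the metric is that a server can win on efficiency without winning on raw speed: the Tilera box combines higher aggregate throughput with lower power draw, and both factors multiply in its favour.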

For the testing, Facebook used a 1U (1.75-inch) server running a quad-core Intel Xeon L5520 chip running at 2.7GHz, a 1U server powered by AMD’s eight-core Opteron 6128 HE clocked at 2.0GHz, and a 2U (3.5-inch) Quanta S2Q with eight Tilera TilePro64 processors – for a total of 512 cores – at 866MHz. Tilera officials said Facebook also plans to run the same tests on Tilera’s new 64-bit Gx3000 series processor, which was announced in June. That chip will begin sampling this month.

Tilera officials noted that Quanta builds Facebook’s Open Compute platform. In April, Facebook launched its Open Compute Project, open-sourcing the specifications the company uses for its hardware and data centres to efficiently power its massive social network. That is in contrast to others like Amazon, Google and Twitter, which keep the technology used in their data centres a secret.

Big Guns Target Micro Server Space

Intel and AMD are both working to drive up the performance and power efficiency of their processors through such avenues as adding more cores and integrating high-level graphics onto the same piece of silicon as the CPU. For the past two years, Intel also has been pushing the idea of micro servers, designed for highly virtualised data centres and cloud computing environments and using low-power Xeon and Atom chips.

At the same time, other chip vendors and systems makers are looking to make inroads into the low-power server space. Officials with ARM Holdings, whose designs dominate the mobile computing space, including smartphones and tablets, are looking to move up the ladder and into the data centre. A number of companies that make ARM-based chips, including Nvidia, Calxeda and Marvell Technologies, are pushing ahead with plans to release server processors based on ARM designs. SeaMicro is rolling out low-power servers based on Intel’s Atom processors.

Tilera officials said they do not expect to overtake Intel as the world’s top chip maker. However, they have said the company can challenge Intel and AMD in the 20 percent of the server market aimed at large cloud installations.


US Announces 100Mbps National Broadband Plan For 2020

The US FCC officially unveiled its National Broadband Plan on 16 March, which looks to connect up to 100 million American households to high-speed broadband service

The US Federal Communications Commission officially unveiled its ambitious National Broadband Plan on 16 March, which calls for a massive overhaul of the United States’ Internet infrastructure over the next decade and for opening up the country’s wireless spectrum to accommodate new devices.

The FCC proposal, which is officially called “Connecting America: The National Broadband Plan,” looks to invest billions of dollars to give more Americans access to high-speed broadband service. One of the plan’s most ambitious goals is to connect 100 million U.S. households to 100 megabits-per-second broadband service by 2020.

In addition, the FCC proposal looks to create “anchor institutions,” such as schools, hospitals and military bases, which can offer the public broadband services of 1 gigabit per second. There are also plans to free up 500 megahertz of spectrum for use with new types of wireless Internet devices.

Overall, the FCC broadband plan looks to replace traditional means of communications with high-speed Internet access. However, the plan goes beyond simply offering a proposal for wiring households with broadband service. Instead, the FCC sees its 10-year plan as a way to spur job growth, better protect the United States and educate scores of school children.

“The National Broadband Plan is a 21st century roadmap to spur economic growth and investment, create jobs, educate our children, protect our citizens, and engage in our democracy,” FCC Chairman Julius Genachowski wrote in a statement. “It’s an action plan, and action is necessary to meet the challenges of global competitiveness, and harness the power of broadband to help address so many vital national issues.”


The FCC has now sent its Broadband Plan to Congress, and it is expected to take years to implement the recommendations, if the proposals are approved at all. Offering high-speed broadband access to more Americans, and bringing these services to parts of the country that could not access the Internet, were major themes of President Barack Obama’s campaign in 2008.

The actual broadband plan, which runs more than 300 pages, does not offer a great deal of specifics on how much the recommendations will cost in the long run. However, there are some areas in which the FCC did recommend specific dollar amounts. For example, the plan calls for a 10-year investment of up to $6.5 billion (£4.3bn) for homeland security and public safety, which includes the development of a next-generation 911 system and fighting cyber-crime.

The FCC also believes that it can either make money or offset the cost of creating 500 megahertz of wireless spectrum through the auctioning of that spectrum to businesses. Just before the FCC released the plans, Reuters reported that Genachowski said that broadcasters are willing to auction off their spectrum in exchange for a slice of the revenue.

The FCC Broadband Plan also looks to keep America competitive and part of that means improving the country’s Internet infrastructure. The FCC recommendations specifically pointed to efforts by Japan, South Korea and Germany to offer their citizens high-speed Internet access.

In a paper released the same day as the FCC plan, Roger Kay, an analyst with Endpoint Technologies Associates, offered a blueprint of how technology will change in the next 10 years. Kay believes that computers will surround everyone but people will be less aware of them. In order to fulfill that vision of a more wired world, the United States needs a better infrastructure and the federal government should take some role in this development. “As part of that integration, our broadband infrastructure will be complete, both wired and wireless, with federal involvement as necessary,” Kay wrote in his paper called “The Future: Unknowable Mystery or Mere Evolution.”

Part of that will require 100 million households having access to better broadband service. The FCC plan calls for a goal of 100 million U.S. homes with actual download speeds of 50 Mbps and actual upload speeds of 20 Mbps by 2015.


US Suspends Legal Action Against Intel

Intel and the FTC have filed a joint motion to suspend the legal proceedings while the two sides work to negotiate a settlement

By Jeffrey Burt, eWEEK USA 2013. Ziff Davis Enterprise Inc. All Rights Reserved.

Intel and the Federal Trade Commission have suspended legal proceedings related to the lawsuit filed by the federal regulators against the chip maker while the two sides try to negotiate a settlement.

In a statement released on 21 June, Intel officials said the two sides agreed to file a joint motion to suspend the administrative trial proceedings to give the parties time to negotiate. According to Intel, the motion calls for suspending the proceedings until 22 July.

Intel’s statement indicated that a consent order has been proposed, and will be a key topic of discussion during the negotiations. However, terms of the proposed consent order were not released.

Anti-Competitive Behavior

The FTC on 16 Dec, 2009, filed a lawsuit against Intel, accusing it of practicing anti-competitive behavior designed to hinder competition from rival Advanced Micro Devices and graphics chip maker Nvidia.

The federal regulators said Intel used its dominant position in the x86 chip market to coerce OEMs like Dell, IBM and Hewlett-Packard into limiting their use of processors from AMD, and Intel had altered some of its technology—such as compilers—to hurt the performance of products from AMD and other competitors.

The allegations echoed those leveled by other regulatory bodies. The European Commission, the antitrust arm of the European Union, fined Intel $1.45 billion (£983m) in May 2009, a fine the chip maker is appealing.

In addition, the U.S. Attorney’s Office in New York filed a similar lawsuit against Intel in November 2009 for practices that allegedly were intended to illegally stifle competition from AMD.

Legal Dispute With AMD

The FTC also accused Intel of practicing anti-competitive behavior against Nvidia. All the agencies accused Intel of using financial incentives and coercion to convince systems makers to limit their use of competing products.

In December, Intel ended its longstanding legal dispute with AMD in a settlement that included a $1.25 billion payment to AMD. Lawsuits between Nvidia and Intel are still ongoing.

Intel officials have strongly defended the company’s business practices, insisting that while Intel has been aggressive in the market, it has not used its dominant position illegally. While Intel did settle its legal issues with AMD, company officials did not admit to any wrongdoing.

In a sharp statement issued two weeks after the FTC filed its lawsuit, Intel officials said the agency did not understand the x86 chip marketplace, and that the lawsuit contradicted established antitrust regulations.


Cloud News

Vendors Converge On Packaged Clouds

HP and Oracle extend their converged infrastructure products but smaller packages are also appearing

By Jeffrey Burt, eWEEK USA 2012. Ziff Davis Enterprise Inc. All Rights Reserved.

Top-tier data centre systems makers continue to roll out prepackaged hardware-and-software offerings as they look to grow their capabilities in the increasingly competitive converged infrastructure space.

Both Hewlett-Packard and Oracle added to their portfolios this week with solutions that leverage both in-house and partner technologies designed to give enterprises integrated and easy-to-deploy converged data centre systems. Analysts and vendors consider such offerings critical as businesses continue their migration to cloud computing environments.

Broad Pre-Packaged Cloud Market

Smaller systems makers also are making strides in that direction. At the Cloud Summit East show in New York on June 7, Supermicro and cloud software vendor Nimbula partnered to offer Supermicro servers optimised for Nimbula’s Director software platforms and aimed at businesses looking to grow their cloud computing capabilities.

“Awareness of IT cost and infrastructure benefits in private and public cloud computing is reaching the masses,” said Wally Liaw, vice president of international sales at Supermicro. “Supermicro’s cloud-ready server solutions are an ideal computing platform for Nimbula Director. Supermicro’s application-optimised systems combined with Nimbula’s expertise in cloud deployment automation, operation, and scalability will provide any size enterprise or service provider with an accelerated, cost-effective path to evolving cloud services.”

Such converged data centre solutions got a shot in the arm a couple of years ago when Cisco Systems rolled out its UCS (Unified Computing System), a tightly integrated, all-in-one data centre offering that includes Cisco-branded server and networking devices, storage from EMC and virtualisation capabilities from VMware. It also includes management software.

The UCS has been a solid business for Cisco. IDC analysts said last month that Cisco is now the No. 3 blade server vendor in the world, and company executives said Cisco now has 5,400 UCS customers and an annual run rate of $900 million for UCS product orders.

Converged Cloud Offerings Proliferating

Such integrated offerings are not necessarily new, but vendor and customer interest has grown with the rise of virtualisation and cloud computing. Now most hardware vendors are rolling out such converged packages. For example, Dell in April announced vStart, a pre-assembled hardware and software bundle of Dell PowerEdge servers, EqualLogic storage and PowerConnect switches that will be delivered as a single unit and easily deployed. A vStart package will let businesses initially run 100 or 200 virtual machines, with that number growing later.

As part of its Converged Infrastructure initiative, HP executives at their Discover 2011 show June 6 rolled out new and enhanced data centre packages complete with HP servers, storage, networking and services, and with support for a wide range of virtualisation technologies from VMware, Citrix Systems and Microsoft. HP’s AppSystem, VirtualSystem and CloudSystem offerings are designed to help businesses more easily migrate to cloud computing environments, according to company officials.

Analysts generally applauded HP’s announcements. Charles King, principal analyst with Pund-IT Research, said in a note that HP’s vision of an Instant-On Enterprise, with systems that provide seamless and flexible support for myriad processes, makes sense.

“If this all sounds familiar, it should,” King wrote. “Though HP’s branding is fairly unique, the company’s go-to-market approach and goals fall generally in line with those pursued by virtually every other major systems vendor… HP’s Converged Enterprise strategy and growing solution portfolio have made the company more formidable than it has been for some time.”

Forrester Research analyst Richard Fichera said the HP offerings give enterprises options.

“With these new announcements, the virtual infrastructure platform segment of the [converged infrastructure] space begins to look positively crowded, and now HP users will have an alternative to the VCE offerings [from Cisco and partners] as well as Dell’s new vStart options when looking at these platforms,” Fichera wrote in a June 7 blog post. “On the integrated application stack side, the new HP options look like strong choices for users of these complex vertical stacks.”

Sun Breaking Through The Cloud

Oracle officials have been looking to leverage combined hardware-software offerings since buying Sun Microsystems last year and inheriting its SPARC systems, rolling out such solutions as the Exadata database system and Exalogic, a cloud-in-a-box offering.

On June 7, Oracle unveiled the Oracle Optimised Solution for Enterprise Cloud Infrastructure, an integrated and pre-tested solution that combines Oracle’s Sun Blade servers, ZFS storage appliance and Oracle VM virtualisation technology. It will run Oracle Solaris or Oracle Linux operating systems, and comes with Oracle consulting services.

“Oracle is radically simplifying cloud deployment with a pre-tested, single vendor solution for enterprise cloud infrastructure,” Ali Alasti, vice president of hardware development for Oracle, said in a statement. “By engineering our hardware and software together, the Oracle Optimised Solution for Enterprise Cloud Infrastructure cuts deployment time from weeks to hours and helps customers get virtualised infrastructure up and running faster.”

Two days later, Oracle officials announced they were preloading new virtualisation software onto some SPARC systems. Oracle VM Server for SPARC 2.1 enables users to host as many as 128 virtual machines on a single server.
