Categories
Healthcare IT Infrastructure News

CeBIT: Quarter Of Germans Happy To Have Chip Implants

The head of Germany’s main IT trade body told the audience at the opening ceremony of the CeBIT technology exhibition that one in four of his countrymen are happy to have a microchip inserted for ID purposes.

Professor August Wilhelm Scheer made the comments at an event this week to announce the start of the show which runs until Saturday in the German city of Hanover. With around 4000 companies from over 70 countries expected at the event, CeBIT continues to be the largest tech show in Europe according to its organisers.

As well as foretelling the imminent demise of the CD and DVD, Professor Scheer said that implanting chips into humans was going to become commonplace. “The speed of the development is not going to be reduced this decade,” he told an audience of tech execs and politicians including German Chancellor Angela Merkel. “Some developments can already be seen. CDs and DVDs are going to disappear as material sources of information. Wallpaper will be replaced by flat screens and many of us will have chips implanted beneath our skin by the end of next decade.”

Scheer said the prediction was not pure speculation: his organisation BITKOM had conducted research showing that a quarter of Germans would be happy to have a chip implanted if it meant they could access services more easily.

“We just carried out a survey and one out of four people are happy to have a chip planted under their skin for very trivial uses, for example to pass gates more quickly at a discotheque or to pay for things more quickly in the supermarket,” said Scheer. “The willingness of the population to accept our technology is certainly given.”

Tech implants are already gaining ground in the field of healthcare. Last August saw the first US implant of the Accent RF pacemaker. Combined with remote sensing capabilities, the Accent allows doctors to monitor patients more efficiently, while patients enjoy the convenience of care from home.

As well as his predictions for more outlandish technologies, Scheer also made reference to the rise of cloud computing and the disruptive effect it was having on the software industry. “Cloud computing is something that is going to revolutionise the software industry and mix everything up,” he said. “That is foreseeable already but there are going to be many surprises on top of that.”

Scheer also commented on Europe’s role as an innovator and user of technology but admitted that countries such as China and India were threatening to catch up and even overtake. “Germany is number four when it comes to the use of technology,” he said. “Europe, by the way, is the largest user and we are even ahead of Asia. But the Asian countries are of course going to catch up.”

Green IT was one of the major focuses of last year’s CeBIT, with around 2,000 square metres given over to a dedicated Green IT World.

Categories
Infrastructure News Security

IPv6 Traffic Remains Minuscule

by Fahmida Y Rashid

Despite growing interest in IPv6, the traffic over the protocol remains less than 1 percent of overall online traffic, Arbor Networks has found

Even though the number of available IPv4 addresses is dwindling faster than expected, the move to IPv6 remains sluggish, according to a recent study from Arbor Networks.

In a study of native IPv6 traffic volumes across multiple large carriers, Arbor Networks found that IPv6 adoption remains minuscule as a result of technical and design challenges, a lack of economic incentives and a dearth of IPv6 content. The study was released on 19 April. During the six-month study period, Arbor Networks researchers found that traffic over IPv4 networks grew by an average of 40 to 60 percent, while IPv6 traffic’s share of total traffic actually fell by an average of 12 percent because IPv6 was not growing as fast as IPv4.
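
The arithmetic behind that proportional decline is worth spelling out: a protocol’s traffic can grow in absolute terms and still lose share if the rest of the Internet grows faster. The short sketch below uses hypothetical figures, not Arbor’s raw numbers, purely to illustrate the effect.

```python
# Hypothetical illustration: absolute growth vs. shrinking share of traffic.
ipv4_start, ipv6_start = 1000.0, 3.0   # arbitrary traffic units
ipv4_growth, ipv6_growth = 0.50, 0.10  # 50% vs. 10% growth over the period

ipv4_end = ipv4_start * (1 + ipv4_growth)
ipv6_end = ipv6_start * (1 + ipv6_growth)

share_start = ipv6_start / (ipv4_start + ipv6_start)
share_end = ipv6_end / (ipv4_end + ipv6_end)

print(f"IPv6 share of total traffic: {share_start:.3%} -> {share_end:.3%}")
# IPv6 volume grew 10 percent, yet its share of total traffic fell,
# because IPv4 grew much faster over the same period.
```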

Rising IPv6 traffic

“Despite 15 years of IPv6 standards development, vendor releases and advocacy, only a small fraction of the Internet has adopted IPv6,” said Arbor Networks chief scientist Craig Labovitz.

While actual IPv6 traffic volumes have gone up, IPv6 has shrunk as a percentage of all Internet traffic, to a mere 0.25 percent, Labovitz said. The top IPv6 applications are largely peer-to-peer applications such as BitTorrent, which account for 61 percent of IPv6 traffic. In comparison, peer-to-peer networks account for 8 percent of IPv4-based traffic. Web traffic makes up the second-largest block of traffic on both IPv4 and IPv6 networks, but the differences are still striking: HTTP traffic accounts for 19 percent of IPv4 traffic, compared with a mere 4.6 percent over IPv6.

Online video, such as Netflix, YouTube and other web video, accounted for a little less than half of IPv4 traffic, but barely registered over IPv6. That is ironic, considering Netflix is one of the few major companies with an IPv6-accessible website.

Users and businesses that are interested in migrating, but stymied by their ISP’s lack of IPv6 offerings, can use tunnels to get IPv6 connectivity. Arbor examined the total IPv6 traffic over a specific 24-hour period in February and found over 250,000 such tunnels. More than 90 percent of the tunnels belonged to five major tunnel brokers, including Hurricane Electric, Anycast and Microsoft’s Teredo service.
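
For readers who want to check whether their own connection, native or tunnelled, actually reaches a site over IPv6, a quick test can be written with Python’s standard socket library; the hostname used below is only an example, and any IPv6-reachable host would do.

```python
import socket

def reachable_over_ipv6(host, port=80, timeout=5.0):
    """Return True if a TCP connection to host succeeds over IPv6."""
    try:
        # Ask the resolver only for IPv6 (AAAA) results.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record, or the resolver has no IPv6 support
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next returned address, if any
    return False

# Example hostname only; substitute any site known to publish an AAAA record.
print(reachable_over_ipv6("ipv6.google.com"))
```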

The Arbor research highlighted the fact that most companies and ISPs are way behind in their transition plans to move their networking infrastructure to the newer address space. This is worrying in light of the fact that the remaining IPv4 addresses are running out faster than predicted.

ICANN (Internet Corporation for Assigned Names and Numbers) allocated the last blocks of IPv4 addresses to the five regional internet registries in a public ceremony on 3 February.

While existing sites will continue working just fine even when the last IPv4 address has been assigned, any organisations wanting to expand or add new capabilities will be unable to do so without transitioning their network infrastructure to IPv6.

IPv4 exhaustion

In fact, that is more or less the case for Asia-Pacific businesses already. The Asia Pacific Network Information Centre (APNIC), the RIR responsible for assigning IP addresses in the region, announced the release of its last available batch of IPv4 addresses on 15 April. While analysts had predicted APNIC would run out of IP address blocks first, they had estimated the supply would last until the summer.

“Considering the ongoing demand for IP addresses, this date effectively represents IPv4 exhaustion for many of the current operators in the Asia Pacific region,” said APNIC director general Paul Wilson.

APNIC has placed the remaining IPv4 addresses under limited distribution. “From this day onwards, IPv6 is mandatory for building new Internet networks and services,” Wilson said.

Asia-Pacific is well on the way to becoming the first “IPv6-enabled region”, but businesses need to begin the migration if they have not already done so in order to “remain viable”, according to Wilson.

The American Registry for Internet Numbers (ARIN) received 253 requests for IPv6 address blocks from internet service providers in the first quarter of 2011, compared with 134 requests in the last quarter of 2010. It is not just ISPs talking about IPv6: ARIN also received 247 end-user requests for IPv6 address space in the first quarter of 2011, compared with 103 in the first quarter of 2010. ARIN received a total of 434 requests from ISPs in 2010, and expects requests to exceed that in 2011.

The upcoming “World IPv6 Day” on 8 June marks “a major milestone in the Internet’s evolution”, Labovitz said, because it will force businesses and ISPs to stress test the global network infrastructure. “Will the flood of IPv6 traffic result in network failures? As an industry, we’re not sure,” Labovitz concluded.

Categories
Infrastructure News Search Engines

Google Developing Caffeine Storage System

This new storage system will include more diagnostic and historic data and autonomic software

Google has been ahead of its time in more than just Web search and online consumer tools. Out of sheer necessity, it’s also been way ahead of the curve in designing massive-scale storage systems built mostly on off-the-shelf servers, storage arrays and networking equipment.

As the world’s largest Internet search company continues to grow at a breakneck pace, it is now in the process of creating its second custom-designed data storage file system in 10 years.

This new storage system, the back end of the new Caffeine search engine that Google introduced Aug. 10 and is now testing, will include more diagnostic and historic data and autonomic software, so the system can think more for itself and solve problems long before human intervention is actually needed.

Who knew 10 years ago, when it was the newcomer up against Yahoo’s market-leading search engine, that Google would grow into a staple of the Internet relied upon by hundreds of millions of users each day?

Just before Rackable sold Google its first 10,000 servers in 1999 and started the company on a server-and-array collection rampage that may total in the hundreds of thousands of boxes, Google engineers were pretty much into making their own servers and storage arrays.

“In 1999, at the peak of the dot-com boom when everybody was buying nice Sun machines, we were buying bare motherboards, putting them on the corkboard, and laying hard drives on top of it. This was not a reliable computing platform,” Sean Quinlan, Google’s lead software storage engineer, said with a laugh at a recent storage conference. “But this is what Google was built on top of.”

It would be no surprise to any knowledgeable storage engineer that this rudimentary platform had major problems with overheating, to go with numerous networking and PDU failures.

“Sometimes, 500 to 1,000 servers would disappear from the system and take hours to come back,” Quinlan said. “And those were just the problems we expected. Then there are always those you didn’t expect.”

Eventually, Google engineers were able to get their own clustered storage file system — called, amazingly enough, Google File System (GFS) — up and running with decent performance to connect all these quickly custom-built servers and arrays. It consisted of what Quinlan called a “familiar interface, though not specifically POSIX. We tend to cut corners and do our own thing at Google.”

What Google was doing was simply taking a data centre full of machines and layering a file system as an application across all the servers to provide open/close/read/write, without the application really caring where in those machines the data lives, Quinlan said.
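
As a rough illustration of that idea (a toy sketch, not GFS’s real interface or behaviour), the client sees ordinary read/write calls while a thin layer quietly decides which machine’s storage actually holds the bytes:

```python
# Toy sketch of a file-system facade layered over many machines.
# All names and structure here are illustrative only, not GFS's real API.

class ToyDistributedFS:
    def __init__(self, num_servers):
        # Each "server" is just an in-memory dict of filename -> bytes here.
        self.servers = [dict() for _ in range(num_servers)]

    def _server_for(self, name):
        # The caller never sees this: placement is the layer's problem.
        return self.servers[hash(name) % len(self.servers)]

    def write(self, name, data: bytes):
        self._server_for(name)[name] = data

    def read(self, name) -> bytes:
        return self._server_for(name)[name]

fs = ToyDistributedFS(num_servers=8)
fs.write("/logs/crawl-0001", b"example record")
print(fs.read("/logs/crawl-0001"))   # the caller never specified a machine
```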

But there was a big problem. GFS lacked something very basic: automatic failover if the master went down. Admins had to manually restore the master, and Google went dark for as long as an hour at times. Although failover was later added, it was annoying to users when it kicked in, because the lapse was often several minutes long. Quinlan said it is now down to about 10 seconds.

Eventually, the growth of the company and its subsequent IPO in 2004 spurred even more growth, so a modification to the file system was designed and built. This was called BigTable (developed in 2005-06), a distributed database-like file system built atop GFS with its own “familiar” interface; Quinlan said it is not Microsoft SQL.

This is the part of the system that runs user-facing applications. There are hundreds of instances (called cells) of each of these systems, and each of those cells scales up into thousands of servers and petabytes of data, Quinlan said.

At the base of much of this are Rackable’s Eco-Logical storage servers, which are clustered to run on Linux to produce storage capacity as high as 273TB per cabinet. Of course, Google now uses a wide array of storage vendors, because it’s all but impossible for one vendor to supply the huge number of boxes needed by the search monster each year.

The Eco-Logical storage arrays feature high efficiency, low power consumption and an intelligent design intended to improve price-performance per watt, even in very complex computing environments, Geoffrey Noer, Rackable’s senior director of product management, told eWEEK.

The original Google storage file systems have served the company very well; the company’s overall performance proves this. But now, in 2009, the continued stratospheric growth of Web, business and personal content and ever-increasing demands to keep order on the Internet mean that Quinlan and his team have had to come up with yet another super-file system.

Although Google folks will not officially sanction this information for general consumption, this overhaul of the Google File System apparently has been undergoing internal testing as part of the company’s new Caffeine infrastructure announced earlier this month.

Google on 10 Aug introduced a new “developer sandbox” for a faster, more accurate search engine and invited the public to test the product and provide feedback about the results. The sandbox site is here; as might be expected, there’s also a new storage file system behind it.

“By far the biggest challenge is dealing with the reliability of the system. We’re building on top of this really flaky hardware — people have high expectations when they store data at Google and with internal applications,” Quinlan said.

“We are operating in a mode where failure is commonplace. The system has to be automated in terms of how to deal with that. We do checksumming up the wazoo to detect errors, and we use replication to allow recovery.”

Chunks of data, distributed throughout the vast Google system and subsystems, are replicated on different “chunkserver” racks, with triplication as the default and faster replication reserved for hot spots in the system.

“Keeping three copies gives us reliability to allow us to survive our failure rates,” Quinlan said.

Replication enables Google to use the full bandwidth of the cluster, reduces the window of vulnerability, and spreads out the recovery load so as not to overburden portions of the system. Google uses the University of Connecticut’s Reed-Solomon error correction software in its RAID 6 systems.
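
A sketch of the placement idea, under the assumption that the goal is simply three copies on three different racks so that losing any one rack never removes every replica; the selection rule below is a deliberate simplification, not Google’s actual policy:

```python
import itertools

def place_replicas(chunk_id, racks, copies=3):
    """Pick `copies` distinct racks for a chunk, rotating by chunk id.

    Illustrative only: real placement also weighs disk usage, load and
    network topology.
    """
    if copies > len(racks):
        raise ValueError("need at least as many racks as copies")
    start = chunk_id % len(racks)
    rotated = itertools.islice(itertools.cycle(racks), start, start + copies)
    return list(rotated)

racks = ["rack-a", "rack-b", "rack-c", "rack-d"]
for chunk in range(3):
    print(chunk, place_replicas(chunk, racks))
# Each chunk lands on three different racks, so a single rack failure
# never takes out every copy of a chunk.
```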

Google stores virtually all of its data in two forms: RecordIO — “a sequential series of records, typically representing some sort of log,” Quinlan said — and SSTables.

“SSTables are immutable, key/value pair, sorted tables with indexes on them,” Quinlan said. “Those two data structures are fairly simple; there’s no update in place. All the records are either sequential through the RecordIO or streaming through the SSTable. This helps us a lot when building these [new] reliable systems.”
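
A minimal sketch of that data structure as Quinlan describes it, sorted and immutable key/value pairs with an index for lookups; this is an in-memory toy, not Google’s on-disk format:

```python
import bisect

class ToySSTable:
    """Immutable, sorted key/value table with a simple index (toy version)."""

    def __init__(self, items):
        # Sort once at build time; the table is never modified afterwards.
        pairs = sorted(items.items())
        self._keys = [k for k, _ in pairs]
        self._values = [v for _, v in pairs]

    def get(self, key):
        i = bisect.bisect_left(self._keys, key)   # binary search over the keys
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return None

table = ToySSTable({"url:a.com": "page bytes for a.com",
                    "url:b.com": "page bytes for b.com"})
print(table.get("url:a.com"))   # found via the sorted index
print(table.get("url:z.com"))   # -> None; no update or insert in place
```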

As for semi-structured data, stored in BigTable’s row/column/timestamp subsystem, the data sets that are constantly being updated include URLs, per-user data and geographic locations.

“And the scale of these things is large, with the size of the Internet and the number of people using Google,” Quinlan said, in an understatement. Google is storing billions of URLs, hundreds of millions of page versions (with an average size of 20KB per data file version), and hundreds of terabytes of satellite image data. Hundreds of millions of users use Google daily.

When the data is stored into tables, Google then breaks the tables up into chunks called tablets. “These are the basics that are distributed around our system,” Quinlan said. “This is a simple model, and it’s worked fairly effectively.”
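
The tablet idea can be pictured as cutting a sorted row-key space into contiguous ranges so that each range can be served by a different machine. The sketch below splits by key count for simplicity, which is an assumption of this example; real tablets are split by size and load.

```python
def split_into_tablets(sorted_row_keys, max_keys_per_tablet):
    """Cut a sorted list of row keys into contiguous ranges ("tablets").

    Illustrative only: real tablets are split by size and load, not key count.
    """
    tablets = []
    for i in range(0, len(sorted_row_keys), max_keys_per_tablet):
        chunk = sorted_row_keys[i:i + max_keys_per_tablet]
        tablets.append((chunk[0], chunk[-1]))   # (start_key, end_key) per tablet
    return tablets

rows = sorted(f"com.example/page{i:03d}" for i in range(10))
for start, end in split_into_tablets(rows, max_keys_per_tablet=4):
    print(f"tablet covers {start} .. {end}")
```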

How the basic Google search system works: “A request comes in. We log it in GFS; it updates the storage. We then buffer it in memory in a sorted table. When that memory buffer fills up, we write that out as an SSTable; it’s immutable data, it’s locked down, we don’t modify it.

“The request then reads through SSTables [to find the query answer].”

This is a fairly straightforward and simple process, Quinlan said. At the rate the Google search engine is used on a day-to-day basis, it has to be simple.
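
Read back against that description, the flow resembles a log-structured write path: append the mutation to a log, buffer it in a sorted in-memory table, freeze the buffer as an immutable SSTable when it fills, and answer reads from the buffer first and then the tables. The sketch below is a self-contained toy of that flow, with all names invented for illustration; it is not Google’s implementation.

```python
class FrozenTable:
    """Immutable snapshot standing in for an on-disk SSTable (toy)."""

    def __init__(self, items):
        self._data = dict(items)          # treated as read-only after creation

    def get(self, key):
        return self._data.get(key)


class ToyTablet:
    """Toy log -> memory buffer -> immutable table write path (illustrative)."""

    def __init__(self, flush_threshold=2):
        self.log = []                     # stands in for the GFS append-only log
        self.memtable = {}                # mutable in-memory buffer
        self.sstables = []                # immutable tables, newest first
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.log.append((key, value))     # 1. log the mutation first
        self.memtable[key] = value        # 2. buffer it in memory
        if len(self.memtable) >= self.flush_threshold:
            # 3. buffer is full: freeze it as an immutable table and start fresh
            self.sstables.insert(0, FrozenTable(self.memtable))
            self.memtable = {}

    def read(self, key):
        if key in self.memtable:          # check the in-memory buffer first
            return self.memtable[key]
        for table in self.sstables:       # then the tables, newest to oldest
            value = table.get(key)
            if value is not None:
                return value
        return None


t = ToyTablet()
t.write("url:a.com", "v1")
t.write("url:b.com", "v1")                # this second write triggers a flush
t.write("url:a.com", "v2")                # the newer value lives in the buffer
print(t.read("url:a.com"))                # -> 'v2'
print(t.read("url:b.com"))                # -> 'v1', served from a frozen table
```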

Scale remains the biggest issue. “Everything’s getting bigger; we’re growing exponentially. We’re not quite an exabyte system today, but this is definitely in the not-too-distant future,” Quinlan said. “I get blasé about petabytes now.”

More automated operation is in the cards. “Our ability to hire people to run these systems is not growing exponentially, so we have to automate more and more. We want to bring what used to be done manually into the systems. We want to bring more and more history of information about what’s going on in the system — to allow the system itself to diagnose slow machines, diagnose various problems and rectify them itself,” Quinlan said.

How to build these systems on a much more global basis is another Quinlan goal.

“We have many data centres across the world,” he said. “From an application point of view, they all need to know exactly where the data is. They’ll often have to do replication across data centres for availability, and they have to partition their users across these data centres. We’re trying to bring that logic into the storage systems themselves.”

So, as Caffeine search is being tested now, so is the new storage file system. Google hopes this will be one that is flexible and self-healing enough to be around for a while.

Categories
Cloud Infrastructure News

IBM Embraces Corporate Cloud

IBM is making available a new portfolio of cloud computing products and services that it claims will provide corporate users with ease of use to rival the consumer Web

IBM has been trying to get its Big Blue arms around cloud computing for a while, perhaps because the cloud is one of the few things in IT that IBM didn’t help invent.

The vision from Armonk, N.Y., was fuzzy for a couple of years, but now the glasses are on and the focus appears to be sharpening. Following a five-year-long, multibillion-dollar development effort called Project Blue Cloud, IBM on 16 June will make available a new portfolio of cloud computing products and services that it claims will provide corporate users with ease of use to rival the consumer Web.

In short, IBM has designed and built a number of shortcuts for cloud computing development, so that an enterprise aiming to build its own internal or external cloud-type system can do it with the least amount of time, effort and capital.

Cloud computing, or utility computing, serves up computing power, data storage or applications from one data centre location over a grid to thousands or millions of users on a subscription basis. This general kind of cloud—examples include the services provided online by Amazon EC2, Google Apps and Salesforce.com—is known as a public cloud, because any business or individual can subscribe.

Private clouds are secure, firewalled systems that tie together an enterprise with its supply chain, resellers and other business partners.

“What we are doing here is branding the choices that we are giving clients for the deployment of cloud solutions,” IBM Cloud CTO Kristof Kloeckner told eWEEK. “It’s a family of preintegrated hardware, storage, virtualisation and service management solutions that target specific workloads.”

Those workloads can be virtually anything a company needs to have done on a daily basis: e-mail, retail transactions, scientific computations, health record management, financial services, and a number of other functions.

Thus, IBM now sees cloud computing as a “reintegration of IT around types of work, with the most successful clouds being defined by the types of work they do—for instance a search cloud or a retail transaction cloud,” Kloeckner said.

Three cloud models offered

IBM is now offering three cloud models for delivering and consuming development and test services:

* IBM Smart Business Test Cloud, a private cloud behind the client’s firewall, with hardware, software and services supplied by IBM;

* Smart Business Development & Test, and Smart Business Application Development & Test, which use Rational Software Delivery Services on IBM’s existing global cloud system; and

* IBM CloudBurst, a preintegrated set of hardware, storage, virtualisation and networking [options], with a built-in service management system.

The underpinnings of all this are Tivoli Provisioning Manager 7.1 and the new Tivoli Service Automation Manager, which automates the deployment and management of computing clouds.

Tivoli Storage as a Service is the foundation for IBM’s Business Continuity and Resiliency Services cloud. Beginning later in 2009, developers will be able to use Tivoli data protection via a cloud service.

Back in February, IBM released Rational AppScan 7.8, an application management system that enables Web services to be secure and regulation-compliant. Alongside the new Rational AppScan OnDemand, this service software ensures that Web services are monitored on a continuous basis and provides IT managers with ongoing security analysis.

Using this catalog, users can get a custom private cloud built by IBM, get started immediately on building their own cloud with IBM CloudBurst or choose to receive standardised cloud services from the existing IBM cloud.

IBM also is providing optional virtual desktops, which use about two-thirds less power than traditional desktops and laptops and are much lighter loads for servers to handle.

IBM offers two options in this realm: the IBM Smart Business Desktop Cloud, which is a cloud service delivered via the client’s own infrastructure and data centre; and the IBM Smart Business Desktop on the IBM Cloud, which is delivered via IBM’s own public cloud.

IBM: Listening, learning for two years

“Since their announcement of Project Blue Cloud a year and a half ago, IBM has been doing nothing but listening and learning. And this [represents] the first fruits of that,” James Staten, Forrester Research principal analyst for IT Infrastructure, told eWEEK. “We think they actually got it right.”

IBM now understands what a cloud solution is and what an enterprise needs it to do, Staten said.

“These are just 1.0 offerings, but they’re correct in understanding the solution,” Staten said. “The first real toe-in-the-water effort by any enterprise is going to be tied to [development]. IT operations, the central guys who run the data centre, don’t like the fact that their ‘innovative’ developers are bypassing them and going to use public cloud resources. They want to offer something as an alternative to that, but it has to meet their security [requirements] and pass all their processes and procedures.”

IBM understands this, so it has two offerings, the first being a hosted cloud with enterprise-level security parameters around it, Staten said.

“It’s not that different from some of the others that are available, such as Terremark or Rackspace, but it has the IBM stamp of legitimacy on it,” Staten said. “So if you’re an IBM customer, or a customer of IBM outsourcing, this becomes attractive.”

The second option is software development inside the cloud, “which is what the IT ops guys really want: to keep all that development effort staying inside the data centre,” Staten said. “They just needed something they can deploy quickly that uses the cloud; that’s what this CloudBurst thing is all about.”

Hewlett-Packard recently launched HP BladeSystem Matrix, a similar set of products and services. “It’s all the same components,” Staten said. “They just didn’t call it ‘cloud.’”

For current IBM-Tivoli customers, “this will be really easy to consume,” Staten said. “Because they’ve built a tie-in to Tivoli Provisioning Manager and Tivoli Service Automation Manager, which are at the core. So this is going to become really, really simple.

“If you’re a non-IBM shop, this is kind of a nonevent.”