
Russian Police And Internet Registry Accused Of Aiding Cybercrime

  • October 21, 2009
  • By Andrew Donoghue

Internet registry RIPE NCC turned a blind eye to cybercrime, and Russian police corruption helped the perpetrators get away with it, according to the UK Serious Organised Crime Agency

Amsterdam-based Internet registry organisation RIPE NCC has been singled out for its involvement with notorious criminal network provider Russian Business Network (RBN) by the UK’s Serious Organised Crime Agency.

The registrar took money from the well-known criminal organisation, and corruption in the Russian police subsequently allowed the network’s organisers to escape SOCA’s clutches, according to Andy Auld, head of intelligence for the agency’s e-crime department, who was speaking at the RSA Conference Europe security event in London this week.

RIPE NCC denies any wrongdoing, and Auld explained that the registrar was not actually being investigated for its involvement with RBN. But because the registry body had accepted payment from the Russian criminal organisation, he said, it could be seen by some as having been complicit in criminal activities.

“An entity like Russian Business Network – a criminal ISP and recognised as such by just about every media outlet worldwide that covers these things – RBN was registered as a local internet registry with RIPE, the European body allocating IP resources to industry,” explained Auld.

The SOCA officer argued that any company that does business with a known cyber-criminal organisation such as RBN could itself be open to accusations of acting illegally.

“RIPE was being paid by RBN for that service, for its IP allocation,” he said. “Essentially what you have – and I make no apologies for saying this – is that, if you were going to interpret this very harshly, RIPE as the IP allocation body was receiving criminal funds and therefore RIPE was involved in money laundering offences.”

Serious organised crime – not a cottage industry

RBN’s systems were used to host child pornography and at its peak, according to SOCA, the organisation hosted around one third of all the “pay-per-view” child pornography in the world. The rest of the illegal network was devoted to malware including systems to control botnets.

“What we are talking about is a purpose-built criminal ISP – built for and used by criminals and a highly profitable organisation at that,” said Auld. “This is organised crime. Don’t be confused by the idea that this is a hobby industry or cottage industry; this was a proper organised crime syndicate that just so happened to have an e-crime component to its criminal portfolio.”

As well as SOCA, the FBI and Dutch and German law-enforcement groups were involved in the investigation of RBN last year. However, as the investigation continued, the group behind RBN set up a “disaster recovery plan” to ensure that it could continue operating if its existing systems were shut down. This plan was set in motion in November 2008 but, according to SOCA, investigators were able to shut down the new systems before RBN could migrate over to them.

“All we could get there was a disruption, we weren’t able to get a prosecution in Russia,” admitted Auld. “Our biggest concern is where did RBN go? Our information suggests that RBN is back in business but now pursuing a slightly different business model which is bad news.”

Auld added that other registries also had some connection to RBN which could similarly be construed as illegal – although he admitted that SOCA preferred to work with these companies rather than seek to prosecute them.

“We are not actually treating it [RIPE] that way, but if you want to interpret it that way the same would apply to ARIN [American Registry for Internet Numbers], APNIC [Asia-Pacific registry], AFRINIC [African registry] and so on,” he said.

According to SOCA, it is actively working with internet registry organisations to make sure that they don’t, whether intentionally or unintentionally, end up aiding criminals and harming consumers.

“Where you have got LIRs (Local Internet Registries) set up to run a criminal business, that is criminal activity being undertaken by the regional internet registries themselves. So what we are trying to do is work with them to make internet governance a somewhat less permissive environment for criminals and make it more about protecting consumers and individuals,” added Auld.

RBN looked legitimate, says RIPE NCC

In response to the comments that it could be accused of being involved in criminal activity, Paul Rendek, head of external relations and communications at RIPE NCC, said that the organisation has very strict guidelines for dealing with LIRs.

“The RBN was accepted as an LIR based on our checklists,” he said. “Our checklists include the provision of proof that a prospective LIR has the necessary legal documentation, which proves that a business is bona fide.”

Rendek maintained that RIPE has had a good relationship with SOCA and other law-enforcement organisations. “We have always cooperated with SOCA, and continue to work very closely with relevant criminal investigation bodies to ensure investigations can be carried out as swiftly and efficiently as possible in order to ensure best practice Internet governance is adhered to and criminal activity is identified and dealt with in the appropriate manner,” he added.

Russian “corruption”

SOCA also attributed part of its failure to prosecute any members of RBN to corruption on the part of police in St. Petersburg who, Auld alleged, appeared to have agreed to protect the criminal gangs behind the network.

“We strongly believe that this organisation had not only the local police but the local judiciary and local government in St. Petersburg firmly in its pocket, which meant that when we tried to investigate RBN we met significant hurdles – quite obvious hurdles – when trying to deal with Russian law enforcement to tackle the operation,” said Auld.

Earlier this month, US law enforcement agencies got much better international co-operation in shutting down a phishing ring based in Egypt.


IBM Releases More Cloud Computing Tech

At its analyst conference, IBM announced three more additions to its Project Blue Cloud

It took a little while for IBM to define its corporate approach to cloud computing during the last few years, but now that it has one, the world’s largest IT company is going all out in the sector.

On 6 Oct at its Information Infrastructure Analyst Summit in Boston, the company introduced three more additions to its Project Blue Cloud bag of goodies: a new software infrastructure specifically aimed at the building of private cloud systems, an online information archive and — you’ve guessed it — a slew of new consulting services to go with both.

“This is really the next instance in the continuing drumbeat of IBM delivering enterprise-ready cloud services,” IBM Cloud CTO Kristof Kloeckner told eWEEK. “We’re putting a great deal of corporate time and effort into this.”

Cloud computing, or utility computing, serves up computing power, data storage or applications from one data center location over a grid to thousands or millions of users on a subscription basis. This general kind of cloud—examples include the services provided online by Amazon EC2, Google Apps and Salesforce.com—is known as a public cloud, because any business or individual can subscribe.

Last June, IBM launched three cloud models: IBM Smart Business Test Cloud, a private cloud behind the client’s firewall, with hardware, software and services supplied by IBM; Smart Business Development & Test and Smart Business Application Development & Test, which use Rational Software Delivery Services on IBM’s existing global cloud system; and IBM CloudBurst, a preintegrated set of hardware, storage, virtualisation and networking options, with a built-in service management system.

The underpinnings of these are Tivoli Provisioning Manager 7.1 and the new Tivoli Service Automation Manager, which automates the deployment and management of computing clouds. The same foundations will power the new packages.

“The intent of this private storage cloud offering is to serve customers efficiently with their active, file-based data — the term would be near-line storage, meaning it’s not direct-attached storage, but not remote archival storage, either,” Kloeckner said. “The scenarios would include any information-rich enterprise that needs frequently accessed data in a file format.”

Everybody is seeing increasing amounts of data being created in collaborative environments, made by creative processes and devices, Kloeckner said. IBM believes that an automated cloud computing approach to handle this overflow of information is one that makes sense for a good many enterprises.

“We see this as one element of making information pay off for the enterprise, so to speak,” Kloeckner said. “Digital media, medical imaging, Web content, analytics, geospatial data, engineering modeling data, are just some of the use cases. We all know that the interconnection of devices creates a huge amount of data that needs to be managed efficiently, accessed, stored and secured in order to be analysed.”

This is all designed for file-based storage — it is not block-based or individual record-based storage, or what is contained in a database, Kloeckner said.

Right now, the private cloud software is available in a beta release only. It should be available for full production in a few weeks, Kloeckner said.

IBM is in the process of preparing a public cloud offering, but Kloeckner did not want to speculate on when that might be available for beta testing.

Tivoli and IBM System Storage are the foundations for the new Information Archive, which uses hard disks and tape machines within a single pool. It features deduplication and compression techniques to optimise storage capacity, Kloeckner said.

“When using the archive, a user can designate whether he or she wants to store the files on disk or on tape, and the tape can be stored wherever they want,” Kloeckner said.

The hardware-and-software archive uses Big Blue’s General Parallel File System, Tivoli Storage Manager and IBM’s Enhanced Tamper Protection in an IBM array of the user’s choice.

The IBM Information Archive is the first offering announced as part of IBM’s unified archiving strategy, called IBM Smart Archive. The archive, available now as a preview, offers long-term storage for any kind of digital file, such as e-mail, images, databases, applications, instant messages, account records, contracts or insurance claim documents, logs, and others.

The archive can be organized into separate collections within a single system, and each collection can be configured with different retention policies and protection levels to meet specific needs — including business, legal, or regulatory, Kloeckner said.

Finally, IBM’s enhanced Cloud Consulting Services are available now to support the new software and hardware packages.


Google Developing Caffeine Storage System

This new storage system will include more diagnostic and historic data and autonomic software

Google has been ahead of its time in more than just Web search and online consumer tools. Out of sheer necessity, it’s also been way ahead of the curve in designing massive-scale storage systems built mostly on off-the-shelf servers, storage arrays and networking equipment.

As the world’s largest Internet search company continues to grow at a breakneck pace, it is now in the process of creating its second custom-designed data storage file system in 10 years.

This new storage system, the back end of the new Caffeine search engine that Google introduced Aug. 10 and is now testing, will include more diagnostic and historic data and autonomic software, so the system can think more for itself and solve problems long before human intervention is actually needed.

Who knew 10 years ago, when it was the newbie on the block next to Yahoo’s market-leading search engine, that Google would grow into a staple of Internet organization relied upon by hundreds of millions of users each day?

Just before Rackable sold Google its first 10,000 servers in 1999 and started the company on a server-and-array collection rampage that may total in the hundreds of thousands of boxes, Google engineers were pretty much into making their own servers and storage arrays.

“In 1999, at the peak of the dot-com boom when everybody was buying nice Sun machines, we were buying bare motherboards, putting them on the corkboard, and laying hard drives on top of it. This was not a reliable computing platform,” Sean Quinlan, Google’s lead software storage engineer, said with a laugh at a recent storage conference. “But this is what Google was built on top of.”

It would be no surprise to any knowledgeable storage engineer that this rudimentary file system had major problems with overheating to go with numerous networking and PDU failures.

“Sometimes, 500 to 1,000 servers would disappear from the system and take hours to come back,” Quinlan said. “And those were just the problems we expected. Then there are always those you didn’t expect.”

Eventually, Google engineers were able to get their own clustered storage file system — called, amazingly enough, Google File System (GFS) — up and running with decent performance to connect all these quickly custom-built servers and arrays. It consisted of what Quinlan called a “familiar interface, though not specifically Posix. We tend to cut corners and do our own thing at Google.”

What Google was doing was simply taking a data centre full of machines and layering a file system as an application across all the servers to get open/close/read/write, without really caring where the data is in the machine, Quinlan said.
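
As a rough illustration of that idea (and only an illustration, not Google’s actual code or API), the sketch below shows a toy file system that exposes simple write and read calls while quietly splitting files into chunks and spreading them across several storage nodes. All of the names and the tiny chunk size are invented for the example.

```python
# Minimal sketch (not Google's actual API): a file system presented as a flat
# write/read interface, while the data itself is scattered in fixed-size
# chunks across many storage nodes.

CHUNK_SIZE = 4  # tiny for illustration; GFS famously used much larger chunks

class ToyClusterFS:
    def __init__(self, num_nodes=3):
        # each "node" is just a dict of chunk_id -> bytes
        self.nodes = [dict() for _ in range(num_nodes)]
        self.namespace = {}  # path -> ordered list of (node_index, chunk_id)

    def write(self, path, data: bytes):
        """Split data into chunks and place them round-robin across nodes."""
        locations = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            node_index = (i // CHUNK_SIZE) % len(self.nodes)
            chunk_id = f"{path}#{i // CHUNK_SIZE}"
            self.nodes[node_index][chunk_id] = chunk
            locations.append((node_index, chunk_id))
        self.namespace[path] = locations

    def read(self, path) -> bytes:
        """Reassemble the file; the caller never sees where the chunks live."""
        return b"".join(self.nodes[n][c] for n, c in self.namespace[path])

fs = ToyClusterFS()
fs.write("/logs/query.log", b"hello caffeine")
assert fs.read("/logs/query.log") == b"hello caffeine"
```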

But there was a big problem. The GFS lacked something very basic: automatic failover if the master went down. Admins had to manually restore the master, and Google went dark for as long as an hour at times. Although failover was later added, when it kicked in it was annoying to users, because the lapse often was several minutes in length. Quinlan says it’s down now to about 10 seconds.

Eventually, the growth of the company and its subsequent IPO in 2004 spurred even more growth, so a modification to the file system was designed and built. This was called BigTable (developed in 2005-06), a distributed database-like file system built atop GFS with its own “familiar” interface; Quinlan said it is not Microsoft SQL.

This is the part of the system that runs user-facing applications. There are hundreds of instances (called cells) of each of these systems, and each of those cells scales up into thousands of servers and petabytes of data, Quinlan said.

At the base of much of this are Rackable’s Eco-Logical storage servers, which are clustered to run on Linux to produce storage capacity as high as 273TB per cabinet. Of course, Google now uses a wide array of storage vendors, because it’s all but impossible for one vendor to supply the huge number of boxes needed by the search monster each year.

The Eco-Logical storage arrays feature high-efficiency, low-power consumption and intelligent design intended to improve price-performance per watt, in even very complex computing environments, Geoffrey Noer, Rackable’s senior director of product management, told eWEEK.

The original Google storage file systems have served the company very well; the company’s overall performance proves this. But now, in 2009, the continued stratospheric growth of Web, business and personal content and ever-increasing demands to keep order on the Internet mean that Quinlan and his team have had to come up with yet another super-file system.

Although Google folks will not officially sanction this information for general consumption, this overhaul of the Google File System apparently has been undergoing internal testing as part of the company’s new Caffeine infrastructure announced earlier this month.

Google on 10 Aug introduced a new “developer sandbox” for a faster, more accurate search engine and invited the public to test the product and provide feedback about the results. The sandbox site is here; as might be expected, there’s also a new storage file system behind it.

“By far the biggest challenge is dealing with the reliability of the system. We’re building on top of this really flaky hardware — people have high expectations when they store data at Google and with internal applications,” Quinlan said.

“We are operating in a mode where failure is commonplace. The system has to be automated in terms of how to deal with that. We do checksumming up the wazoo to detect errors, and use replication to allow recovery.”

Chunks of data, distributed throughout the vast Google system and subsystems, are replicated on different “chunkserver” racks, with triplication as the default and higher-speed replication reserved for hot spots in the system.

“Keeping three copies gives us reliability to allow us to survive our failure rates,” Quinlan said.
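
The following sketch, again a hypothetical illustration rather than Google’s implementation, shows the two ideas Quinlan describes working together: each chunk is written with a checksum to three different racks, and a read skips any copy whose checksum no longer matches. The rack names and the choice of SHA-256 are assumptions made for the example.

```python
import hashlib
import random

REPLICAS = 3  # the "triplication" default described above

def place_replicas(racks, n=REPLICAS):
    """Pick n distinct racks so a single rack failure cannot take out all copies."""
    return random.sample(racks, n)

def store_chunk(rack_stores, racks, chunk_id, data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    for rack in place_replicas(racks):
        rack_stores[rack][chunk_id] = (digest, data)

def read_chunk(rack_stores, chunk_id) -> bytes:
    """Try replicas in turn; ignore any copy whose checksum no longer matches."""
    for store in rack_stores.values():
        if chunk_id in store:
            digest, data = store[chunk_id]
            if hashlib.sha256(data).hexdigest() == digest:
                return data
    raise IOError(f"all replicas of {chunk_id} are missing or corrupt")

racks = ["rack-a", "rack-b", "rack-c", "rack-d"]
stores = {r: {} for r in racks}
store_chunk(stores, racks, "tablet-0007/chunk-42", b"some chunk payload")
print(read_chunk(stores, "tablet-0007/chunk-42"))
```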

Replication enables Google to use the full bandwidth of the cluster, reduces the window of vulnerability, and spreads out the recovery load so as not to overburden portions of the system. Google uses the University of Connecticut’s Reed-Solomon error correction software in its RAID 6 systems.
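
For a sense of why parity-based schemes are attractive alongside plain replication, the arithmetic below compares the raw-storage overhead of three-way replication with a RAID-6-style stripe of k data blocks plus two parity blocks. The eight-disk stripe is an illustrative figure, not a description of Google’s actual configuration.

```python
def replication_overhead(copies: int) -> float:
    """Raw bytes stored per logical byte under simple n-way replication."""
    return float(copies)

def raid6_overhead(data_disks: int) -> float:
    """Raw bytes per logical byte for a RAID-6-style stripe: data_disks data
    blocks plus two parity blocks, tolerating any two failures in the stripe."""
    return (data_disks + 2) / data_disks

print(replication_overhead(3))  # 3.0  -> 200% extra space, survives loss of two copies
print(raid6_overhead(8))        # 1.25 -> 25% extra space, survives loss of any two disks
```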

Google stores virtually all of its data in two forms: RecordIO — “a sequential series of records, typically representing some sort of log,” Quinlan said — and SSTables.

“SSTables are immutable, key/value pair, sorted tables with indexes on them,” Quinlan said. “Those two data structures are fairly simple; there’s no update in place. All the records are either sequential through the RecordIO or streaming through the SSTable. This helps us a lot when building these [new] reliable systems.”
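
A minimal sketch of that second structure, assuming nothing about Google’s real on-disk format: an immutable, sorted run of key/value pairs with an index that supports point lookups and cheap range scans.

```python
import bisect

class ToySSTable:
    """A stand-in for the idea of an SSTable: an immutable, sorted run of
    key/value pairs with an in-memory index used for lookups."""

    def __init__(self, items):
        # items: any iterable of (key, value); sorted once, never modified
        pairs = sorted(items)
        self._keys = [k for k, _ in pairs]
        self._values = [v for _, v in pairs]

    def get(self, key, default=None):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return default

    def scan(self, start_key, end_key):
        """Range scans are cheap because the keys are already sorted."""
        lo = bisect.bisect_left(self._keys, start_key)
        hi = bisect.bisect_right(self._keys, end_key)
        return list(zip(self._keys[lo:hi], self._values[lo:hi]))

table = ToySSTable([("com.example/", b"page-a"), ("com.example/about", b"page-b")])
print(table.get("com.example/about"))
print(table.scan("com.example/", "com.example/zzzz"))
```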

As for semi-structured data, stored in BigTable’s row/column/timestamp subsystem, the URLs, the per-user data and the geographic locations are the data sets that are constantly being updated.

“And the scale of these things is large, with the size of the Internet and the number of people using Google,” Quinlan said, in an understatement. Google is storing billions of URLs, hundreds of millions of page versions (with an average size of 20KB per data file version), and hundreds of terabytes of satellite image data. Hundreds of millions of users use Google daily.

When the data is stored into tables, Google then breaks up tables into chunks called tablets. “These are the basics that are distributed around our system,” Quinlan said. “This is a simple model, and it’s worked fairly effectively.”

How the basic Google search system works: “A request comes in. We log it in GFS; it updates the storage. We then buffer it in memory in a sorted table. When that memory buffer fills up, we write that out as an SSTable; it’s immutable data, it’s locked down, we don’t modify it.

“The request then reads through SSTables [to find the query answer].”

This is a fairly straightforward and simple process, Quinlan said. At the rate the Google search engine is used on a day-to-day basis, it has to be simple.
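
A toy version of the flow Quinlan describes, with invented names and deliberately tiny thresholds, might look like the sketch below: writes are appended to a log, buffered in a sorted in-memory table, and flushed to an immutable SSTable-style run when the buffer fills; reads check the memory buffer first and then the flushed runs, newest first.

```python
# Minimal sketch of the described write path (assumed names, not Google's code).

class ToyTabletServer:
    def __init__(self, flush_threshold=2):
        self.log = []              # stands in for the write-ahead log kept in GFS
        self.memtable = {}         # in-memory buffer of recent writes
        self.sstables = []         # immutable flushed runs, newest last
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.log.append((key, value))      # 1. log the mutation first
        self.memtable[key] = value         # 2. buffer it in memory
        if len(self.memtable) >= self.flush_threshold:
            self._flush()                  # 3. spill to an immutable run

    def _flush(self):
        self.sstables.append(dict(sorted(self.memtable.items())))
        self.memtable = {}

    def get(self, key):
        # Reads check the memtable first, then SSTables from newest to oldest.
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.sstables):
            if key in run:
                return run[key]
        return None

server = ToyTabletServer()
server.put("url:example.com", "contents-v1")
server.put("url:example.org", "contents-v1")   # second write triggers a flush
server.put("url:example.com", "contents-v2")
print(server.get("url:example.com"))           # newest value wins
```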

Scale remains the biggest issue. “Everything’s getting bigger, we’re growing exponentially. We’re not quite an exabyte system today, but this is definitely in the not-too-distant future,” Quinlan said. “I get blasé about petabytes now.”

More automated operation is in the cards. “Our ability to hire people to run these systems is not growing exponentially, so we have to automate more and more. We want to bring what used to be done manually into the systems. We want to bring more and more history of information about what’s going on in the system — to allow the system itself to diagnose slow machines, diagnose various problems and rectify them itself,” Quinlan said.

How to build these systems on a much more global basis is another Quinlan goal.

“We have many data centres across the world,” he said. “From an application point of view, they all need to know exactly where the data is. They’ll often have to do replication across data centres for availability, and they have to partition their users across these data centres. We’re trying to bring that logic into the storage systems themselves.”

As Caffeine search is being tested now, so is the new storage file system. Google hopes this will be one that is flexible and self-healing enough to be around for a while.


Asus Dumps Linux From The Eee

If you want Linux on an Eee, you’ll have to go to Toys R Us, Asus tells eWEEK Europe

Computer maker Asus has stopped selling any versions of its popular Eee netbooks with Linux in the UK, explaining that the move has been made because people prefer Windows XP.

Asus virtually invented the netbook with the Eee, a low-cost machine first launched in late 2007 running Linux. Now, although Asus’ site lists several versions of the Eee supplied with Linux, the company has confirmed that it can supply no Linux Eee models at all – according to a phone call to the company, all the machines it sells come with XP.

The Asus site lists nine models of Eee PC, including the new Seashell. All of them are listed with GNU Linux as an option and, for the Eee 1000, GNU Linux is the only option according to the site. A more detailed product sheet lists 26 versions, all but four of which can have Linux as an operating system.

However, on the phone to Asus, it is clear that none of these Linux versions is available from Asus. “It’s been a gradual migration over the last three months,” said a sales executive who answered Asus’ public number. “People have preferred Windows XP.”

Asking for a Linux Eee, eWEEK Europe was given two options: an Eee with a 7in screen, on sale from Misco, and one with a 9in screen, sold by Toys R Us. Could we have Linux if we bought an order of 500 machines? “That’s doubtful,” we were told. “We haven’t done that yet.”

At a product launch last week – of the SeaShell and other machines – Asus defended its decision to move to XP, blaming other vendors for changing expectations and moving the market.

“When we launched the Eee PC we launched it with Linux and people were quite pleased with it,” said John Swatton, marketing specialist at Asus. “Then HP and Dell came along and said ‘Why are you buying Asus, with a small hard drive? Buy ours with a big hard drive.'”

Other vendors offered netbooks with XP, and Asus began losing market share, said Swatton. As a smaller company whose brand is less well-known, Asus had no choice but to follow the market, producing more fully-featured and expensive netbooks (including reported plans for one with an 11.6in screen).

However, now netbooks have become 20 per cent of the notebook market – by units sold – and Asus has benefitted, so the company might try and buck the trends in future when it is better known. “Brand awareness is important in the channel,” said Swatton. “We’ll keep raising the bar, and each time we’ll raise our awareness”.

Although Microsoft has won the battle to provide the netbook operating system, it might cost the company a great deal, said open source advocate Mark Taylor of Sirius IT: “Microsoft had to sell an obsolete operating system – nearly a decade old! – at a huge discount. This has already had a direct and extraordinary effect on both the revenue and profitability of Microsoft’s client division.”

Windows 7 may have a harder time ruling the netbook market, however, if rumours of netbooks running Google’s Android version of Linux turn out to be true.

Some of the notebooks that Asus sells still have Linux after a fashion, in the form of a quick-launcher, which fires up a reduced operating system and a browser.


IBM Embraces Corporate Cloud

IBM is making available a new portfolio of cloud computing products and services that it claims will provide corporate users with ease of use to rival the consumer Web

IBM has been trying to get its Big Blue arms around cloud computing for a while, perhaps because the cloud is one of the few things in IT that IBM didn’t help invent.

The vision from Armonk, N.Y., was fuzzy for a couple of years, but now the glasses are on and the focus appears to be sharpening. Following a five-year-long, multibillion-dollar development effort called Project Blue Cloud, IBM on 16 June will make available a new portfolio of cloud computing products and services that it claims will provide corporate users with ease of use to rival the consumer Web.

In short, IBM has designed and built a number of shortcuts for cloud computing development, so that an enterprise aiming to build its own internal or external cloud-type system can do it with the least amount of time, effort and capital.

Cloud computing, or utility computing, serves up computing power, data storage or applications from one data centre location over a grid to thousands or millions of users on a subscription basis. This general kind of cloud—examples include the services provided online by Amazon EC2, Google Apps and Salesforce.com—is known as a public cloud, because any business or individual can subscribe.

Private clouds are secure, firewalled systems that tie together an enterprise with its supply chain, resellers and other business partners.

“What we are doing here is branding the choices that we are giving clients for the deployment of cloud solutions,” IBM Cloud CTO Kristof Kloeckner told eWEEK. “It’s a family of preintegrated hardware, storage, virtualisation and service management solutions that target specific workloads.”

Those workloads can be virtually anything a company needs to have done on a daily basis: e-mail, retail transactions, scientific computations, health record management, financial services, and a number of other functions.

Thus, IBM now sees cloud computing as a “reintegration of IT around types of work, with the most successful clouds being defined by the types of work they do—for instance a search cloud or a retail transaction cloud,” Kloeckner said.

Three cloud models offered

IBM is now offering three cloud models for delivering and consuming development and test services:

* IBM Smart Business Test Cloud, a private cloud behind the client’s firewall, with hardware, software and services supplied by IBM;

* Smart Business Development & Test, and Smart Business Application Development & Test, which use Rational Software Delivery Services on IBM’s existing global cloud system; and

* IBM CloudBurst, a preintegrated set of hardware, storage, virtualisation and networking options, with a built-in service management system.

The underpinnings of all this are Tivoli Provisioning Manager 7.1 and the new Tivoli Service Automation Manager, which automates the deployment and management of computing clouds.

Tivoli Storage as a Service is the foundation for IBM’s Business Continuity and Resiliency Services cloud. Beginning later in 2009, developers will be able to use Tivoli data protection via a cloud service.

Back in February, IBM released Rational AppScan 7.8, an application management system that enables Web services to be secure and regulations-compliant. Alongside the new Rational AppScan OnDemand, this service software ensures that Web services are monitored on a continuous basis and provide IT managers with ongoing security analysis.

Using this catalog, users can get a custom private cloud built by IBM, get started immediately on building their own cloud with IBM CloudBurst or choose to receive standardised cloud services from the existing IBM cloud.

IBM also is providing optional virtual desktops, which use about two-thirds less power than traditional desktops and laptops and are much lighter loads for servers to handle.

IBM offers two options in this realm: the IBM Smart Business Desktop Cloud, which is a cloud service delivered via the client’s own infrastructure and data center; and the IBM Smart Business Desktop on the IBM Cloud, which is delivered via IBM’s own public cloud.

IBM: Listening, learning for two years

“Since their announcement of Project Blue Cloud a year and a half ago, IBM has been doing nothing but listening and learning. And this [represents] the first fruits of that,” James Staten, Forrester Research principal analyst for IT Infrastructure, told eWEEK. “We think they actually got it right.”

IBM now understands what a cloud solution is and what an enterprise needs it to do, Staten said.

“These are just 1.0 offerings, but they’re correct in understanding the solution,” Staten said. “The first real toe-in-the-water effort by any enterprise is going to be tied to [development]. IT operations, the central guys who run the data centre, don’t like the fact that their ‘innovative’ developers are bypassing them and going to use public cloud resources. They want to offer something as an alternative to that, but it has to meet their security [requirements] and pass all their processes and procedures.”

IBM understands this, so it has two offerings, the first being a hosted cloud with enterprise-level security parameters around it, Staten said.

“It’s not that different from some of the others that are available, such as Terremark or Rackspace, but it has the IBM stamp of legitimacy on it,” Staten said. “So if you’re an IBM customer, or customer of IBM outsourcing, this becomes attractive.”

The second option is software development inside the cloud, “which is what the IT ops guys really want: to keep all that development effort staying inside the data center,” Staten said. “They just needed something they can deploy quickly that uses the cloud; that’s what this CloudBurst thing is all about.”

Hewlett-Packard recently launched HP BladeSystem Matrix, a similar set of products and services. “It’s all the same components,” Staten said. “They just didn’t call it ‘cloud.’”

For current IBM-Tivoli customers, “this will be really easy to consume,” Staten said. “Because they’ve built a tie-in to Tivoli Provisioning Manager and Tivoli Service Automation Manager, which are at the core. So this is going to become really, really simple.

“If you’re a non-IBM shop, this is kind of a nonevent.”


Red Hat Sues Switzerland Over Microsoft Monopoly

£8 million a year to Microsoft, with no public bidding. And that’s just the tip of the iceberg, say open-source activists

Linux vendor Red Hat and 17 other vendors have protested against a Swiss government contract given to Microsoft without any public bidding. The move exposes a wider Microsoft monopoly that European governments accept, despite their lip service to open source, according to commentators.

The Red Hat group has asked a Swiss federal court to overturn a three-year contract issued to Microsoft by the Swiss Federal Bureau for Building and Logistics, to provide Windows desktops and applications, with support and maintenance, for 14 million Swiss Francs (£8 million) each year. The contract, for “standardised workstations”, was issued with no public bidding process, Red Hat’s legal team reports in a blog, because the Swiss agency asserted that there was no sufficient alternative to Microsoft products.

Red Hat and others have made the obvious response that there are plenty of alternatives to Microsoft, and the current situation makes them more attractive than ever, according to a report issued this week by Freeform Dynamics.

“It’s not just Switzerland who have been getting away with this kind of nonsense,” said Mark Taylor of the UK-based Open Source Consortium, adding that much of the credit for this action should go to the Free Software Foundation Europe, led by Georg Greve.

“All over Europe this kind of thing is happening, and in the UK almost all public sector tenders that we see actually *specify* Microsoft products,” said Taylor. “Even those that don’t will normally insist that the tendered for technology ties in with specific Microsoft products. I cannot imagine any other area of Government procurement where this practice would be allowed.”

Governments are tacitly accepting Microsoft’s monopoly on their ICT systems, said Taylor, despite public statements such as the UK’s recent announcement that it could save £600 million a year with open source. “The cost is phenomenal,” he added, with Government spending billions a year on proprietary systems. “It is a scandal and a waste of public money that makes the MPs’ expenses scandal look like a drop in the ocean, and yet hardly anyone talks about it!”

The challenge to the Swiss government “raises important issues of openness in government and of a level playing field for open source and other competitors of Microsoft,” said Red Hat’s legal team. “Red Hat is seeking a public bidding process that allows for consideration of the technical and commercial advantages of open-source software products.”

Even within Switzerland, Red Hat countered the bureau’s argument by pointing to several Swiss agencies that are already Red Hat customers, including the City of Zurich, the Federal Agency for Computer Sciences and Telecommunications (BIT) and the Federal Institute for Intellectual Property (IGE).

Microsoft’s European chair, Jan Muehlfeit, recently boasted to eWEEK that Microsoft effectively owns 40 per cent of all Europe’s IT, on the occasion of a giant promotional event with the EU in Brussels.

European ignorance and hostility to open source and free software are such that a group has launched a pact for candidates in the forthcoming European elections to sign, pledging support for free software.