and now, the Google Chrome OS…

After the raging success of Google’s Android phone OS (ok, NOT), Google are now leaping into the everyday computing OS market with their Google Chrome Operating System.

It will be released initially for netbooks, but won’t be open source <gasp> until later this year <exhale>.

One comment in the above article that worries me is:

“And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don’t have to deal with viruses, malware and security updates. It should just work.”

I mean, it’s not as if Windows had viruses the first day it was released.  All I’ll say is ‘if you build it, they will come’.

It’s nice to have competition in the desktop space, but until there is a narrower Linux distro base, all this variety will help MS dominate.  Apple’s Unix-based OS works well primarily due to the aesthetics of their kit and serious branding.  It’ll be interesting to see how Google take theirs to market.


VMware vShield – was it worth it?

I just spent a couple of hours happily deploying VMware vShield Zones, less happily poring over the manuals, and then unhappily thinking I’d wasted my time.

I think our ESX platform is fairly typical. We have multiple ESX servers running guest VMs for multiple customers (or departments), many of which are tagged to isolated VLANs, and most of which ultimately communicate with the outside world via our firewall clusters. Achieving security in this scenario means understanding your VLANs, dropping the right vNIC on the right VM, and managing a typical firewall appliance (Cisco in my environment).
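As an aside, here’s a rough idea of the sort of housekeeping that last point implies. This is a minimal Python sketch entirely of my own invention (the inventory data, customer names and VLAN labels are made up, and nothing here is a VMware API) that cross-checks whether each customer’s vNIC actually sits on the VLAN it is supposed to:

```python
# Minimal sketch: sanity-check that each guest VM's vNIC sits on the VLAN
# (port group) assigned to its customer. The inventory and mapping below are
# hypothetical examples; in practice they would come from an export of the
# ESX environment and your own records.

# Which VLAN each customer's VMs are supposed to live on (invented).
customer_vlans = {
    "customer-a": "vlan-101",
    "customer-b": "vlan-102",
}

# A flattened view of the inventory: VM name, owning customer,
# and the port group its vNIC is actually attached to (invented).
vm_inventory = [
    {"vm": "cust-a-web01", "customer": "customer-a", "port_group": "vlan-101"},
    {"vm": "cust-b-db01",  "customer": "customer-b", "port_group": "vlan-101"},  # wrong VLAN
]

def find_misplaced_vms(inventory, vlans):
    """Return VMs whose vNIC is not on their customer's assigned VLAN."""
    problems = []
    for vm in inventory:
        expected = vlans.get(vm["customer"])
        if vm["port_group"] != expected:
            problems.append((vm["vm"], vm["port_group"], expected))
    return problems

if __name__ == "__main__":
    for name, actual, expected in find_misplaced_vms(vm_inventory, customer_vlans):
        print(f"{name}: attached to {actual}, expected {expected}")
```

Nothing clever, but it is exactly this kind of manual bookkeeping that I hoped vShield would make unnecessary.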

VMware vShield Zones have been introduced (actually bought from Blue Lane Technologies), supposedly to simplify network security by implementing a firewall within your ESX farm. Sounds cool, right? It would be, too, if it were done right.

I won’t go into the detail of how it works, and how to configure it, as you can read up on that by following the links on Rodos’ blog.
There are loads of gotchas, and strange concepts at first, but they’re all well documented in the manual. The install process was flawless too, so what’s not to like?

Well:

  • It requires a vShield agent VM per vSwitch with a physical NIC attached. That means lots of additional VMs for us.
  • It does not offer anywhere near enough reporting detail: no real-time bandwidth monitors, just per-hour statistics.
  • It does not offer any bandwidth controls like rate limiting or QoS.
  • But mostly IT DOES NOT SIMPLIFY ANYTHING.

On the contrary, as I doubt anybody will be throwing out their perimeter firewalls just yet, vShield adds a further layer to manage. Perhaps I’m missing something.



Oops, how embarrassing!

I often stumble upon an interesting blog or website, but am usually reluctant to add it to my favourites.  My favourites list is full of clutter: broken links, retired sites, and URLs that are quicker just to type in.

What I need is a web service that provides a list of favourite sites, which saves me synchronising my favourites within my Mesh, and also allows me to share links I think are interesting but not worth blogging about.

Enter Ma.gnolia.com (I have no idea why they write it like that).  Ma.gnolia is, well, let them tell you:

At Ma.gnolia, members save websites as bookmarks, just like in their browser. Except with a twist: they also “tag” them, assigning labels that make them easy to find again. So when you search for something, you use words that people choose and look only at websites that people think are worth saving. Suddenly you have access to a human-organized bookmark collection that numbers in the millions, but is as easy to use as a search engine.

With Ma.gnolia, that’s really all the work you have to do. Finding by tags makes organizing bookmarks a thing of the past. Since it’s a website, your Ma.gnolia bookmark collection can be reached by you and your friends from anywhere, any time. And don’t worry about web pages disappearing from your searches or even the web, as we make a saved copy of each page you bookmark where websites allow us to.

All very interesting, but one of the main reasons to use the service is so that you always have access to your favourites.  Unless they lose them of course.

A couple of days ago, that’s exactly what they did.  And they can’t get them back.  Here’s what they have to say (link):

Dear Ma.gnolia Community Member or Visitor,

Early on the West-coast morning of Friday, January 30th, Ma.gnolia experienced every web service’s worst nightmare: data corruption and loss. For Ma.gnolia, this means that the service is offline and members’ bookmarks are unavailable, both through the website itself and the API. As I evaluate recovery options, I can’t provide a certain timeline or prognosis as to when or to what degree Ma.gnolia or your bookmarks will return; only that this process will take days, not hours.

I will of course keep you appraised here and in our Twitter account.

Most importantly, I apologize to all of you who have made Ma.gnolia a home for your bookmarks and community. I know that many of you rely on Ma.gnolia in your day to day work and play to safely host you bookmarks, keeping them available around the clock, and that this is a difficult disruption.

Sincerely,
Larry

Oh dear.

I’m especially surprised by the “as I evaluate recovery options” comment.  Surely every business understands their recovery options.  Don’t they?

When online presence is crucial (i.e. it is your main business function), as it is for web service providers, a fast recovery plan should already be in place.  Replication of the data to a second location, with regular snapshots to protect against data corruption, is such an inexpensive protection strategy nowadays.  Add to that the ease with which service providers can test recoverability, and this failure looks like a true schoolboy error.
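To put some flesh on that, here’s a minimal sketch of the sort of copy-and-snapshot routine I have in mind. The paths and retention count are made-up assumptions, and a real provider would replicate continuously rather than run an occasional copy, but the principle is the same:

```python
# Minimal sketch: copy a data directory to a second location and keep a
# handful of timestamped snapshots to protect against corruption.
# Paths and retention are hypothetical; adjust (and schedule) to taste.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/srv/bookmarks-data")           # live data (hypothetical path)
REPLICA_ROOT = Path("/mnt/offsite/snapshots")  # second location (hypothetical)
KEEP = 7                                       # number of snapshots to retain

def take_snapshot():
    """Copy the live data into a new timestamped snapshot directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = REPLICA_ROOT / f"snapshot-{stamp}"
    shutil.copytree(SOURCE, target)            # full copy; rsync would be cheaper
    return target

def prune_snapshots():
    """Drop everything but the newest KEEP snapshots."""
    snaps = sorted(REPLICA_ROOT.glob("snapshot-*"))
    for old in snaps[:-KEEP]:
        shutil.rmtree(old)

if __name__ == "__main__":
    print("created", take_snapshot())
    prune_snapshots()
```

Even something this crude, run on a schedule and restore-tested occasionally, would have left Ma.gnolia with bookmarks to give back.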

The lesson for the rest of us is to take our DR plans into our own hands: store multiple copies of the data you want to keep.  Fortunately a very helpful blogger, Hutch Carpenter, posted a great idea to make this a simple process: store your bookmarks at Diigo, and let Diigo copy them to Del.icio.us.  See his site for a step-by-step guide.


VMware and iSCSI – explained

A colleague alerted me to a great post regarding iSCSI performance with specific reference to VMware ESX hosts.

I know many organisations operating VMware farms with iSCSI storage systems, and I expect many will fall foul of some of the gotchas it highlights.  The most important is that you really need multiple iSCSI targets if you want to maximise performance.  Hence, make sure your iSCSI storage hardware supports presenting LUNs as individual targets.
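To illustrate the point (and only to illustrate it; the data below is invented), here’s a minimal Python sketch that takes a list of LUN-to-target mappings and flags any target presenting more than one LUN, since LUNs lumped behind a single target end up sharing a single path from the ESX software initiator:

```python
# Minimal sketch: given a (hypothetical) export of LUN-to-iSCSI-target
# mappings, flag targets that present more than one LUN. Those LUNs will
# share one path, which is the performance gotcha discussed above.
from collections import defaultdict

# Invented example data; a real list would come from your storage array.
lun_targets = [
    ("lun-datastore1", "iqn.2001-05.com.example:target0"),
    ("lun-datastore2", "iqn.2001-05.com.example:target0"),  # shares target0
    ("lun-datastore3", "iqn.2001-05.com.example:target3"),
]

def shared_targets(mappings):
    """Return {target: [luns]} for targets presenting more than one LUN."""
    by_target = defaultdict(list)
    for lun, target in mappings:
        by_target[target].append(lun)
    return {t: luns for t, luns in by_target.items() if len(luns) > 1}

if __name__ == "__main__":
    for target, luns in shared_targets(lun_targets).items():
        print(f"{target} presents {len(luns)} LUNs: {', '.join(luns)}")
```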


Monster Mistakes

If a website collects, stores and processes detailed personal information on millions of people, you’d expect it to take security seriously.

Indeed, Monster.com has “a full-time worldwide security team, which constantly monitors for both suspicious behaviour on our site and illicit use of information in our database”.

Unfortunately, this team (and Monster will not disclose any details, or even the number of people in it) doesn’t appear to be very effective.

In 2007, hackers obtained the details of 1.6 million Monster.com users.  This was public and embarrassing, so you’d expect them to tighten their security and processes further.

Then in 2008, a further 1.6 million records were stolen.  OK, so maybe it’s time to have another look at the way security is implemented at Monster.com.  Indeed, Monster themselves wrote:

“Monster has made, and will continue to make, a significant investment in enhancing data security, and we believe that Monster’s security measures are as, or more, robust than other sites in our industry”.

So, good.  Problem fixed right?

Er, no.  On 23rd January 2009, hackers again managed to steal yet more user data.  This time, though, they’re not saying whose data has been breached.  No emails to Monster users to make them aware that their employment history, address, date of birth, and education history are in the hands of ‘black hats’.  Apparently Monster are worried that a notification email to their clients would result in further phishing scams, with the black hats using their email as a template.  Seriously, that’s what they said (more or less):

“Monster elected not to send e-mail notifications to avoid the risk those e-mails would be used as a template for phishing e-mails targeting our job seekers and customers. We believe placing a security notice on our site is the safest and most effective way to reach the broadest audience. As an additional precaution, we will be making mandatory password changes on our site. ” monster.com

So unlike eBay, Amazon, iTunes, and every other retailer or indeed bank, Monster.com does not feel it can communicate a simple warning about the issue and the dangers of possible phishing scams.  Maybe a short “Sorry, we were hacked again – you’d better change your passwords on any other sites where you use credentials similar to the ones we let slip” email is all they need to send, but send something they should.  I’d be interested to hear whether they still send out marketing emails, or perhaps they’ve ‘gone dark’.  Faxes only from now on?

Luckily I don’t use Monster.com, but if they’d lost my details, I’d prefer it if they let me know.

All of that sounds like a PR nightmare for Monster.com, but how about this to ice the cake: users of Monster who read about this issue will likely try to change their password for the site a.s.a.p.  Many of those returning users will have forgotten the original password they used, and so will go through the ‘Forgotten Password’ route to reset it.  Remember that “full-time, worldwide security team”?  They don’t appear to have noticed that this password reset process sends the password in clear text (well spotted, Richard).
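For contrast, the safer pattern most sites use is never to send the password at all: email a short-lived, single-use reset token and store only a hash of it.  The sketch below is purely my own illustration of that idea (names, URL and storage are hypothetical, and it has nothing to do with Monster’s actual code):

```python
# Minimal sketch of a token-based password reset: the user is emailed a
# single-use link containing a random token; the site stores only a hash of
# the token plus an expiry, so nothing useful leaks if the email (or the
# stored record) is exposed. Storage and email sending are stubbed out.
import hashlib
import secrets
import time

RESET_TTL = 30 * 60                      # token valid for 30 minutes
pending_resets = {}                      # user -> (token_hash, expires_at)

def start_reset(user: str) -> str:
    """Create a reset token and return the link that would be emailed."""
    token = secrets.token_urlsafe(32)    # random, unguessable token
    pending_resets[user] = (hashlib.sha256(token.encode()).hexdigest(),
                            time.time() + RESET_TTL)
    # In reality this link is emailed - never the password itself.
    return f"https://example.com/reset?user={user}&token={token}"

def finish_reset(user: str, token: str, new_password: str) -> bool:
    """Accept the new password only if the token matches and hasn't expired."""
    record = pending_resets.pop(user, None)          # single use
    if record is None:
        return False
    token_hash, expires_at = record
    if time.time() > expires_at:
        return False
    if not secrets.compare_digest(token_hash,
                                  hashlib.sha256(token.encode()).hexdigest()):
        return False
    # set_password(user, new_password) would go here (stubbed).
    return True

if __name__ == "__main__":
    link = start_reset("jobseeker@example.com")
    print("reset link:", link)
    token = link.split("token=")[1]
    print("reset accepted:", finish_reset("jobseeker@example.com", token, "n3w-p4ss"))
```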

Oh dear.


Microsoft to cut up to 5,000 jobs

The BBC have just reported that Microsoft are to cut up to 5,000 jobs.  This appears to be a pre-emptive strategy based on sales forecasts for the coming months.  Jobs are going in nearly all departments, and I wouldn’t be surprised if bonuses are capped and salaries frozen next.  This is becoming a common story, but a very uncommon one for Microsoft.  To quote from the BBC report:

“Richard Williams, an analyst at Cross Research, said: “Microsoft has never had a layoff like this in my knowledge, and it’s sending a signal that the times are definitely changing.””

They are also cutting costs in other areas, and I’d love to know where.  They talk about reducing travel expenses, but I wonder if their IT budget is expecting a chop too.


Dynamic Infrastructure: Networking Industry’s Biggest Hope

I found this fairly technical article addressing the exciting potential of Infrastructure 2.0 (anyone? no? It was the first I’d heard of it too.)

It does look like a big change is on the way, and I for one can’t wait.  If you are providing Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) solutions, this article gives much food for thought.

This was originally posted by Gregory Ness over at seekingalpha.com, but I found it on another blog, so to give credit, that’s the one I’m linking to 🙂

Here’s an excerpt:

“Dynamic infrastructure will unleash new potentials in the network, from connectivity intelligence (dynamic links and reporting between networks, endpoints and applications) to the rise of IT automation on a scale that few have anticipated. It will unleash new consolidation potentials for virtualized data centres and various forms of cloud computing. It will enable networks to ultimately keep up with increasing change velocities and complexity without a concomitant rise in network management expenses and manual labour risks.

Further down the road there will be even more capabilities emerging from Infrastructure 2.0 as virtualization and cloud payoffs put more pressure on brittle Infrastructure 1.0 networks. The evolution of cloud (James Urquhart calls it a maturity model in his recent CNET piece) will drive new demands on the network for automation.

Cisco is looking to address end-to-end IT automation and virtualization with a combination of partner technologies from the likes of VMware (VMW), and our own successes in the Catalyst and Nexus lines (e.g. the Nexus 1000v). Stay tuned on that front for some eye raising announcements.
– James Urquhart, Cisco, December 7, 2008

Without dynamic infrastructure enabled by automation, the payoff of virtualization and cloud initiatives will be muted in the same way that static security muted the virtualization payoff into a multitude of hypervisor VLANs. Think static pools of dynamic processing power that will eventually be consolidated into ever larger pools, enabling greater consolidation, greater efficiency and bigger payoffs free of the churn and risk of on-going manual intervention. This is the vision of Infrastructure 2.0.”


DR tests too costly?

I was speaking with a colleague last week about UK businesses scaling back their budgets for disaster recovery and business continuity provision. It seems that while some firms have decided to take the risk of not having a plan at all, others are trying to find shortcuts to reduce their spending. For most businesses, by far the most obvious piece of the jigsaw to remove is the test invocation.

Test invocations form a crucial part of any disaster recovery plan, but they are often the most expensive component of the solution. They are frequently overlooked at the outset of a business continuity plan, as service providers and manufacturers proclaim ‘ease of recovery’. Only when the first test is carried out does the extent of the hidden costs become apparent. Even simple tape restore testing can be time-consuming and therefore expensive (and often outside the desired Recovery Time Objective, or RTO). Worse still, if the test fails, further staff time must be dedicated to investigation and documentation updates. When job losses are on the horizon, and teams are running on empty, just sparing the staff to carry out the project may not be an option.

Some DR processes have an even higher cost due to poor design, and can only be carried out at the expense of uptime. Physical servers sometimes need to be moved, or shut down, to carry out all the environment or application testing. Some business continuity advisors get it right and ask service providers to ‘bundle’ test invocations into the service contract. That is fine as far as it goes, but it still frequently fails to account for hidden costs like staff time, transport, and documentation updates.

It seems fair, then, to reduce or postpone test invocations as part of a budget-cutting directive, but at what cost? When times are good and business is booming, cash flow is rarely a problem. IT budgets increase as stakeholders recognise the need for business continuity plans and related insurance strategies. In reality, during such times the organisation may well be able to recover from the impact of a couple of days’ IT downtime. Sure, some customers will switch to your competitors, and some of those will never come back, but your order book and cash flow will be strong enough to carry the business through. In contrast, during a recession, when order books are small and cash flow is tight, the same period of IT downtime, and the resultant loss of business, could be enough to break the camel’s back. Hence, economic recession makes a working business continuity plan even more crucial.

Some service providers, like virtualDCS (but there are others), have engaged with their customers to find a solution to this dichotomy. It is possible, given the right approach, to leave the invocation process to the service provider. The service provider maintains a detailed documentation process, and provides both the equipment and the manpower to invoke the solution independently, with no impact on the client’s live running IT operation, or the team supporting it. Once the solution is fully invoked, the business can carry out specific application tests, before leaving the service provider to dismantle the invocation test again, and update the documentation.

This sounds like a shift to wholly outsourcing the disaster recovery solution to a service provider, and it is. It also sounds very expensive, but it isn’t. The recovery team at virtualDCS (I can’t speak for other service providers) performs test invocations every day of the year. Fortunately live invocations are rare, but test invocations happen on a regular basis. Because a test invocation is a routine, highly automated action, the costs are kept small and, more importantly, included in the contract. With contracts starting at around £50/week for a server with 60GB of data, and an achievable Recovery Point Objective (RPO) of near zero, why would you do it yourself?


New Year. New President. Any new ideas?

Barack gave his inauguration speech today, which was very impressive. Unfortunately, it left me feeling depressed about the state of the global economy, and the bleak future awaiting us over the coming months. I guess it’s because I’m not an American, and am therefore missing that ‘Yeah, we can do anything’ gene that seems to have been handed out as they disembarked the Mayflower. It’s a character trait that the rest of the world is both envious of, and sickened by. Maybe it’s a jealousy thing.

What I do know is that even if Pres. Obama manages to turn the US economy around, it won’t happen overnight. Most of us are already feeling the effects of recession. At best it’s affecting our spending decisions for holidays, new cars, and gadgets, and at worst people are losing their jobs, and their homes.

So he asked for new ideas. Any ideas. It started me thinking about ways in which we should change our behaviour, practices and decision-making in my industry, IT. IT has traditionally been one of the driving forces of the economic boom. The healthy race for technological advances has made everything ever smaller, yet more powerful. For most businesses this technological progression has not gone unnoticed, but it has also failed to deliver any startling benefits. A PC which cost £400 five years ago would have been a fairly good mid-market model, suitable for basic office use. The equivalent PC today still appears to cost around £400, so where are the benefits? OK, so we have nice 19″ LCD displays instead of 17″ CRT monitors, but the PC is still a PC.

The same can be said for server-class computers from vendors like HP and IBM. Five years ago, a company would spend £10,000 on a new database server, and a further £20,000 to license the software to run on it. Today, the same purchases are being made, with amazingly similar budgets.

The problem is more to do with the way people expect to use the technology. Ten years ago, you needed a separate server for each application you wanted to run. That old rule often no longer applies, and yet IT departments continue to hold on to the model. The IT teams that have been paying attention to the technology available have already recognised that a quad-core CPU (now common even in PCs) is way over-powered for most traditional server tasks. These ‘Adaptive Thinkers’ have been quietly deploying virtualization solutions from firms like VMware and Microsoft: hypervisor-based server platforms that can harness the power of these smaller, faster technology advancements in ways that traditional server environments cannot.

If you haven’t already virtualized your IT systems, you’re behind the times. Unfortunately, if you have virtualized, you’re probably still behind the times too. Virtualization is reinventing itself again with a service focus through IaaS (Infrastructure as a Service). VMware’s vCloud and Microsoft’s Azure cloud platform refocus IT consolidation efforts into the data centre. By providing the environment on a service/rental basis, firms no longer have to look after their own virtualization platforms, which can reduce training costs, support costs, and obviously capital costs.

In the upcoming economic uncertainty, it surely makes sense to take Barack’s advice regarding new ideas, and rethink our approach to traditional computing if we are to survive this approaching storm.
