SPAM - This thread is for random spam!!

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
So yeah. I just got married! In the best Vegas tradition, 15 minutes in a drive thru (10 of which were taking photos) and it was done. Then went for a cruise up the Strip with the top down. Couldn't be happier :)

Wazz is also getting married.

WAIT, DID YOU BOTH ELOPE IN A MOMENT OF GAY?
 

Job

The Carl Pilkington of Freddyshouse
Joined
Dec 22, 2003
Messages
21,652
Trying to upload a picture from my phone... keeps saying it won't allow the file extension... it's .jpg??
 

Bodhi

Once agreed with Scouse and a LibDem at same time
Joined
Dec 22, 2003
Messages
9,284
Cheers all! Must admit it's still a bit strange being a proper grown up now - even if we did it in the least grown up way possible :)

We went for The Little White Wedding Chapel on the Boulevard, so we now have a claim to fame - We got married in the same place as Bruce Willis and Demi, Michael Jordan and Jack Nicholson (twice). Britney got married there too, but we don't mention that one.

It was a lovely ceremony, especially considering they probably do hundreds a day, and we didn't even have to get out of the car. Hired a 4 Series Convertible for the trip too.

Been to Area 51 and Hoover Dam today, now getting ready for our last night in Vegas before driving back over to California. It's been a pretty epic holiday, tbh.
 

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
BA's CEO:

BA chief executive Alex Cruz says he will not resign and that flight disruption had nothing to do with cutting costs.

He told the BBC the power surge had "only lasted a few minutes", but the back-up system had not worked properly.

He said the IT failure was not due to technical staff being outsourced from the UK to India.

Bullshite on all three points.

Here's what I think: at the very least, your power redundancy was inadequate and not tested enough, and the IT guys (read: Indians) made a proper cunt of getting the systems / applications back up and running.
 

Shagrat

I am a FH squatter
Joined
Dec 23, 2003
Messages
6,945
BA's CEO:

Bullshite on all three points.

Here's what I think: at the very least, your power redundancy was inadequate and not tested enough, and the IT guys (read: Indians) made a proper cunt of getting the systems / applications back up and running.

If you haven't got multiple redundancy in place for a critical system like that, you are asking for something like this to happen.
 

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
Absolutely, and that's why it's clearly bollocks. I don't believe for a second that BA's critical IT systems aren't fully redundant - they probably even have a tertiary generator.
 

Shagrat

I am a FH squatter
Joined
Dec 23, 2003
Messages
6,945
Yep, and if, like he says, the back-up "didn't work properly", it just raises loads more questions.

Who set it up?
When did they last test it?

I'm guessing it was set up when IT was based here, they did fuck all handover to India when it was outsourced, and when it fell over someone panicked and just "had a go" at getting it back up.
 

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
Precisely - Indian outsourcing or not, someone at the top of the IT chain fucked up really badly here (assuming the power failure story is true).
 

Tom

I am a FH squatter
Joined
Dec 22, 2003
Messages
17,211
I understand IT is one of those departments:

Running well - "wtf are we paying these guys for?"
Fucked up - "wtf are we paying these guys for?"
 

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
Pretty much. Similar thing with the vendors we work with - "do we ever need to call them? no? what the fuck, don't renew the support contract, save some money" .... "it's gone tits up and we have no vendor support? what do you mean we have no support in place, whose fucking idea was that?!". You can never win.
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,804
I understand IT is one of those departments:

Running well - "wtf are we paying these guys for?"
Fucked up - "wtf are we paying these guys for?"


this tbh. also, what's happening now - "heroic outsourced Indian IT dept gets BA back up with the minimum possible downtime despite shoddy handover" - is not the headline being run.
 

Zarjazz

Identifies as a horologist.
Joined
Dec 11, 2003
Messages
2,389
Here's what I think: at the very least, your power redundancy was inadequate and not tested enough, and the IT guys (read: Indians) made a proper cunt of getting the systems / applications back up and running.

Even if they fuck up epically in one data centre, where the fuck is the DR??? The only people who spend more on IT than airlines are banks, so how a company like BA can essentially go offline for 2 to 3 days is beyond belief.
 

old.user4556

Has a sexy sister. I am also a Bodhi wannabee.
Joined
Dec 22, 2003
Messages
16,163
Even if they fuck up epically in one data centre, where the fuck is the DR??? The only people who spend more on IT than airlines are banks, so how a company like BA can essentially go offline for 2 to 3 days is beyond belief.

This is why I don't buy it. What I think's happened is there may have been a power disruption at one of the datacentres, causing the systems to fail over automatically (or perhaps manually, by Indian IT staff), and that's where the problems began. I have to do 2 to 3 simulated power failure DRs per year and it's alarming just how little the technicians know about how an application will behave, or what manual intervention is required, when one side of a datacentre pops. Then you've got the hidden landmines - applications and systems using hard-coded URLs instead of load-balanced URL endpoints, meaning they don't work in a DR situation, unix config files with incorrect settings on the failover leg, websphere clones with the same issue, the list goes on and on.
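
A dumb pre-DR audit along the lines of the sketch below catches a surprising number of those hard-coded landmines before the failover does. To be clear, it's only a rough illustration: the hostnames, paths and naming convention are all made up, not anyone's real estate.

# dr_config_audit.py - rough sketch of a pre-DR config audit.
# All hostnames, paths and patterns below are made-up examples;
# substitute whatever naming convention your estate actually uses.
import re
import sys
from pathlib import Path

# Endpoints that only exist in the primary datacentre. Anything
# referencing these directly will break the moment you fail over.
PRIMARY_ONLY_HOSTS = [
    r"app01\.dc1\.example\.com",
    r"db01\.dc1\.example\.com",
]

# The load-balanced / virtual names that configs *should* be using.
PREFERRED_ENDPOINTS = ["app.example.com", "db.example.com"]

PATTERN = re.compile("|".join(PRIMARY_ONLY_HOSTS))

def audit(config_root: str) -> int:
    """Walk a config tree and flag files that pin the primary DC."""
    hits = 0
    for path in Path(config_root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits += 1
                print(f"{path}:{lineno}: hard-coded primary host -> {line.strip()}")
    if hits:
        print(f"\n{hits} hit(s); these should point at one of {PREFERRED_ENDPOINTS}")
    return hits

if __name__ == "__main__":
    sys.exit(1 if audit(sys.argv[1] if len(sys.argv) > 1 else "/etc") else 0)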
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,804
This is why I don't buy it. What I think's happened is there may have been a power disruption at one of the datacentres, causing the systems to fail over automatically (or perhaps manually, by Indian IT staff), and that's where the problems began. I have to do 2 to 3 simulated power failure DRs per year and it's alarming just how little the technicians know about how an application will behave, or what manual intervention is required, when one side of a datacentre pops. Then you've got the hidden landmines - applications and systems using hard-coded URLs instead of load-balanced URL endpoints, meaning they don't work in a DR situation, unix config files with incorrect settings on the failover leg, websphere clones with the same issue, the list goes on and on.

indeed. gosh, I had a whole wall of text written earlier today. pity I decided against posting it now. gist of it was that, yes, you can simulate till the cows come home and you'll get really good at solving simulated DRs. the added value for you will be that you work out minor kinks like the points you mentioned, config files, etc. perhaps you'll go as far as to enforce a loose coupling rule, where systems are only allowed to communicate indirectly, you may tune your patch management to ensure that your prod and fail-over systems are always synced, or any similar practice. those things will help, sure.

my point is that ultimately you'll only be solving simulations, because zomg thou shalt not fuck with production, ever! forsooth! froth! etc. I've been trying to get my management to approve me running a DR for my compute infra that's as real as I can get it, i.e. nobody in IT will know about it till it kicks off, the scenario will be completely random, and it will happen at a random time. we'll literally turn off a bunch of applications and power down some systems and run the failover script, with full management attention. and we log everything, every single thing, so that we can identify items to work on and gain insight into how we'd do if something almost real happened.

you should see the mgmt team's faces when I do this pitch - you'd think I'd butchered and eaten a baby right in front of them. because testing scenarios are ace, right? they cover everything, right? nope! the IT guys see it coming, they're well rested, they sit in a nicely ventilated work space, they have catering ffs. with respect, the last time I faced a calamity of such magnitude that we had to go for DR, we had no power in the datacentre, we had no power in the buildings, our no-breaks (UPSes) had failed because the power had cut off several times in a row and ultimately stayed off. it was night, it was over 50°C on the compute floor, and systems were going into thermal shutdown, shutting themselves off willy-nilly. I was actually standing next to a bunch of our big Sun servers (yes, that long ago) when they shut themselves down. we almost lost our production mainframe and NSK systems too. scary shit tbh. there were about 50 of us running about on the floor with mag-lights, turning off any system that didn't have a big red P on its front casing. I whacked my head in the dark, broke my glasses, knocked myself out for a couple of seconds, and had to do two more hours of DR with a splitting headache until someone who could replace me turned up. anyway, you can imagine what we learned and how much we managed to improve from that one evening. good times.
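
to make the prod/fail-over sync point concrete, the kind of drift check I mean looks roughly like this. it's a sketch only - the filenames and the rpm dump command are illustrative assumptions, not our actual tooling.

# patch_drift.py - sketch of checking that prod and failover hosts are in sync.
# Assumes you've dumped package lists on each host first, e.g.
#   rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n' | sort > prod.txt
# (filenames and the rpm example are illustrative, not any real process).
import sys

def load(path):
    """Return {package: version} from a 'name version' dump file."""
    pkgs = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2:
                pkgs[parts[0]] = parts[1]
    return pkgs

def main(prod_file, dr_file):
    """Print every package whose version differs (or is missing) between legs."""
    prod, dr = load(prod_file), load(dr_file)
    drift = False
    for name in sorted(set(prod) | set(dr)):
        p, d = prod.get(name), dr.get(name)
        if p != d:
            drift = True
            print(f"{name}: prod={p or 'MISSING'} dr={d or 'MISSING'}")
    return 1 if drift else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))

run it against the two dumps after every patch window and the failover leg can't quietly fall behind.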
 

Zarjazz

Identifies as a horologist.
Joined
Dec 11, 2003
Messages
2,389
.... I have to do 2 to 3 simulated power failure DRs per year and it's alarming just how little the technicians know about how an application will behave, or what manual intervention is required, when one side of a datacentre pops. Then you've got the hidden landmines - applications and systems using hard-coded URLs instead of load-balanced URL endpoints, meaning they don't work in a DR situation, unix config files with incorrect settings on the failover leg, websphere clones with the same issue, the list goes on and on.

The US company that bought us out is really anal about its DR. If a working DR test for each application isn't successfully completed once a month, we don't have DR - it's kind of nuts. Pain in the ass at first, but I did a lot of design and coding work and added some automation on top, and can now cold start an entire DC in under 5 minutes, and that includes all config & DNS updates etc. Backups, on the other hand, we still suck at no matter how much I slap people about the head to improve things :D
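
The automation is basically an ordered bring-up plus a DNS cutover at the end. Very roughly it's something like the sketch below - service names, hosts and records are placeholders (the real thing is driven from inventory data, not a hard-coded list), so treat it as an illustration only.

# dc_cold_start.py - toy sketch of an ordered cold start with a DNS cutover step.
# Service names, hosts and records are placeholders; real automation would be
# driven from inventory/CMDB data, not a hard-coded list.
import subprocess
import sys

# Start order matters: databases before middleware before web tier.
START_ORDER = [
    ("db-primary", "systemctl start postgresql"),
    ("app-tier",   "systemctl start appserver"),
    ("web-tier",   "systemctl start nginx"),
]

# DNS records to repoint at the DR site once services are up (nsupdate syntax).
DNS_UPDATES = """\
server ns1.example.com
update delete app.example.com. A
update add app.example.com. 60 A 192.0.2.50
send
"""

def run(cmd: str, dry_run: bool) -> None:
    """Echo the command, and only execute it when not in dry-run mode."""
    print(("DRY-RUN: " if dry_run else "RUN: ") + cmd)
    if not dry_run:
        subprocess.run(cmd, shell=True, check=True)

def cold_start(dry_run: bool = True) -> None:
    for name, cmd in START_ORDER:
        print(f"--- bringing up {name}")
        run(cmd, dry_run)
    print("--- DNS cutover (fed to nsupdate)")
    print(DNS_UPDATES)
    if not dry_run:
        subprocess.run(["nsupdate"], input=DNS_UPDATES, text=True, check=True)

if __name__ == "__main__":
    # Dry-run by default; pass --for-real to actually execute the commands.
    cold_start(dry_run="--for-real" not in sys.argv)

The monthly DR test is just this run for real against the standby DC, which is why the 5-minute number holds up.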
 
