An IT executive turned California e-Discovery Attorney and Consultant shares his personal insights and experience - with an emphasis on facilitating the relationship between legal and technology professionals.
As we mark the 100th anniversary of the sinking of the Titanic, questions that were raised then are being repeated now: how did this happen? Obviously, I'm not going to cite all of the opinions, but as one can imagine, blame is assigned virtually everywhere. Regulation (or lack thereof) is to blame. Management is to blame. You know what I think? They all miss the point entirely.
Why? Because 1,500 people are dead, that's why. Titanic was doomed before it ever left port. One thing we can state with certainty: it was known that if the ship sank in an isolated area, there wouldn't be enough lifeboats on hand to save all of the lives on board.
Analysts point to a comedy of errors and argue that if those errors hadn't occurred in sequence, Titanic wouldn't have sunk. There's that word, if, again. But this flies in the face of competent disaster planning. We already know that if events occur as expected, there won't be a disaster.
Let's enter the equation at the real-time departure of Titanic in its actual condition (meaning, not enough lifeboats). Here are some of the assumptions that might have been made:
The ship will not sink
The number of lifeboats is an acceptable risk versus the unlikely possibility that the ship might sink
Other disasters may occur (e.g. boiler explosion), but the ship won't sink and there are sufficient lifeboats
Even if the ship were to sink, it would sink very slowly - or close to land - allowing shore-based resources or other ships to respond in sufficient time
The experts on board - and in the surrounding area (such as the Californian, or the wireless operators) - will act predictably (i.e. not make any mistakes)
Icebergs will be exactly where we expect them to be
Here's a question I haven't seen (although I'm sure someone has probably raised it): Even if the ship had carried sufficient lifeboats, would the crew have been able to launch all of them in the short time it took Titanic to sink (estimated at 2 hours, 40 minutes)?
A good risk management team understands the first rule of disaster planning - follow Murphy's Law: If anything can go wrong, it will. The second rule? Follow up with O'Toole's Commentary: Murphy was an optimist.
No plan will ever be correctly analyzed unless it begins with an honest assessment of the following question: What can go wrong?
The rest becomes a matter of probability: what is the likelihood that anything - or everything - on the list created by question one might go wrong?
Then it's a matter of prioritizing among several factors, such as the time needed to address and correct each item, the manpower required, cost versus budget, insurance, politics, etc.
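The likelihood-and-prioritization exercise above can be sketched in a few lines. The risks and the ratings below are hypothetical examples I've made up for illustration, not anyone's actual risk register:

```python
# A minimal sketch of likelihood-times-impact risk scoring.
# The risks and ratings below are hypothetical examples, not data.

risks = [
    # (description, likelihood 0-1, impact 1-10)
    ("Primary server fails", 0.10, 9),
    ("Key employee unavailable", 0.30, 5),
    ("Office inaccessible", 0.02, 10),
    ("Laptop stolen with client data", 0.05, 8),
]

# Rank by expected impact (likelihood x impact), highest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for description, likelihood, impact in ranked:
    print(f"{description}: score = {likelihood * impact:.2f}")
```

Note how the ranking surfaces a counterintuitive result: the mundane, likely event can outrank the catastrophic, rare one - which is exactly the kind of conversation question one is supposed to start.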
I wish we could go with something simpler: the chance of the bread landing buttered-side down is directly proportional to the cost of the carpet...
Last night, 60 Minutes broadcast an excellent, in-depth analysis of the Stuxnet Worm and how it was used to infiltrate and damage the Iranian nuclear program. Let's put politics aside for a moment (as I always try to do on this blog). Anyone who wants (or needs) to understand how malicious code may be used to wreak havoc upon a thought-to-be-secure system should watch this video.
Particularly, pay close attention to how the worm was introduced into the facility's computers. I guarantee, it'll be the best 15 minutes you can invest before you sit down and formulate your security plan.
Yeah, I know. The summit ended Saturday at noon. It's been a busy week for me, but better late than never. I had to skip the Thursday sessions, but arrived early Friday morning. I was backing up another one of my LPMT colleagues in the tech lab, so between his presentations and mine, I didn't get to attend anyone else's sessions, which was a shame, because there were some good ones. I did catch the bulk of Stephen Fairley's morning keynote on marketing and SEO. I can only say this: the man is right on about what he was saying. It was similar to the advice I received from my web guru, Clint Brauer. Bottom line: if you're going to make a serious attempt at creating an online presence, you need to understand how your information will propagate across the web before you develop web sites, create accounts, etc.
I didn't know what to expect for my labs on disaster planning, but for both sessions (I did the identical presentation back-to-back) I had full houses. The attendees asked a lot of good questions - which is the first indication they're not bored - and although we had some technical difficulties, I was able to illustrate how, in some cases, a few minutes is all it takes to create a basic backup strategy.
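To back up that "few minutes" claim from the lab: here's a minimal sketch of a basic backup routine - zip a working folder into a date-stamped archive. The folder names are placeholders; substitute your own paths (ideally pointing the destination at an external drive or synced folder):

```python
# Minimal backup sketch: zip a folder into a date-stamped archive.
import shutil
from datetime import date
from pathlib import Path

def backup(source: Path, dest: Path) -> Path:
    """Zip the source folder into dest, stamped with today's date."""
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"{source.name}-{date.today().isoformat()}"
    # shutil.make_archive appends ".zip" and returns the full path.
    return Path(shutil.make_archive(str(archive), "zip", str(source)))
```

Usage would be something like `backup(Path("client_files"), Path("E:/backups"))` - a scheduled task running that once a day is already a basic backup strategy.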
Day three, Saturday, I took in the morning keynote on "Multitasking Gone Mad", or, how the more we multitask, the less we accomplish. Now, this was Irwin Karp presenting - who preceded me on the LPMT committee - and I'll tell you, doing one thing at a time is something to strive for, but awfully hard to accomplish.
The second session should really make the eDiscovery people excited. It covered hearsay (civil, for the most part), but guess what the starring attraction of most of the examples was? Electronic evidence! For example, the presenter showed a slide from a traffic camera of a car colliding with a truck at an intersection. Another was a photo of a simple bar code (not a QR code, like the one you see on my right sidebar). In both instances, the question was, is this hearsay? As usual, the answer was, it depends on your jurisdiction.
The third session was one that eDiscovery professionals most likely wouldn't be attending. It covered the activity up to and including the arrest of a client. As you know, I also handle criminal cases, so again, this was a good refresher for me.
So, basically a quick in-and-out, and barring any changes to the schedule, my next presentation will be at Calbar's annual meeting in September.
Remember this post from precisely three months ago? Well, I'm here to tell you; lightning does strike twice - and I mean exactly!
I'm out of town - in the same place I was three months ago - and once again, my Blackberry was working fine this morning...then it wasn't. It was virtually the identical problem to last time (frozen solid), except for two glaring differences: 1) I haven't made any modifications to the device in a while, so there wasn't any clue as to why this happened, and 2) (this is critical) I could get to my password screen and unlock the device. I would also note that I have virus software, and upon reboot I was able to run a sweep before the device froze again - no sign of any contamination.
So, I went over to the same retail outlet, where some of the same people tried to do the same thing (a software repair push). Fail! I basically told the techs (same as last time) "I don't care if you have to wipe it out, I have no problem restoring from backup." (Yes, I have a recent backup, just like last time). I also told them, "Whether this works or not, I have to walk out of here with a working device."
But - just like last time - no love. They couldn't wipe the device, either. Now, here's where it gets ugly. Last time they had a spare Tour in stock - this time, they didn't. So, they offered to have a new one shipped to me via overnight courier. Normally this would be completely reasonable. Unfortunately, this happened today, and on this particular day, the device must work. I can't forward my cell number elsewhere because I'm out of town and on the go, and I need to be reachable (is that even a word?)
This is where the password-protection comes in. With a Blackberry (I'm not familiar with how other PDAs handle this), when password-protection is enabled, a companion security setting automatically enables a 'doomsday' scenario - and you can't turn it off (unless you disable password-protection altogether). That's right: beyond simply refusing to unlock the device, it lets you select the number of incorrect passwords you'll allow (from 3 to 10), and if that threshold is reached, the device wipes itself. Even the techs at the store didn't know this. So, as a last resort, I suggested that since the only thing that did work was the password screen, we try repeatedly entering an incorrect password to trigger doomsday. Even though the device was frozen otherwise, I hoped that enough of the O/S was running in the background that it might work.
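The doomsday logic described above boils down to a simple failure counter. Here's a toy model of it - the 3-to-10 threshold range matches what I described, but everything else is an illustrative assumption, not BlackBerry's actual implementation:

```python
class Device:
    """Toy model of the wipe-after-N-failures behavior. The 3-10
    threshold range matches the post; the rest is illustrative."""

    def __init__(self, password: str, threshold: int = 10):
        assert 3 <= threshold <= 10
        self.password = password
        self.threshold = threshold
        self.failures = 0
        self.wiped = False

    def unlock(self, attempt: str) -> str:
        if self.wiped:
            return "wiped"
        if attempt == self.password:
            self.failures = 0
            return "unlocked"
        self.failures += 1
        if self.failures >= self.threshold:
            self.wiped = True  # the 'doomsday' scenario
            return "wiped"
        return "locked"
```

Which is exactly why my workaround made sense: feed it wrong passwords until the counter hits the threshold, and the wipe fires even when everything else is frozen.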
Most of you know I tend to be vague about my devices, but most of you also have long since figured out my PDA is a Blackberry. The reason I mention it this time is, I'm afraid I'm worn out with them. Just like my clients, I cannot afford to have a primary device crashing for no reason. I lost more than half a day resolving this in the short-term, but for the long-term, I'm switching to a Droid.
"A ‘disaster' encompasses a lot more than you might think. The ‘physical' office is covered; what about the ‘virtual' office? If you suffer a catastrophic failure on Monday, can you be back in business Tuesday morning? Are there ethical issues? It's 2 am. Do you know where your data is?"
Hey, that's what I get for writing synopses at the last minute, late at night. How many times do you think a tech-weenie like myself has used the line, "It's 2 am. Do you know where your data is?" (Answer: Too many times!!!).
Folks, I'm not going to bag on Amazon.com too much for their Elastic Compute Cloud (EC2) failure; I'm sure they're getting enough flak from their customers. However, this is why I dislike absolute statements when we're dealing with this type of technology. Technically, they're right. You don't have to worry about the cloud. You do have to worry about your cloud.
The cloud may have a backup plan for you. Do you have a backup plan for your cloud?
Risk. What is it, exactly? Here are a few good definitions:
"The possibility of suffering harm or loss; danger."
"Product of the severity (consequence) and the likelihood (probability) of a hazardous event or phenomenon."
"The quantifiable likelihood of loss or less-than-expected returns."
"The amount of statistical gamble that someone (usually management) is willing to take against a loss. That loss can, for example, be in profits, reputation, market share, or franchise."
Ok...so, knowing that all of this is on the line, why do I keep hearing words like, "Underestimated", "Miscalculated", "Unexpected" and "Unanticipated" every time something goes wrong?
The latest is the crack (or should I say, sun roof) that appeared in the fuselage of a Boeing 737. As either lawyers or techies, our miscalculations are bad enough, but when these people miscalculate (as in our previous study of Japan's nuclear mishap), other people die!
As a human being, I have to at least give Boeing credit for stepping up to the plate and publicly acknowledging their mistakes. As an attorney? Well...that's another matter, entirely.
A company I worked with many years ago hadn't implemented any disaster planning. When their catastrophic event occurred (prior to my arrival), they were essentially out of business for close to three weeks while the systems were rebuilt. In another incident, one of my direct reports was fooling around with an Exchange server and accidentally deleted one of the accounts. Too bad it happened to belong to the CEO...
Yep, disregarding risk may result in a death...but in our disciplines, it's more likely to be one of us!