Archive for the ‘Storage’ Category

Dropbox – risks and remediation

Dropbox is a cloud service that presents storage as a local computer drive. Michael Galligan introduced me to the service about a year ago, when he redid the SimWitty branding. You install the Dropbox app, a folder appears, you copy files into it, and they synchronize to everyone else with access to those Dropbox folders.

There are some real risks with transferring files using someone else’s system, of course. There is the chance of local attacks on your Dropbox (see: Dropbox authentication: insecure by design). More likely, there is the chance of a security incident on Dropbox’s systems, allowing a malicious insider or external attacker to gain access to the documents. A big collection of documents presents an attractive target.

What to do? Dropbox released some guidance this week. Using the tried-and-true TrueCrypt software, you can encrypt your Dropbox folder. This limits access to those who hold the decryption key. It is a good option for those who want the ease of the cloud with some assurance about the safety of their data.
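
The post’s remedy is TrueCrypt, but the underlying idea is worth sketching: encrypt on your own machine so only ciphertext ever lands in the synced folder. Below is a minimal illustration of that principle using Python’s cryptography package; the library choice and file paths are my substitutions for illustration, not the post’s actual tool.

    # Illustration only: the post uses TrueCrypt; this shows the same
    # principle (client-side encryption) with Python's "cryptography"
    # package. Install with: pip install cryptography
    from cryptography.fernet import Fernet

    # Generate once and keep OUTSIDE the Dropbox folder. Anyone holding
    # this key can decrypt; lose it and the data is gone.
    key = Fernet.generate_key()
    f = Fernet(key)

    # Hypothetical file paths.
    with open("report.xlsx", "rb") as src:
        ciphertext = f.encrypt(src.read())

    with open("/home/user/Dropbox/report.xlsx.enc", "wb") as dst:
        dst.write(ciphertext)

    # Reading it back later: plaintext = f.decrypt(ciphertext)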

Innovating in storage – apps and clouds

“Driving innovation through information infrastructure” was the theme of SNW Spring 2011. I spent a good portion of my time there looking for innovation.

I will tell you what did not seem innovative to me. Boot from SAN? No, been doing that for more than a decade. Thin provisioning? Automated tiering? Replication? Nope. Been there, done that, for more than five years. Faster disks? Faster SSD? Faster FC and iSCSI? Incremental improvements to be sure, but not radically innovative.

These advances are all within the storage stack. Moving up the stack into applications, and down the stack into cheap cloud storage, that is innovative.

Today’s primary storage is great at working with blocks, but it is largely ignorant of what the operating systems are doing with that block-level storage. This reminds me of old-school stateful firewalls: excellent at TCP/IP, but largely ignorant of what the applications were actually doing with the packets. Just as firewall innovation over the last five years was driven by application awareness, storage innovation over the next five will be all about the application.

At the same time, we need to keep lowering costs. Another realm of innovation is cloud storage. (Read: hosted, off-premise, multi-tenant storage made available via XML/HTTP calls.) Cloud storage from Amazon, Google, and Microsoft costs a fraction of what enterprise HDDs cost from Samsung, Seagate, and Western Digital. Innovation will come from balancing cost and performance by tiering across SSD, HDD, and the cloud.
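
As a rough sketch of what “storage made available via XML/HTTP calls” looks like from the consumer side, here is a minimal example against Amazon S3 using the boto3 SDK; the library choice, bucket, and file names are mine, for illustration only.

    # A minimal sketch of cloud storage as an HTTP-backed tier, using
    # Amazon S3 via the boto3 SDK. Bucket and file names are made up.
    import boto3

    s3 = boto3.client("s3")  # credentials come from the environment

    # Push a cold file out to the cheap cloud tier...
    s3.upload_file("archive/q1-reports.tar", "example-bucket",
                   "cold/q1-reports.tar")

    # ...and pull it back only when someone actually asks for it.
    s3.download_file("example-bucket", "cold/q1-reports.tar",
                     "restore/q1-reports.tar")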

What will an innovative information infrastructure look like?

Here is my take on a 2014 SAN: Fast access to block storage on-premise over maturing protocols (FC, iSCSI, FCoE). Self-optimizing for IO cost or IO performance thru automatic tiering. Optimizing for the application thru application awareness (SharePoint, Exchange, SQL, Oracle, et cetera). Enabling new application-specific features. All back-ended onto the cloud with deduplication, compression, and WAN optimization.
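
Deduplication is what makes that cloud back-end affordable: identical blocks are stored once and referenced many times. Here is a toy sketch of the general technique, content-addressed block storage; it is not any vendor’s implementation.

    # Toy block-level deduplication: store each unique 4 KB block once,
    # keyed by its SHA-256 digest. A sketch of the general technique,
    # not any vendor's implementation.
    import hashlib

    BLOCK_SIZE = 4096
    store = {}  # digest -> block bytes (stand-in for the cloud back-end)

    def write_volume(data: bytes) -> list:
        """Split data into blocks, store unique blocks, return the recipe."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # only new blocks are stored
            recipe.append(digest)
        return recipe

    def read_volume(recipe: list) -> bytes:
        """Reassemble the volume from its block recipe."""
        return b"".join(store[d] for d in recipe)

    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
    recipe = write_volume(data)
    assert read_volume(recipe) == data
    print(len(recipe), "logical blocks,", len(store), "stored")  # 4 and 2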

That is what the new SAN will look like. And I want one.

Miss the basics, miss the boat – Cord Blood Registry

“The Cord Blood Registry earlier this week began notifying some 300,000 registrants that their personal data might be at risk. (…) a report on the Office of Inadequate Security website indicates that the breach was the result of the theft of data backup tapes from an employee’s car.”

— darkreading.com

The breach is a good reminder of the basics. If it moves, encrypt it. If it rests, encrypt it. If you are moving tapes, have basic media controls in place to keep unsecured tapes from sitting in someone’s car. Miss the basics, miss the boat.

Baseline Article on Business Continuity Planning

Baseline has an article on best practices in disaster recovery and business continuity planning. “… disaster recovery priorities depend on the nature of the system. ‘We take snapshots ranging from every hour to every 15 minutes, depending on our systems,’ says Wolfgang Goerlich, network operations and security manager for the Birmingham, Mich.-based investment banking firm. ‘Our top-tier systems, such as trading, can have an issue if we lose even 15 minutes. Lower-tier systems, such as research, just generate reports once a day, so if they lose data for [a few] hours, it isn’t as big of an issue. With our lowest-tier systems, our DR plan is to go out and buy boxes and bring them up in a couple of weeks.'”

“‘The key thing for us was a very short recovery-time objective,’ says Goerlich. The firm uses Compellent’s virtual storage arrays, with the DR baked in. He says it takes just one click to activate DR and boot up the systems on a new box.”
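
The tiering idea in the quote is easy to make concrete: the recovery objective drives the snapshot frequency. The sketch below is hypothetical; the tier names and intervals echo the article, but the policy table is not the firm’s actual configuration.

    # Hypothetical sketch of the tiering idea from the quote: recovery
    # objectives drive snapshot frequency. Not the firm's actual config.
    from datetime import timedelta

    SNAPSHOT_POLICY = {
        "trading":  timedelta(minutes=15),  # top tier: 15 minutes hurts
        "research": timedelta(hours=24),    # reports once a day
    }

    def snapshot_due(system: str, age_of_last: timedelta) -> bool:
        """True once the system's snapshot interval has elapsed."""
        return age_of_last >= SNAPSHOT_POLICY[system]

    print(snapshot_due("trading", timedelta(minutes=20)))  # True
    print(snapshot_due("research", timedelta(hours=3)))    # False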

WinBoot/I — Check it Out

My top priority is delivering IT services in a flexible and agile fashion. This means shifting services from one site to another, from one computer to another, or even from one computer to a virtual machine. WinBoot/I plays an important role in achieving this vision.

The services’ performance and business value dictate the hardware resources we commit. WinBoot/I then enables us to move servers seamlessly between iSCSI and FC, or between lower- and higher-capacity server hardware. WinBoot/I also enables seamless moves between physical hardware and virtual machines. At Munder, we put this flexibility to use in our disaster recovery planning for smooth transitions between production and recovery equipment.

WinBoot/I, in conjunction with our SAN, maximizes the value of my hardware investments.

Out and About: Storage Networking World

I will be out at the Storage Networking World Conference on April 7 thru 10. On Tuesday, I am holding a session in the Business Continuity/Data Protection track. The topic is Simplifying Business Continuity Planning using OS and Storage Virtualization. Hope to see you there.

Abstract: This session presents the evolution of disaster recovery. As an institution responsible for billions in assets, Munder Capital Management requires information systems that are always available. Munder has been thru several BCP cycles, moving from tape to standby systems and from cold to hot sites. This session delves into the lessons learned from these DR strategies and presents the latest: using OS and storage virtualization to completely automate recovery.

Tiered Storage

I have had the luck to work on a number of data storage projects. I have designed, tested, and re-architected SAN and NAS deployments. (That is, Storage Area Networks and Network Attached Storage.) RAID is always a component of these.

At my current position, we have a Compellent SAN. The Compellent offers tiered virtual storage.

The way this works is that there are actual RAID devices at various levels (RAID 1, RAID 5, RAID 10). The volumes, or virtual hard drives, are assigned a RAID level and then carved out of the physical RAID devices. You can tier a volume so that frequently accessed data and rarely accessed data sit at different RAID levels.

This allows different blocks on a server’s volume to be on RAID 5 or RAID 10. Why would you want to do this? RAID 10 is fast but consumes twice as much raw disk space. So you put the speed-sensitive storage blocks on RAID 10 and the rest on RAID 5, maximizing your disk investment.
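
To make that trade-off concrete, here is a toy model of the placement decision. The threshold and access counts are invented, and this is emphatically not Compellent’s actual algorithm.

    # Toy model of tiered block placement: hot blocks go to RAID 10
    # (fast, but 2x raw space), cold blocks to RAID 5 (space efficient).
    # The threshold and access counts are invented.
    HOT_THRESHOLD = 100  # accesses per day, hypothetical cutoff

    def place_block(accesses_per_day: int) -> str:
        """Pick a RAID tier for one virtual-volume block."""
        return "RAID 10" if accesses_per_day >= HOT_THRESHOLD else "RAID 5"

    volume = {"db-index": 5000, "db-table": 250, "old-logs": 2}
    for block, accesses in volume.items():
        print(block, "->", place_block(accesses))
    # db-index -> RAID 10, db-table -> RAID 10, old-logs -> RAID 5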

The Compellent is very cool technology. It came out in 2004, and the idea has since spread to other vendors. Still, they were the first and remain our preferred vendor.