Sunbelt W2Knews™ Electronic Newsletter
The secret of those "who always seem to know" - Over 500,000 Readers!
Thu, Nov 15, 2001 (Vol. 6, #88 - Issue #323)
Don't Flunk YOUR Security Review
  This issue of W2Knews™ contains:
    1. Disaster Recovery And Security
    2. Protect Your Company Crown Jewels
    3. Microsoft Gets .Net Server Ready
    4. Don't Flunk YOUR Security Review
    5. W2Knews 'FAVE' LINKS
      • This Week's Links We Like. Tips, Hints And Fun Stuff
      • Exchange 2000 On Site
  SPONSOR: Double-Take
Double-Take provides real-time (and open file) data replication. You
can use it for High Availability, Disaster Recovery, or both. Reducing
downtime for your Windows NT and W2K networks is one of your main jobs,
and Double-Take is the industry-leading product that will help you do
just that. It is not a matter of "if" disaster strikes: fires, floods
and other mayhem always happen when you least expect them. Check out the
specs and the eval here:
Visit Double-Take for more information.

Disaster Recovery And Security

Looking at the industry, these are the two areas where IT is investing. So, this issue is dedicated to solutions in these categories. Building in sufficient redundancy to make sure your data is safe is an area where you cannot afford to skimp. The rule is: Failure To Plan = Planning To Fail. Keep that old wisdom in mind and spend sufficient time to first protect your networks, and then make sure downtime does not kill your company. It's the best job security you have.

Warm regards,

Stu Sjouwerman (email me with feedback: [email protected])

  SPONSOR: NovaStor's Online Backup
Thinking about Off-Site Backup lately?
NovaStor's new Online Backup Service is the perfect answer for
protecting your business data from major disasters and even the
more common day-to-day human errors. Our Online Backup Service
is simple for end users and packed with powerful features for system
administrators: pre-configurable settings, pre/post backup tasks, a
programmer's API and much more. Check out this real Online Backup
solution today:
Visit NovaStor's Online Backup for more information.

Protect Your Company Crown Jewels

Since September 11 it is clearer than ever that you need to protect the "Crown Jewels" of your company and guarantee your users access to that data. The cost of this is relatively unimportant; you cannot afford NOT to do it. The tool you choose to prevent server downtime has to be the most reliable and efficient for the job. This can be looked at from two perspectives: High Availability (HA) on one side and Disaster Recovery (DR) on the other. The difference? Simple: DR data usually needs to travel further. The distance your data needs to travel prompts a software 'architectural' choice: do we go synchronous or asynchronous?

The Technical Difference

Getting data across a distance has always had its challenges. Things like data latency, data integrity, network reliability, and the actual physical distance are all important factors. The synchronous approach has been with us successfully for a long time. First, all the data gets mirrored once from the source to the target (secondary) server. Next, all changes to the source disk get written to both these servers at the same time, and the I/O write acknowledgement is not returned to the source system until both writes are complete. This adds to latency but is very fail-safe. High-end examples of this kind of technology are EMC's Symmetrix Remote Data Facility (SRDF) and IBM's Peer-to-Peer Remote Copy engine (PPRC). Both of these transmit your data across fiber optic links (usually block-level) to somewhat remote target servers, mostly on the same campus. A low-cost alternative for NT/W2K is Legato's Co-Standby Server.

Asynchronous mode means the source server does not wait for the target to send an I/O acknowledgement back. All disk writes are pushed over to the target (secondary) server and expected to make it through. The big advantage is of course speed, but a major interruption on the network could prevent a few writes from completing on the target. Different tools solve that problem in their own ways.
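To make the ack-timing difference concrete, here is a toy Python sketch. The class, the in-process "network hop", and the block writes are all invented for illustration; real products work at the block/driver level and none of this reflects any vendor's actual implementation.

```python
import queue
import threading

class ReplicatedDisk:
    """Toy model of primary/secondary replication (illustration only)."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.primary = {}        # block number -> data on the source
        self.secondary = {}      # the remote (target) copy
        self._async_q = queue.Queue()
        if not synchronous:
            # A background thread drains the queue, like an async replicator.
            threading.Thread(target=self._drain, daemon=True).start()

    def _send_to_target(self, block, data):
        self.secondary[block] = data   # stands in for the network hop

    def _drain(self):
        while True:
            block, data = self._async_q.get()
            self._send_to_target(block, data)
            self._async_q.task_done()

    def write(self, block, data):
        self.primary[block] = data
        if self.synchronous:
            # Sync mode: the I/O is not acknowledged until the target
            # write has completed too -- fail-safe, but adds latency.
            self._send_to_target(block, data)
        else:
            # Async mode: queue the change and acknowledge immediately.
            self._async_q.put((block, data))
        return "ack"

    def flush(self):
        """Wait for any queued async writes to reach the target."""
        if not self.synchronous:
            self._async_q.join()
```

In sync mode the ack only comes back after both copies are written; in async mode the ack is immediate and the `flush` call shows what "waiting for the target to catch up" would mean.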

If Disaster Recovery is your main goal, asynch is the way to go. An example would be 40 people creating CAD/CAM designs all day long. At 4:30pm your primary server dies due to a block-wide power outage of an hour. Without real-time replication to a remote office you would have lost 40 man-days. These losses are significant. The cost of a DR solution is usually earned back by preventing just one of these incidents, but it can still be a bit of a challenge to justify.

Some of the high-end applications require identical primary and secondary storage arrays, and EMC's Symmetrix solutions run an easy $600,000 to build a primary SAN. There are some lower-priced alternatives such as Datacore's SANsymphony, but if you add replication the price tag is still very steep. Current low-cost solutions based on NT/W2K servers are Legato's Octopus and NSI Software's Double-Take.

Even within these tools there are differences in how the asynchronous connection is handled. In the Double-Take scenario, data gets written only once on the source and once on the target. A filter driver looks at the data that gets written to disk, and copies the 'deltas' straight through the LAN to the target server.

Octopus does it as follows: a replicated file operation gets queued in a shared memory segment. This shared memory segment is configurable in size (up to about 1 Gig, depending on available memory). The operation is then forwarded to the target system using TCP/IP and then applied directly (it is NOT written to a log file on the target either). It can be written to a log file on the source if the shared memory segment has filled. In that case it overflows to disk. No data is ever lost unless the disk fills. On the target, the administrator can specify "Pause Updates", in which case the file operations are queued to a file on the target. When Pause Updates is turned off, all saved updates will be applied. The bottom line is that under normal operations, file data never incurs additional disk activity (either writing OR reading) on either the source or target.
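A rough Python sketch of that overflow behaviour follows. The queue size, the spill-file handling, and the operation format are illustrative assumptions, not Octopus internals; the point is just that the normal path stays in memory and only the overflow touches disk.

```python
import collections
import pickle
import tempfile

class SpillQueue:
    """Bounded in-memory operation queue that overflows to a disk log,
    loosely modelled on the behaviour described above (illustration only)."""

    def __init__(self, max_in_memory=4):
        self.max_in_memory = max_in_memory  # stands in for the shared-memory segment size
        self.memory = collections.deque()
        self.spill_file = tempfile.NamedTemporaryFile(delete=False)
        self.spilled = 0

    def enqueue(self, op):
        if len(self.memory) < self.max_in_memory:
            self.memory.append(op)          # normal path: no extra disk I/O
        else:
            # Segment full: overflow the operation to a log file on the source.
            pickle.dump(op, self.spill_file)
            self.spilled += 1

    def drain(self):
        """Yield all queued operations in order: memory first (they were
        enqueued earlier), then anything that overflowed to the spill log."""
        while self.memory:
            yield self.memory.popleft()
        self.spill_file.flush()
        with open(self.spill_file.name, "rb") as f:
            while True:
                try:
                    yield pickle.load(f)
                except EOFError:
                    break
```

Under normal load nothing ever hits the spill file, which mirrors the "no additional disk activity" claim; only a backlog larger than the segment forces disk writes.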

Asynch Replication can occur in the following configurations:

  • One-to-One, Active/Standby (the target does nothing)
  • One-to-One, Active/Active (the target is also a source)
  • Many-to-One
  • One-to-Many
  • Chained (A to B to C)
  • Single Machine (replicate data c:\ to d:\)
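These topologies are easy to picture as sets of source/target replication pairs. In this hypothetical sketch the server and volume names are invented; no product configures itself this way.

```python
# Each topology is just a set of (source_volume, target_volume) replication
# pairs; all server and volume names below are made up for illustration.
TOPOLOGIES = {
    "one-to-one active/standby": [("nyc:/data", "dr-site:/data")],
    "one-to-one active/active":  [("nyc:/data", "london:/nyc-copy"),
                                  ("london:/data", "nyc:/london-copy")],
    "many-to-one":               [("branch1:/data", "hq:/branch1"),
                                  ("branch2:/data", "hq:/branch2")],
    "one-to-many":               [("hq:/data", "dr1:/data"),
                                  ("hq:/data", "dr2:/data")],
    "chained":                   [("a:/data", "b:/data"),
                                  ("b:/data", "c:/data")],
    "single machine":            [("server:c:/data", "server:d:/data")],
}

def targets_for(source, pairs):
    """All replication targets fed directly by a given source volume."""
    return [dst for src, dst in pairs if src == source]
```

For instance, `targets_for("hq:/data", TOPOLOGIES["one-to-many"])` lists both DR targets, while in the chained case each hop feeds exactly one downstream server.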
Here is where you can find the best-of-breed HA/DR solutions:

Microsoft Gets .Net Server Ready

WinXP's hype has pretty much taken up all the bandwidth, but MS is working diligently on the release of its next big OS, WinXP's big brother for servers.

This month, MS will release the third beta version of .Net Server, the next flavor after W2K. (Just when everyone is finally migrating to W2K) BillG announced the release at Comdex last Sunday. MS spent its promo budget on WinXP, so not a lot of marketing has actually seen the light of day for Win.Net yet. We should see the actual release in the first 6 months of 2002, so you can get your hands on a few copies and throw it in the test bed.

Obviously it's a new version with some additions, but this is more a branding change than a fundamental architectural change. So, since WinXP is really NT V5.1, let's call Win.Net NT V5.2 and we're close to reality. The new stuff is technology to build and distribute XML Web services. It will also have the .Net Framework and the MS Passport authentication service built in, enabling easy adoption as Microsoft unveils more .Net technology.

Another nice thing is that it will ship in "Paranoid" mode. You will have to actively enable a bunch of stuff if you really want it to be "on". That will make it more secure out of the box.


Don't Flunk YOUR Security Review

Now this one is interesting. It's getting ugly from here on down. A House panel last week gave two-thirds of all federal agencies a failing grade for their efforts to secure information systems. And this was a worse showing than last year! Of course, the grades came in even lower than last year's partly because we are all far more aware of security vulnerabilities in general.

Rep. Stephen Horn (R-Calif.), who has graded agencies on several information technology management topics over the years, gave the government an overall grade of F for its effort to secure IT systems, with 16 of 24 agencies surveyed receiving the failing grade. Only one agency received a grade higher than a C-plus. Here is the list:

New set of security grades from Horn - (Last year's scores in parentheses)

Agriculture           (F)   F
USAID                 (C-)  F
Commerce              (C-)  F
Defense               (D+)  F
Education             (C)   F
Energy                (Inc) F
HHS                   (F)   F
Interior              (F)   F
Justice               (F)   F
Labor                 (F)   F
Nuclear Regulatory Commission (Inc) F
OPM                   (F)   F
SBA                   (F)   F
Transportation        (Inc) F
Treasury              (D)   F
VA                    (D)   F
NSF                   (B-)  B+
Social Security       (B)   C+
NASA                  (D-)  C-
EPA                   (D-)  D+
State                 (C)   D+
FEMA                  (Inc) D
GSA                   (D-)  D
HUD                   (C-)  D
Government wide grade (D-)  F

The estimate is that agencies will spend at least $2.7 billion on security in fiscal 2002, and they must learn to spend it more wisely, Forman said. "We don't believe that simply adding more money will solve the problem. It is important to impress upon them that true improvements in security performance come not from external oversight but from within," Forman commented. Source: "InfoSec News" (c4i.org)

So, to prevent flunking your own security reviews, here are two solutions: get some outside help from Sunbelt Consulting, and/or start scanning your own networks with Retina or STAT and make sure there are no vulnerabilities left. Two links below:

Security Consulting:

Advanced Security Vulnerability Scanner:


This Week's Links We Like. Tips, Hints And Fun Stuff

  • We tracked down the largest Windows-based high-performance cluster complex in the world. It's here:
  • Tired of your joystick or "steering-wheel-and-gear-shift" input devices for PC-based car racing? Try This!
  • Intel presented a showcase Future PC with lotsa cool, advanced stuff in it:

Exchange 2000 On Site

Exchange 2000 Server On Site is a complete reference to planning, deploying, configuring, and troubleshooting Exchange 2000 in any size organization. The book includes step-by-step instructions for important configurations. It focuses on SMTP and helps admins understand how it works in Exchange. The book is helpful for administrators, IT managers, and consultants who are considering implementation, and it shows how to migrate from Exchange 5.x to Exchange 2000. It has detailed information and illustrations of how Exchange 2000 works and explains the relationship between Windows 2000 and Exchange 2000.