
NetConnect Blog - Your Resource For IT Tips, Tricks and News

Product Update: Veeam Backup and Replication Version 9

Posted by Mike Pagan on Mar 7, 2016 4:00:00 PM

Veeam Backup and Replication is a product that probably doesn't need an introduction, but in case you are not familiar with it, Veeam is a next-generation backup product built for virtual machines. Most of NetWork Center's customers have moved from legacy backup products to Veeam, so version 9 is a milestone release in Veeam's product portfolio.

I have been using Veeam Backup and Replication as NetWork Center’s primary backup product for about two years now, so when version 9 was released I was ready to test the update on our servers. (At NetWork Center we "eat our own dog food," which means we use the products we recommend.)

A couple of days after the update was available, I downloaded it and started the upgrade. I walked through the three phases of the update (Enterprise Console, Backup Console, and Backup and Replication) and it was quick and easy. After a reboot of the server, that night’s backups were ready to run, and all backup jobs have completed successfully since the update (which is exactly how an upgrade is supposed to work).

For those of you who wait until the first update to a software release before considering an upgrade, Anton Gostev, Vice President of Product Management and author of the Veeam Community Forums Digest, announced on 2/14/2016 that "(Veeam is) now planning to release Update 1 around the end of this month." Based on this information, you should be clear to upgrade your Veeam software after the first of March.

If you would like a full walk-through of the current upgrade process, check out Vladan Seget's How-to Upgrade to Veeam Backup and Replication v9. His blog post has screenshots and a video if you're curious about the details of the process.

Best New Features:

So there's a new update to your backup software; why should you care? Good question. It's easy to get lost in the never-ending slog of product updates, but here are some of the new features that I think make this one worthwhile:

  • Standalone Backup Console: This doesn't add new backup functionality to the product, but it does allow admins to install the console on their laptop or desktop and connect back to the Veeam server to check the status of backup jobs and make changes. Sure, you can RDP to the server to do the same thing. But what if you wanted a junior admin to be able to check the backup jobs without giving them access to the server itself? The new standalone console and role-based access controls now allow that to happen.

  • Backup Copy Job Improvements: There have been a number of enhancements to how v9 handles backup copy jobs, including parallel processing of jobs, graceful termination of running jobs, and general performance improvements. All of these should make the off-site copy of your backup data faster and more reliable.

  • New Veeam Explorers:
    • Backup and recovery of Group Policy Objects and Active Directory-integrated DNS records

    • Exchange 2016 support and additional eDiscovery features

    • More granular SQL recovery, including table-level and object-level restores

  • Per-VM Backup File Chains: This new backup repository option creates a separate backup file chain for each VM, which can improve backup performance. For those with deduplicating storage appliances like ExaGrid, this can increase backup performance by up to 10x, according to Veeam.

  • BitLooker: OK, the name is too similar to Microsoft's BitLocker, which may be a poor marketing decision, but the features it adds are useful. BitLooker helps reduce the size of your backup files by excluding:
    • Deleted file blocks
    • Swap and hibernation files
    • User-specified files and folders (I prefer to back everything up, but there are use cases where a folder or two must be excluded from the backup)

  • New, Modern UI: This isn't so much a feature, but it does make the new version feel like a complete and mature product.

There are other new improvements in v9 that I did not mention, so if you're interested in the full list, check out the What's New in Veeam Backup & Replication v9 document from Veeam.

As always, if you have any questions about the upgrade or would like some assistance with it, please let us know; we'd be happy to help.

Contact Us Today!

Topics: Data Backup, Virtual Replication, Veeam

Your Information, Their Cloud

Posted by Tyler Voegele on Sep 18, 2014 3:30:00 PM

By now, you've probably heard a lot about the cloud and how most of our private data will soon be stored there. Be forewarned: 'the cloud' will be used many times in the following article. If you aren't sure exactly what 'the cloud' is yet, let me explain it simply. When we talk about 'the cloud,' we really just mean a collection of servers that store data somewhere outside your physical location. That's it. Nothing fancy floating up there in the sky, other than actual real clouds. The number of people entrusting information to the cloud increases by a fascinating amount each year. Everything we do might soon be stored on servers around the US or even in other parts of the world. Some of our mobile devices already automatically sync our data to cloud services such as Apple's iCloud. Our PCs and documents are now also making the move to cloud services, and why wouldn't they? It is an easy, no-hassle way to store our information safely and securely. Or so we think.

We trust companies providing these cloud solutions completely with our personal and work data, but just how well are they securing our information? You've most likely heard of numerous security breaches at multiple companies; they almost seem like a common occurrence. Data privacy legislation moves at a pace that cannot keep up with the speed of technological progress, and you'll be hard pressed to find universal rules or laws that legally bind cloud companies to uphold standards that protect us. So, what must we accept if we are going to store our data in the cloud?

1. Passwords can be hacked. This isn't anything new; security professionals have been shaking their proverbial fingers at us for a long time. People who want to obtain our information will use dictionary and brute-force attacks to crack our passwords. You will have to come up with a strong password that can beat these attacks but also keeps you sane, so you aren't stuck remembering a 25-character mess. (More on this below.)

2. Data can be captured en route. Fortunately, most cloud services encrypt data while it's traveling to and from their site, making it extremely difficult to read even if someone were to obtain the files in transit. Still, if you are using a cloud service on the web, make sure you see "https" instead of "http" at the front of the URL in your browser's address bar. Secure HTTP, or HTTPS, indicates that the site you are currently using should be sending files...you guessed it, securely.
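If you're the scripting type, here is a quick way to sanity-check a service before sending it anything. This is my own minimal sketch, not something from any provider's documentation, and the URL is just a placeholder: it refuses plain HTTP and lets Python's built-in certificate verification complain if the TLS certificate doesn't check out.

```python
# A minimal sketch (my own): confirm a cloud service URL uses HTTPS and that
# its TLS certificate verifies before sending anything. The URL below is a
# placeholder, not a real provider endpoint.
import ssl
import socket
from urllib.parse import urlparse

def check_https(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        print(f"{url} is not using HTTPS; don't send sensitive files here.")
        return False
    # Opening a verified TLS connection raises ssl.SSLCertVerificationError
    # if the certificate is invalid, expired, or doesn't match the host.
    context = ssl.create_default_context()
    with socket.create_connection((parsed.hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
            print(f"Verified TLS connection to {parsed.hostname} ({tls.version()}).")
    return True

check_https("https://cloud.example.com/upload")
```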

3. Data breaches can happen. The data breach at Target, which resulted in the loss of personal and credit card information for up to 110 million individuals, was a recent theft that took place during the normal processing and storage of data. Attackers can sometimes get access to data, and what we store in the cloud is only as safe as the security practices the company currently has in place.

4. Data loss can also happen. A data breach is the result of a malicious and intrusive action, while data loss may occur simply because disk drives die and the company has no backup or reliable redundancy. Small amounts of data were lost for some Amazon Web Services customers who suffered a "re-mirroring storm" caused by human operator error in April 2011, showing that data loss can occur unintentionally as well as through a malicious attack.

5. Denial of service can stop you from reaching your data. An assault of hundreds of thousands or millions of automated requests has to be detected and screened out before it ties up operations, but attackers have devised increasingly sophisticated and distributed ways of conducting these assaults, making it harder to tell bad traffic from legitimate users. This can leave you without access to your data, and sometimes the provider shuts the service down for an unknown amount of time to fix the problem.

6. There could be malicious insiders. With the Edward Snowden case and NSA revelations in the headlines, malicious insiders might seem like a common threat. If one exists inside a large cloud organization, the hazards are magnified. We must rely on the company to have practices in place to protect us, or encrypt our data ourselves to protect it from theft.


We can break these problems down into three simple questions. Is my data securely stored? Is my data safe from outside intruders and attacks, and also protected from other tenants in the cloud service? Is my data protected from the cloud provider itself, or from government officials trying to collect corporate server data? These are very important questions to ask our providers. The real question is: how can we protect ourselves from what almost seems like an inevitable breach of the personal data we store in the cloud?

1. Read up on where you are storing your information. Every cloud provider has different guidelines and security practices for how it stores your data. You wouldn't want your important or sensitive data stored on a server in someone’s garage, would you? Providers should even state whether or not they comply with government data gathering. Most big companies are cracking down on security and offer many ways to protect you, such as two-factor authentication. I always recommend taking the extra step of enabling two-factor authentication. It may seem like a hassle, but if security is important to you, this step is a must.

2. You need to get serious about passwords. Yes, yes, you've heard it one thousand-trillion-infinity times, but it's still a problem! The reason people lose sensitive and important data is almost always related in some way to weak passwords. Even worse, many people use the same password for multiple accounts, making them even more vulnerable with cloud services. My favorite XKCD comic shows how we've been creating our passwords all wrong. A long passphrase such as "correcthorsebatterystaple" is very easy to remember but very difficult for a computer to guess. Obviously, simplicity is what we are going for (which is why most of us use the same "strong" password for many accounts), so try to correlate your passwords with the service. You want to create a password for the Google Drive cloud storage that holds your accounting documents? What about "storagedocumentsaccountingworkgoogle"? See? Easy as pie.
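To put some rough numbers behind that comic, here is a quick back-of-the-envelope calculation of my own, using the comic's assumptions of a 2,048-word list and roughly 1,000 guesses per second for an online attack:

```python
# A rough sketch (my own numbers, following the XKCD comic's assumptions):
# time to exhaust the search space for a pattern-based password versus a
# passphrase of four randomly chosen words.
SECONDS_PER_DAY = 60 * 60 * 24
GUESSES_PER_SECOND = 1000           # the comic's assumed online attack rate

candidates = {
    # ~28 bits: common word with predictable substitutions, a digit, a symbol
    "Tr0ub4dor&3-style password": 2 ** 28,
    # ~44 bits: four words chosen at random from a 2,048-word list
    "correcthorsebatterystaple-style passphrase": 2048 ** 4,
}

for name, search_space in candidates.items():
    days = search_space / GUESSES_PER_SECOND / SECONDS_PER_DAY
    print(f"{name}: roughly {days:,.0f} days to exhaust the search space")
    # prints roughly 3 days for the first and about 550 years' worth of days
    # for the second
```

The pattern-based password falls in a few days; the four-random-word passphrase holds out for centuries at the same guess rate.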


3. Encrypt your data before sending it to the cloud. Encryption is, so far, the best way we can protect our data. Encrypting our data before we send it to cloud storage is often the safest answer to many of the cases above. That way, if someone were to obtain the data, they would not be able to read its contents.
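As a concrete example, here is a minimal sketch of my own that encrypts a file locally before it ever leaves your machine. It assumes the third-party Python "cryptography" package is installed, and the file names are placeholders:

```python
# A minimal sketch (my own example, assuming the third-party "cryptography"
# package is installed): encrypt a file locally before uploading it, so the
# provider only ever stores ciphertext. File names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key somewhere safe, offline
cipher = Fernet(key)

with open("taxes-2014.xlsx", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("taxes-2014.xlsx.enc", "wb") as f:
    f.write(ciphertext)              # upload this file, not the original

# Later, after downloading the .enc file back from the cloud:
plaintext = cipher.decrypt(ciphertext)
```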

4. Use an encrypted cloud service. This may not always be an option, and there aren't many choices out there yet. Ideally, the cloud provider should handle local encryption and decryption of your files in addition to storing and backing them up. This means the service takes care of both encrypting the files on your computer and storing them safely in its cloud infrastructure. That way, neither intruders nor the service providers and administrators would have access to your data.

The bottom line is that we need to think about where we are storing our data and how comfortable we are with keeping it in sometimes less-than-reputable places. Whether we like it or not, data is slowly migrating to cloud infrastructure in many businesses, but we still have a choice in what we do to protect ourselves and our data.

Are you a candidate for cloud services? Are you currently using cloud services? How safe is your data? Contact NetWork Center, Inc. to talk to one of our engineers about your cloud services.

Contact Us Today!

Topics: NetWork Center Inc., Data Backup, Protection, Cloud computing, Security Technologies, IT Consulting

An Introduction to Zerto Virtual Replication

Posted by Mike Pagan on Aug 13, 2014 2:00:00 PM

One of the more interesting returns I get from attending tech conferences is the opportunity to walk the showroom floor and speak to vendors about their products. It exposes me to new products and services that I can bring back and vet for use with our customers. Zerto Virtual Replication is a product that caught my eye, so I thought I would share what I’ve discovered about it.

In short, Zerto Virtual Replication (ZVR) is software that compresses and replicates virtual machines to a remote site or a VMware vCloud Datacenter, without requiring expensive storage-based replication. It is not a replacement (at least not yet) for backup and recovery software, but an additional piece in a multifaceted business continuity plan.

Zerto Architecture

Virtual machine replication over IP networks isn't necessarily unique in today's virtual ecosystem. However, one feature that differentiates Zerto from the competition is the ability to maintain recovery point objectives (RPOs) ranging from minutes down to less than 10 seconds. This offers near-synchronous replication and protection of your applications and virtual machine data.

These super low recovery point objectives are attainable because Zerto does not use virtual machine snapshots to gather the data to be replicated. The magic sauce for Zerto is the Zerto Replication Appliance (ZRA), which is deployed on every ESXi host and continuously compresses and replicates changes from protected virtual machines to the configured remote site. Because it does not use virtual machine snapshots, Zerto avoids the performance hit often seen when taking or committing a snapshot.
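To make the contrast with snapshot-based replication concrete, here is a toy sketch of the journal idea: capture each write as it happens and ship it continuously, so the recovery point is only as old as the last unshipped write. This is purely my own illustration with invented names, not Zerto's code or design:

```python
# A toy illustration (not Zerto's implementation): continuous, journal-style
# replication forwards each write as it happens instead of periodically
# snapshotting the whole VM. All names here are made up for the sketch.
import time
from collections import deque

class ContinuousReplicator:
    """Tails a stream of block writes and ships them to a remote site."""

    def __init__(self, send_to_remote):
        self.journal = deque()          # writes waiting to be shipped
        self.send_to_remote = send_to_remote

    def on_write(self, block_id, data):
        # Called for every write on the protected VM; no snapshot needed.
        self.journal.append((time.time(), block_id, data))

    def flush(self):
        # Ship queued writes; the RPO is roughly the age of the oldest
        # unshipped write, typically a few seconds in this model.
        while self.journal:
            timestamp, block_id, data = self.journal.popleft()
            self.send_to_remote(block_id, data)

replicator = ContinuousReplicator(send_to_remote=lambda block_id, data: None)
replicator.on_write(42, b"new block contents")
replicator.flush()
```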

Management tab in vCenter GUI

Zerto Virtual Replication also helps with an important but often overlooked component of a disaster recovery plan: actually testing the plan. Disaster recovery testing has largely been ignored because of the amount of time it takes to recover servers before you can even test the applications. It is usually a complicated task that most companies are not comfortable handling on their own, but Zerto makes it simple.

Testing replicated virtual machines with Zerto is a wizard-driven procedure that most in-house IT staff can quickly be trained to do. After you walk through the test failover wizard, the replicated virtual machines power on and register themselves at the disaster recovery site. If an isolated testing network is configured at the disaster recovery site, you can log in to the applications, test them, and ensure they can interact with each other.

Test Failover (in progress)

After you have logged into the virtual machines and are satisfied that they are working correctly, the test failover is stopped. Zerto then automatically cleans up the test environment by powering off the virtual machines and deleting them from the recovery site's vCenter. If all of the tasks are successful, a report is generated that you can save for your next IT audit.

Sample Recovery Report

You can also perform a move operation to relocate your virtual machines from the production site to your disaster recovery site. Typically this is used for moving servers between datacenters or for keeping servers up and running ahead of a pending outage (flood, planned electrical work, etc.). During a move operation you can automatically reverse the protection between sites and swap the protected and recovery sites in Zerto. This reuses the virtual machine data at the production site and allows for quick re-protection of your production data to the remote location.

A new feature in the 3.5 release of Zerto Virtual Replication is Offsite Backups. The Offsite Backup feature creates jobs that use the replicated data at the recovery site to create backups. Because it works from data that has already been replicated, the production servers are not affected during backup jobs. These jobs can be configured to run daily, weekly, or monthly depending on your data protection requirements.

Offsite Backup Architecture

At this point the Offsite Backup feature is in its infancy, and there is room for it to grow as a backup product. I expect that Zerto will add functionality such as file-level recovery and other granular restore features as the product matures.

I have implemented Zerto Virtual Replication internally in a lab and in a small production environment and found it quick and easy to set up and configure. The failover test was simple to walk through and, most importantly, worked as expected. I contacted support to ask a question and they were quick to respond with the information I requested.

I believe that Zerto Virtual Replication is a capable product that has worked well in my short time with it. It is very comparable to VMware's Site Recovery Manager (SRM) from a cost and basic feature standpoint, but it adds the ability to meet super low recovery point objectives, compared to SRM's minimum RPO of 15 minutes with vSphere Replication. If you are evaluating business continuity software, I recommend giving Zerto Virtual Replication a look.

You can find out more information about Zerto at www.zerto.com. For questions about Business Continuity options, or infrastructure recommendations, contact us at NetWork Center, Inc.

Contact Us Today!

Topics: Technology Solutions, NetWork Center Inc., Data Backup

The Silence of Corruption

Posted by Jason Keller on May 16, 2014 4:00:00 PM

Racks of servers, a low drone of fan noise. A breeze of cool air on your face as you walk through the cold aisle. Disks shallowly clicking away like a synchronized ballet, reliably serving data to hungry processors and memory subsystems. Network switches carrying data out to waiting customers to enrich their lives.

Well, that's the dream at least. But this is your datacenter. Your network has terminal cancer (the DNS kind). Your internet connection is fifty megabits too small, and your old groaning disk subsystems have all the grace and poise of a Call of Duty team deathmatch. Sure, there are problems, but accounting won't budge until they feel some pain, some damage. Problem is, by the time they feel it, it'll be too late. As in, too late to save your company. Because a disk just failed in your SAN, and what you didn't know is that it wasn't the only one not feeling well.

It’s what you don’t know that can hurt you, starting with how error detection and correction on disks actually works. First off, all modern storage subsystems utilize error-correcting codes (ECC). You may recognize that term from your server memory, and in the fine print on that very label you'll notice it always says "corrects single-bit errors, detects double-bit errors." That should be your first clue as to how ECC works: a code can detect errors roughly twice the size (in Hamming distance) of what it can correct. So if it can correct 46 bytes in your 512-byte sector hard drive, it can detect up to about 92 bytes of error. Whatever isn't correctable is reported to the controller as uncorrectable, and the disk controller increments the "uncorrectable error" counter in S.M.A.R.T.

Guess what happens to any error larger than that? It isn't detected. It is passed straight up the stack to the controller as good data. Yes, you read that right. Go read it again. We'll wait for your jaw to come off the floor.
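Here is a toy demonstration of my own that shows the principle with a deliberately weak additive checksum; real disk ECC is far stronger, but the lesson that a large enough (or unluckily shaped) error can sail through as "good" data is the same:

```python
# A toy demonstration (my own, not from the article): a weak error-detecting
# code happily passes a big enough corruption as "good" data. The "code"
# here is a simple additive checksum; real disk ECC is much stronger, but
# errors beyond its detection capacity can still slip through unnoticed.
def checksum(block: bytes) -> int:
    return sum(block) % 256

original = bytes([7] * 16)
stored_sum = checksum(original)

# Corrupt two bytes in a way that cancels out in the checksum.
corrupted = bytearray(original)
corrupted[0] += 1
corrupted[1] -= 1

print(checksum(bytes(corrupted)) == stored_sum)   # True: corruption undetected
```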

But surely RAID will catch it, right? Wrong. RAID actually depends on the disk's ECC subsystem to detect errors and report them, so it can pull data from another disk or reconstruct it via parity, depending on your RAID level. Take RAID 5, for instance: does it hit every disk in your array on every single read to recompute parity and make sure nothing is amiss? Negative. RAID does not preserve the integrity of your data. It only addresses availability of your data, nothing more.

So what about the file system? Hate to burst your bubble, but the most ubiquitous file systems in use today (NTFS, HFS+, XFS, UFS, EXT) are woefully underprepared for data corruption and don't have any mechanism to verify that the data they are getting from the storage subsystem is good. Some can check their metadata, but that's about it.

OK, well, what about your backups? Hate to break it to you, but what good are your backups when you've been feeding them corrupt data? Garbage in, garbage out. So we all need to buy expensive, higher-quality disks, right? Thank Google for busting that myth wide open: they discovered essentially identical failure rates among drives across their huge populations.

So, to recap. We can have serious corruption on our disks that is invisible to ECC. RAID doesn't do anything but pass it on, because it relies on the disk's ECC to tell it whether the data is good or bad. The transport, even if the payload arrives intact courtesy of its own ECC, is still carrying corrupt data! The file system is blind to the corruption and passes it right up to the application, which freaks out and in all likelihood crashes immediately. Our backups, which everyone likes to treat as the gold standard, are useless as well because they've been fed bad data too.

This all stems from a huge, industry-wide attitude that data integrity can simply be taken for granted. No one really cares whether you have good data or bad data. They just care about moving it as quickly as possible so they look good on their performance benchmarks.

First off, in the SAN camp they've finally gotten their brains pointed in the right direction and implemented a little thing called T10 PI (SCSI DIF), which expands each 512-byte sector to 520 bytes to hold some additional tags and a 16-bit CRC-16-T10-DIF checksum. Checksums are far more sensitive to disturbances in the force and can reliably detect single-bit errors; they just can't do anything about them except cry like Chicken Little. The field is rigidly defined, including the hash, so that all devices in the path can independently verify it. Nice. But there are some downsides to this implementation.

CRC-16, while reliable, is expensive to calculate in software, and HBAs that don't implement it in hardware pass a significant tax on to the CPU. To top it off, if you actually have something like Oracle's DIX implementation (so that the OS can verify the checksum as well), you'll be doing it twice: once for the HBA and once for the OS.
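For the curious, here is a small example of my own showing what a 16-bit CRC buys you. It uses the CRC-CCITT variant from Python's standard library rather than the exact T10-DIF polynomial, but the behavior it illustrates (detection, not correction) is the point:

```python
# A small sketch (my own): a 16-bit CRC flags a single-bit error. This uses
# the CRC-CCITT variant in Python's standard library, not the exact
# CRC-16-T10-DIF polynomial, but the detection behavior is the same idea.
import binascii

sector = bytes(512)                       # a 512-byte sector of zeros
tag = binascii.crc_hqx(sector, 0)         # checksum stored alongside the data

flipped = bytearray(sector)
flipped[100] ^= 0x01                      # flip a single bit

print(binascii.crc_hqx(bytes(flipped), 0) == tag)   # False: error detected
# Detection is all it gives you; nothing here can repair the flipped bit.
```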

Another problem is that you need special disks, special firmware, special HBAs, special drivers, and kernel support to carry that protection all the way up the stack. Very few vendors actually support it today, and unless you cover a significant chunk of that path (say, up to the HBA), you're still in significant danger.

Oh, and I'm not done throwing dirt on that either. Do you have any good mechanism to pull all the data on the array at an opportune time and compute its checksums to verify that it’s still good? Because if even one sector on a disk is corrupt and you then lose another disk, with only a single disk's margin of safety you are once again in trouble, as now you have no way to rebuild that data. The best way to reliably know that a sector has been recorded successfully to the media is to read it back!

On the personal side, what's happening to your vacation pictures or pictures of your newborn child? Did I mention that SATA doesn't have SCSI DIF? And by the way, when they went to 4K sectors and "enhanced ECC," guess what they did? They "doubled" it. As in, they blew up the sectors to eight times the size while only doubling the size of the per-sector ECC. Did you catch that math? You now have a quarter of the ECC, per byte of data, that you previously had. Have fun with that.

If you're still with me, you're one step closer to becoming a paranoid storage zealot. So, what are we to do to combat this growing epidemic? Use modern file systems designed from the ground up for data integrity; ZFS, for instance. It hashes data for integrity (optionally even using cryptographically strong hashes like SHA-256) and can automatically heal data using known-good replicas, because it knows which data blocks are good and bad by checksumming on every read and write. It can also run periodic scrubs at opportune times that pull every block of data from every disk in the array and ensure it's all still good. There are loads of other hugely useful ZFS features too, but those are beyond the scope of this blog post.
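Here is a drastically simplified sketch of my own (not ZFS source code) of that checksum-on-write, verify-on-read idea:

```python
# A simplified sketch (my own, not ZFS internals): every block is hashed when
# written, every read is checked against the stored hash, and a bad block is
# healed from a replica.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}        # block_id -> data on the "primary" disk
        self.mirror = {}        # block_id -> data on a replica
        self.checksums = {}     # block_id -> SHA-256 digest

    def write(self, block_id, data: bytes):
        self.blocks[block_id] = data
        self.mirror[block_id] = data
        self.checksums[block_id] = hashlib.sha256(data).digest()

    def read(self, block_id) -> bytes:
        data = self.blocks[block_id]
        if hashlib.sha256(data).digest() != self.checksums[block_id]:
            # Silent corruption caught: heal from the known-good replica.
            data = self.mirror[block_id]
            self.blocks[block_id] = data
        return data

store = ChecksummedStore()
store.write(1, b"family photos")
store.blocks[1] = b"family phot0s"        # simulate silent bit rot
print(store.read(1))                      # b'family photos', healed from mirror
```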

It's time to get serious about data integrity. Silent corruption isn't a myth. It happens to real people in the real world. I was personally burned by it three times before I started using ZFS. Now you have some options – it’s time to go use them and save your data, and quite possibly your whole company. Contact NetWork Center, Inc. if you have any questions.

Contact Us Today! 

Topics: Technology Solutions, NetWork Center Inc., Network Security, Data Backup

The DRs of Business Continuity

Posted by Jon Ryan on Jan 3, 2014 4:15:00 PM

In the ever-changing technology of the digital world, company data has become one of the most important parts of a successful business. Not only the retention of data, but also having reliable access to it. Imagine this scenario: your business has multiple locations in the Midwest that access data from a core location. A power grid goes offline due to iced-up lines. How does that affect all locations? Are you prepared for this today, or do you just plan to react to it when it happens? Now consider another scenario: suddenly you are unable to access the data on your server. With the recent growth of ransomware (where your data is held hostage by hackers), are you protected, and how do you continue to do business? Or, in the case of everyday hardware failure, are you backing up your data, and if so, what level of redundancy do you have for accessing it?

It is easy for a business to have a false sense of security when it comes to its data. Having a firewall or anti-virus program is not enough protection from everyday threats. It is absolutely imperative to have a Data Backup & Recovery plan and a Disaster Recovery plan to ensure the highest level of protection; these are two of the DRs of business continuity. Let’s take a look at what Data Recovery and Disaster Recovery really are.

Data Recovery

Probably the most common form of business continuity practiced, Data Backup and Recovery is the most affordable way to protect your business from a catastrophic event. But many people don’t realize that there is more to backing up data than copying it to a network share.

In a typical Data Backup and Recovery solution, the best practice is to write the backups to a local device such as a NAS or external hard drive. These backups give you two levels of protection: RAID protection against a drive failure in the server, and local file recovery from an external source. In this example, if your server were to crash you would still be able to recover files from a second source.
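As a bare-bones illustration (my own, with placeholder paths), this is the kind of copy-and-verify step that turns "I copied it to the NAS" into "I have a second copy I know is good":

```python
# A bare-bones sketch (my own, with placeholder paths): copy a backup file to
# a local NAS share and verify the copy by comparing checksums, so you know
# the second copy is actually usable.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

source = Path(r"D:\Backups\server01-full.bak")        # placeholder paths
target = Path(r"\\nas01\backups\server01-full.bak")

shutil.copy2(source, target)
if sha256(source) != sha256(target):
    raise RuntimeError("Backup copy does not match the original!")
print("Backup copied to the NAS and verified.")
```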

Any type of backup is better than nothing, but to achieve a higher level of business continuity you should have a Disaster Recovery plan as well. What if your building experiences a fire, break-in, or natural disaster? Your onsite backup will not be enough to keep your business operating at a normal level. That’s where Disaster Recovery comes into play.

Disaster Recovery

“It will never happen to us.” “What is really the likelihood of a tornado hitting our building?” “I have a hard time justifying the additional cost of Data and Disaster Recovery.” These are all common responses to the question of Disaster Recovery protection.

Disaster Recovery allows companies to save data to an offsite location for just that reason. It is a best practice to save data locally for fast recovery and offsite for disaster recovery. The traditional way to achieve this was to copy data to tape drives and store a daily backup offsite; cloud technology lets you get rid of the time commitment and security risk of tape. Now that you have a disaster recovery option and a way to store your data locally and offsite, how do you keep your company processing data with no interruptions in the event you are unable to access your data?

Data Replication

In the ice storm scenario above, we discussed multiple locations accessing data from a core business location. If one of the external locations goes down, only that location is affected. But if your core server location goes down, all external locations are unable to access the core server data and applications. The only way to achieve high availability of your data is to replicate it to a second location.

Data and Compute Replication is a technology that allows you to make a copy of your core server data and applications and place it on hardware at a second site. The second site is usually the one with the best network connectivity that is farthest away from the core site. With failover technologies, if the core server goes down, the system automatically fails over to the DR site. This allows all other locations to continue to access data and applications while the core site recovers from the event. This solution can also act as your Disaster Recovery data solution, but you would still want to tie it to a separate local Data Backup solution for faster file recovery in the event of lost data.

Conclusion

Now that we’ve given you some ideas and options to improve your business continuity, where do you go from here? A data backup and recovery assessment might be necessary to review your current solution and determine which of the three DRs you have or need. Contact us to discuss your infrastructure and business continuity plan to see where you stand.

Contact NetWork Center, Inc.

Topics: Technology Solutions, NetWork Center Inc., Data Backup, Disaster Recovery, Cloud backups, Business Continuity

Getting Serious About IT Security

Posted by Tyler Voegele on Oct 25, 2013 5:15:00 PM

We can all agree that the Internet, PCs, mobile devices, servers, and other equipment are essential to everyday business, and without them we would not be able to complete our work. Everyone also knows by now the impact and sheer number of viruses, malware infections, and even hackers that can affect our businesses. It's no secret how much money can be spent trying to properly resolve these problems, so why don't we give security as much attention as any other area? We need to be more proactive in our view of security. More often than not, the only time we think about security is when it is already too late.

Let’s take a look at some statistics to make more sense of how breaches happen today:

Source: Verizon 2013 Data Breach Investigations Report, http://www.verizonenterprise.com/DBIR/2013/

What are your biggest concerns with IT security? Preventing data loss? Preventing outages? Keeping security up to date? To better understand, you have to determine where your valued assets lie, or perhaps which parts of your business structure you want to focus on. I like to think of security in three separate layers. It may be an oversimplification, but it makes it easier to see where you should focus time and energy when you start getting serious about security. One of the first roadblocks many people hit when they begin securing their entire network is figuring out exactly where to start:

1. External Network/Edge Devices
2. Core Network/Server Structure
3. Endpoint Devices/BYOD 

As I mentioned, this is a very broad view of your network, and at some point we have to weigh the cost of dealing with security breaches against the cost of becoming more secure. Let’s say you want to go with the top-down approach. It is a more comprehensive strategy for IT security, and it is definitely not the only way this can be done. I’ve outlined some key steps that I think are very important, along with the components involved in each step.

1. Create Security Policies and Procedures

This is by far one of the most important and hardest steps you will take. You should create an overall security policy document and a BYOD security policy, determine an action plan for an overall security audit, and establish a risk management framework that defines the level of risk the business is willing to tolerate. After developing these policies you have to train staff to adhere to them; training staff is just as important as sticking to a training schedule. These documents should be continuously updated so you can adapt to future security needs. After completing the documentation and an action plan, you'll be better equipped to know where to spend time, focus resources, and tackle the big projects. Preparation and adaptability are the keys to security success.

2. Inventory Equipment and Data

Finding old, outdated, or decommissioned equipment and replacing or removing it is important for keeping vulnerabilities out of the business. Eliminating unnecessary or old data, keeping track of what you have, and knowing whether or not it is secure are all important for keeping data loss to a minimum. Creating an inventory of the equipment on the network and asset-tagging it helps with logging and maintenance, which is the last step.

3. Fix Security Holes and Update Equipment

Run tests to see where the security flaws in your network are; having external auditors run tests both internally and externally is a good idea. Updating software, firmware, operating systems, and antivirus is usually a top priority. Applying security patches when needed and creating secure configurations throughout the network is also important. Create a maintenance window for all equipment and devices so everything can be brought up to date. Protect your network against external and internal attacks, manage the network perimeter devices at all locations, and create filters for unwanted access both internally and externally.

4. Harden Network Security

You’ve probably already documented the policies for most of this step. They may include locking down the operating systems and software you run. Creating Group Policies for workstations, servers, and users might also be part of your policies and is equally important. Locking down firewalls and other network equipment is probably one of the most important steps in hardening your security. Why? At least 92% of attacks originate from the external-facing part of your network. Put policies in place to disable features that allow users to remove, disable, or inhibit the functions of the firewall and virus protection suite. Managing user privileges and management processes, and limiting the number of privileged accounts, is important. Preventing data loss by creating secure backups is a must to save you in case of critical failures.

5. Protecting Mobile Users and Endpoint Devices

Securing users who authenticate from the outside world is a must. PCs and other devices used to access internal resources need to be as secure as the servers themselves. Manage risks related to the use, processing, storage, and transmission of information, and make sure data is not lost or stolen. Apply a security baseline to all devices, and protect data in transit as well as data outside the network. Those who log in to the business through mobile means must have guidelines and restrictions in place to prevent any possible data loss.

6. Stabilize and Monitor

Establishing a monitoring strategy is important for maintaining the policies you’ve created and preventing further exploits that could arise. Continuously monitor the network and analyze logs for unusual activity that could indicate an attack; this is where having an IDS or IPS helps immensely. Without de-emphasizing prevention, focus on better and faster detection through a mix of people, processes, and technology. Attentively monitoring users can make the difference in pinpointing harmful activity, whether it is intentional or unintentional. Keep educating the users of the business so policies stay in force and are understood.
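As a tiny example of the kind of log analysis I'm describing, here is a sketch of my own (the log format and path are made up) that flags source IPs with an unusual number of failed logins:

```python
# A tiny sketch (my own, with a made-up log format and path): scan an
# authentication log and flag source IPs with a suspicious number of
# failed logins.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10                      # failures before we call it unusual

failures = Counter()
with open("auth.log") as log:       # placeholder path
    for line in log:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"Possible brute-force attempt: {count} failed logins from {ip}")
```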

There is no way to absolutely prevent everything from happening. We can only strengthen our ability to detect, prevent, and fix the threats that slip through our defenses. Attackers don’t rely on a single tactic to breach your defenses, and neither should you rely on a single tactic to stop them. Remember, there is no “one-size-fits-all” strategy, but the things I am suggesting here are a great start to a security plan you can implement.

Keep an eye out for the next security blog posts, which will dig into more detailed approaches for each step of the top-down approach I explained in this post.

Questions? Comments? We’d love to hear from you! Leave a comment or email us with your questions and we will gladly respond!

 Contact NetWork Center, Inc.

Topics: Technology Solutions, NetWork Center Inc., Email Security, Network Security, Data Backup, Security, Security Technologies, Firewall

Product Release: Veeam Backup and Replication v7

Posted by Mike Pagan on Jun 3, 2013 5:05:00 PM

This week I thought I would preview an upcoming release of a product you may have already used or reviewed for your network: Veeam Backup and Replication v7.

If you’re not familiar with the product, the elevator pitch is that Veeam Backup & Replication is a business continuity product that protects VMware vSphere and Microsoft Hyper-V virtual machines through bare metal backups and virtual machine replication.

At NetWork Center, Inc. we have used Veeam frequently with our VMware virtualization customers to displace agent-based backup software and to assist in creating disaster recovery plans that meet the recovery time objectives of modern businesses.

Veeam Backup and Replication has performed well for us in both of those areas, but there were a few shortcomings that are starting to be addressed in the upcoming version. Beyond the typical bug fixes and interface tweaks, Veeam is including seven new features and two “disruptive” innovations in the new version, listed below.

  1. vCloud Director Support
  2. vSphere Web Client Integration
  3. Veeam Explorer for Microsoft SharePoint
  4. Virtual Lab for Hyper-V
  5. Native Tape Support
  6. Enhanced 1-Click Restore
  7. Virtual Lab for Replicas

Two “disruptive” innovations:

  1. Built-in WAN Acceleration
  2. Backup from Storage Snapshots (for HP SANs)

For more information about the new features, check out Veeam’s product announcement page: http://go.veeam.com/v7?ad=pp

I will highlight the new functionality that I find the most interesting personally and the most relevant to our customers.

  • vCloud Director – Most of us have not had much (if any) exposure to VMware’s vCloud Director, so on the surface this feature does not seem like it would add much benefit for us in the SMB market. The usefulness of vCloud Director support comes from the emerging disaster-recovery-as-a-service market, which allows customers to replicate their VMs to a cloud service provider instead of to a remote branch or other privately hosted environment.
  • vSphere Web Client – The vSphere Web Client will become the de facto management tool for vSphere environments in the near future so if you are already using the web client, you can manage your Veeam Backup and Replication software in the same window.
  • Veeam Explorer for SharePoint – Veeam has added single-item recovery for SharePoint to this release. This simplifies SharePoint data protection and eliminates the need for a separate third-party backup product to accomplish the same tasks.
  • Native tape support – It is likely that you have a tape drive gathering dust in the IT boneyard; now you can use it to archive your VM backups. A backup-to-tape job will archive an individual VM or an entire backup repository. Additionally, the ability to back up individual files from virtual or PHYSICAL servers has been included. This closes a major gap in data protection for organizations that haven’t been fully virtualized.
  • Built-in WAN acceleration – “I would replicate all of my VMs, but I don’t have enough bandwidth.” Does this sound familiar? With Veeam Backup and Replication v7, Veeam is adding a WAN Acceleration feature to the new Enterprise Plus licensing tier. There is also a new type of backup job called Backup Copy that will utilize the WAN Acceleration AND allow for grandfather/father/son levels of data retention (see the sketch after the next paragraph).

Veeam estimates that the backup job transfer will be up to 50 times faster than a traditional data transfer. WAN Acceleration will initially only be for Backup Copy jobs, but will be available for replication jobs in a future release.
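If you're wondering what grandfather/father/son retention actually decides, here is a rough sketch of my own (not Veeam's implementation; the retention counts are arbitrary examples) that picks which restore points to keep:

```python
# A rough sketch (my own, not Veeam's implementation) of what a
# grandfather/father/son policy decides: keep recent dailies, one point per
# week for a few weeks, and one per month for several months. The counts
# here are arbitrary examples.
from datetime import date, timedelta

def gfs_keep(points, dailies=7, weeklies=4, monthlies=12):
    points = sorted(points, reverse=True)           # newest first
    keep = set(points[:dailies])                    # "sons": most recent dailies
    weeks, months = {}, {}
    for p in points:
        weeks.setdefault(p.isocalendar()[:2], p)    # newest point in each ISO week
        months.setdefault((p.year, p.month), p)     # newest point in each month
    keep.update(sorted(weeks.values(), reverse=True)[:weeklies])     # "fathers"
    keep.update(sorted(months.values(), reverse=True)[:monthlies])   # "grandfathers"
    return keep

restore_points = [date(2013, 6, 3) - timedelta(days=i) for i in range(400)]
kept = gfs_keep(restore_points)
print(f"Keeping {len(kept)} of {len(restore_points)} restore points")
```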

When I found out that the WAN Acceleration feature was a part of the new licensing tier (Enterprise Plus) I reached out to Veeam to find out more regarding the new licensing model.  I was told that current Enterprise edition customers and customers who purchased Enterprise licensing before July 1st can upgrade to Enterprise Plus for free. This will give Enterprise customers access to the WAN Acceleration feature at no additional cost.

In the coming weeks I hope to get my hands on Veeam Backup & Replication v7 so I can put it through its paces and try out the new features. If you have any questions regarding the new features or Veeam pricing, feel free to reach out to your contacts at NetWork Center, Inc. and we’ll be happy to assist in any way we can.

Contact Us Today!

Topics: Technology Solutions, NetWork Center Inc., Data Backup
