User manual IBM TS7650G PROTECTIER DEDUPLICATION GATEWAY OVERVIEW


Manual abstract: user guide IBM TS7650G PROTECTIER DEDUPLICATION GATEWAY OVERVIEW


[. . . ] PRODUCT PROFILE: Evaluating Enterprise-Class VTLs: The IBM System Storage TS7650G ProtecTIER De-duplication Gateway (September 2008)

Increasingly stringent service level agreements (SLAs) are putting significant pressure on large enterprises to address backup window, recovery point objective (RPO), recovery time objective (RTO), and recovery reliability issues. While the use of disk storage technology offers clear functional advantages for resolving these issues, disk's high cost has been an impediment to wide-scale deployment in the data protection domain of the enterprise data center. Now that storage capacity optimization (SCO) technologies such as single instancing, data de-duplication, and compression are available to reduce the amount of raw storage capacity required to store a given amount of data, the $/GB cost of disk-based secondary storage can be reduced by 10 to 20 times. Virtual tape technology, in which disk-based storage subsystems appear to backup software as tape drives or libraries, is one of the most popular ways to integrate disk into a pre-existing data protection infrastructure because it requires very little change to existing backup and restore processes. [. . . ]

Architectures that support global repositories tend to offer a better growth path as well: when the performance capabilities of a single SCO VTL are outgrown, a new one can be added and can immediately take advantage of the index that is already there.

In today's 24x7 environments, even secondary data has to be highly available so that stringent SLAs can be met. SCO VTLs cannot compromise that high availability as they are integrated into existing data protection infrastructures. Once data is converted into a capacity-optimized form, it is not usable by applications until it can be converted back into its original form. If there is a failure, whether within a component of a SCO VTL or of the entire SCO VTL, the data may not be available. For that reason, it is important to support high availability solutions that can ride through single points of failure. High availability architectures also allow maintenance to be performed online, further improving the overall availability of the environment. Clustered architectures are a good way to meet this need, and they can contribute to higher overall throughput as well if a global repository is supported. Look also for support for various RAID options on the back-end storage to protect against disk failures.

Because SCO VTLs effectively convert data into an abbreviated form prior to storing it, there is some conversion risk that must be evaluated. How does the system perform the conversion, and what is the risk of false positives (two elements that are not identical being treated as if they were)? In SCO VTLs that use conventional hashing methodologies, this risk is called the "hash collision rate." While nominal hash collision rates may appear low with conventional systems, if those systems are going to be used in enterprise environments that may be dealing with petabytes of usable capacity, the rates need to be evaluated in light of that level of scale. When data is read back, it is important to verify the accuracy of the conversion process.
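The scale sensitivity of hash collision risk can be checked with a quick back-of-the-envelope calculation. The Python sketch below applies the standard birthday-bound approximation to a petabyte-scale repository; the 1 PB repository size, 8 KB average chunk size, and hash widths are illustrative assumptions, not figures taken from this profile or from any specific product.

    # Birthday-bound estimate of fingerprint collision risk in a
    # hash-indexed deduplication store. All sizes below are assumptions
    # chosen for illustration only.

    def collision_probability(num_chunks: int, hash_bits: int) -> float:
        """Approximate probability of at least one collision among
        num_chunks random fingerprints of width hash_bits
        (birthday bound: p ~= n^2 / 2^(b+1))."""
        return min(1.0, num_chunks ** 2 / 2 ** (hash_bits + 1))

    repository_bytes = 1024 ** 5        # assumed 1 PB of unique data
    avg_chunk_bytes = 8 * 1024          # assumed 8 KB average chunk
    num_chunks = repository_bytes // avg_chunk_bytes

    for bits in (64, 128, 160):
        p = collision_probability(num_chunks, bits)
        print(f"{num_chunks:.2e} chunks, {bits}-bit hash -> p(collision) ~ {p:.2e}")

At this assumed scale (roughly 1.4e11 chunks), a 64-bit fingerprint makes a collision a near certainty, while 128-bit and 160-bit fingerprints keep the nominal probability vanishingly small; the point is that a collision rate quoted for a small deployment cannot simply be carried over to a petabyte-class repository.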
Does the SCO VTL perform data verification to ensure that any retrieved data, after it is converted back into its original form, exactly matches the data that was originally written by the application? Any system being evaluated for use in an enterprise environment must offer independent data verification to ensure conversion accuracy.

Being further down the learning curve can translate directly into better performance, higher scalability, and improved data reliability. Look for vendors that have at least hundreds of systems deployed in production and can point to a number of references whose environments look similar to your own. Large enterprises often look for very broad support coverage that can address the locations they may have worldwide. Larger, more mature vendors tend to offer better geographical support coverage than smaller vendors.

The TS7650G release of September 2008 represents the integration of Diligent's technology into IBM's Tape Systems product portfolio and includes important new functionality for large enterprises. With this release, IBM offers clustering for high availability, supports a global repository across cluster nodes, and doubles the sustained single-system throughput of its SCO VTL to almost 1 GB/sec, a number that clearly marks IBM as the industry leader for in-line, single-system SCO VTL performance today. This is a familiar position for the company, however, since the previous version of the ProtecTIER technology had the industry's highest in-line, single-node throughput before it was superseded by the TS7650G.

The ProtecTIER Technology

The TS7650G is a SCO VTL gateway based on an IBM System x server with 3 GHz, quad-core Intel processors and 32 GB of RAM, running Red Hat Linux. [. . . ]

A more in-depth analysis is then performed only on the elements identified as "similar," whereas the "new" elements go immediately into the index before they are stored on the back-end storage. Competitive approaches execute their full "chunk evaluation algorithm" on each and every element, which generally means they end up doing a lot more work for every element, at a very high latency cost, since a large percentage of references may require reads from disk. HyperFactor's approach not only handles higher throughput but also more reliably identifies each element. ProtecTIER retains metadata about each element, one piece of which is a cyclic redundancy check (CRC or checksum). [. . . ]
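The per-element CRC mentioned above suggests a simple end-to-end check: record a checksum when an element is written and recompute it when the element is read back. The Python sketch below is a conceptual illustration of that pattern, assuming a hypothetical in-memory store; the class and method names are invented for the example and say nothing about ProtecTIER's actual on-disk format or code.

    # Conceptual sketch of per-element checksum verification on read-back.
    # The in-memory store, names, and structure are hypothetical; only the
    # pattern (keep a CRC with each element, verify on restore) follows
    # the behavior described in the profile.
    import zlib

    class ElementStore:
        def __init__(self) -> None:
            self._payloads = {}   # element_id -> stored bytes
            self._crcs = {}       # element_id -> CRC32 recorded at write time

        def write(self, element_id: str, payload: bytes) -> None:
            self._payloads[element_id] = payload
            self._crcs[element_id] = zlib.crc32(payload)

        def read_verified(self, element_id: str) -> bytes:
            payload = self._payloads[element_id]
            if zlib.crc32(payload) != self._crcs[element_id]:
                raise IOError(f"checksum mismatch on element {element_id}")
            return payload

    store = ElementStore()
    store.write("elem-0001", b"backup stream data")
    assert store.read_verified("elem-0001") == b"backup stream data"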
