Sapphire 2016 observations

At the recent 2016 Sapphire conference in Orlando, Florida, SAP really did not have anything groundbreaking to report during any of the keynotes. Nonetheless, attendance and the buzz everywhere at the event were up. Bill McDermott (SAP’s CEO) reported a record year of 30,000 participants, which in itself is remarkable since so many customers still hesitate to embark on all-out SAP HANA adoption.

In addition to a new overview of the HANA Platform, Hasso Plattner presented a few interesting statistics during the third day keynote.

[Image: Hasso Plattner presenting the SAP HANA platform overview]

[Image: SAP HANA system sizes]

He provided an overview indicating that 86% of SAP customers running SAP ERP and migrating to S/4 HANA will fit into a 1TB RAM SAP HANA system.

 

Also significant was VMware’s announcement during the conference that vSphere 6.0 now productively supports a single VM with up to 4TB RAM (see OSS note 2315348), because this development reinforces my ongoing conclusion that SAP HANA TDI is the dominant deployment strategy for on-premise customers. In support of this “natural” SAP HANA deployment methodology, EMC recently certified VMAX AFA and Unity for SAP HANA TDI. This makes EMC a major contributor to SAP HANA TDI options, with the largest certified and validated solution portfolio. I wrote on this topic on my blog right before Sapphire, and I also had a chance to discuss it with SiliconANGLE’s John Furrier during an interview for #theCUBE.

Overall, I noticed significant attendance and participation in all of the tech vendors’ individual booth theaters, including the EMC, VMware, Cisco, and Dell booths, which indicates the presence of more technical resources in addition to the traditional business-focused attendees.

Sapphire has traditionally focused on C-level leadership and LOB (line of business) owners. The significantly higher attendance appears to be a result of more technical resources visiting Sapphire to better understand new applications, features, and technologies. IT folks still don’t have an optimal relationship with the business and are trying to get their arms around areas of interest to the business by attending Sapphire and learning firsthand what is new, what draws attention, and what exciting ideas their business representatives may come back from this conference with. IT resources are sick and tired of getting the “just do it” job description and want to participate more actively in the business-level conversation to better understand business priorities. I am curious to see how this significant increase in technical attendees impacts the number of participants at SAP TechEd later this year. On the other hand, this also means Sapphire will remain of confirmed interest for technology vendors’ future presence and conference sponsorships.

SAP HANA TDI is MAINSTREAM!!!

I am still puzzled by the fact that so many folks in the SAP ecosystem worldwide are still unaware of or not educated on the topic of SAP HANA TDI. The name alone is misleading. TDI (Tailored Datacenter Integration) is a silly name for something most of us have been doing for decades: building our own SAP systems with the hardware components we prefer to achieve the business benefits that justify the system’s existence in the first place. This is true for SAP HANA just as much as for all earlier SAP systems. Yes, it is an in-memory database and a really cool platform, but it is also just another system requiring a server, network, and storage. SAP HANA is just another workload in your data centers. So, let me be clear – TDI is supported!!! TDI is supported!!! TDI is supported!!! And yes, also for production workloads – for nearly 3 years now!!!
Granted, SAP HANA has specific requirements due to its in-memory nature, but they are easily met by following well-established and documented rules. To ensure that we all benefit from the promised functionality and speed of SAP HANA, SAP has defined specific KPIs that each component has to meet, and so all hardware vendors need to certify their components to ensure these standards are met.
So, why on earth would you deploy an SAP HANA appliance that is dedicated to only SAP HANA workloads, that you cannot share with other applications, that you need more than one of for all your non-production systems, that requires special skills to maintain and operate, and that does not change easily as your requirements change or the processor type needs to be upgraded?
SAP maintains a SAP HANA Product Availability Matrix for your reference.

SAP HANA TDI certified
SAP HANA TDI was announced on the last day of Sapphire in May 2013, which makes TDI three years old now. EMC and Virtustream together are proud to provide the platform for about 900 productive SAP HANA customers, most of which run on TDI-deployed SAP HANA systems. TDI is the only way SAP HANA will be widely adopted in the long run.


I had the pleasure of writing a business white paper on this topic together with Antonio Freitas, which can be found here. You will find plenty of help in the document to win your TDI discussion.


We also recently published the second chapter of this white paper, jointly with Andrew Chen, here; it gives you a deeper technical understanding of TDI and its choices.


We have also created an SAP HANA TDI “cheat sheet” here, which summarizes all EMC-related SAP HANA TDI choices in a very concise format. All of these documents work on any mobile device, to make it easy for you. Or just google SAP HANA TDI and you will find a ton of documentation and FAQs from SAP as well.
TDI is not rocket science!
Maybe I should create a T-shirt that says “Yes, TDI is supported!!!” and give it away. Meet my EMC teammates at #SAPPHIRENOW in Orlando at booth 434 from 5/17 – 5/19, or contact me via Twitter @cstreubert.

Please see my live discussion with Silicon Angle’s John Furrier during #SAPPHIRENOW 2016 from 5/18/2016 on this topic.

Flash Flood for SAP

Changes in technology usually don’t justify system changes or migrations. That is especially true for SAP systems, since the business processes SAP runs are critical to an organization. Changing your running SAP system only for the purpose of introducing new infrastructure technology typically does not happen. I can attest to that from my time at The Walt Disney Company, when I managed SAP system infrastructure. We only changed any technology of the infrastructure stack in combination with business process changes or enhancements the business benefited from. To my surprise though, the last year has taught me something new. I am stunned by how many SAP customers have embraced all-flash storage systems and changed their running systems only to introduce this technology. This is unusual behavior in the SAP community, so why is that?

According to the customers I have the pleasure to interact with, all-flash systems offer a few very compelling advantages the business cannot deny.

Extremely simple installation and operation. XtremIO, as an example, is so easy to install and configure that you really don’t need complicated data center deployment requirements and storage specialists anymore. Since any all-flash system typically knows only one tier, the provisioning and ongoing management of storage space becomes a no-brainer. I remember hundreds of lines in countless spreadsheets designing and documenting file systems, their purpose, and their requirements with the right storage level configuration and tier. Incredibly tedious work that takes a lot of time, and accuracy is hard to ensure when system administrators untrained in SAP have to maintain that part of the system. This matters to the business because it dramatically reduces the risk associated with system changes.

Huge cost savings. All-flash storage systems offer very powerful capabilities that can radically reduce the amount of storage needed for your overall SAP system landscape. The use of snapshots for the purpose of running non-production systems is incredible. I know snapshots have been around for a long time, but you always had to balance the number of snaps and their level of activity against the remaining performance headroom of the originating (production) system. Not so with snapshots on all-flash systems. You can literally create a snapshot of your PRD system for your system refresh of the QAS system and use the snapshot as if it were its own storage space, without any impact to your PRD system. When people realize how incredible this capability is, jaws drop. In my days of operating SAP systems with up to 750TB of storage for all system needs, this could easily have saved 50% of storage. Incredible!!! Additionally, you can of course use snapshots in many different ways: for operational data protection and for system refreshes as a data movement tool alone, which also dramatically reduces your backup space if you are using online backups to host file systems. Significant cost savings are always welcomed by the business.
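To put a rough number on that, here is a small back-of-the-envelope sketch in Python. The production size, copy count, and change rate are purely hypothetical assumptions, not measurements from any specific system.

```python
# Rough estimate of storage saved when non-production copies are thin
# snapshots instead of full physical copies (illustrative assumptions only).

PRD_SIZE_TB = 100       # hypothetical production database size
NON_PROD_COPIES = 4     # e.g. QAS, DEV, sandbox, training
CHANGE_RATE = 0.10      # assumed fraction of blocks that diverge per copy

full_copies_tb = PRD_SIZE_TB * (1 + NON_PROD_COPIES)
snapshot_based_tb = PRD_SIZE_TB * (1 + NON_PROD_COPIES * CHANGE_RATE)
saved_tb = full_copies_tb - snapshot_based_tb

print(f"Full copies:    {full_copies_tb:.0f} TB")
print(f"Snapshot-based: {snapshot_based_tb:.0f} TB")
print(f"Saved:          {saved_tb:.0f} TB ({saved_tb / full_copies_tb:.0%})")
```

Under these assumptions the landscape footprint shrinks by roughly 70%, in the same ballpark as the 50% savings mentioned above.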

We have posted a quick summary video here: XtremIO snapshots for SAP video. A very welcome side effect is that your system refresh speeds up significantly, which brings me to performance next.

Improved performance of your non-HANA systems. Usually, the first benefit people think of related to flash technology is speed. For SAP systems, this of course matters for any data that needs to be retrieved from storage. Most of the data required to deliver optimal dialog response time is already buffered on app servers, in database buffers, file system buffers, and in storage cache. Larger amounts of data required for batch processing of financial reports, month-end close, or running payroll benefit immensely from flash systems because of the incredible speed they deliver. Your Basis teams and your business owners can also enjoy much shorter outages for system refreshes. BDLS, for example, will finish significantly faster than on traditional storage systems without any changes to your database configuration or SAP buffers. Some of our customers reported improvements of up to 60% faster job completions in SM37. This means that if your BDLS currently takes 72 hours, you can finish in roughly 29 hours on flash (72 hours x 0.4). Your business owners and regression testers will love you for giving them their QAS system back sooner! Another side effect here is that your QAS system can perform at the same speed as your production system. If today your QAS system is running on cheaper and less performant storage configurations, you are unable to make precise predictions regarding outage planning. Also, your SAP HANA system will start and fail over faster utilizing flash for your HANA persistence layer. This is especially true for larger (>1TB) HANA systems.


New data center standard. Many IT organizations are modernizing their data centers with faster, denser all-flash storage systems and are already seeing incredible improvements for other workloads like VDI and Exchange. If you are able to use the same platform for all your data needs in the data center, overall effectiveness, TCO, and risk management all improve. The ability to mix any workload on the same storage platform simplifies operations significantly, including those of your SAP systems, and you need less specialized knowledge, as I described earlier here.

Find me and my other teammates at EMC World in Las Vegas 5/2 – 5/5 and at Sapphire in Orlando May 17 – 19, or join us for one of our worldwide #SAPWeeks.

Find EMC’s market leading all-flash storage portfolio and SAP solutions at http://www.emc.com and http://www.emc.com/sapsolutions

Game Changer for SAP on Hadoop & Oracle

So, EMC just came out with a new storage system called “DSSD”. It is a truly remarkable platform, which can do wonders for certain use cases in the SAP world. Originally announced at EMC World in 2014, the team has worked tirelessly not only to deliver on the original performance targets but also to harden the platform into an enterprise-grade system. While most published facts describe the incredible hardware capabilities, I am really amazed by the software stack that accompanies DSSD. I mean, you can build the fastest sports car in the world, but if you have an untrained driver behind the wheel, you won’t see the performance the car is capable of.


DSSD’s D5 system includes not only the obvious 5U rack-mounted storage component itself, but also the proprietary cable connections, PCIe Gen3 cards for the server, and the direct I/O software installed on the server operating system, which enables direct access to D5 data, bypassing traditional OS system calls, POSIX structures, and SCSI interrupts. Chad Sakac (Virtual Geek & President, VCE – Converged Platform Division of EMC) published an interesting early blog on DSSD in May 2015.


While most all-flash systems available today focus on making flash economically appealing and sacrifice performance as a result, DSSD takes the opposite approach, delivering the highest possible performance out of its flash modules while still ensuring data resiliency through its unique Cubic RAID® feature. You get 100TB of usable capacity (in the first version) with up to 10 million IOPS at 100GB/s and <100µs latency in just 5U. “Pick any two of these three major performance metrics (IOPS, GB/s, and latency)” was the rule of the past – DSSD delivers all three. And you can connect the D5 to up to 48 servers with redundant paths. In a July 2015 article in The Register you can read how TACC (Texas Advanced Computing Center) created a system based on multiple DSSD storage components with 1TB/s throughput and more than 250 million IOPS!!! You really have to stop and think for a moment about what this means and what you can do with such amazing performance.

SAP HANA Vora – Hadoop connect

Since SAP announced HANA Vora, many SAP customers have been investigating and experimenting with how to integrate larger data sets, often in Hadoop, as featured in Intel’s HANA Petabyte Scale project at SAP TechEd Las Vegas 2015. As these larger data sets need to be processed in near real time, Hadoop, designed for batch analytics, will not provide the desired responsiveness. DSSD has created its own data node implementation, called the “DSSD Hadoop Plugin”, to provide amazing performance along with the benefits of shared storage for Hadoop workloads. This creates the ability to query much more data at the latencies mobile enterprise users demand. Have a look at Mike Olson’s (Founder & CSO, Cloudera) video to hear his view of the incredible capabilities Cloudera on DSSD offers. So, if you are thinking about SAP HANA Vora on Cloudera, you definitely want to consider DSSD’s D5 to not have any performance worries at all.

SAP on Oracle

Organizations that run SAP on large Oracle databases and currently have no plans to migrate to SAP HANA can gain breakthrough advantages with DSSD. The performance capabilities of DSSD are so significant that many of the traditional DBA techniques of staging data to meet business requirements become completely unnecessary. Materialized views, indexes, partitions, and copies of data (dedicated data marts) are often only required to increase the performance of a specific set of queries for business users. For example, if your SAP BWA (SAP Business Warehouse Accelerator) no longer meets your performance requirements and your SAP HANA project is still too far out, you could run SAP BW on DSSD. This would simplify your database design, where you don’t have to monitor and introduce indexes or manipulate cardinality to achieve your desired SQL access plan. It would also reduce database complexity and the associated admin tasks, and therefore reduce risk. While this is a brand-new EMC platform with snapshot and replication integration planned for later this year, you can continue to use well-established Oracle replication and backup solutions like Data Guard and RMAN in the meantime. The DSSD team tested Oracle on D5 and compared it to Oracle’s own top-performance engineered system (Exadata). For example, Exadata’s performance benchmark achieved a maximum of 4.1 million 8K IOPS at ~1 ms, while Oracle on DSSD achieved a maximum of 5.25 million 8K IOPS at 340 μs in only 5U; Exadata storage requires 28U.

So if performance is your biggest problem or if any of these SAP use cases and examples resonate with you, I encourage you to learn more and test it to validate EMC’s breakthrough DSSD for your specific scenarios.

For more information go to

http://www.emc.com/DSSD 

SAP & Big Data – as simple as possible

SAP and Big Data is always a bit entertaining to me, maybe because I have been around SAP applications so long and typically associate SAP with a company’s core business functions. Sure, they create a lot of data, but not to the degree that you can call it Big Data. So, although we can observe a lot of new developments, SAP and Big Data today are really two different worlds; at least two data sets, two or more types of platforms, a variety of technologies, and a number of skill sets.
Organizations are also in the unfortunate but understandable situation of having to utilize existing assets as well as invest in the right new technologies to realize big data scenarios. Nobody can just rip and replace everything in their data and IT ecosystem and start from scratch. After all, you would not have many big data challenges if you didn’t already have a ton of systems that collect or generate data. So, you probably want to maximize your existing investments and platforms while transitioning into this still new and emerging era. There is no doubt that in many cases the insight from big data provides even more power when related to your transactional SAP data. If you want to relate a customer comment on Twitter to your CRM data and decide how to react in seconds, if you have to collect smart meter data by geography and relate it to plant profitability, or if you want to overlay weather, news, and traffic data onto your logistics information to better tune your fleet routes in minutes, combining big data and transactional data is needed to realize such benefits.
Albert Einstein said “make everything as simple as possible, but not simpler”. I love that principle – but how can you apply this to such a complex topic? I suggest making everything simple wherever you can – things get complex on their own quickly enough.
Let’s have a look at the broad requirements to see what we need. You need a system that can receive, process, and store data at extreme speeds (hundreds of terabytes or more per hour), but you don’t want everything in memory, to control cost.
You need to process and store any kind of data type but don’t want to deploy a platform for each. You want to process structured SAP data and unstructured data of any kind but don’t want to design complex systems to achieve the same level of operational SLA across both. You want a choice of visualization tools and don’t know what you will prefer over time, which means you are looking for flexibility. You probably want to utilize existing data sources, and you don’t want to migrate them all to the same location. And you want to utilize proven technologies you probably currently depend on for critical applications in your organization.
Big Data Platform
EMC’s big data platform for SAP adheres to all of the above requirements and thereby simplifies big data projects. The platform also delivers on the principle of data temperatures. This is the same reason why SAP offers NLS and ILM for SAP-only data: right-sizing HANA and deploying only hot data in memory while making cooler data available on less expensive platforms. We realize the same principle but from the big data angle (data not typical for SAP): data is stored mostly in the highly scalable and parallel SAP IQ, only the required data is replicated into HANA based on the use case, and the platform extends into Hadoop as well. It’s an amazing system that brings SAP and Big Data together. Visit the EMC booth #421 at SAPPHIRE Orlando this week (6/3 – 6/5 2014) to learn more.

SAP HANA – Let’s keep it real

Before I continue writing my series on HANA and data center readiness, I need to finally write down my thoughts and observations after the recent Sapphire in Orlando and the subsequent announcements. SAP seems to be pulling out all the stops to drive adoption of HANA in the market. SAP now offers so many ways to consume HANA that customers easily get confused.
SAP HANA consumption models:
– SAP HANA One
64GB max memory, no SLA, no support, for sandbox and business value or limited POC, very limited data ingest, largely via CSV file, 99 cents per hour (TPaaS – Test Platform as a Service – my own personal term for this option)
– SAP HANA One Premium
Same as HANA one, but dedicated support, defined SLA, annual subscription pricing, SAP source system data allowed
– SAP HANA Cloud Application Services
Ready-to-use application services on a HANA cloud with consumption based charge
– SAP HEC (HANA Enterprise Cloud)
Any HANA based applications including production, basically HANA PaaS (Platform as a Service), Enterprise support, and SAP customer source data integration
– SAP HANA on “vCloud Hybrid Service”
Available in Q3 2013, similar to HANA One Premium and based on VMware virtualization
– SAP HANA private instance hosted off premise
Some examples are
Optimal
KIO
VMware vCloud Hybrid Service (Q3 2013)
Virtustream
– SAP HANA on premise in two flavors
HANA on SAP certified “appliances” (requires OSS ID)
Your own HANA system in your data center, provided by your favorite hardware partner.
SAP HANA Tailored Data Center Integration
HANA on existing or new infrastructure as a reference architecture, certified in the field utilizing SAP tools and KPIs, for max-attention customers initially.

Hasso Plattner addressed typical customer concerns on stage during one of the keynotes. And even though he is not one of my favorite speakers, he did have a few good points. Now, all amazing HANA statistics are meaningless without the right context, and the business couldn’t care less which student developer from some elite university has created a tool that shows some analysis really fast. HANA is not the answer to life, the universe and everything (42 is!). The other SAP executives in their keynotes focused more on the value of HANA and the fact that customer use cases are of the essence when considering the position of HANA in an organization. One of the best panel discussions during Sapphire can be found here.
I believe that SAP still has to overcome a stigma in the user community. SAP applications have provided business process enhancements and enablement for a long time. Traditionally, however, SAP applications have not offered a competitive advantage for customers. SAP’s HANA platform changes that, but it also requires existing SAP customers to realize that HANA can work naturally not only with SAP source data but with any data, for that matter. Existing customers often don’t even consider SAP an innovative platform. Lots of educational work and evangelizing still has to happen internally in organizations. Some of the innovative use cases SAP shows on saphana.com help tell that story. But this is only the beginning. I am certain that the value of the HANA platform will be told by the HANA ecosystem – the community at large. “The power of all of us” is going to make HANA what it will become. Business priorities, the hunger for change, IT investments, skill sets, and risk mitigation have to be considered and balanced.
Don’t get me wrong – I am a huge fan of HANA, but let’s keep it real. Most customers are unclear about how to embark on the HANA journey and how this platform fits into the roadmap and release strategy of an organization. SAP does not have all the answers either. We are at the beginning of a new era, which implies that answers have to be developed together. Alex ‘Sandy’ Pentland of the MIT Media Lab, a pioneer in computational social science, has described it very well. (Using my own words) – the collective intellectual property gathered and born through social media and forums propels us into a new realm of intellect; a collective hive mind. The path and evolution of HANA, informed by all of us, IS THE HUMAN FACE OF BIG DATA.
So get involved, ask questions, voice your opinion, and participate in shaping the future of HANA through any of these existing forums or create your own.
IFG (International Focus group for HANA)
SCN
SAPHANA.COM
HANA cook books
SAP Mentors on SCN
HANA Distinguished engineers on SCN
Silicon Angle’s theCube
HANA Blogs
Local SAP user groups (e.g. ASUG)
SAP TechEd
Twitter
Facebook
And direct interaction with anybody in the HANA ecosystem around the world.

So, thank you SAP, for adding to my excitement🙂

Practical Advice for HANA backups

Nearly every Basis person I discuss the topic of SAP HANA in the data center with is curious about three main areas: data protection (in terms of backup/recovery), high availability, and disaster recovery. So, let’s discuss one at a time in a short series.

Practical Advice for HANA backups

I have designed a few data protection strategies in my days, and when I think of backups I like to think in terms of recovery requirements (RTO, RPO) and data types. The same is true for HANA backups. So many times backups are an afterthought, yet recoverability should be a main topic in the design, just like everything else in your design documents. Specifically, RTO (Recovery Time Objective – how long it takes you to restore a system), RPO (Recovery Point Objective – how much data loss you can accept), and your system recovery SLA should drive the solution, rather than just backing a system up and ending up with an RTO and RPO that will negatively surprise your CIO and your business when you have to recover a critical system.

So, let’s first establish in which situations you would actually restore a HANA system. Considering the amount and type of data you need to recover, how you have HANA deployed (as a sidecar, BW on HANA, Suite on HANA), how far back you need to go, and the time you have for the exercise, it may be faster to “refresh” your data from the source system using your existing data import strategy rather than your backup images. Think first about which data you need to recover and consider your options. Ideally, you have some possible scenarios already documented in your operational manual, so you don’t lose precious time.

Let’s assume you have already established that a restore is in fact what you need to perform. HANA currently (as of SP5) offers three restore scenarios:
– Full restore of your last backup (changes and logs after the backup are lost),
– Full restore up to your latest possible consistent point in time (your backup and all available logs after the backup are used to recover the system), and
– Point-in-time restore (your backup and a subset of logs that were created after the backup are used to restore your system).
No subsets of data, specific tables or views for example, can be restored today; all three scenarios start with a complete restore of a full data backup, and the only choice you have is how you want the logs to be applied after the initial restore. If you manage a scale-out HANA system, I encourage you to perform a restore to the most current possible time or to your last backup, rather than a point-in-time restore, because of complexity. A point-in-time restore can be complex internally to HANA due to the distributed nature of transaction logs across multiple nodes. Think of the restore process in two stages: stage 1 restores all data and creates a system-wide savepoint across all nodes, and stage 2 replays the logs to your desired point in time. If you run into a log issue (missing or corrupted logs), you have to start the entire restore from the beginning, including stage 1. Log creation is managed by each node individually; no cross-node consistent point-in-time view of logs exists, because each node manages its logs individually based on commits and log segments filling up. The data backup contains a system-wide consistent snapshot; the logs do not.
Hopefully you have previously backed up all your data, DB logs, and configuration/kernel/log files, have stored these images outside of your HANA system, and have access to them now.
HANA offers data and log backups via “FILE” or “BACKINT” integration. FILE means HANA writes to your choice of directories in the file system, which should ideally be an NFS-mounted external resource. BACKINT means data and transaction log backups are handled by your backup software through this interface, as long as this software is certified by SAP. See the currently supported backup software solutions here: http://www.sap.com/partners/directories/SearchSolution.epx
This now covers HANA data and HANA transaction logs. But what about the third and last data type – configuration, kernel, backup catalog, and log files? The “FILE” option has the significant advantage that you can call the backup through a simple script, executed by cron for example on the master HANA node, and can include a copy of these flat files on the same NFS resource (using “cp” or “tar”). This enables you to have more or less the same PIT (point in time) protected across all HANA data types, with the data residing together on the same NFS-mounted device. Sure, you can run ad-hoc backups in the HANA studio or schedule backups in the DBA Cockpit of an externally connected SAP system, like a Solution Manager for example, but you are not including the flat files with these options.
HANA backup via NFS
To summarize the process for each data type:
A) Your DB logs should be backed up automatically and continuously by HANA to an NFS directory, so your logs are saved on a separate device immediately. This is determined by your HANA configuration settings regarding logging in the global.ini (a quick way to double-check these settings is sketched after this list).
B) Your data is backed up via the script execution as often as you require to the same NFS device; in many cases this is done once a day. Be clear and descriptive about the target directory naming convention and the naming of the backup images. By default, HANA uses the same names each time you run a backup. If you want to keep your backups for one week, for example, you need to ensure that by naming your backup images accordingly, otherwise you overwrite your backup every day.
C) Your flat files are included in each backup set. This makes recovery a lot easier, and you don’t have to research when your configuration files may have changed in relation to the backup image you want to use. You also don’t have to manage separate OS backups on the HANA system to capture changes to these files. Everything is kept together. Make sure to mount the external NFS device with the “nolock” option, otherwise your backup won’t start.
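For item A, the relevant settings live in the [persistence] section of global.ini. Here is a minimal sketch, in Python, of how you could sanity-check them; the file path, parameter names, and expected values reflect a typical single-node layout and should be treated as assumptions to verify against your own system and the SAP documentation.

```python
import configparser

# Illustrative check of log backup settings in global.ini.
# Path and expected values are assumptions (the SID "HDB" is hypothetical);
# adjust for your own system layout.
GLOBAL_INI = "/hana/shared/HDB/global/hdb/custom/config/global.ini"

cfg = configparser.ConfigParser()
cfg.read(GLOBAL_INI)
persistence = cfg["persistence"] if cfg.has_section("persistence") else {}

expected = {
    "log_mode": "normal",             # prerequisite for log backups at all
    "enable_auto_log_backup": "yes",  # continuous automatic log backup
}

for key, want in expected.items():
    have = persistence.get(key, "<not set>")
    status = "OK   " if have == want else "CHECK"
    print(f"{status} {key} = {have} (expected: {want})")

# The log backup target should point at the NFS-mounted backup device.
print("basepath_logbackup =", persistence.get("basepath_logbackup", "<not set>"))
```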
By now, you have figured out that I favor the script approach, because I have complete control over what happens, how the backup files are named, and what I want included in every backup. And yes, I am a control freak when it comes to backups, but better safe than sorry!
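Here is a minimal sketch of what such a script could look like, written in Python for readability (in practice a small shell script run by cron works just as well). The hdbuserstore key name “BACKUP”, the NFS mount point /backup/hana, and the config directory path are hypothetical placeholders you would replace with your own.

```python
import datetime
import shutil
import subprocess

# Minimal sketch of a cron-driven HANA FILE backup (illustrative only).
# Assumptions: hdbsql is on the PATH, an hdbuserstore key named "BACKUP"
# holds the connection credentials, and /backup/hana is the NFS mount.
NFS_BASE = "/backup/hana"
USERSTORE_KEY = "BACKUP"
CONFIG_DIR = "/hana/shared/HDB/global/hdb/custom/config"  # flat files to include

# Timestamped prefix so daily backups do not overwrite each other (see B above).
stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
backup_prefix = f"{NFS_BASE}/data/{stamp}_COMPLETE"

# 1) Trigger the data backup via SQL; HANA itself writes the backup files
#    to the given prefix.
subprocess.run(
    ["hdbsql", "-U", USERSTORE_KEY,
     f"BACKUP DATA USING FILE ('{backup_prefix}')"],
    check=True,
)

# 2) Copy configuration/kernel/log flat files next to the data backup, so every
#    backup set carries a matching point-in-time copy of them (see C above).
shutil.copytree(CONFIG_DIR, f"{NFS_BASE}/config/{stamp}")

print(f"Backup set {stamp} written to {NFS_BASE}")
```

The point is not the language but the control: the naming convention, the target directory, and the flat-file copy all live in one place that you own, which is exactly what makes recovery straightforward later.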
One last point in regard to that NFS-mounted external device I keep referring to. Of course, you could use any NFS resource to facilitate these HANA backups; however, I believe using the right tool for the right job is important for operational success. I have very successfully implemented Data Domain systems, which are the market leader in the space of PBBA (purpose-built backup appliances). These systems are quickly installed, really easy to manage, include compression and best-in-class deduplication, offer encryption if you require it, and support, among many other protocols, NFS. And on top of that, I can replicate from one Data Domain system to one or more other Data Domain systems in a different data center in a very efficient, WAN-optimized manner. You can also manage the retention of the logs on that system, so old logs are deleted automatically. If you want the most reliable, easiest to manage, and best deduplication solution, there is nothing better in my humble opinion.
A few very helpful links on this topic:
Scheduling SAP HANA Database Backups in Linux –
https://service.sap.com/sap/support/notes/1651055

HANA backup script hint… a little bit more security, please! – 
http://scn.sap.com/community/hana-in-memory/blog/2011/10/22/hana-backup-script-hint-a-little-bit-more-security-please

Data Domain Site-
http://www.emc.com/backup-and-recovery/data-domain/data-domain-deduplication-storage-systems.htm#!