
This blog post describes a unique challenge that my team faced—achieving single sign-on in a retail environment, in which many people work on the same device—and how we solved that challenge.

 

One of oXya’s customers in Canada is a large retailer in the fashion sector, with roughly 500 stores across Canada and thousands of SAP users working in those stores. This customer receives a full SAP hosting service from oXya, meaning their SAP environment is managed by oXya’s team of SAP experts and runs on Hitachi UCP Pro hardware at our datacenter. We also host all of their non-SAP infrastructure on Hitachi hardware.

 

This retail customer uses SAP in all of their stores. The cashiers, at the point of sale, use SAP to manage the store’s inventory, requisitions and other activities, and also to access non-SAP applications such as email, time sheets, and more. The customer asked us to find a way to provide its users with a single sign-on experience for all the applications they use in the store.

The Technical Challenge

 

To explain the challenge, let’s first cover a more “standard” SAP environment. In such an environment, when a user logs onto her Windows computer and onto a specific domain, she gets a Kerberos token, which serves as evidence that she has logged onto that domain. Many applications perform single sign-on this way: they trust that the Windows computer is logged onto the domain, and on that basis sign the user into the application. SAP operates this way, and so do countless other applications.


 

However, everything works differently in a retail (or kiosk) environment. In a retail store, the user does not log on and off a computer every time she goes to work on the cash register. Instead, she logs on and off the actual application. As a result, no authentication is done at the workstation itself (and the same applies to any environment in which the user does not log directly onto a computer, or to any type of Point of Sale (POS) device).

 

Our challenge, then, was to find a way to perform single sign-on using Active Directory authentication, yet do so without having the user actually log onto her PC with her Active Directory account.

 

In SAP terms, the question we faced was: how can you perform single sign-on with an SAP ABAP application, like SAP Fiori or SAP NetWeaver Business Client (NWBC), and do it in a simple way, for example through the browser, without complicating the user’s life?

 

This challenge may sound simple, but it is actually quite complicated. Both Fiori and NWBC are based on SAP ABAP Web Dynpro, which has been around for a long time. An ABAP Web Dynpro application can perform single sign-on using modern technologies like SAP NetWeaver Single Sign-On. What you cannot get around with ABAP-based systems, however, is that the user must have an account in the ABAP user store. This means you have to create user accounts that grant the authorizations needed for Fiori or NWBC. In the case of Fiori, you need two user accounts: one in the SAP NetWeaver Gateway system (where you access the Fiori applications), and a second one in the backend system (usually SAP ERP).

 

In other words, these users end up with multiple user accounts and multiple passwords, and it wasn’t possible to synchronize those passwords. And as a reminder, the users in the store do not operate only within the SAP environment; they also need access to other applications, which means even more accounts and passwords per user.

 

The customer asked us to find a way to authenticate against Active Directory for all of these applications, both SAP and non-SAP. They wanted a method of authentication that is seamless across all applications, using just a single username and password per user.

The solution: SAML 2.0 and ADFS


 

The solution we put together combines the Security Assertion Markup Language 2.0 (SAML 2.0) protocol with Active Directory Federation Services (ADFS) to achieve the single sign-on we needed. This solution, based on combining existing technologies, is neither commonly used nor widely known in the SAP market.

 

In fact, SAP has excellent integration with SAML 2.0, a well-known standard for single sign-on and authentication that almost every application supports. By combining it with Microsoft’s ADFS, we enabled the users to achieve single sign-on despite all the limitations described above.

 

Here’s how the solution works: all a user needs to do is sign onto any SAML application, via the browser, that uses ADFS as its identity provider. It doesn’t matter what device she’s using; she can perform this sign-in from a tablet, a PC, or even iOS and Android devices, and she does not need to authenticate in any special manner to the device or the computer. At this point the user has a SAML token, and will be authenticated automatically to any other application that uses ADFS as its SAML identity provider.

 

In the case of SAP applications like NWBC and Fiori, when attempting to log on, the user is redirected to ADFS, where she enters her Active Directory username and password and authenticates. From there, the user is redirected back to the Fiori or NWBC application and is logged on automatically.
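
To make that redirect dance concrete, here is a minimal, hypothetical sketch (Python standard library only) of the service-provider side of an SP-initiated SAML 2.0 login with the redirect binding: build an AuthnRequest, DEFLATE-compress and base64-encode it, and send the browser to the identity provider. The URLs and entity IDs are placeholders, not the customer’s real configuration; in a live SAP system this exchange is handled by NetWeaver’s built-in SAML 2.0 support, not hand-written code.

```python
# Sketch of the SP-initiated SAML 2.0 redirect binding (stdlib only).
# All endpoints and entity IDs below are hypothetical placeholders.
import base64
import datetime
import uuid
import zlib
from urllib.parse import urlencode

ADFS_SSO_URL = "https://adfs.example.com/adfs/ls/"        # hypothetical IdP endpoint
SP_ENTITY_ID = "https://fiori.example.com/sap/saml2"      # hypothetical SP entity ID
ACS_URL = "https://fiori.example.com/sap/saml2/sp/acs"    # where the IdP posts back

def build_redirect_url() -> str:
    """Build the URL the browser is redirected to when the SP needs a login."""
    authn_request = f"""<samlp:AuthnRequest
        xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
        xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_{uuid.uuid4().hex}" Version="2.0"
        IssueInstant="{datetime.datetime.utcnow().isoformat()}Z"
        AssertionConsumerServiceURL="{ACS_URL}">
        <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
    </samlp:AuthnRequest>"""
    # Redirect binding: raw DEFLATE (strip the zlib header/checksum), then base64.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    encoded = base64.b64encode(deflated).decode()
    return ADFS_SSO_URL + "?" + urlencode({"SAMLRequest": encoded})

if __name__ == "__main__":
    print(build_redirect_url())  # ADFS authenticates the user, then posts back to ACS_URL
```

After ADFS authenticates the user against Active Directory, it posts a signed SAML response back to the assertion consumer service URL, and the SAP system maps the asserted identity to its ABAP user.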

 

This process is seamless to the users and does not require anything except entering the Active Directory credentials they are used to. The user essentially has one username and one password, and doesn’t have to worry about managing different accounts across different environments.

 

The above solution, which the customer is very happy with, works across all of the different SAP applications, as well as all of the customer’s other, non-SAP applications. The result is a very good user experience: a single way to authenticate across all applications, SAP and non-SAP, which saves the users a tremendous amount of headaches and, of course, dramatically lowers the number of helpdesk calls from the stores.

Sean Shea-Schrier is an oXya Service Delivery Manager, based in oXya’s Montreal, Canada branch. Sean is a veteran SAP expert with more than 10 years’ experience as an SAP admin and SAP consultant (both Basis and architect). He joined oXya more than 2 years ago to help build the Canadian team and onboard customers. oXya was acquired in early 2015 by Hitachi Data Systems.

Disaster recovery (DR) for SAP has always been a hot topic, since SAP is one of the most mission-critical environments in any organization. Research firm IDC analyzed the cost to a company should a critical application fail. They calculated that cost, for a Fortune 1000 company, at between $500,000 and $1 million per hour of downtime. They further calculated that unplanned application downtime (across all applications) costs a Fortune 1000 company between $1.25 billion and $2.5 billion every year.

 

I believe that in the case of SAP, the cost of hourly downtime is on the higher side, and can go even higher than $1 million per hour. Failure and downtime in the SAP environment can bring an entire organization, or at least large parts of it, to a halt.

 

For that reason, significant thought is invested in a DR plan for SAP. If your main site suffers a disaster, such as flooding, fire, or a major earthquake, the DR plan should get you back to running normal operations in as short a time as possible, and with minimal data loss.

 

There are several DR options in the SAP world. The emergence of cloud technologies has added options that did not exist just a few years ago; there is also more flexibility today in choosing the DR option that’s right for you. Of course, the various options also carry different price tags. At oXya, we’ve been using all of the DR solutions I’ll cover here with our various customers, according to their needs and budget; there is no single solution that fits everyone. In this blog post, I’ll list the various DR options we’re using with our customers, as well as the pros and cons of each.

 

But before diving into the various DR options, let’s clarify two things:

 

1. We focus on the Production landscape. The SAP environment can be huge, with many landscapes and multiple servers. When speaking about DR, we limit the discussion to the Production environment; due to cost considerations, organizations usually do not consider DR solutions for the other landscapes.

 

2. DR in the SAP world means copying the database log files. All the information handled by SAP is stored in the database (e.g., Oracle, DB2, MS-SQL), and any change to the database is represented in the logs. By sending the log files to the remote/DR site, we can recover all the information SAP needs to run properly at the DR site.

 

  

About Disaster Recovery Technologies: Synchronous and Asynchronous

 

This post deals with DR options for SAP, so I don’t want to dive too deep into core replication technologies. However, we can’t skip them entirely, as these technologies play a critical role later on, especially when dealing with HANA. So, here is a very short description of the two main replication technologies, including which one we use for DR:

 

Synchronous: the database at your main site will not commit any change before it receives confirmation that the change has also been replicated to, and committed at, the DR site. This creates, in essence, two identical sites.

 

Asynchronous: the database at your main site operates normally and commits changes. All the changes are sent to the DR site, but whether (or when) they are committed at the DR site does not affect the main site. By definition, there is always a lag between the main site and the DR site: the DR site trails the main site. The size of the lag depends on latency, which in turn depends mostly on the distance between the two sites.
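
To illustrate the difference, here is a toy Python simulation, not a benchmark, of why every synchronous commit pays the full WAN round trip while an asynchronous commit returns immediately and simply grows the DR backlog. The 40 ms latency figure is an arbitrary assumption for a long-distance link.

```python
# Toy model: synchronous vs. asynchronous commit over a distant WAN link.
import time

ROUND_TRIP_LATENCY = 0.040  # assumed 40 ms round trip between main and DR sites

def commit_synchronous() -> float:
    """The primary waits for the DR site to acknowledge before committing."""
    start = time.perf_counter()
    time.sleep(ROUND_TRIP_LATENCY)      # ship the change + wait for the remote ack
    return time.perf_counter() - start

def commit_asynchronous(backlog: list) -> float:
    """The primary commits immediately; the change ships in the background."""
    start = time.perf_counter()
    backlog.append("change")            # hand off to the replication queue
    return time.perf_counter() - start

if __name__ == "__main__":
    backlog = []
    sync_total = sum(commit_synchronous() for _ in range(10))
    async_total = sum(commit_asynchronous(backlog) for _ in range(10))
    print(f"10 synchronous commits:  {sync_total:.3f}s (every commit pays the round trip)")
    print(f"10 asynchronous commits: {async_total:.6f}s (DR site lags by {len(backlog)} changes)")
```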

 

For a more thorough explanation of synchronous versus asynchronous technologies, see this HDS document; page 4 has a great explanation of synchronous versus asynchronous replication, including a nice comparison table.

 

As already mentioned, the latency between the two sites depends on the distance between them. If the two sites are distant enough, as required for true DR, the latency will be significant. If you try to implement synchronous replication across that distance, you will bring database performance at the main site to a crawl and make it unusable: every change, however small, must wait until it has also been committed at the remote database. For that reason, any typical DR solution for SAP uses asynchronous replication.

 

Another thing I’d like to clarify is the difference between a high-availability solution (also known as Active-Active) and a disaster recovery solution (Active-Passive), because I have heard people refer to an HA solution as if it were also a DR one. A high-availability solution means two servers, usually within the same datacenter or in very close proximity, that form a cluster and let you access and use both at the same time. A high-availability solution is not a DR solution, or at least it is a very bad one. Think about some major disasters of the last decade, such as Hurricane Katrina in New Orleans, Hurricane Sandy in New Jersey, and the tsunami in Japan: a high-availability solution would have been totally destroyed in these cases. This brings us back to the point made above; for a true DR solution, the DR site must be far away, hundreds or even thousands of miles, in order to avoid the disaster’s impact. It also means, by definition, that the DR solution must be an asynchronous one.

Traditional Disaster Recovery for SAP

 

Traditionally, what we had in the SAP world was a main server at our main datacenter, plus a DR datacenter with another server, usually identical to the one in the main datacenter. The traditional SAP approach to DR is to use “log shipping”: you gather the database logs at the main datacenter and ship (send) them over the network (usually over MPLS) to the DR datacenter.
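
As a rough illustration of the idea, and not of any real oXya tooling, here is a minimal Python sketch of log shipping: poll the database’s archive-log directory and copy any new log file to a path standing in for the DR site. The directory names and the .arc extension are hypothetical; a production setup would add compression, integrity checks, retries, and alerting.

```python
# Minimal log-shipping sketch: copy new archive logs to the DR target.
# Paths and file extension are hypothetical placeholders.
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/oracle/PRD/oraarch")   # hypothetical archive log directory
DR_DIR = Path("/mnt/dr_site/oraarch")   # hypothetical mount pointing at the DR site

def ship_new_logs(already_shipped: set) -> None:
    """Copy every log file we have not shipped yet, oldest first."""
    for log_file in sorted(LOG_DIR.glob("*.arc")):
        if log_file.name not in already_shipped:
            shutil.copy2(log_file, DR_DIR / log_file.name)
            already_shipped.add(log_file.name)
            print(f"shipped {log_file.name}")

if __name__ == "__main__":
    shipped: set = set()
    while True:
        ship_new_logs(shipped)
        time.sleep(60)  # poll once a minute; the DR database replays logs as they land
```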

 

The traditional approach has been around since the beginning of SAP, and we’ve been using it for many years. It works great, many customers still use it, and it is very sturdy and works with any type of infrastructure. It is, however, the old-fashioned way.

 

There are at least three drawbacks to the traditional approach: cost, recovery speed, and audits:

 

1. Cost: using this approach, we need to own both datacenters, and have servers in both of them (or we can lease space in a datacenter for DR purposes, but that’s still a major cost).

 

2. Recovery speed: newer DR technologies enable us to get the DR site up, running, and operational in a shorter amount of time than the traditional approach allows. I’ll discuss this shortly.

 

3. Audits: the traditional approach replicates only the database log files. While these are sufficient to get the SAP environment at the DR site up and running, there are additional log files created by the SAP system itself, whenever a job runs, an error occurs, and so forth. These files are not replicated under the traditional approach, and these SAP logs can be important, for example for audits.

 

The rise of cloud solutions has given us additional, newer options for DR. I’ll describe them from the cheapest to the most expensive; all of these solutions are in use by oXya customers.

DR to the Public Cloud

 

Generally speaking, one of the cheapest solutions for DR is to use a public cloud service, such as Amazon Web Services (AWS). If your server needs to be backed up and doesn’t change frequently (e.g., a web server, front-end applications, or interface applications), then a public cloud can be an option. You back up to a server on AWS and have that server “sit” there, turned off, so you don’t pay for it until you bring it up. You only bring it up when a disaster strikes your main servers, or once in a while for updates.


This is a very cheap way to achieve some type of disaster recovery, because you pay very little so long as your servers are turned off.

 

For SAP, however, you’ll need to keep your databases in sync, which means the database server on Amazon must stay online and continuously receive updates.

 

It’s important to emphasize that this method is NOT recommended for everyone, due to the security concerns of having your production data in a public cloud. You may also run into difficulties setting up all your SAP interfaces for DR within a public cloud (bank interfaces, etc.). Still, this option can be relevant when budgets are very small and customers can’t afford one of the other, more expensive DR solutions for SAP. In such a case, some DR is better than none, so this solution can be considered.

 

How does it work in practice? The method is quite similar to the Traditional DR method. You install all your DR servers on AWS, shut down all the application servers (to avoid ongoing payment), and keep only the database server live, on a continuous basis, to receive ongoing updates of the log files. You then send the database logs from the main customer site to the DR database server on AWS. Once a disaster occurs, you bring the other servers up and can operate your SAP environment directly from Amazon.
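
As a hedged sketch of what “bringing the other servers up” can look like when scripted, here is a short example using AWS’s boto3 SDK to start the normally-stopped application servers. The instance IDs and region are placeholders; a real runbook would also cover DNS changes, interface reconfiguration, and validation of the recovered system.

```python
# Sketch: activate the DR site by starting the stopped application servers.
# Instance IDs and region are hypothetical; the DR database server is assumed
# to be already running (it continuously receives the shipped logs).
import boto3

APP_SERVER_IDS = ["i-0aaa1111aaa1111aa", "i-0bbb2222bbb2222bb"]  # placeholders

def activate_dr_site(region: str = "us-east-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.start_instances(InstanceIds=APP_SERVER_IDS)
    # Block until every application server reports the 'running' state.
    ec2.get_waiter("instance_running").wait(InstanceIds=APP_SERVER_IDS)
    print("DR application servers are up; repoint users and interfaces to the DR site.")

if __name__ == "__main__":
    activate_dr_site()
```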

 

This setup enables budget-strained SAP customers to obtain a fairly cheap SAP disaster recovery option. It is far cheaper than having a physical server at another datacenter, because on an ongoing basis you only pay for the uptime of the database server that is kept in sync with the logs. All the other servers on Amazon are shut down and cost almost nothing (you still pay for the storage those servers use).

 

This solution can be used with various providers, not just AWS. oXya, for example, provides this service through its own cloud, and there are additional options in the market, such as Microsoft Azure. The idea behind all of them is similar: you only pay for servers that are actually being used.

SAP DR using VMware SRM

 

Another DR option for SAP is VMware SRM (Site Recovery Manager). For some of our customers, we implemented and now operate the VMware SRM method instead of the Traditional DR method. The difference is that the traditional method uses database-level replication, by sending the database log files, whereas with VMware SRM we perform a full server replication. This includes all the additional files created as part of SAP operations, such as the SAP logs; all of that additional data is replicated directly to the DR site.


With VMware SRM, you have a VMware farm at your primary datacenter. You also have a VMware farm at the DR site, though probably a smaller one, sized to satisfy only the needs of the Production environment. You then set up VMware SRM replication across these two farms (in other words, you duplicate the full VMs, including SAN replication and the VM setups).

 

VMware SRM can be based either on storage replication or on the vSphere hypervisor. Without going into the technicalities of the two options, storage replication is usually used when a very low Recovery Point Objective (RPO) is required; it is somewhat less flexible and requires the same type of high-tier storage infrastructure at both sites.

 

The VMware SRM method gives you full server replication for SAP, whereas the traditional method gives you only database-level replication. In most cases, database-level replication is enough, but it still leaves you some work to do before the DR system is up and on par with the main, original site.

 

Therefore, a VMware-based replication allows for a quicker/shorter Recovery Time Objective (RTO), which is the time for a business process (SAP, in our case) to be restored after a disruption. In addition, you keep all the files that do not reside in the database, which are lost when using the Traditional DR for SAP.

HANA-specific Disaster Recovery

 

The last type of DR we should discuss is HANA-specific disaster recovery, because this one is a bit different. HANA usually runs on its own dedicated server (its own appliance), or it can be installed in a Tailored Datacenter Integration (TDI) setup.


 

However, HANA has its own replication method. For customers who run HANA and want a DR solution, SAP offers HANA System Replication, which replicates the entire HANA database to another site. There are several ways of doing that, but first let’s describe the typical HANA setup.

In a typical setup at the main, Production site, you have one application server running the SAP application, plus the HANA database running on its own appliance. This setup is similar to what you may have done before HANA, with the database server separate from the application server (running an Oracle database, for example).

 

At the DR site, you need another HANA appliance, in order to replicate HANA, and also another application server. Let’s cover the HANA replication first, and then I’ll get to the replication of the application server.

 

To replicate HANA from your main datacenter to a DR datacenter, you must have a second operational HANA database at the DR site (and yes, it’s quite costly, as my friend and colleague Melchior du Boullay covered last week in his blog post, Considerations Before Migrating To SAP HANA). You can have any combination of HANA appliance or TDI at your main datacenter and your DR site; that doesn’t matter, so long as both databases are operational and the DR database is at an equal or higher release level.

 

In theory, this replication, like any other, can be performed in the two ways we started with: synchronous and asynchronous. However, due to HANA’s enormous speed and performance (after all, it’s an in-memory technology), any attempt to implement synchronous replication without sub-millisecond round-trip latency will practically bring HANA’s performance to a crawl and make it unusable; you will lose all the benefits of HANA. This is why, in HANA’s case, asynchronous replication is the only practical solution for DR. Synchronous replication is really only an option for high availability, where both HANA databases sit next to each other, and even then it requires careful consideration.

 

As for the application server itself, there are two methods for replication:

 

1. Using VMware SRM: if your primary application server runs on VMware, you can use the SRM method mentioned above to keep the DR application server in sync with the original one.

 

2. Install another server: alternatively, if you’re not using any kind of virtualization, all you need is to install a fresh application server with the exact same SID and system number as your original application server, and shut this DR server down. You bring it up when you need to switch to the DR site, and it will work fine. The only things you would lose are the SAP logs and potential interface files. But the SAP environment will work just fine: it will allow you to log into the system, and you will see all the transactions and all of your data.

Handling RPO in SAP

 

Recovery Point Objective (RPO) is defined as the maximum targeted period of time in which data might be lost due to a major incident (disaster). In other words, how much data can you “afford” to lose in case of a disaster, as defined in your Business Continuity Plan? How is this handled in the various SAP disaster recovery methods?

 

The answer varies, depending on your DR solution. In the Traditional method, your RPO depends on the size of your database logs: the bigger the logs, the bigger the RPO; the smaller the logs, the smaller the RPO. However, if your logs are too small, you will see a performance impact, because a lot of files must be created very frequently. Hence, there is a balance to strike. The RPO is always the result of a discussion between the customer and oXya’s SAP consultants, to define what is acceptable for the customer. Once the RPO is defined, oXya’s experts set the size of the database logs to match that RPO.
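
As a back-of-the-envelope illustration of that balance, the sketch below computes the worst-case RPO implied by a given log file size, under the simplifying assumptions that a log ships only when full and that shipping time is negligible. The change rates are invented numbers, not customer figures.

```python
# Illustrative log-size / RPO trade-off (simplified: a log ships only when full).
def worst_case_rpo_minutes(log_file_mb: float, change_rate_mb_per_hour: float) -> float:
    """Worst-case data loss is one full log file's worth of changes."""
    return log_file_mb / change_rate_mb_per_hour * 60

for log_mb in (50, 200, 500):
    rpo = worst_case_rpo_minutes(log_mb, change_rate_mb_per_hour=400)
    print(f"{log_mb:>4} MB logs at 400 MB/h of changes -> worst-case RPO ~{rpo:.0f} min")
```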

 

For VMware SRM, the RPO can vary between zero (using synchronous storage-level replication; again, not recommended across long distances) and 24 hours, depending on the replication settings. It’s important to clarify that 24 hours is not a realistic RPO in the SAP world, but rather the maximum RPO that VMware SRM allows you to set (page #48). A typical RPO, when not using the storage replication option, is 15 minutes.

So which DR method is preferred for SAP?

 

This is the million-dollar question, which oXya’s SAP experts are frequently asked.

 

The answer: there is no single disaster recovery solution that is best for all cases. The DR solution needs to be adapted to the customer’s environment and, most importantly, to each customer’s constraints. DR is always a compromise between how much money you’re willing to pay and how much protection you get. oXya’s experts work within the constraints you set, in order to build the best DR solution possible for your SAP environment.

 

The Traditional method is still used by many of our customers; it works very well and has proven highly reliable over the years. If a customer comes to us and asks about DR, and that customer has no specific constraints, we start the discussion with the Traditional method and explain its various constraints. If the customer is comfortable with them, we move forward with that.

 

If the customer requires a more sophisticated DR solution, then we discuss the VMware replication solution. However, implementing the VMware solution depends on whether the customer has already virtualized their SAP environment. If they are still running SAP on physical servers and are not considering virtualization, then SRM is irrelevant.

 

And if the customer has severe budget constraints, we will talk about an AWS-type DR solution. This is a relatively cheap DR solution, but it comes with the constraints I listed above.

Dominik Herzig is VP of Delivery for US & Canada at oXya. He has 10 years of SAP experience, starting as a Basis admin and then moving to SAP project management and account management. He was one of the first few people to open oXya’s US offices back in 2007, and has performed numerous projects moving customers’ SAP environments to a private cloud, including disaster recovery solutions.

HANA has been the hottest new technology from SAP in recent years. The innovative in-memory database, which is supposed to significantly accelerate applications, improve business processes, and positively affect the user experience, has been gaining significant interest from customers.


oXya, a Hitachi Data Systems company that provides enterprises with managed services for SAP, has significant experience with HANA migrations. To date, we have migrated about 30 of our 220 customers to HANA, or more than 10% of our customer base. This is a much higher percentage than SAP’s official, overall HANA statistic, which stands at under 3% of SAP customers having migrated to HANA. The number of SAP landscapes we have migrated is several times higher, well over 100, as we’ve migrated multiple landscapes for each customer.

 

oXya was one of the first companies in the world to perform HANA migrations. This blog’s goal is to help you, the SAP customer considering a migration to HANA, based on our extensive experience with SAP HANA. What are the main considerations to look at when weighing a HANA migration? And what might argue against such a migration?

Cost: HANA is expensive

 

The first discussion around any HANA migration is usually about cost, especially the license cost. This is where you should ask yourself: am I ready to make this investment?

 

HANA is expensive. Of course, the discussion should be about HANA’s benefits and whether the migration is worth the ROI, but many customers don’t get to that stage; the investment itself is a barrier tall enough to prevent a migration, or to delay the decision to some point in the future.

 

So what costs are involved with a migration to HANA?

 

1. HANA database cost. If you are currently running your SAP environment on an Oracle database (or MS-SQL, DB2, or any other), then you are only paying annual fees for that database; the cost of purchasing the database itself occurred in the past.

 

Migrating to HANA means you need to purchase an entirely new database. The initial database cost (a Capex cost) will be at least a six-digit number in US dollars, and can easily reach seven digits, depending on what you do in your SAP environment, which affects the size of the HANA appliance. We’ll get back to sizing later on, as that’s a critical point with HANA.

 

2. HANA annual maintenance fees. As a ballpark, your annual HANA license fees will be around 15% of what you’re paying SAP for your other SAP licenses. So, for example, if you’re paying SAP one million dollars annually in maintenance fees, you’ll pay around another 150,000 dollars every year in HANA maintenance fees.

 

3. Cost of infrastructure for HANA. The prevalent method for installing HANA is on a dedicated appliance; the cost of that appliance depends on the size you purchase. HANA can also be installed as a Tailored Datacenter Integration (TDI), where HANA runs on larger, existing servers (compute, storage) in your datacenter, rather than on a separate appliance. In both cases, HANA requires significant hardware resources to run efficiently, and the actual cost of the hardware depends on the size of the HANA license.

 

4. HANA sizing. With HANA, you pay per gigabyte of the HANA appliance. This cost per GB applies both to the HANA license itself and to the hardware appliance. For example, if you’re running ECC on a 1TB appliance, you will need to purchase a HANA license for a 1TB appliance. This means that the size of your SAP landscape, and how your company uses SAP on a daily basis, have a direct influence on your HANA cost.

 

Since you’ll be paying HANA fees per size, correctly sizing your HANA appliance is very important. You don’t want to buy a HANA appliance that is too big and pay for capacity you don’t use; you also don’t want to buy one that is too small and insufficient for your needs. Many companies contract oXya to perform the sizing for them, as correct sizing has a major effect on their cost, both the appliance size and the HANA license.
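
For a feel of the arithmetic involved, here is an illustrative, deliberately crude sizing sketch. The 4x columnar-compression factor and the doubling for runtime working memory are common rules of thumb, not guarantees; real sizing is done with SAP’s sizing reports against your actual data, which is exactly the exercise described above.

```python
# Rough HANA memory sizing illustration; rules of thumb only, not a sizing tool.
def rough_hana_ram_gb(source_db_gb: float, compression_factor: float = 4.0) -> float:
    compressed = source_db_gb / compression_factor  # assumed columnar compression
    return compressed * 2                           # assumed x2 for runtime work space

for source_gb in (2000, 4000, 8000):                # hypothetical source DB sizes
    ram = rough_hana_ram_gb(source_gb)
    print(f"{source_gb} GB source DB -> plan for roughly {ram:.0f} GB of HANA RAM")
```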

 

5. HANA migration. One additional cost to consider is the cost of the migration project itself. In-house SAP teams typically do not have the experience to perform such migration projects, so you need to hire the services of an expert company such as oXya, which has done many HANA migrations and has the knowledge, experience, and best practices to perform the migration project for you.

HANA benefits: it’s not just about speed

 

HANA is not just about speed, and that’s important to understand; HANA is also about new functionality. There are many SAP functional consulting firms who can help you in this area (oXya is not an SAP functional consulting firm). You should consider whether you want to leverage that new functionality. If yes, then the new functionality is certainly a great reason to move to HANA.

 

However, if you’re only interested in increased speed for your current SAP applications, then HANA may not deliver on your expectations. While some people have attributed a 20x speed-up to HANA, the truth is that you probably won’t get that; for most of the major processes, in areas like ERP or HR, things can’t really go 20x faster.

 

Don’t get me wrong: customer processes do run faster after a migration to HANA. In one representative case, for example, the run time of a process improved from 15 hours to 8 hours. This is a great improvement, nearly 2x, yet the customer was disappointed; they expected much more from HANA. They thought the process would run in under an hour, meaning more than 10x faster, and that didn’t happen.

So what is the challenge about processes and speed?

 

HANA is designed to run standard SAP processes, meaning processes that have been implemented and run exactly as SAP designed them. In such a case, a given process can run significantly faster, even 10x. This is one of the reasons SAP keeps telling its customers to “go back to standard”: SAP wants customers to use the standard processes, those that SAP designed, so customers gain the maximum benefit from HANA.

 

However, the real world is quite different. Most SAP customers today have customized their SAP applications, with the help of functional consulting firms. I’d estimate that at least 90 percent of customers have modified the processes that SAP developed, in order to better align them with their specific business needs. These modifications limit the possible speed improvement with HANA, because the modified processes are not optimized for it. The result for most customers, from our experience, is that a migration to HANA brings speed improvements of “only” 1.5x to 2x, and customers feel disappointed with that.

 

The bottom line: if you’re planning a HANA migration just for the sake of performance, meaning making SAP faster, then that may not be a good enough reason for such a migration. You’ll be paying a lot, and depending on the specific process, you may not get much improvement in speed, and certainly not what you’re hoping for. I would therefore suggest that you plan your HANA migration carefully, taking additional benefits into account, and you won’t regret migrating to HANA!

POC is the way to go

 

Most of our customers ask us to do a Proof of Concept (POC) for HANA, to see what such a migration would give them. We support this approach and encourage you to take the POC route: run a real test and see whether the new functionality and improved speed meet your expectations.

 

A POC has to be done with your real, live data, in order to know whether the migration to HANA will meet your demands. Using your real, live data in a POC is a basic requirement, because that’s the only way to know how your SAP customizations will be affected by the HANA migration. Using demo data from SAP is irrelevant, as it will not show you how the migration will impact your specific SAP environment, especially where functional customization was performed.

 

oXya is performing multiple POCs for multiple customers and prospects at any given moment. If you’d like more details, just contact us and we’ll discuss your specific requirements.

Considerations for delaying a migration to HANA

 

There are three main considerations that prevent customers from migrating to HANA. Furthermore, our experience shows that for most customers, a migration to HANA is not a definite “yes” or “no” decision, but more a question of timing: when exactly to perform the migration, based on the following:

 

1. HANA cost and infrastructure refresh cycles: when I wrote about HANA costs at the beginning of this blog, I wrote mostly about the HANA license costs. Here, I’m referring specifically to the cost of hardware, and to refresh cycles for SAP infrastructure (servers and storage). A refresh cycle takes place every 3-4 years; customers depreciate their equipment over 3 years, but will often wait another (4th) year before buying new equipment.

 

Hence, a customer who has just ‘recently’ purchased new infrastructure (and ‘recently’ can mean 1-2 years ago), and has already spent a significant budget on that new SAP infrastructure, would usually not migrate to HANA at the present time. They simply have no other use for the existing (relatively new) infrastructure, and they won’t invest significant amounts in new infrastructure for SAP.

 

The exact same argument works the other way around. When a customer’s refresh cycle is coming up and the budget for new infrastructure is approved, that is a good time to move to HANA. The cost of moving to HANA may be a bit higher than a regular, non-HANA infrastructure refresh, but that difference is easier to justify as part of a major infrastructure refresh.

 

2. Concerns regarding HANA maturity and migration path: we’re speaking with a huge, well-known retail brand about HANA, and they listed three concerns regarding this migration. The first concern was the already-mentioned cost; they told us the migration to HANA was too expensive for them. The second concern was that HANA was not yet mature enough; they didn’t trust it for running the Business Suite, and were not ready to base their entire business on HANA.

 

Personally, I believe that HANA as a solution is mature and in itself brings no risk to the organization. However, this specific customer had a third concern: a very complex, long, and costly migration path to HANA. We analyzed the migration project and concluded that it would take about 18 months from start to finish, since this customer has a very complex SAP environment, with many things that need to be taken into consideration. The bottom line is that this customer is right not to migrate to HANA at present, due to the complexity of the project, but this is not related to HANA’s maturity.

 

3. Issues with the Product Availability Matrix. A third common reason that prevents customers from migrating to HANA has to do with SAP’s Product Availability Matrix. Some applications simply can’t be migrated to HANA, and when such an application is mission-critical for a customer, it can block the entire migration project.

 

We have a customer that is delaying the migration to HANA due to an application that is critical to their operations. That application is not from SAP, but it is connected to the SAP system, and the application’s version is not compatible with HANA, so HANA would not be able to read this application’s database. The customer also can’t upgrade the application, because that project is itself very long and complex. The customer is planning to replace this application with another software package in about a year, to support a future migration to HANA.

 

The above example is relevant to many customers who have external, 3rd-party products connected to SAP through interfaces. When these external products are critical to the business, yet can’t work with HANA, such a scenario puts the entire HANA migration on hold. Before migrating to HANA, we must verify that all these products have a clear migration path to HANA, and that they will continue to work with it.

Conclusions

 

As you have read in this blog, a migration to HANA involves many considerations that need to be carefully analyzed and discussed. oXya, with about 600 SAP experts serving more than 220 enterprise customers worldwide, has such discussions with customers on a daily basis.

 

If your organization is considering a HANA migration, feel free to reach out to us for consultation. You can ask questions or post comments here (I promise to answer), or contact your HDS rep about this.

About the author: Melchior du Boullay is VP of Business Development and Americas Geo Manager at oXya, a Hitachi Data Systems company. oXya is a technical SAP consulting company, providing ongoing managed services (run management) for enterprises around the world. In addition, oXya helps customers running SAP with various projects, including upgrades and migrations, among them migrations to SAP HANA. Enterprises migrating to HANA use oXya to perform the migration for them, since they usually don’t have the in-house skill set and prior knowledge to perform that migration.

Earlier this year, Hitachi Data Systems acquired oXya, a leading provider of SAP managed services. Nearly 600 SAP experts around the world serve more than 220 enterprise customers and nearly a quarter of a million SAP users. oXya’s SAP experts are at the forefront of SAP technology, being among the first in the world to test and implement new SAP technologies.

 

Next week, after the Labor Day holiday, we’re launching a blog series written by HDS/oXya’s leading SAP experts. Each of our experts has many years of SAP Basis experience. They head SAP managed services teams, interact with customers on a daily basis, consult with customers on new projects and initiatives, and are at the cutting edge of SAP technology. They hear from customers about their needs and challenges, and will address these in their blogs.

 

The HDS/oXya bloggers will cover new SAP technologies; tips and best practices regarding implementations; trends in the market; and business challenges that SAP decision makers are facing, along with how to tackle them.

 

So who are our bloggers? Below is an initial list of our bloggers, with additional HDS/oXya SAP experts joining over time:

  • Melchior Du Boullay, Americas Geo Manager & VP Business Development
  • Sean Shea-Schrier, Service Delivery Manager
  • Dominik Herzig, Account Manager, Delivery for US & Canada
  • Mickael Cabreiro, Account Manager & Expert Consultant, SAP Basis
  • Philippe Gosset, Director / Client Executive
  • Neil Colstad, VP Business Development oXya Cloud Application – System Integrators, Cloud Providers & Resellers

 

And some subjects that our experts will blog about in the coming weeks include:

  • Considerations before migrating to SAP HANA
  • Disaster Recovery for SAP – what are the various options available today, and differences between them?
  • HANA Multi-Tenant Database Containers (MDC): benefits and challenges with SAP Business Suite
  • From the traditional to the new SAP GUI – what you should know and what it gives you
  • Fiori and NWBC Authentication - deploying a seamless end-users experience using SAML 2.0 and ADFS
  • Outsourcing your SAP and other mission-critical applications: why you should consider it, and how to select an outsourcing partner?
  • Permanent technology learning: the SAP Basis challenge to stay at the cutting edge

 

We want to hear from you

 

Do you have a challenge with your SAP environment? Questions you’d like answered? Any specific topic you would like us to address? Let us know by responding to this blog, or to any other HDS/oXya blogger. We promise to answer all comments. If you suggest topics for blogs, we’ll do our best to have one of our experts write about that, and add that topic to the editorial calendar.

 

Looking forward to hearing from you all, and to a live, fruitful discussion.

Ilan Vagenshtein is a veteran marketing and sales enablement expert for B2B technology companies, currently supporting SAP services marketing & sales enablement for HDS/oXya.

Recently, Janakiram MSV of Forbes wrote an interesting article titled “Hitachi Data Systems (HDS) is betting big on Smart Cities”, discussing how HDS is making big strides into smart cities, the Internet of Things, and analytics. If you haven’t read it yet, you can catch the article here. He covers Hitachi’s recent acquisitions and how they are slated to play a crucial role in delivering HDS’s social innovation solutions.

Of special interest to me was his mention of our acquisition of oXya, a system integrator focused on deploying SAP applications in cloud environments. This acquisition further demonstrates the extent of Hitachi’s commitment to the SAP ecosystem and to integrating SAP applications into our social innovation solutions. In fact, this year at SAP TechEd we will be there as one company, demonstrating our solutions for the SAP ecosystem.

 

Janakiram also talks about HDS’s partnership with system integrators such as Infosys for various projects in the public sector and other verticals. Infosys’ internal business processing system, based on SAP and SAP HANA, runs on Hitachi: with more than 150,000 users, it is one of the world’s largest single instances (as of Nov 2014) of SAP Business Suite on HANA.

 

SAP is one of Hitachi’s most strategic partners, and over the last year we have made significant investments to create an entire ecosystem around SAP. This collaboration has only gotten stronger since the release of our first SAP HANA solution almost 3 years ago. Since then, a lot has changed in the world of SAP HANA. Today, the SAP HANA platform is more than just BW running on an in-memory database (IMDB); it is about all applications running on SAP HANA and Running Simple, including the most recent release of SAP S/4HANA. Hitachi is going beyond just creating SAP HANA solutions, and is working to integrate business-critical SAP applications, including SAP HANA, into our social innovation solutions for a variety of industries, including smart cities, healthcare, and telco, among others. Stay tuned for exciting solutions to come, and check our SAP Community Page for the latest news.

