
Replay our HDS webinar event: https://www.hds.com/webtech/?commid=168793

 


 

 

October 14

The Advantage of Using Professionals for Run Management of Your SAP

Melchior du Boullay

Vice President Business Development and Geo Management for Americas

oXya

A Hitachi Data Systems Company


Scott LaBouillere

Vice President Business Development

oXya Cloud Applications

oXya

A Hitachi Data Systems Company

https://www.hds.com/webtech?commid=168793

One of the most common requests we've heard from oXya customers in recent years concerns using SAP on employees' mobile devices (tablets and cellphones). Many companies worldwide support a Bring Your Own Device (BYOD) policy, and they want employees to be able to use their own devices to connect to SAP and perform their work.

 

In addition, there are various situations in which a mobile device is almost a must. Think about mobile employees, for example, who travel frequently as part of their job. Using a tablet or cellphone to connect to SAP and perform various actions is significantly easier and faster for them than opening a laptop to do the same thing. Mobile devices also bring a lot of value on a manufacturing floor, where a person can move around and perform tasks on a tablet rather than be tied to a desktop or laptop. And there are many more examples.

 

All of these things were difficult to do with the old SAP GUI, which had no adaptive capabilities and was therefore limited to desktops and laptops. Aware of these trends, SAP embarked on a journey to provide an answer for its customers. SAP wanted to create a better user experience, enable the use of mobile devices, and at the same time simplify interactions to make them more efficient.

 

The result is Fiori, part of the new user experience developed by SAP. Based on HTML5 and a browser-based GUI, Fiori provides a responsive design, which means you can use Fiori apps on your mobile devices. In addition, Fiori offers ease of use that is far superior to what existed before, as well as significant efficiency improvements for users.

 

 

What is Fiori?


 

Fiori is a framework for an advanced, responsive user interface. It is a set of SAP applications that replaces known, standard SAP applications and processes. The emphasis on "standard" is important, because Fiori replaces applications and processes in which you are using the exact standard processes defined by SAP, without any modifications made to these processes.

 

When SAP first launched Fiori, it supported about 200 applications. Today, with EHP-7 on ECC-6, Fiori supports more than 500 applications, enabling users to leverage the exact same SAP application and process on their mobile device, in an easier-to-use way.

 

Another benefit of Fiori is that it enables the user, with one interface, to interact with multiple backend applications (CRM, BI, SRM, etc.). Before Fiori, you had to install different clients on the user side.

 

 

Efficiency of Fiori

 

Fiori is much more than just a nicer GUI that also works on your mobile devices. SAP designed Fiori to provide significant efficiency improvements for its users. These improvements translate into savings of 60-70% in the time a user needs to perform an SAP task; in the number of mouse clicks the user makes during the process; in the number of screen changes; and in the number of fields that need to be filled.

                 SAP GUI (old)    SAP Fiori UX    Efficiency
Duration         2:12 mins        47 secs         64% reduction
Clicks           39               11              71% reduction
Screen changes   8                2               75% reduction
Fields filled    5                2               60% reduction
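
For readers who want to verify the math, here is a quick Python sketch that recomputes the reduction percentages straight from the table's numbers (2:12 minutes = 132 seconds); it reproduces the table's figures up to rounding:

```python
# Recompute the efficiency figures from the table above.
metrics = {
    "Duration (s)":   (132, 47),   # 2:12 mins -> 47 secs
    "Clicks":         (39, 11),
    "Screen changes": (8, 2),
    "Fields filled":  (5, 2),
}

for name, (old_gui, fiori) in metrics.items():
    reduction = (old_gui - fiori) / old_gui * 100
    print(f"{name:14s} {old_gui:>4} -> {fiori:<3} {reduction:.1f}% reduction")
```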

 

Instead of just writing about these efficiency savings, take a look at this 2-minute video, which compares performing the exact same business process in the old SAP GUI and through the Fiori version of that application.

 

The process shown in the video, for Receivables Management, is typical of the efficiency improvements users achieve with Fiori. It's important to emphasize that the SAP application you see in the video is exactly the same in both instances; in one case it is used through the old SAP GUI, and in the second through the new Fiori interface.

 

 

Upgrading to Fiori

 

The process of upgrading to Fiori differs between customers and depends on each customer's SAP environment. On average, and assuming you're already running an up-to-date SAP environment (EHP-7 on ECC-6), the basic setup of the Fiori backend takes between one and one-and-a-half weeks (Fiori support is limited or nonexistent in previous versions of ECC). This project includes installing an SAP NetWeaver Gateway, and the exact length of the project depends on how things are configured on the backend.

 

Once the backend infrastructure is set, it takes oXya 1-2 days to technically deploy each application to Fiori (again, for standard processes only, meaning the processes and screens that SAP configured and which you’re using as is, without modifications).

 


SAP Personas: the step beyond Fiori

 

As mentioned above, Fiori is just the beginning of the road with regards to the new SAP GUI, since Fiori only covers the standard SAP processes. It's impossible to speak about Fiori without understanding that when you move to a new user experience, you also have to convert the screens and processes that you created for your company, and which are not SAP standard. To achieve that, we use SAP Personas.

 

Personas is a tool that helps you transition your modified processes and screens to the new user experience. Using NetWeaver Business Client, you can personalize your screens and processes without coding, leveraging WYSIWYG options.

 

Peter Spielvogel from SAP's marketing wrote two nice blog posts explaining the difference between Fiori and SAP Personas, and how to choose between the two.

 

 

Have you converted to Fiori?

 

To summarize, everything becomes simpler, more intuitive, and significantly more efficient once you convert from the old SAP GUI to the new SAP interface. About 60% of oXya's customers are in the process of migrating to Fiori or have already completed this migration, and they report significant value.

 

Here's one example from a customer who moved to Fiori, describing just one of the many benefits they achieved: an improvement in their quality management process on the manufacturing floor. In the pre-Fiori days, with the old SAP GUI, the quality inspectors used dedicated scanners to scan finished products' codes and then transfer the information to the SAP system. Today, after moving to Fiori, they simply use their tablets to scan the codes (using the tablet's camera), enter the scanned data directly into the Fiori application, and save significant time on the entire process (plus the cost of the dedicated scanners).

 

Have you already migrated to Fiori? If not, do you have this migration planned? Any questions or concerns? Please comment here, and I’ll address all comments.

 

 

Mickael Cabreiro is a Senior SAP Basis Consultant, currently based in oXya's Montreal, Canada office. Mickael joined oXya in 2008 and has consulted for customers across Europe and the Americas, capitalizing on his multilingual proficiency. oXya was acquired by Hitachi Data Systems in early 2015.

This blog post describes a unique challenge that my team faced—achieving single sign-on in a retail environment, in which many people work on the same device—and how we solved that challenge.

 

One of oXya's customers in Canada is a large retailer in the fashion sector, with ~500 stores across Canada and thousands of SAP users working in these stores. This customer receives a full SAP hosting service from oXya, meaning their SAP environment is managed by oXya's team of SAP experts and runs on Hitachi UCP Pro hardware at our datacenter. We also host all of their non-SAP infrastructure on Hitachi hardware.

 

This retail customer uses SAP in all of their stores. The cashiers, at the point of sale, use SAP to manage the store's inventory, requisitions, and other activities, and also to access non-SAP applications such as email, time sheets, and more. The customer asked us to find a way to provide its users with a single sign-on experience for all the applications they use in the store.

 

 

The Technical Challenge

 

To explain the challenge, let's first cover a more "standard" SAP environment. In such an environment, when a user logs onto her Windows computer and onto a specific domain, she gets a Kerberos token which serves as evidence that she logged onto that domain. Many applications perform single sign-on this way: they trust that the Windows computer is logged onto the domain, and thus sign the user on to the application. SAP operates this way, and so do countless other applications.
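
As a concrete illustration of that trust chain, here is a minimal Python sketch using the third-party requests and requests-kerberos packages; the URL is a hypothetical SAP endpoint protected by SPNEGO/Kerberos. On a domain-joined machine, the logon ticket is reused automatically, and no password appears anywhere:

```python
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL  # pip install requests-kerberos

# Hypothetical SAP Webdynpro endpoint behind SPNEGO/Kerberos SSO.
URL = "https://sap.example.corp/sap/bc/webdynpro/sap/some_app"

# The auth handler picks up the Kerberos ticket obtained at Windows logon,
# so the request is authenticated without any credentials in the code.
response = requests.get(URL, auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL))
print(response.status_code)
```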


 

However, everything works differently in a retail (or kiosk) environment. In a retail store, the user does not log on and off the computer every time she goes to work on the cash register. Instead, she logs on and off the actual application. As a result, no authentication is done at the workstation (and the same applies to any environment in which the user does not log directly onto a computer, or onto some type of Point of Sale (POS) device).

 

Our challenge, then, was to find a way to perform single sign-on using Active Directory authentication, without the user actually logging onto the PC with her Active Directory account.

 

In SAP terms, the question we faced was: how can you perform single sign-on with an SAP ABAP application, like SAP Fiori or SAP NetWeaver Business Client (NWBC), in a simple way, for example through the browser, without complicating the user's life?

 

This challenge may sound simple, but it is actually quite complicated. Both Fiori and NWBC are based on SAP ABAP Webdynpro, which has been around for a long time. An SAP ABAP Webdynpro application can perform single sign-on using modern technologies like NetWeaver SSO. However, what you can't get around with ABAP-based systems is that the user must have an account in the ABAP User Store: you have to create user accounts that grant the needed authorization for Fiori or NWBC. In the case of Fiori, you need two user accounts: one in the NetWeaver Gateway system (where you access the Fiori applications), and a second in the backend SAP system, meaning ERP or whatever system sits in the back (usually SAP ERP).

 

In other words, these users have multiple user accounts with multiple passwords, and it wasn't possible to synchronize those passwords. And as a reminder, the users in the store do not operate only within the SAP environment; they also need access to other applications, which meant even more user accounts and passwords for each user.

 

The customer asked us to find a way to authenticate against Active Directory for all of these applications, both SAP and non-SAP. They wanted a method of authentication that is seamless across all applications, using just a single username and password per user.

 

 

The solution: SAML 2.0 and ADFS


 

The solution we put together combines the Security Assertion Markup Language 2.0 (SAML 2.0) protocol with Active Directory Federation Services (ADFS) to achieve the single sign-on we needed. This solution, based on combining existing technologies, is neither commonly used nor widely known in the SAP market.

 

In fact, SAP has excellent integration with SAML 2.0, which is a well-known standard for single sign-on and authentication that almost every application supports. By combining it with Microsoft’s ADFS, we enabled the users to achieve a single sign-on, despite all the limitations described above.

 

Here's how the solution works: all a user needs to do is sign onto any SAML application, via the browser, that uses ADFS as its identity provider. It doesn't matter what device she's using; she can perform this sign-in from a tablet, a PC, or even iOS and Android devices, and she does not need to authenticate in any special manner to the device or computer. At this point the user has a SAML token, and will be automatically authenticated to any other application that uses ADFS as its SAML identity provider.
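
For the curious, the sketch below shows roughly what that first browser step carries. In a real deployment the SAP system generates this redirect itself; this stdlib-only Python sketch, with hypothetical ADFS and Fiori URLs, just constructs a bare-bones SAML 2.0 AuthnRequest for the HTTP-Redirect binding (deflate, then base64, then URL-encode), omitting request signing for brevity:

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

# Hypothetical endpoints: ADFS as the identity provider, the SAP
# NetWeaver Gateway (Fiori) as the SAML service provider.
IDP_SSO_URL = "https://adfs.example.corp/adfs/ls/"
SP_ENTITY_ID = "https://fiori.example.corp/sap/saml2/metadata"
SP_ACS_URL = "https://fiori.example.corp/sap/saml2/sp/acs"

def build_redirect_url() -> str:
    """Build an SP-initiated AuthnRequest for the HTTP-Redirect binding."""
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = f"""<samlp:AuthnRequest
        xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
        xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_{uuid.uuid4().hex}" Version="2.0"
        IssueInstant="{issue_instant}"
        Destination="{IDP_SSO_URL}"
        AssertionConsumerServiceURL="{SP_ACS_URL}"
        ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST">
        <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
    </samlp:AuthnRequest>"""

    # Redirect binding = raw DEFLATE, then base64, then URL-encode.
    deflated = zlib.compress(authn_request.encode())[2:-4]  # strip zlib header/checksum
    saml_request = base64.b64encode(deflated).decode()
    return IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

print(build_redirect_url())
```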

 

In the case of SAP applications like NWBC and Fiori, the user attempting to log on is redirected to ADFS, where she enters her Active Directory username and password to authenticate. From there, the user is redirected back to the Fiori or NWBC application, and is logged on automatically.

 

This process is seamless to the users and does not require anything except entering the Active Directory credentials they are used to. The user essentially has one username and one password, and doesn't have to worry about managing different accounts in different environments.

 

The above solution, which the customer is very happy with, works across all of the different SAP applications, as well as all of the customer's other, non-SAP applications. This results in a very good user experience, giving users one way to authenticate across all of their applications, SAP and non-SAP, saving them a tremendous amount of headaches and, of course, dramatically lowering the number of helpdesk calls from the stores.

 

 

 

Sean Shea-Schrier is an oXya Service Delivery Manager, based in oXya's Montreal, Canada branch. Sean is a veteran SAP expert with more than 10 years' experience as an SAP admin and SAP consultant (both Basis and architect). He joined oXya more than two years ago to help build the Canadian team and onboard customers. oXya was acquired in early 2015 by Hitachi Data Systems.

Disaster recovery (DR) for SAP has always been a hot topic, since SAP is one of the most mission-critical environments in any organization. Research firm IDC analyzed the cost to a company should a critical application fail. They put that cost, for a Fortune 1000 company, at between $500,000 and $1 million per hour of downtime. They further calculated that unplanned application downtime (across all applications) costs a Fortune 1000 company between $1.25 billion and $2.5 billion every year.

 

I believe that in the case of SAP, the hourly cost of downtime is on the higher side, and can exceed $1 million per hour. Failure and downtime in the SAP environment can bring an entire organization, or at least large parts of it, to a halt.

 

For that reason, significant thought is invested in a DR plan for SAP. If your main site suffers a disaster, such as flooding, fire, or a major earthquake, the DR plan should get you back to normal operations in as short a time as possible, with minimal data loss.

 

There are several DR options in the SAP world. The emergence of cloud technologies has added options that did not exist just a few years ago; there is also more flexibility today for choosing the DR option that's right for you. Of course, the various options also carry different price tags. At oXya, we've been using all of the DR solutions I'll cover here with our various customers, according to their needs and budget. There is no single solution that fits everyone. In this blog post, I'll list the various DR options we're using with our customers, as well as the pros and cons of each.

 

But before diving into the various DR options, let’s clarify two things:

 

1. We focus on the Production landscape. The SAP environment can be huge, with many landscapes and multiple servers. When speaking about DR, we limit the discussion to the Production environment. Due to cost considerations, organizations usually do not build DR solutions for the other landscapes.

 

2. DR in the SAP world means copying the database log files. All the information handled by SAP is stored in the database (e.g., Oracle, DB2, MS-SQL). Any change to the database is represented in the logs. By sending the log files to the remote/DR site, we can recover all the information SAP needs to run properly at the DR site (a deliberately simplified sketch of this idea follows below).
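
To make the idea concrete, here is a deliberately simplified Python sketch of log shipping; the paths are hypothetical, and real deployments use the database's own tooling (Oracle Data Guard, SQL Server log shipping, and so on) rather than a copy loop like this:

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths: the database's archived-log directory at the main
# site, and a mount that reaches the DR site (e.g., over MPLS).
ARCHIVE_DIR = Path("/oracle/PRD/oraarch")
DR_DIR = Path("/mnt/dr_site/oraarch")

def ship_new_logs() -> None:
    """Copy any archived log file not yet present at the DR site."""
    for log_file in sorted(ARCHIVE_DIR.glob("*.arc")):
        target = DR_DIR / log_file.name
        if not target.exists():
            shutil.copy2(log_file, target)  # preserve timestamps
            print(f"shipped {log_file.name}")

while True:
    ship_new_logs()
    time.sleep(60)  # poll once a minute; real tools are event-driven
```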

 

  

About Disaster Recovery Technologies: Synchronous and Asynchronous

 

This post deals with DR options for SAP, so I don't want to dive too deep into core replication technologies. However, we can't skip them entirely, as these technologies play a critical role later on, especially when dealing with HANA. So, here's a very short description of the two main replication technologies, explaining which one we use for DR:

 

Synchronous: the database at your main site will not commit any change before it receives confirmation that the change has also been replicated to, and committed at, the DR site. This creates, in essence, two identical sites.

 

Asynchronous: the database at your main site acts normally and commits changes. All the changes are sent to the DR site, but committing (or not committing) the changes at the DR site does not affect the main site. By definition, there is always a lag between the main site and the DR site: the DR site trails the main site. The size of the lag depends on latency, which in turn depends mostly on the distance between the two sites.
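
The toy Python sketch below captures the difference in commit semantics; the 50 ms round trip is purely illustrative of a long-distance link, and no real database is involved:

```python
import queue
import time

dr_site_log: list = []                 # stand-in for the DR database
replication_queue: queue.Queue = queue.Queue()

ROUND_TRIP_SECONDS = 0.050             # illustrative latency between distant sites

def commit_synchronous(change: str) -> None:
    """Main site cannot commit until the DR site has committed too."""
    time.sleep(ROUND_TRIP_SECONDS)     # ship the change and wait for the ack
    dr_site_log.append(change)         # DR site commits first...
    print(f"'{change}' committed on both sites")  # ...then the main site proceeds

def commit_asynchronous(change: str) -> None:
    """Main site commits immediately; the DR site lags behind."""
    print(f"'{change}' committed on main site")
    replication_queue.put(change)      # drained in the background; lag = queue depth

commit_synchronous("order 1001")       # every commit pays the round trip
commit_asynchronous("order 1002")      # commits at local speed
```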

 

For a more thorough explanation of synchronous versus asynchronous technologies, see this HDS document; page 4 has a great explanation of synchronous versus asynchronous replication, including a nice comparison table.

 

As already mentioned, the latency between the two sites depends on the distance between them. If the two sites are distant enough, as required for true DR, the latency will be significant. If you try to implement synchronous replication across that distance, you will bring the database performance (at the main site) to a crawl and make it unusable, because every single change must wait until it has also been committed to the remote database. For that reason, any typical DR solution for SAP uses asynchronous replication.

 

Another thing I'd like to clarify is the difference between a high-availability (HA) solution (also known as Active-Active) and a disaster recovery solution (Active-Passive), because I have heard people describe an HA solution as if it were also a DR solution. A high-availability solution means two servers, usually within the same datacenter or in very close proximity, that form a cluster and enable you to access and use them both at the same time. A high-availability solution is not a DR solution, or at best it is a very bad one. Think about some major disasters in the last decade, such as Hurricane Katrina in New Orleans, Hurricane Sandy in New Jersey, and the tsunami in Japan; a high-availability setup would have been totally destroyed in these cases. This brings us back to the point made above: for a true DR solution, the DR site must be far away, hundreds or even thousands of miles, to escape the disaster's impact. It also means, by definition, that the DR solution must be asynchronous.

 

 

Traditional Disaster Recovery for SAP

 

Traditionally, what we had in the SAP world was a main server at our main datacenter. In addition, we had a DR datacenter in which there was another server, usually identical to the server in the main datacenter. The traditional SAP approach to DR was to use “log shipping”. This means you gather the database logs at the main datacenter, and ship (send) them over the network (usually over MPLS) to the DR datacenter.

 

The traditional approach has been around since the beginning of SAP, and we've been using it for many years. It works great, and many customers still use this method. It is very sturdy and works with any type of infrastructure, yet it is the old-fashioned way.

 

There are three drawbacks to the traditional approach: cost, uptime speed, and audits:

 

1. Cost: using this approach, we need to own both datacenters, and have servers in both of them (or we can lease space in a datacenter for DR purposes, but that’s still a major cost).

 

2. Uptime speed: newer DR technologies enable us to get the DR site up, running and operational in a shorter amount of time, compared to the traditional approach. I’ll discuss it shortly.

 

3. Audits: the traditional approach only replicates the database log files. While these are sufficient to get the SAP environment at the DR site up and running, there are additional log files created by the SAP system itself, whenever a job is performed, an error occurs, and so forth. These files are not replicated under the traditional approach, yet these SAP logs can be important, for audits for example.

 

The rise of cloud solutions has given us additional, newer options for DR. I’ll describe them from the cheapest to the most expensive one; all of these solutions have been used by oXya customers.

 

 

DR to the Public Cloud

 

Generally speaking, one of the cheapest solutions for DR is to use a public cloud service, such as Amazon Web Services (AWS). If your server needs to be backed up and doesn't change frequently (e.g., a web server, front-end applications, or interface applications), then a public cloud can be an option. You back up to a server on AWS and have that server "sit" there, turned off, so you don't pay for that server until you bring it up. You only bring it up when a disaster strikes your main servers, or once in a while for updates.


This is a very cheap option to achieve some type of Disaster Recovery, because you would pay very little so long as your servers are turned off (down).

 

For SAP, however, you'll need to keep your databases in sync, which means the database server on Amazon must stay online and continuously receive updates.

 

It's important to emphasize that this method is NOT recommended for everyone, due to the security concerns of having your production data in a public cloud. You may also run into difficulties setting up all your SAP interfaces for DR within a public cloud (bank interfaces, etc.). Still, this option can become relevant when budgets are very small and customers can't afford one of the other, more expensive DR solutions for SAP. In such a case, some DR is better than none, so this solution can be considered.

 

How does it work in practice? It is quite similar to the Traditional DR method. You install all your DR servers on AWS, shut down all the application servers (to avoid ongoing payment), and keep only the database server live, on a continuous basis, to receive ongoing updates of the log files. You then send the database logs from the main customer site to the DR database server on AWS. Once a disaster occurs, you bring the other servers up and can operate your SAP environment directly from Amazon.
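
As an illustration, the failover step can be as small as the Python sketch below, using AWS's boto3 library; the instance IDs and region are hypothetical placeholders for the pre-installed, normally stopped application servers:

```python
import boto3  # pip install boto3

# Hypothetical IDs of the pre-installed, normally stopped SAP application
# servers; the database server instance stays running to receive the logs.
APP_SERVER_IDS = ["i-0abc1234def567890", "i-0123456789abcdef0"]

ec2 = boto3.client("ec2", region_name="us-east-1")

def activate_dr_site() -> None:
    """On disaster, power on the dormant application servers."""
    ec2.start_instances(InstanceIds=APP_SERVER_IDS)
    ec2.get_waiter("instance_running").wait(InstanceIds=APP_SERVER_IDS)
    print("DR application servers are up; SAP can now run from AWS")

activate_dr_site()
```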

 

This setup enables budget-strained SAP customers to obtain a fairly cheap SAP disaster recovery option. It is far cheaper than having a physical server at another datacenter, because on an ongoing basis you only pay for the uptime of the database server that is kept in sync with the logs. All the other servers on Amazon are shut down and cost almost nothing (you still pay for the storage those servers use).

 

This solution can be used with various providers, not just AWS. oXya, for example, provides this service through its own cloud, and there are additional options in the market such as Microsoft Azure. The idea behind all of these is the same: you only pay for servers that are actually being used.

 

 

SAP DR using VMware SRM

 

Another DR option for SAP is VMware SRM (Site Recovery Manager). For some of our customers, we implemented and now use the VMware SRM method instead of the Traditional DR method. The difference is that the traditional method uses database-level replication, by sending the database log files, whereas with VMware SRM we perform full server replication. This means we include all the additional files created as part of SAP operations, such as the SAP logs. All of that additional data is replicated directly to the DR site.


With VMware SRM, you have a VMware farm at your primary datacenter. You also have a VMware farm at the DR site, although it is usually a smaller farm, sized only to satisfy the needs of the Production environment. You then perform VMware SRM replication across these two farms (in other words, you duplicate the full VMs, including SAN replication and the VM setups).

 

VMware SRM can be based either on Storage replication or on the vSphere hypervisor. Without going into the technicalities behind these two options, Storage replication is usually used when a very low Recovery Point Objective (RPO) is required. That option is somewhat less flexible and requires having the same type of high tier storage infrastructure on both sites.

 

The VMware SRM method gives you full server replication for SAP, whereas the traditional method gave you only database-level replication. In most cases database-level replication is enough, but you still have some work to do before the DR system is up and on par with the main, original site.

 

Therefore, a VMware-based replication will allow for a quicker/shorter Recovery Time Objective (RTO), which is the time for a business process (SAP, in our case) to be restored after a disruption. In addition, you keep all the files that are not residing within the database, and which are lost when using the Traditional DR for SAP.

 

 

HANA-specific Disaster Recovery

 

The last type of DR we should discuss is HANA-specific disaster recovery, because this one is a bit different. HANA usually runs on its own dedicated server (its own appliance), or it can be installed as a Tailored Datacenter Integration (TDI) setup.


 

However, HANA has its own replication method. For customers who have HANA and want a DR solution, HANA offers a tool called HANA Replication, which replicates the entire HANA appliance to another site. There are several ways of doing that, but first let’s describe the typical setup for HANA.

In a typical setup at the main, Production site, you have one application server running the SAP application. In addition, you have the HANA database running on its own appliance. This setup is similar to what you may have done prior to HANA, with the database server separate from the application server (running an Oracle database, for example).

 

On the DR site, you need another HANA appliance to replicate HANA, and also another application server. Let's cover the HANA replication first; I'll then address the replication of the application server.

 

To replicate HANA from your main datacenter to a DR datacenter, you must have a second operational HANA database at the DR site (and yes, it's quite costly, as my friend and colleague Melchior du Boullay covered last week in his blog post, Considerations Before Migrating To SAP HANA). You can have any combination of HANA appliance or TDI at your main datacenter and your DR site; that doesn't matter, so long as both databases are operational and the DR database is at an equal or higher release level.

 

In theory, there are two ways of performing that replication, like any other replication: synchronous and asynchronous, with which we started. However, due to HANA's enormous speed and performance (after all, it's an in-memory technology), any attempt to implement synchronous replication without sub-millisecond round-trip latency will bring HANA's performance to a crawl and make it unusable; you would lose all the benefits of HANA. This is why, in HANA's case, asynchronous replication is the only practical DR solution. Synchronous replication is really only an option for high availability, where both HANA databases sit next to each other, and even then it requires careful consideration.

 

As for the application server itself, there are two methods for replication:

 

1. Using VMware SRM: if your primary application server runs on VMware, you can use the SRM method mentioned above to keep a DR copy in sync with your original application server.

 

2. Install another server: alternatively, if you're not using any kind of virtualization, all you need is to install a fresh application server that has the exact same SID (and same system number) as your original application server, and shut down this DR server. You bring it up when you need to switch to the DR site, and it will work fine. The only things you would lose are the SAP logs and potential interface files. But the SAP environment will work just fine; it will let you log into the system, and you will see all the transactions and all of your data.

 

 

Handling RPO in SAP

 

Recovery Point Objective (RPO) is defined as the maximum targeted period of time in which data might be lost due to a major incident (disaster). In other words, how much data can you "afford" to lose in case of a disaster, as defined in your Business Continuity Plan? How is this handled by the various SAP disaster recovery methods?

 

The answer varies, depending on your DR solution. In the Traditional method, your RPO depends on the size of your database logs: the bigger the logs, the bigger the RPO; the smaller the logs, the smaller the RPO. However, if your logs are too small, you'll see a performance impact, because the database must create files very frequently. Hence, there is a balance to strike. The RPO is always the result of a discussion between the customer and oXya's SAP consultants, to define what RPO is acceptable to the customer. Once the RPO is defined, oXya's experts set the size of the database logs to match that RPO.
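
As a back-of-the-envelope illustration of that balance, the workload figures in this short Python sketch are invented, but they show how log size translates into a worst-case RPO under the Traditional method:

```python
# Rough RPO estimate for the Traditional (log shipping) method.
# All figures below are illustrative, not from a real customer.
log_file_size_mb = 500        # size at which a log switch occurs
redo_rate_mb_per_hour = 2000  # how fast the database generates redo
transfer_time_min = 2         # time to ship one log to the DR site

# A log is only shipped once it is full, so the worst-case data loss
# is roughly one full log file plus the time in flight.
fill_time_min = log_file_size_mb / redo_rate_mb_per_hour * 60
rpo_min = fill_time_min + transfer_time_min
print(f"worst-case RPO ~ {rpo_min:.0f} minutes")  # ~17 minutes here
```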

 

For VMware SRM, the RPO can vary between zero (using synchronous storage-level replication; again, not recommended across long distances) and 24 hours, depending on the replication settings. It's important to clarify that 24 hours is not a realistic RPO in the SAP world; it is simply the maximum RPO that VMware SRM allows (page #48). A typical RPO, if not using the storage replication option, is 15 minutes.

 

 

So which DR method is preferred for SAP?

 

This is the million-dollar question, which oXya's SAP experts are frequently asked.

 

The answer: there is no single disaster recovery solution that is best for all cases. The DR solution needs to be adapted to the customer's environment, and most importantly to each customer's constraints. DR is always a compromise between how much money you're willing to pay and how much protection you get. oXya's experts work within the constraints you set to build the best DR solution possible for your SAP environment.

 

The Traditional method is still used by many of our customers; it works very well and has proven highly reliable over the years. If a customer comes to us asking about DR and has no specific constraints, we start the discussion with the Traditional method and explain its limitations. If the customer is comfortable with those limitations, we move forward with it.

 

If the customer requires a more sophisticated DR solution, then we discuss the VMware replication solution. However, implementing the VMware solution depends on whether the customer has already virtualized their SAP environment. If they are still running SAP on physical servers and are not considering virtualization, then SRM is irrelevant.

 

And if the customer has severe budget constraints, we will talk about an AWS-type DR solution. This means a relatively cheap DR solution, but it comes with the constraints I listed above.

 

 

 

 

Dominik Herzig is VP of Delivery for US & Canada at oXya. He has 10 years of SAP experience, starting as a Basis admin and then moving to SAP project management and account management. He was one of the first few people to open oXya's US offices back in 2007, and has performed numerous projects moving customers' SAP environments to a private cloud, including disaster recovery solutions.

HANA has been the hottest new technology from SAP in recent years. The innovative in-memory database, which promises to significantly accelerate applications, improve business processes, and positively affect the user experience, has been gaining significant interest from customers.


oXya, a Hitachi Data Systems company that provides enterprises with managed services for SAP, has significant experience with HANA migrations. To date, we have migrated about 30 of our 220 customers to HANA, or more than 10%. This is a much higher share than SAP's official, overall HANA statistics, which put the portion of SAP customers who have migrated to HANA at under 3%. The number of SAP landscapes we have migrated is several times higher, well over 100, as we've migrated multiple landscapes for each customer.

 

oXya was one of the first companies in the world to perform HANA migrations. This blog's goal is to help you, the SAP customer considering a migration to HANA, by drawing on our extensive experience with SAP HANA. What are the main considerations to look at when weighing a HANA migration? And what might argue against such a migration?

 

 

Cost: HANA is expensive

 

The first discussion around any HANA migration is usually about cost, especially the license cost. This is where you should ask yourself: am I ready to make this investment?

 

HANA is expensive. Of course, the discussion should be about HANA's benefits and whether the migration is worth the ROI, but many customers never get to that stage; the investment itself is a high enough barrier to prevent a migration, or to delay the decision to some point in the future.

 

So what costs are involved with a migration to HANA?

 

1. HANA database cost. If you are currently running your SAP environment on an Oracle database (or MS-SQL, DB2, or any other), then you're only paying annual fees for that database; the cost of purchasing the database itself was incurred in the past.

 

Migrating to HANA means purchasing an entirely new database. The initial database cost (a Capex cost) will be at least a six-digit number in US dollars, and can easily reach seven digits, depending on what you do in your SAP environment, which affects the size of the HANA appliance. We'll get back to sizing later on, as that's a critical point with HANA.

 

2. HANA annual maintenance fees. As a ballpark, your annual HANA license fees will be around 15% of what you’re paying to SAP for your other SAP licenses. So, for example, if you’re paying SAP one million dollars annually in maintenance fees, then you’ll pay around another 150,000 dollars every year, in HANA maintenance fees.

 

3. Cost of infrastructure for HANA. The prevalent method of installing HANA is on a dedicated appliance; the cost of that appliance depends on the size you purchase. HANA can also be installed as a Tailored Datacenter Integration (TDI) setup, where HANA runs on larger, existing servers (compute, storage) in your datacenter, rather than on a separate appliance. In both cases, HANA requires significant hardware resources to run efficiently. The actual cost of hardware depends on the size of the HANA license.

 

4. HANA sizing. With HANA, you pay per gigabyte of the HANA appliance. This cost per GB applies both to the HANA license itself and to the hardware appliance. For example, if you're running ECC on a 1TB appliance, then you will need to purchase a HANA license for a 1TB appliance. This means that the size of your SAP landscape, and how your company uses SAP on a daily basis, directly influence your HANA cost.

 

Since you'll be paying HANA fees by size, correctly sizing your HANA appliance is very important. You don't want to buy a HANA appliance that is too big and pay for capacity you don't use; you also don't want to buy one that is too small and insufficient for your needs. Many companies contract oXya to perform the sizing for them, as correct sizing has a major effect on cost, for both the appliance and the HANA license. A rough illustration of the arithmetic follows below.
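
The Python sketch below illustrates one commonly cited rule of thumb for HANA memory sizing; the compression and work-space factors are assumptions that vary by workload, so treat this only as a ballpark and use SAP's official sizing reports for a real project:

```python
# A rough, commonly cited rule of thumb for HANA memory sizing, shown
# here only to illustrate why appliance size (and therefore license
# cost) depends on your data. Real sizing uses SAP's sizing reports.
source_db_size_gb = 4000   # current disk footprint of the source DB (example)
compression_factor = 4     # assumed; actual compression varies by data
work_space_factor = 2      # rule of thumb: as much again for working memory

compressed_gb = source_db_size_gb / compression_factor
required_ram_gb = compressed_gb * work_space_factor
print(f"ballpark HANA appliance size: {required_ram_gb:.0f} GB RAM")  # 2000 GB here
```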

 

5. HANA migration. One additional cost to consider is the cost of the migration project itself. Your in-house SAP team does not typically have the experience to perform such migration projects. You need to hire the services of an expert company such as oXya, which has done many HANA migrations and has the knowledge, experience, and best practices to perform the migration project for you.

 

 

HANA benefits: it’s not just about speed

 

HANA is not just about speed, and that's important to understand. HANA is also about new functionality, and there are many SAP functional consulting firms that can help you in this area (oXya is not an SAP functional consulting firm). You should consider whether you want to leverage that new functionality. If yes, then the new functionality is certainly a great reason to move to HANA.

 

However, if you're only interested in increased speed for your current SAP applications, then HANA may not deliver on your expectations. While some people have attributed a 20x speed-up to HANA, the truth is that you probably won't get that. For most major processes, like ERP or HR, a process can't really run 20x faster.

 

Don't get me wrong: customer processes do run faster after a migration to HANA. In one representative case, for example, the run time of a process improved from 15 hours to 8 hours. This is a great improvement, nearly 2x, yet the customer was disappointed. They expected much more from HANA; they thought the process would run in under an hour, meaning 10x faster, and that didn't happen.

So what is the challenge with processes and speed?

 

HANA is designed to run standard SAP processes, meaning processes that have been implemented and run exactly as SAP designed them. In such a case, a given process can run significantly faster, even 10x. This is one of the reasons SAP keeps telling its customers to "go back to standard"; SAP wants customers to use the standard processes it designed, so that customers gain the maximum benefit from HANA.

 

However, the real world is quite different. Most SAP customers have customized their SAP applications with the help of functional consulting firms. I'd estimate that at least 90 percent of customers have modified the processes SAP developed, to align them better with their specific business needs. These modifications limit the speed improvement possible with HANA, because the modified processes are not optimized for HANA. The result for most customers, in our experience, is that a migration to HANA brings "only" a 1.5x to 2x speed improvement, and customers feel disappointed with that.

 

The bottom line: if you're planning a HANA migration just for the sake of performance, meaning making SAP faster, that may not be a good enough reason. You'll pay a lot and, depending on the specific process, you may not get much improvement in speed, and certainly not what you're hoping for. I would therefore suggest that you plan your HANA migration carefully, taking additional benefits into account, and you won't regret migrating to HANA!

 

 

POC is the way to go

 

Most of our customers ask us to do a Proof of Concept (POC) for HANA, to see what a migration would give them. We support this approach and encourage you to take the POC route: run a real test and see whether the new functionality and improved speed meet your expectations.

 

A POC has to be done with your real, live data in order to know whether the migration to HANA will meet your demands. Using your real, live data is a basic requirement, because that's the only way to learn how your SAP customizations will be affected by the HANA migration. Using demo data from SAP is irrelevant, as it will not show how the migration will impact your specific SAP environment, especially where functional customization was performed.

 

oXya is performing multiple POCs for multiple customers and prospects at any given moment. If you’d like more details, just contact us and we’ll discuss your specific requirements.

 

 

Considerations for delaying a migration to HANA

 

There are three main considerations that prevent customers from migrating to HANA. Furthermore, our experience shows that for most customers, a migration to HANA is not a definite “Yes” or “No” decision, but more a question of timing – when exactly to perform the migration, based on the following:

 

1. HANA cost and infrastructure refresh cycles: when I wrote about HANA costs at the beginning of this blog, I wrote mostly about the HANA license costs. Here, I’m referring specifically to the cost of hardware, and to refresh cycles for SAP infrastructure (servers and storage). A refresh cycle takes place every 3-4 years; customers depreciate their equipment over 3 years, but will often wait another (4th) year before buying new equipment.

 

Hence, a customer who has 'recently' purchased new infrastructure (and 'recently' can mean 1-2 years ago), and has already spent a significant budget on that new SAP infrastructure, would usually not migrate to HANA at the present time. They simply have no use for the existing (relatively new) infrastructure, and they won't invest significant amounts in new infrastructure for SAP.

 

The exact same argument works the other way around. When a customer's refresh cycle is coming up and the budget for new infrastructure is approved, that is a good time to move to HANA. The cost of moving to HANA may be a bit higher than a regular, non-HANA infrastructure refresh, but the difference is easier to justify as part of a major refresh.

 

2. Concerns regarding HANA maturity and migration path: we're speaking with a huge, well-known retail brand about HANA, and they listed three concerns regarding this migration. The first was the already-mentioned cost; they told us the migration to HANA was too expensive for them. Their second concern was that HANA was not yet mature enough; they didn't trust it for running the Business Suite, and for basing their entire business on it.

 

Personally, I believe that HANA as a solution is mature and in itself brings no risk to the organization. However, this specific customer had a third concern: a very complex, long, and costly migration path to HANA. We analyzed the migration project and concluded that it would take about 18 months from start to finish, since this customer has a very complex SAP environment with many things to take into consideration. The bottom line is that this customer is right not to migrate to HANA at present, due to the complexity of the project, but this is not related to HANA's maturity.

 

3. Issues with the Product Availability Matrix. A third common reason preventing customers from migrating to HANA has to do with SAP's Product Availability Matrix. Some applications just can't be migrated to HANA, and when such an application is mission-critical for a customer, it can block the entire migration project.

 

We have a customer that is delaying its migration to HANA due to an application that is critical to their operations. That application is not from SAP, but it's connected to the SAP system. The application's version is not compatible with HANA, so HANA would not be able to read this application's database. The customer also can't upgrade the application, because that project is itself very long and complex. The customer plans to replace this application with another software package in about a year, to support a future migration to HANA.

 

The above example is relevant to many customers who have external, third-party products connected to SAP through interfaces. When these external products are critical to the business yet can't work with HANA, the entire HANA migration is put on hold. Before migrating to HANA, we must verify that all these products have a clear migration path to HANA and will continue to work with it.

 

 

Conclusions

 

As you've read in this blog, a migration to HANA involves many considerations that need to be carefully analyzed and discussed. oXya, with about 600 SAP experts serving more than 220 enterprise customers worldwide, has such discussions with customers on a daily basis.

 

If your organization is considering a HANA migration, feel free to reach out to us for a consultation. You can ask questions or post comments here (I promise to answer), or contact your HDS rep.

 

 

 

About the author: Melchior du Boullay is VP of Business Development and Americas Geo Manager at oXya, a Hitachi Data Systems company. oXya is a technical SAP consulting company providing ongoing managed services (run management) for enterprises around the world. In addition, oXya helps customers running SAP with various projects, such as upgrades and migrations, including migrations to SAP HANA. Enterprises migrating to HANA use oXya to perform the migration for them, since they usually don't have the in-house skill set and prior knowledge to do it themselves.

Earlier this year, Hitachi Data Systems acquired oXya, a leading provider of SAP managed services. Nearly 600 SAP experts around the world serve more than 220 enterprise customers and nearly a quarter of a million SAP users. oXya’s SAP experts are at the forefront of SAP technology, being among the first in the world to test and implement new SAP technologies.

 

Next week, after the Labor Day holiday, we're launching a blog series written by HDS/oXya's leading SAP experts. Each of our experts has many years of SAP Basis experience. They head SAP managed services teams, interact with customers on a daily basis, advise customers on new projects and initiatives, and are at the cutting edge of SAP technology. They hear from customers about their needs and challenges, and will address these in their blogs.

 

The HDS/oXya bloggers will cover new SAP technologies; tips and best practices regarding implementations; trends in the market; and the business challenges that SAP decision makers face, along with how to tackle them.

 

So who are our bloggers? Below is an initial list of our bloggers, with additional HDS/oXya SAP experts joining over time:

  • Melchior Du Boullay, Americas Geo Manager & VP Business Development
  • Sean Shea-Schrier, Service Delivery Manager
  • Dominik Herzig, Account Manager, Delivery for US & Canada
  • Mickael Cabreiro, Account Manager & Expert Consultant, SAP Basis
  • Philippe Gosset, Director / Client Executive
  • Neil Colstad, VP Business Development oXya Cloud Application – System Integrators, Cloud Providers & Resellers

 

And some subjects that our experts will blog about in the coming weeks include:

  • Considerations before migrating to SAP HANA
  • Disaster Recovery for SAP – what are the various options available today, and differences between them?
  • HANA Multi-Tenant Database Containers (MDC): benefits and challenges with SAP Business Suite
  • From the traditional to the new SAP GUI – what you should know and what it gives you
  • Fiori and NWBC Authentication - deploying a seamless end-users experience using SAML 2.0 and ADFS
  • Outsourcing your SAP and other mission-critical applications: why you should consider it, and how to select an outsourcing partner?
  • Permanent technology learning: the SAP Basis challenge to stay at the cutting edge

 

We want to hear from you

 

Do you have a challenge with your SAP environment? Questions you’d like answered? Any specific topic you would like us to address? Let us know by responding to this blog, or to any other HDS/oXya blogger. We promise to answer all comments. If you suggest topics for blogs, we’ll do our best to have one of our experts write about that, and add that topic to the editorial calendar.

 

Looking forward to hearing from you all, and to a lively, fruitful discussion.

 

 

 

 

Ilan Vagenshtein is a veteran marketing and sales enablement expert for B2B technology companies, currently supporting SAP services marketing & sales enablement for HDS/oXya.

Recently, Janakiram MSV of Forbes wrote an interesting article titled "Hitachi Data Systems (HDS) is betting big on Smart Cities", discussing how HDS is making big strides into smart cities, the Internet of Things, and analytics. If you haven't read it yet, catch the article here. He covers Hitachi's recent acquisitions and how they are slated to play a crucial role in delivering HDS's social innovation solutions.

Of special interest to me was his mention of our acquisition of oXya, a system integrator focused on deploying SAP applications in cloud environments. This acquisition further demonstrates the extent of Hitachi's commitment to the SAP ecosystem and to integrating SAP applications into our social innovation solutions. In fact, this year at SAP TechEd we will be there as one company, demonstrating our solutions for the SAP ecosystem.

 

Janakiram also talks about HDS's partnerships with system integrators such as Infosys for various projects in the public sector and other verticals. Infosys' internal business processing system, based on SAP and SAP HANA, runs on Hitachi. With more than 150,000 users, Infosys runs one of the world's largest single instances (as of Nov 2014) of SAP Business Suite on HANA, and it runs on Hitachi.

 

SAP is one of Hitachi's most strategic partners, and over the last year we have made significant investments to create an entire ecosystem around SAP. This collaboration has only gotten stronger since the release of our first SAP HANA solution almost three years ago. Since then, a lot has changed in the world of SAP HANA. Today the SAP HANA platform is more than just BW running on an in-memory database (IMDB). It is about all applications running on SAP HANA and Running Simple, including the most recent release, SAP S/4HANA. Hitachi is going beyond just creating SAP HANA solutions; we are integrating business-critical SAP applications, including SAP HANA, into our social innovation solutions for a variety of industries, including smarter cities, healthcare, and telco, among others. Stay tuned for exciting solutions to come, and check our SAP Community Page for the latest news.




HDS was proud to host the inaugural SAP Inside Track Mexico City event for the SAP Mentors in Mexico City, at the HDS Hitachi Data Systems facilities in Santa Fe: Prolg. Paseo de la Reforma 1015, Torre B Piso 1, Santa Fe, Ciudad de Mexico CP 01376, with a full agenda we co-created together. See: SAP Inside Track Mexico City - 09 of September ... | SCN



The attendees learned about the HDS and SAP partnership throughout the day. All attendees were eligible to win prizes from Hitachi and other sponsors, such as the SAP INSIDER STORE and SAP PRESS.

 


 


 

Several attendees tweeted photos via www.twitter.com/InsidetrackMEX, www.twitter.com/sapmentors, and www.twitter.com/HDSGlobalAccts, with hashtags #SITMEX and #sapinsidetrackmexico.

 

Interested in attending another session in Mexico? Feel free to email sapinsidetrackmexico@gmail.com, or follow along within SCN at: SAP Inside Track on SCN and Community Events - SCN Wiki

 

 


 

To learn more about the HDS / SAP partnership, you can tune into Generosa Litton.





Gain Deeper Insights from Big Data & Mobility to Engage Shoppers Anywhere, Anytime

Register Today

Envision the shopper of tomorrow strolling into a store. Inside, her shopping experience is customized to the products and service she wants. Her needs are anticipated, and she finds the products she wants at the prices she expects. She pays with her phone, her shopping is delivered, and her loyalty is recognized and rewarded.

This vision may seem futuristic, but tomorrow is actually happening today.

How do you gain deep insights to make data-driven decisions and implement digital strategies that drive traffic, increase basket size, and convert shoppers? As part of The Future of Retail webinar series, you will learn how to:

  • Deliver compelling shopper engagement across all channels
  • Anticipate and fulfill your customers' expectations
  • Personalize offers that deliver measurable value
  • Leverage your stores as a fulfillment network to deliver more options for your customers

 

 

 

 

 

Date: August 25, 2015

 

Time: 12:00 noon ET

 

Featured Speakers:

 

Gerry Yeo, Principal, Retail Industry Value Engineering, SAP

 

Jayant Chhallani, Global SAP Practice Head – Retail & hybris, Atos

 

 

Register Today

 

 

It's not a world-record attempt, but rather a test of efficiency. The consolidated approach for the scale-up appliances reduces hardware and operational costs, optimizes time-to-value for existing hardware, and shortens implementation cycles. A maximum of eight scale-up systems with different memory sizes and operating systems can be configured with Hitachi Virtual Storage Platform G600, and a maximum of six scale-up systems can be configured on Hitachi Virtual Storage Platform G400. Read our Reference Architecture Guide to learn more.

 

The Tailored Datacenter Integration (TDI) program allows HANA customers to leverage existing hardware and infrastructure components for their HANA environment. Our testing and certification mean you can use your VSP with your preferred server. We hope that server is one of ours, but we understand you may already have your HANA server. We also offer an SAP HANA appliance, "all-in-one-box" approach with our Hitachi Unified Compute Platform Select for SAP HANA converged-infrastructure solutions.


We also offer the Hitachi Storage Adapter for SAP Landscape Virtualization Management. This adapter implements the interface for managing Hitachi storage systems and allows you to easily integrate them into SAP LVM. You can dynamically provision and de-provision LUNs on the Hitachi storage arrays, as well as attach and detach LUNs to and from resources configured for access over the Fibre Channel protocol.

 

The VSP G400 and VSP G600 are certified SAP HANA enterprise storage systems. The VSP family allows you to run workloads at peak performance with best-in-class flash performance, deliver guaranteed 100% data availability, and simplify IT operations while being more responsive to business needs with HDS software-defined infrastructure.

The retail industry is facing unprecedented changes. Shoppers want a personalized shopping experience across all channels. "Click & Collect" and "same-day shipping" are becoming the norm. The supply chain has become customer-centric, so bringing efficiency to the entire retail value chain has become more important than ever.

Omni-channel retailing, channel-specific pricing and promotions, customer behavior and experience, seasonality, and trends are the forces that make forecasting challenging.

A proven forecasting and replenishment system is needed to minimize inventories while still maximizing product availability. SAP Forecasting & Replenishment is a proven solution for retailers of all sizes, in all retail segments, for all locations (store, distribution center, multi-echelon), and can be deployed integrated with SAP Retail or as a stand-alone solution integrated with existing non-SAP systems.

Join us to learn how you can leverage SAP F&R to optimize your inventory levels & improve customer experience.

Hitachi-Indydesktop.jpg

For the past 18 months, it seems that everywhere we turn we are hearing about SAP HANA. It runs fast (like our Indy Car!). It does amazing things. It does real-time analytics. It's expensive. These are all comments I've heard from customers over the past year. While many have investigated it, few have gotten beyond the proof-of-concept phase, given the lack of a truly compelling event. That's about to change now that SAP has announced the next generation of Business Suite 4 (SAP S/4HANA).


Running on its advanced in-memory platform, SAP S/4HANA will bring forward a number of innovations and allow customers to do things in real time that they couldn't do previously.

So how can you leverage this to its maximum potential in your environment? One way HDS has made this easier for customers is by leveraging the LPAR (logical partitioning) technology within its compute blades. LPAR allows a customer to logically divide computing resources such as CPU, memory and I/O devices on "bare metal" via firmware, providing better isolation. The ability to partition a server allows customers to run multiple instances of SAP on one physical blade: development, test and production. Since LPAR technology allows you to split one server into multiple logical ones, there is no more risk of "noisy neighbors" causing disruption to the other tenants on the blade. This also means that customers can maximize their investment in technology, adding compute power as they need to grow while not wasting any resources on their current systems.
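To illustrate the idea (and only the idea; real LPARs are enforced by firmware, not application code), here is a toy Python model of carving one blade's resources into isolated partitions. All the sizing numbers are arbitrary examples.

    # Toy model of logical partitioning: carve one physical blade into
    # isolated partitions with dedicated resources. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Blade:
        cpus: int
        memory_gb: int

    @dataclass
    class Lpar:
        name: str
        cpus: int
        memory_gb: int

    def partition(blade, requests):
        """Allocate dedicated CPU/memory slices; refuse to oversubscribe."""
        if (sum(c for _, c, _ in requests) > blade.cpus or
                sum(m for _, _, m in requests) > blade.memory_gb):
            raise ValueError("requests exceed physical capacity")
        return [Lpar(n, c, m) for n, c, m in requests]

    blade = Blade(cpus=48, memory_gb=1024)
    lpars = partition(blade, [("prod", 32, 768), ("test", 8, 128), ("dev", 8, 128)])

Because every partition owns its slice outright, a runaway job in "dev" cannot steal cycles or memory from "prod", which is exactly the noisy-neighbor guarantee described above.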


Efficiency is key, given recently published reports stating that roughly "30% of servers deployed worldwide have not delivered information or computing services in the last six months" (http://www.datacenterknowledge.com/archives/2015/06/03/report-30b-worth-of-idle-servers-sit-in-data-centers/). This wasted compute power translates to about $30B in assets sitting idle.
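For a sense of how a figure like that arises, here is the back-of-the-envelope arithmetic. The installed-base count and per-server cost below are illustrative assumptions of ours, not numbers from the cited report.

    # Back-of-the-envelope check on the "$30B idle" claim.
    servers_worldwide = 33_000_000   # assumed installed base (illustrative)
    idle_fraction = 0.30             # from the cited report
    avg_server_cost = 3_000          # assumed average cost per server, USD

    idle_asset_value = servers_worldwide * idle_fraction * avg_server_cost
    print(f"${idle_asset_value / 1e9:.0f}B idle")   # -> $30B idle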


HDS has been partnering with SAP for many years and offers a certified Unified Compute Platform (UCP) for SAP HANA stack that comes pre-configured and ready to use, accelerating your time to market. In an age when everything is required on demand, this solution not only allows a customer to achieve that, but also allows them to reuse the components for other purposes should that be required in the future. No waste, and maximum efficiency with no compromise in performance.

By Gary Chen, IDC Research Manager, Cloud and Virtualization System Software

Sponsored by HDS


Over the past decade, virtualization has grown from a lab experiment to the standard way to deploy servers today. While virtualized servers have become the majority, there are still many bare metal servers in operation. The toughest workloads to virtualize are high-performance mission-critical Tier 1 applications, and many customers are just starting to tackle these apps. There have been many advances over the years that make virtualization more ready than ever to take on the toughest applications.

From a pure performance point of view, the hypervisor has seen numerous optimizations and takes advantage of all the latest virtualization acceleration features in today's CPUs, which reduces the overhead of virtualization. Also, the rest of the virtualization software ecosystem has grown and matured tremendously. Core virtualization packages encompass features such as monitoring, management, storage, and networking, while third-party tools have all become virtualization-aware, making managing a complex virtualized application more reliable than ever.

Hardware has also been tuned for virtualization as virtualized workloads became prevalent on servers. Besides the virtualization acceleration features found in the silicon, we've also seen core counts scale out and RAM sizes increase, allowing customers to pack more virtual machines (VMs) onto a server and also run very large VMs. Beyond just the server, the rise of converged infrastructure now integrates storage and networking. Besides the convenience of deployment and easier management for customers, this model also allows for testing, certification, and performance tuning across the multiple hardware components, capabilities that are particularly critical when virtualizing Tier 1 applications.
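As a simple illustration of why those core counts and RAM sizes matter for consolidation, a first-order packing estimate looks like the sketch below; every figure in it is an assumed example, not a sizing recommendation.

    # First-order VM packing estimate; all inputs are assumed examples.
    host_cores, host_ram_gb = 72, 1536    # a large modern multi-socket host
    vm_vcpus, vm_ram_gb = 4, 32           # a typical mid-size VM
    vcpu_overcommit = 2.0                 # modest CPU overcommit ratio

    by_cpu = int(host_cores * vcpu_overcommit // vm_vcpus)
    by_ram = host_ram_gb // vm_ram_gb     # RAM is rarely overcommitted
    print(min(by_cpu, by_ram), "VMs per host")   # -> 36 VMs per host

Here the CPU budget binds first; with larger-memory VMs the RAM term typically becomes the constraint instead, which is why the growth in server RAM noted above matters so much for Tier 1 consolidation.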

All these factors have made the virtualization of Tier 1 apps possible today, allowing customers to maintain required levels of availability and performance while reaping the many agility and cost-saving benefits of virtualization. A recently released whitepaper by IDC highlights the virtualization of Tier 1 apps on converged infrastructure, and features a case study of a customer that was able to virtualize SAP applications, a complex Tier 1 application used by many customers. Today, even the newest SAP HANA application, which is an in-memory, high-transaction-rate database, is approved for production use in virtualized environments. SAP HANA would be considered a "torture test" application by any measure, and yet today is able to be virtualized, attesting to how far virtualization technology has advanced — the limits of which continue to be expanded.

For more information, please read the recent IDC white paper, "Virtualizing Tier 1 Applications on Converged Infrastructure."

StockArtHDSCommunityBlogIntelMay2015.jpg


By Jim Fister, Intel Corporation


I have a friend…


Okay, at this point you're already thinking that I'm kidding with you.  All of us Intel automatons don't have lives or friends; we just relentlessly execute to Moore's Law and develop complex PowerPoint decks that highlight the feats of bit-twiddling that make things run incredibly fast.  Well, hey, we do have lives, and we do make friends. I'll leave it to your imagination how we find the time.

So anyway, I have this friend.  He's a guy I hired a couple years back ("AHA!" you say…) as an intern, probably one of the better performance-oriented coders I've ever met.  The stuff he does with digital image processing is incredible. He came on full time to work for Intel for another friend of mine, and then he got caught by "the thing."


Big data called.


Today he’s the CTO of his own drone company.  They fly over farm fields taking high-resolution, high-spectrum photos. They then mesh all the images together and do special image processing using a large pile of cloud-based servers to look for water problems, or insect infestations, or the like.  All of this data is stored and digitally farmed over time as the analog farmers fix the crops using his data.

So if you’re following along, a former Intel guy is making little robot airplanes, and there are massively parallel systems somewhere in the ether making crop growing more efficient. Life is glorious.  Big data is glorious.  Heck yea.


If you're less prone to molding your own Kevlar airplanes and programming flight systems to match camera shutter speeds, then big data can still be for you.  The biggest enterprises across the world still benefit from data analytics, and it's possible to do your own digital farming on your own fertile fields.  Let's face it, if anything is growing faster than Moore's Law, it's digital data.  And as all that data comes in, there's a strong need to use it before it gets stale.


And, well… that’s a problem too.  Those crops the farmers are pulling likely took half a year to grow and have weeks of usable harvest time before they turn into so much primeval goo.  Not so with your enterprise data.  You’re getting it fast every minute of the day in the global economy, and about half of it probably isn’t worth a whole lot after a few hours, if not a few seconds.  Decisions have to be made on harvesting your best stuff before a human really has time to add anything positive to the equation.  So I guess even Intel processors need some friends, too.


Take SAP, which decided years ago that database architectures fundamentally had to change to keep up with the demands of modern data analytics.  That was about the same time that Intel was starting to conceptualize a new high-end system architecture.  So after a little bit of crop rotation, we jointly harvested some pretty cool stuff. SAP created HANA, a database architecture that runs totally in memory using scores of parallel processing threads. And Intel pulled up a nice crop of Xeon® E7 v2 processors that provided a significant number of processing threads, and memory where the data could rest.


Even that wasn't enough. We needed a solid friend like Hitachi to produce the Unified Compute Platform (UCP): a massively parallel, robust, reliable, and manageable system. The system has significant features like embedded logical partitioning (LPAR) that can simplify the delineations between production and test.  It utilizes the latest, hot-off-the-press Xeon E7 v3 processor in a symmetric multi-processing (SMP) configuration along with the SAP engine to give any enterprise its own equivalent to a flying drone with a digital window into fields of data.  Who knew that a big server could be so nimble in the air, at least in the abstract?


Enterprise is glorious. Big data is still glorious.


Heck yea.


The point is that traditional business processing doesn't work anymore.  Sure, you can run your business like you always have, but the pace of innovation is just way too fast these days.  When you have seconds to make a decision, a traditional database is minutes away from the answer.  With SAP HANA and SAP Business Suite 4 (S/4HANA), a Hitachi UCP system can fundamentally change the way you do business.  If your crops are the first ones to market, you get the advantage over everyone else. A lot of the algorithms being used are fine for the business; it's the pace that needs to change.  Where tradition meets innovation, that's the transition that easily moves a company into the era of big data analytics.


Intel and SAP will keep providing the base tools for innovation, and Hitachi is there to build the vehicle for getting your decisions to the market first.  With friends in the field like that, I think you’ll be pretty happy to plant that next round of data where it can grow and thrive.


Jim Fister grew up playing in the dirt in Ohio to the point where his mother despaired of ever keeping him clean.  He spends his time these days kicking up the clods around Big Data Analytics and the Internet of Things, unless he’s somewhere in the mountains of Oregon getting fresh air.

Recently, Atos launched state-of-the-art solutions for retailers. These solutions will help retailers increase customer footfall across channels, build intimate customer experiences, improve the supply chain, and reduce technology costs.

Atos retail solutions leverage SAP HANA and the Hitachi Data Systems Unified Compute Platform (UCP), which is optimized for SAP HANA® deployments. HDS UCP solutions simplify the deployment of SAP HANA while also helping accelerate the time to value of Atos retail solutions.

Some of these solutions are detailed below:

Atos Advanced Market Basket Analysis powered by SAP HANA

For retailers and wholesalers, inventory is usually the largest single asset on the balance sheet, and the cost of that inventory is the single largest expense item on the income statement. As per Ted Hurlbut, inventory carrying costs are estimated to often represent 25-30% [4] of the value of inventory on hand. And for an average Fortune 1000 company, a modest 5% decrease in inventory cost translates into a $20 million increase in profits [5]. Traditionally, retailers have used processes such as Vendor Managed Inventory (VMI) to shed the burden of inventory carrying cost. Although innovations such as VMI enable organizations to view historical sales, they have minimal to no capability for predicting future sales.
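Reading those two cited figures together allows a quick sanity check; the sketch below simply backs out what they imply, with no new data added.

    # Backing out what the cited figures imply (simple arithmetic only).
    profit_gain = 20_000_000     # $20M profit increase, per [5]
    cost_reduction = 0.05        # from a 5% decrease in inventory cost

    implied_inventory_cost = profit_gain / cost_reduction   # $400M
    print(f"Implied annual inventory cost: ${implied_inventory_cost/1e6:.0f}M")

    # At the 25-30% carrying rate cited in [4], that cost level corresponds
    # to roughly $1.3B-$1.6B of inventory on hand.
    low, high = implied_inventory_cost / 0.30, implied_inventory_cost / 0.25
    print(f"Implied inventory on hand: ${low/1e9:.1f}B to ${high/1e9:.1f}B")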

With the help of predictive analytics, the assortments can be planned more efficiently, so that best performing articles/products are identified and sufficient stock made available to meet demand, while a lower priority is placed on slower moving items.

Customers who buy one product may typically purchase another product at the same time for any number of reasons (bread and butter, for example). With predictive analytics, once purchasing patterns are identified, an increase in the sales of the first product can trigger an automated decision to increase the inventory of both the first product and its complementary product, helping to maximize total sales volume and customer satisfaction.

Atos Advanced Market Basket Analysis (A-MBA), powered by SAP HANA, uses Apriori algorithms to analyze huge amounts of POS TLOG data in order to identify and quantify customer buying patterns, preferences and behaviors. These patterns help retailers sell larger basket sizes by identifying products that drive drag-along sales, along with geographic trends, local demand and performance. They also help plan and optimize assortments to drive higher sales and profits across channels.

With the help of the massively parallel processing and in-memory computing of SAP HANA, the data can be analyzed at the store level; specific offers can be formulated and rolled out at a local level.
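To show the class of algorithm involved, here is a minimal, self-contained Apriori-style sketch in Python. It mines frequently co-purchased pairs from toy baskets and reports support and confidence; A-MBA itself runs this kind of mining over huge volumes of TLOG data inside HANA, not in application code like this.

    # Minimal Apriori-style pair mining over toy basket data (illustrative).
    from itertools import combinations
    from collections import Counter

    baskets = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"bread", "jam"},
        {"butter", "milk"},
        {"bread", "butter", "jam"},
    ]

    min_support = 0.4            # pair must appear in >= 40% of baskets
    n = len(baskets)

    item_counts = Counter(item for basket in baskets for item in basket)
    pair_counts = Counter(frozenset(p) for basket in baskets
                          for p in combinations(sorted(basket), 2))

    for pair, count in pair_counts.items():
        support = count / n
        if support >= min_support:
            a, b = sorted(pair)
            confidence = count / item_counts[a]   # rule a -> b
            print(f"{a} -> {b}: support {support:.0%}, confidence {confidence:.0%}")

On this toy data the strongest rule is bread -> butter (60% support, 75% confidence), exactly the kind of drag-along relationship that would trigger the automated inventory decision described earlier.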

Atos STORe (Strategic, Tactical & Operational analytics for Retail)

As per RSR [0], winning retailers (74%) consistently put more emphasis on making decisions using experience/intuition AND data when compared to laggards (50%) [1]. Winning retailers are moving on to execute with more precision, based on what they learn from customer insights, and to react more quickly to shifts in demand. Specifically, that means improved inventory effectiveness and improved marketing effectiveness.

Data visualization and predictive modeling are becoming important to retailers' "basic" BI and analytical capabilities. Winners are moving faster to deliver insights for operational use through mobile access, alerts, web browser access, and scorecards and dashboards. Key to an analytics strategy for "operationalizing" insights is a mobile technology strategy for store employees, and particularly store managers. Today's most successful retailers really do empower their employees with information and decision-making authority, and customers notice. Both Apple and Amazon.com customer service representatives are empowered to make very high-value customer-related decisions.

Atos STORe (Strategic, Tactical & Operational analytics for Retail) will help retailers become winners. Atos STORe is powered by SAP HANA and has pre-built content covering more than 200 KPIs across merchandising, supply chain, store operations and multi-channel retailing. Atos STORe is also mobile-enabled across platforms, with real-time dashboards and reports.

This will help retailers take informed decisions at the moment of truth. It will not only empower frontline store associates to serve the customer better, but will also help senior management take real-time decisions on the move. Today, this solution is under development.

Location-based Personalized Promotions for Retailers powered by SAP HANA

eCommerce (or etailing, as it is called) already crossed the $1 trillion ceiling worldwide in 2013 [3] and is expected to grow at a CAGR of 18.3%. Yet, as per a Forrester survey, a whopping 52% of customers are willing to pay 1-5% more to purchase a product in a physical store rather than buy it online and wait for the product to be shipped [2]. David Geisinger, head of retail business strategy for eBay, said, "Physical stores just aren't going away."

Today, retailers are facing two major challenges with respect to their stores: first, how to increase footfall into the stores and thereby achieve faster ROI on store investments; and second, how to increase their share of the customer's wallet.

A few interesting examples used around the globe:

Meatpack (a shoe store in Guatemala) "Hijack": it used GPS technology to detect users of its app when they were in competitor stores, then sent them a message with a discount. The discount would start at 99% and drop by 1% every second, so the faster users got to a Meatpack store, the better the discount (modeled in the sketch after these examples).

B&Q App: an excellent response to the growing use of mobiles in stores and the 'threat' of showrooming. Rather than rewarding purchases with loyalty points, the B&Q Club app gives customers a reason to go in-store by offering exclusive discounts on various products.

Carrefour Smart Shopper: A mobile app designed to enhance the in-store experience for the Chinese market. The app uses location sensing technology, a social shopping list, and an ad system that enables retailers to engage with their customers while in store. Customers can also use the app to navigate to the product they want while in store.
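The Meatpack countdown mechanic above is simple enough to model directly; here is a hedged sketch, where the starting value and drop rate come from the example and the floor is our assumption.

    # Meatpack-style countdown pricing: starts at 99% and drops 1% per
    # second. The floor value is an assumption for the sketch.
    def hijack_discount(seconds_elapsed, start=99, floor=0):
        return max(start - seconds_elapsed, floor)

    print(hijack_discount(0))    # 99 (percent) the instant the offer fires
    print(hijack_discount(45))   # 54 after a 45-second sprint to the store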

And this is where Atos Location-based Personalized Promotions (LbPP), powered by SAP HANA, can help retailers enable real-time analysis of billions of POS TLog line items. Atos LbPP will help retailers push the right offers and promotions to the right customer within a geo-fenced area, derived from demographics, preferences, historical purchases and more. In doing so, Atos LbPP will help retailers increase footfall to the stores as well as increase wallet share.

This will help retailers improve the ROI on their marketing spend as well as provide a unified customer experience across channels. This solution is available for co-innovation with customers.
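A core building block behind any such offer engine is the geofence test itself. Below is a minimal haversine-based sketch; the coordinates and radius are arbitrary examples, and a production deployment would rely on the mobile platform's location services rather than hand-rolled math.

    # Minimal geofence check using the haversine great-circle distance.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    def in_geofence(customer, store, radius_km=0.5):
        return haversine_km(*customer, *store) <= radius_km

    # Is this shopper within 500 m of the store? (example coordinates)
    print(in_geofence((40.7506, -73.9935), (40.7527, -73.9917)))  # True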

SAP Forecasting & Replenishment powered by SAP HANA

Omni-channel is evolving and maturing. Order anywhere, collect anywhere is the new de facto standard of retailing. The U.S. Department of Commerce estimates that 94.5% of all retail sales still happen in a store, but that's down from 98% just a few years ago. And that doesn't gauge activities like buy online/return in-store. As the shift piles up, fulfilling online demand moves from a rounding error in store plans to something that must be forecasted and managed on its own. And forecasting demand for online is not as simple as treating the channel as a flagship or extra-large store: demand comes from a much larger base of consumers spread across a much larger geography than any store ever must accommodate. Planning merchandise in that situation is far more complex than the already complex requirements for planning at individual store levels. SAP F&R is available for demo with vanilla scenarios in the sapCC landscape.

The SAP Forecasting & Replenishment application enables retailers to accurately understand, predict, and manage the balance between inventory and customer service. The software helps them increase productivity while reducing order and delivery costs. It includes various aspects in its analysis, such as inventory and delivery data, and improves on-shelf availability. The result: improved shelf fill rates, improved shelf turnaround, improved on-time delivery, better store in-stock rates, and reduced inventory, because the forecast is accurate.

It can help retailers reduce inventory levels by 15% or more while increasing service levels to as much as 98.5%.
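As a closing illustration of the forecast-and-replenish loop such a system automates, here is a minimal exponential-smoothing sketch. The smoothing factor, lead time, safety stock and sales history are all assumed example values; SAP F&R's actual models are far richer (seasonality, promotions, multi-echelon effects and so on).

    # Toy replenishment loop: exponential-smoothing forecast plus a simple
    # reorder-point rule. Illustrative only, not SAP F&R's actual models.
    def smooth(history, alpha=0.3):
        """One-step-ahead exponential smoothing forecast."""
        forecast = history[0]
        for demand in history[1:]:
            forecast = alpha * demand + (1 - alpha) * forecast
        return forecast

    daily_sales = [52, 48, 61, 55, 49, 58, 60]   # last week's units (example)
    lead_time_days = 3
    safety_stock = 20                            # assumed variability buffer

    reorder_point = smooth(daily_sales) * lead_time_days + safety_stock
    on_hand = 150
    if on_hand <= reorder_point:
        print(f"Reorder: on hand {on_hand} <= reorder point {reorder_point:.0f}")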