The Non-Disruptive Innovation with Disruptive Technology

We all like innovation. We always see an opportunity for improvement, whether in a product or in a business process. In fact, the economy is built on a permanent striving for change, and the worst thing for it is stagnation. From time to time we experience massive technology changes, and they are disruptive, sometimes very disruptive.

Think about ocean liners replaced by jets, break-bulk freighters by container ships, landline phones by cellular phones, disk-based databases by in-memory databases. The replacement technology allows for a completely different schedule, cost, and flexibility, or, in the case of the databases, for new business processes. What do we do with the existing infrastructure? Can it continue to be used? Can it be refurbished?

What happened to the ocean liners? They became hotels or cruise ships, but only for a short time. Yes, some freighters were converted in the early days of container logistics, but again they didn't fit in the long run. And how our life changed with cellular phones: people no longer remember the lines in front of a telephone booth at the airport or on a busy street. So the changes will happen; they are part of the innovation process. Fighting them is counterproductive: it costs extra energy and may not even work.

Despite all that, SAP talked about non-disruption as a strategy. Other IT companies claim they promised full upward compatibility 30 years ago and can show that they have kept the promise. Let's have a look at where these strategies make sense and where they will fail.

There are currently a few megatrends in IT that are changing the way it supports business:

  1. SaaS: applications running in the cloud and offered as a service. Completely new are the generic shared-service ones, like marketplaces, business networks, etc., which provide the same set of services to many clients while connecting trading partners. More traditional enterprise applications are also offered as a complete service and run separately for each client, while sharing some system services through multi-tenancy concepts to reduce cost, much like the shared services in an apartment building.
  2. The IoT (Internet of Things) will flood us with data coming from a myriad of sensors reporting the well-being or problems of expensive machinery. What was already standard yesterday for aircraft, we will soon see in drilling machines or even toasters.
  3. The sprawling social networks have become a basic part of our lives and, as such, give a testimonial about what we like and don't like (remember thumbs up/down); they have become a vital source of information for business.
  4. On a much smaller scale, because it's happening inside the applications, we see in-memory databases replacing disk-based ones at a rapid pace.

How does SAP play out the 'non-disruption' strategy when faced with these megatrends? To deal with textual data, digest billions of sensor messages, and work as a SaaS application in the cloud, SAP opted for a completely new platform for its enterprise applications. HANA is not only an in-memory database using a columnar store instead of a row store; it also offers libraries for business functions and predictive math algorithms, OLTP and OLAP functionality in one system, and distributed data management for data marts or IoT solutions.
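To picture why a columnar store favors the analytical access patterns described here, consider a toy sketch in plain Python (not HANA; the records are invented): summing one attribute touches every field of every record in a row layout, but only one dense array in a column layout.

```python
# Toy illustration of row store vs. column store (not HANA itself).
rows = [
    {"id": 1, "customer": "ACME",   "amount": 100.0},
    {"id": 2, "customer": "Globex", "amount": 250.0},
    {"id": 3, "customer": "ACME",   "amount": 80.0},
]

columns = {
    "id":       [1, 2, 3],
    "customer": ["ACME", "Globex", "ACME"],
    "amount":   [100.0, 250.0, 80.0],
}

# Row store: scan whole records just to sum one attribute.
total_row_store = sum(r["amount"] for r in rows)

# Column store: the attribute is already one contiguous array; this
# access pattern is what makes on-demand aggregation fast (and the
# homogeneous values compress well).
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store == 430.0
```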

Technology-wise, HANA is truly disruptive, but that doesn't mean everything has to change, at least not instantly. Let's have a look at SAP's ERP system. It has been a success story for over 20 years; thousands of companies have invested billions to set up the system, maintain it over the years, and develop customer-specific add-ons for competitive advantage. There is tremendous business value captured in the system configuration and the accumulated data. SAP kept both intact in moving from anydb to HANA. No data is lost and the configuration parameters stay intact. Thanks to one of the great standards in IT, the SQL interface, all programs can continue to run unchanged. That's the first step, and it guarantees a smooth transition from anydb to HANA.

But HANA is disruptive, and the unbelievable speed improvements allow us to drop some concepts of the nineties that existed only to guarantee short response times. With sERP, SAP could show that the transactional update of hierarchical aggregates, as introduced in the days of MIS (management information systems), is no longer necessary. Instead, any kind of aggregation for reporting or analytical purposes now happens on demand. The various replicas of transactional data in different sort sequences are also no longer a performance benefit. Once the system runs on HANA, all the programs accessing those data objects have to be changed, but this happens semi-automatically: the old data structures are replaced by SQL views with identical functionality and a similar name, and the programs continue to run without any further change. Now we can drop the redundant data structures and gain a 2x reduction in the overall data footprint.
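A minimal sketch of this view-replaces-aggregate pattern, using SQLite purely as a stand-in for any SQL database (table and column names are hypothetical, not SAP's actual schema): a totals table that a program used to maintain on every write becomes a view with identical structure, so old reports keep working while the redundant data disappears.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Transactional data kept at the lowest level of granularity.
cur.execute("""CREATE TABLE line_items (
    account TEXT, period TEXT, amount REAL)""")
cur.executemany("INSERT INTO line_items VALUES (?, ?, ?)", [
    ("4711", "2014-10", 100.0),
    ("4711", "2014-10", 250.0),
    ("4711", "2014-11", 80.0),
])

# Formerly a redundant table updated inside every transaction; now a
# view with identical functionality and a similar name, computed on
# demand, so existing programs run unchanged.
cur.execute("""CREATE VIEW account_totals AS
    SELECT account, period, SUM(amount) AS total
    FROM line_items GROUP BY account, period""")

for row in cur.execute("SELECT * FROM account_totals"):
    print(row)  # ('4711', '2014-10', 350.0) ...
```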

Now the question is: shall we stop here, or do we continue to take advantage of the new properties of a columnar store in memory? The traditional aggregation rules, implemented as programs maintaining the totals, are more than 20 years old and no longer very important. Many different selections, aggregations, comparisons, and predictions are now possible, because the transactional data is kept at the lowest level of granularity and all further processing happens in algorithms on demand, no longer as part of a transaction. New programs will be added and supersede the old functionality, but they come in parallel and as such continue to support the 'non-disruptive' paradigm. There is a disadvantage to this strategy: it takes more time. But it's worth it. All customers can move forward gradually, keeping major accomplishments of the past unchanged. A similar approach is being used for the introduction of the new UI: the FIORI apps drop in in parallel, and the user or user groups have time to adjust to the new layout and interaction model.
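The flexibility gained by keeping only the lowest level of granularity can be sketched the same way (again SQLite, invented data): without pre-maintained totals, every new question is simply another on-demand query over the same single copy of the transactional data, grouped however the user likes.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, qty INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("EMEA", "A", 5), ("EMEA", "B", 3), ("APJ", "A", 7),
])

# No aggregate tables to maintain: each perspective is computed on demand.
by_region = con.execute(
    "SELECT region, SUM(qty) FROM sales GROUP BY region ORDER BY 1").fetchall()
by_product = con.execute(
    "SELECT product, SUM(qty) FROM sales GROUP BY product ORDER BY 1").fetchall()

print(by_region)   # [('APJ', 7), ('EMEA', 8)]
print(by_product)  # [('A', 12), ('B', 3)]
```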

More radical changes come as an option. All enterprise applications will go to the cloud and be managed by the service provider, and system maintenance will accelerate significantly. The dramatic reduction in the complexity of the data model and the removal of technically critical update tasks led to a system in which data inserts and read-only data retrieval dominate. When fewer data changes happen in a system, its stability and availability increase. Most of the application components are by definition read-only and can therefore join the system at any time. The dependency between code and data is now at a completely different level. This is a prerequisite for successful use as SaaS.
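One way to picture a system dominated by inserts and reads is an insert-only sketch (a common in-memory database idea; the schema below is hypothetical): state changes are appended rather than updated in place, so readers never wait on writers and the full history is preserved as a side effect.

```python
import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE order_status (
    order_id TEXT, status TEXT, valid_from TEXT)""")

def set_status(order_id, status):
    # Insert-only: no UPDATE ever touches existing rows.
    con.execute("INSERT INTO order_status VALUES (?, ?, ?)",
                (order_id, status,
                 datetime.now(timezone.utc).isoformat()))

set_status("O-1", "created")
set_status("O-1", "shipped")

# The current state is just a read-only query over the appended rows
# (ordered here by insertion order).
current = con.execute("""
    SELECT status FROM order_status
    WHERE order_id = 'O-1'
    ORDER BY rowid DESC LIMIT 1""").fetchone()
print(current)  # ('shipped',)
```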

It may sound surprising that the change to the HANA platform is the basis for these advances, but that was always the idea of platforms: they offer services all applications need and shield them from the ongoing changes in technology. The final product, sERP, looks and feels fundamentally different, solves problems that were unthinkable yesterday, and still carries the business configuration and the enterprise data into the future nearly without changes.

The reduction in the complexity of transactional data has even more dramatic consequences. We now see a reduction in the data footprint of 10-20x while keeping all data in memory. If we split data into actual data (necessary to conduct and document business) and historical data (where no changes are allowed any more), we can further optimize the database processes and reduce the amount of data kept in memory.
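A minimal sketch of that actual/historical split, under assumed rules (the partition names and the closing condition are illustrative, not SAP's actual mechanism): once a document can no longer change, it migrates to the historical partition and stays immutable, so only the much smaller actual partition must be kept hot in memory.

```python
# Hypothetical two-partition layout: 'actual' holds documents that can
# still change, 'historical' holds frozen documents that could be
# compressed heavily or moved to cheaper storage.
actual = [
    {"doc": "INV-3", "year": 2014, "open": True},
    {"doc": "INV-4", "year": 2014, "open": True},
]
historical = [
    {"doc": "INV-1", "year": 2011, "open": False},  # no changes allowed
    {"doc": "INV-2", "year": 2012, "open": False},
]

def close_document(doc_id):
    # Closing a document moves it from the actual to the historical
    # partition; from then on it is read-only.
    for i, row in enumerate(actual):
        if row["doc"] == doc_id:
            row["open"] = False
            historical.append(actual.pop(i))
            return

close_document("INV-3")

# Day-to-day queries touch only the small actual partition.
print([r["doc"] for r in actual])      # ['INV-4']
print([r["doc"] for r in historical])  # ['INV-1', 'INV-2', 'INV-3']
```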

There were two reasons to split enterprise systems into ERP, CRM, SRM, SCM, PLM, and HCM as transactional systems plus a separate business data warehouse. First, the sheer size of the systems outgrew single-computer capacities, so we split them up. Second, once we had independent subsystems, we could develop them at different speeds using different technologies. With all of them moved to a brand new platform, the HANA platform, neither the size argument nor the speed argument is valid any more. All systems can now be reintegrated, eliminating the enormous data transfer between them. Managing one single system with the above components is many times easier and less costly, especially considering the advances in maintenance strategy mentioned above. The separate data warehouse still has value, but much of the operational reporting and analytics can now come back to the transactional system. Capacity concerns are no longer valid: the replication of the actual data partition is the answer, and it contributes to HA (high availability) at the same time.

Running in the cloud, it becomes much easier to integrate the simplified business suite with other services in the cloud. Future enterprise solutions will make use of all the generic business services like Ariba, Concur, Fieldglass, SuccessFactors, and many others. The last question is: will eventually everything run in the cloud? No, but it will run in the cloud first. There is no principal limitation keeping cloud software from running on premise. The financial terms may be different and the maintenance rhythm will be different, but all innovation will finally spill down to the on-premise versions, even, where technically viable, to the ones on non-HANA platforms.

We mitigate the consequences of disruptive innovation, but we do not carry the past forward forever: nobody boards an ocean liner any more to go to New York, lives without a cellular phone, or ships cargo as discrete items. We carry forward the established business processes, complement them with new ones, and finally phase some of them out.

By the way, all of this should happen without any downtime for the business.

Source: http://www.saphana.com/community/blogs/blog/2014/10/28/the-non-disruptive-innovation-with-disruptive-technology