
Traditionally, Fibre Channel networks have been considered more secure than Ethernet networks because snooping traffic on a Fibre Channel fabric requires a physical tap.

 

Basic security in Fibre Channel networks starts with zoning. Every server that accesses data on an enterprise storage array is populated with two or more Fibre Channel HBAs. These HBAs send and receive data from the arrays and are called initiators in industry parlance. Each array appears as a target for the initiators, and virtual disk drives called LUNs are made available to the initiators via these targets.

 

Zoning is a basic mechanism whereby one manually specifies which initiator can communicate with which target port. Each enterprise array has multiple target ports through which LUNs are published.

 

When an initiator and a target are connected to a Fibre Channel fabric switch, they perform a fabric login called a FLOGI. On successful login, they register their addresses, called WWNs, with the name server running on the switch. Each WWN is unique to an initiator or target and is assigned by the manufacturer, much like a MAC address is assigned to a network card.

 

As part of the deployment process, a storage administrator configures zoning, creating sets (typically initiator-target pairs) that may overlap. Once zoning is configured, an initiator cannot communicate with targets, or the LUNs behind them, outside its zone.

 

Now consider a typical server in the datacenter hosting up to 100 virtual disk drives, or LUNs.

50 of these LUNs may be dedicated to an Oracle database server and the remaining 50 may be exported via NFS to clients. Consider a scenario where ransomware infiltrates the NFS share and thereby gains access to the other 50 LUNs where Oracle data is stored. It could potentially take over the Oracle database files and lock out the database by encrypting them.

 

Now consider a traditional scenario where the zoning mechanism plays out as below.

 

  1. Initiators and targets log in to the Fibre Channel name server via a fabric login, or FLOGI
  2. Zoning is configured, allowing specific initiators to talk to specific targets
  3. On zoning, a state change notification is triggered, whereby an initiator in the same zone as the target logs in to the target via a PLOGI
  4. Next, a process login, or PRLI, is initiated, whereby communication via the FC-4 layer (typically SCSI) can occur between the initiator-target pair

 

Now, what if an application-layer login to the name server were introduced after the PLOGI and prior to the PRLI? Each application could be assigned a GUID, and an administrator could specify which application may talk to which LUNs published on the target.
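The following is a minimal, hypothetical sketch of the kind of application-to-LUN check the name server could enforce; the GUIDs, the table layout and the function are illustrative assumptions, not part of any Fibre Channel standard.

```python
# Hypothetical application-layer access check, not an actual Fibre Channel API.
# The name server would hold an administrator-configured table mapping
# application GUIDs to the LUNs they may reach behind a given target.
from typing import Dict, Set

app_zone_table: Dict[str, Set[int]] = {
    "3f2a9c10-oracle-db": {0, 1, 2, 3},       # Oracle database process
    "7b41e855-nfs-export": {50, 51, 52, 53},  # NFS server process
}

def application_login_allowed(app_guid: str, lun_id: int) -> bool:
    """Return True only if the application's GUID is zoned to this LUN."""
    return lun_id in app_zone_table.get(app_guid, set())

# The NFS server process cannot reach an Oracle LUN, and vice versa.
assert not application_login_allowed("7b41e855-nfs-export", 2)
assert application_login_allowed("3f2a9c10-oracle-db", 2)
```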

 

One could take this further with the nameserver on the switch acting as a key exchange server and forcing the application and the target to authenticate using a symmetric or asymmetric key exchange.

 

The only weak link here is the case where a rogue process manages to spoof the GUIDs of authentic processes. This could potentially be worked around by making the GUID partly dependent on a binary hash of the process executable.
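As a rough illustration, and purely as an assumption about how such a scheme might work, the GUID could be derived deterministically from a hash of the executable:

```python
# Minimal sketch (an assumption, not a defined standard): derive the
# application GUID from a hash of the process executable, so a rogue process
# running a different binary cannot simply replay another application's GUID.
import hashlib
import uuid

def derive_app_guid(executable_path: str, vendor_namespace: uuid.UUID) -> uuid.UUID:
    """Build a GUID whose value depends on the executable's contents."""
    with open(executable_path, "rb") as f:
        binary_hash = hashlib.sha256(f.read()).hexdigest()
    # uuid5 is deterministic: the same binary and namespace always yield the
    # same GUID, while a tampered or different binary yields a different one.
    return uuid.uuid5(vendor_namespace, binary_hash)

# Example with a hypothetical namespace and path:
# ns = uuid.UUID("12345678-1234-5678-1234-567812345678")
# print(derive_app_guid("/usr/bin/oracle", ns))
```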

A client recently asked me if I could develop a report that analyzes the primary and secondary areas of their business. I told her that if she could elaborate on the meanings of "primary" and "secondary" markets, I would be glad to help. She and her analysis team went away and discussed it; after a few days the team still could not come to a consensus. Put simply, one person can define what constitutes primary and secondary markets one way, and another person another way.

 

I understood the client's vision and the difficulty she was facing in articulating exact specifications for the primary and secondary markets. But the lack of definitions gave me an opportunity to be creative. And I had to consider that clients typically prefer intuitive tools over more sophisticated tools that they do not understand.

 

Imagine a situation like this: a task involving two participants. The first participant is a regular person and the second is a car technician. On a table there are two sets of the following, one set for each person: 100 nails, a wooden plank, a regular hammer, and a sophisticated powered hammer from the shop. Each participant is given 5 minutes to drive as many nails as they can into the wooden plank.

 

Given that the regular person has no experience with the power tool, compared to a seasoned car technician with ample experience, which tool does each participant use? Due to the time constraint, the regular person would probably go with the basic hammer rather than try to learn the sophisticated tool from the shop. Even though the powered hammer is the ideal tool for the task, any untrained person would choose the tool he or she is more comfortable with, especially in an environment with resource constraints.

 

In this spirit, while developing the report, I decided to introduce percentiles as a simple statistical tool that captures the client’s intent.

 

A percentile is a familiar statistical measure that is often seen in, for example, ACT and SAT scores. The School of Public Health at Boston University defines a percentile as "a value in the distribution that holds a specified percentage of the population below it." If a student receives a test score in the 95th percentile, her score is above 95 percent of the population, or within the top 5%. Simple enough.

 

By integrating percentiles into a map, the combination can capture the areas with high business activity, where activity can be abstracted to any measurable volume. The workbook linked to this article applies the technique, combining percentiles and maps, to public data from the City of Chicago (Figure 1).

 

Fig. 1 - Food Inspection Density in Chicago and surrounding areas

 

The city's food inspection data includes information on the status, type and location of inspections conducted by officials. The heatmap represents the volume of inspections. Percentiles are calculated over the distribution of inspections, and zip codes are used to aggregate the individual locations on the map. With this, zip codes with more inspections are shown in darker shades of red, while areas with sparse volumes are shaded green.
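As a rough, hedged illustration of the mechanics behind the workbook, the sketch below aggregates inspections by zip code and keeps only the areas at or above a chosen percentile; the column names ("zip", "results") are assumptions and may differ from the actual City of Chicago dataset.

```python
# Illustrative sketch of the percentile-over-map technique using pandas.
import pandas as pd

def top_areas(inspections: pd.DataFrame, threshold: float = 0.99) -> pd.DataFrame:
    """Aggregate inspections by zip code and keep zips at/above a percentile."""
    counts = (
        inspections.groupby("zip")
        .size()
        .reset_index(name="inspections")
    )
    # Percentile rank of each zip code within the inspection-count distribution.
    counts["percentile"] = counts["inspections"].rank(pct=True)
    return counts[counts["percentile"] >= threshold].sort_values(
        "inspections", ascending=False
    )

# Example: restrict to failed inspections first (as in Figure 3),
# then take the top 1% of zip codes.
# failed = df[df["results"] == "Fail"]
# print(top_areas(failed, threshold=0.99))
```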

 

Fig. 2 - Areas of top 1% inspections in Chicago

Fig. 3 - Area of top 1% inspections did not pass

 

Looking at Figure 2, by selecting the top 1% (the 99th percentile) of the number of inspections, we can see that zip codes 60614 and 60647 have the highest number of inspections compared to other areas. We can take this a step further and select the 'Failed' value under the Result dropdown (Figure 3). From this, we see that 60614 has the most food safety violations in the metropolitan area and represents the top 1% of the distribution.

 

Fig. 4 - Inspections in Evanston

Fig. 5 - Areas of top 1% inspections in Evanston

 

Now, instead of looking at the bigger market, let's shift our focus to a suburb of Chicago and see how the technique responds to the change. In the City dropdown, I selected Evanston, as seen in Figure 4. As expected, the market in Evanston is much smaller, with only two zip codes compared to the Chicago metropolitan area. In statistical terms, the population distribution shrank; here, the population distribution is the distribution of inspections.

 

The change in the market and population distribution does not affect the core concept and functionality of percentiles. When the top 1% is selected, as seen in Figure 5, only one of the two zip codes remains: 60201, with 7 inspections. Note that zip code 60614, which we saw earlier with all values checked under the City dropdown, no longer shows in the visualization, because 60614 is not part of Evanston.

 

The technique retrieves the relevant information before calculating the percentile distribution. Even though the size of the market changed between the City of Chicago and suburban Evanston, the technique adapts to the change in the population distribution while retaining the concept of top and bottom markets. The top 1% of food code violations in a large metropolitan city like Chicago and the top 1% of violations in a smaller area like Evanston are both alarming. The top 1% of Chicago is of more concern because the market is much bigger, but regardless of the size of the region, the top 1% represents the primary area of concern.

 

The percentile technique provides quantitative descriptions of qualitative business concepts. At the beginning of the article, I shared a story where the client and her team struggled to articulate the meanings of primary and secondary markets, especially when the scope of the business changes depending on which subset of the business is selected (metropolitan Chicago versus Evanston). By using a report that integrates the percentile technique, one can make explicit what a person means by primary and secondary markets.

 

For example, a marketing manager proposes that for the upcoming quarter, her team plans to expand the business's primary market by introducing a new internet campaign. If an executive asks her what she means by "primary market", she can point to a specific percentile figure, such as the area where the top quarter (above the 75th percentile) of sales originated. The technique enables users to communicate precisely when formulating business strategies.

 

The technique illustrated is an example of a template model that can be adapted to provide insights into a wide range of business problems. In the food inspections report, we saw that the report adapts to a change of environment from a large city to a suburb when retrieving the relevant top areas of inspections. The technique can be applied to capture where a local office is attracting its revenue or where the highest-achieving students are coming from. The technique does not have to depend on a geographical map. We can abstract the notion of a map into its elemental structure, a mathematical space. Without going into technical details, this technique requires two things: a space, like a geographical map, and a population distribution over which percentiles can be calculated.

 

One example that does not use geographical space is mapping the medical profiles of cancer patients. The former notion of the map is replaced with the medical profile of each person. Specific areas (zip codes) from the previous example are replaced with features within the medical profiles, such as age group, eating habits, genetic attributes, symptoms and previous diagnoses. The population distribution comprises the volumes of each feature across the medical profiles. Instead of comparing zip codes as in the food inspection example, this case explores which features contribute the most to breast cancer. For example, the model may show the genetic defect BRCA1 as a top 1% (99th percentile) feature across the medical profiles of many breast cancer patients.
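To make the abstraction concrete, here is a hedged, self-contained sketch in which patient profiles are sets of features and the population distribution is the count of each feature across profiles; the feature names are illustrative, not real clinical data.

```python
# Percentile ranking of feature prevalence across (toy) patient profiles.
from collections import Counter

profiles = [
    {"BRCA1_mutation", "age_40_49", "family_history"},
    {"BRCA1_mutation", "age_50_59"},
    {"age_60_69", "smoker"},
]

def feature_percentiles(profiles):
    """Return each feature's prevalence and its percentile rank in the distribution."""
    counts = Counter(f for profile in profiles for f in profile)
    ordered = sorted(counts.values())
    def pct_rank(v):
        return sum(c <= v for c in ordered) / len(ordered)
    return {feat: (n, pct_rank(n)) for feat, n in counts.items()}

# Features whose prevalence sits at the top of the distribution play the role
# that zip codes 60614 and 60647 played in the food inspection map.
print(feature_percentiles(profiles))
```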

 

The technique is a model, not a solution to business problems. The fact that the model places the BRCA1 mutation within the top 1% of features (i.e. among the most common across patients) for breast cancer does not necessarily mean that the mutation causes cancer. Milton Friedman, a prominent figure in economics, explains the scope of models in his essay "Positive Economics":

“abstract model and its ideal types but also of a set of rules…. The ideal types are not intended to be descriptive; they are designed to isolate the features that are crucial for a particular problem.” - Milton Friedman in "Positive Economics"

The strength of this technique is its ability to highlight the important elements of the business problem at hand: the area where a large portion of revenue is generated, the area with the most food code violations, or a leading feature associated with cancer. The technique highlights specific features by simplifying the business problem and its environment with assumptions and rules. In the food inspection report, a "bad" neighborhood is determined solely by the volume of inspections; further analysis is required to conclude definitively that the identified area really is plagued by kitchens infested with roaches. The model is effective in its ability to deliver a clear answer given the simplified environment, and its simplicity allows both technical and business users to understand its scope.

 

This article is the first part of a series in which I will share techniques that I have found useful in consulting in the data industry. While I find sophisticated techniques attractive, I believe choosing a parsimonious technique as a consultant is more prudent than choosing a methodology that clients cannot easily use in their decision making. Future parts of the series will continue with specific examples and techniques in the spirit of this story.

 

External Links:

Food Inspection Density Map of Chicago

Food Inspection Data

Posted by Nirvana Farhadi  Nov 20, 2017

 

 

Today heralds a historic day for us at Hitachi Vantara! We are extremely honored to be hosting and strategically collaborating with the FCA (Financial Conduct Authority), the BoE (Bank of England), Grant Thornton, and other key financial services stakeholders in holding a two-week TechSprint to explore the potential for model-driven, machine-readable regulation.

 

This incredible two-week TechSprint will explore how technology can provide solutions to the challenges firms face in implementing their regulatory reporting obligations. If successful, this opens up the possibility of a model-driven, machine-executable and machine-readable regulatory environment that could fundamentally transform how the financial services industry understands, interprets and then reports regulatory information!

For more information on the event, please go to:

https://www.fca.org.uk/firms/our-work-programme/model-driven-machine-executable-regulatory-reporting

 

A big thank you to all participants and strategic collaborators who have joined us today on the opening of this event! Participants are listed below.

 

  • Financial Conduct Authority
  • Bank of England
  • Grant Thornton
  • Hitachi
  • HSBC
  • Credit Suisse
  • Santander
  • JWG
  • Linklaters
  • University College Cork
  • The Information Society Project at Yale Law School
  • Stanford University
  • Governor Software
  • Immuta
  • Lombard Risk
  • Model Drivers
  • Regnosys
  • Willis Towers Watson

#FCAsprint #HitachiVantara #BoE #GrantThornton #JWG #modeldrivers #HSBC #CreditSuisse #santander

 

For any information, and before communicating on this event, please ensure you contact:

Niv@hitachivantara.com


Containers have been fueling full-stack development in webscale companies for a number of years now. Financial services, being risk averse, are typically slow to jump on new trends, but they have started adopting containers at a very fast rate now that containers are going mainstream. If you're a financial institution and you aren't at least taking a serious look at microservices and containers, you might be behind already.

Case in point: FINRA (a non-profit authorized by Congress) oversees more than 3,700 securities firms. It works through over 37 billion transactions per day, with over 900 cases of insider trading a year and disciplinary actions levied against more than 1,500 companies. http://www.finra.org/newsroom/statistics.

 

When you look at it year over year, you start to get the picture that fines, fraud and insider trading are growing as rapidly as data and technology change. The number of data sources you need to traverse in a small amount of time is huge. Going through that many transactions a day (with around 2.5 insider-trading cases surfacing each day) means that queries over that much data can take hours, unless you factor in containers and the ability to blend data sources quickly and return results fast. It's like a data-driven arms race.

 


 

This is where containers can help and are already driving the financial services industry across regulatory, operational, and analytical areas. Here are a few areas where I think containers are most impactful:

  • Bare Metal – Customers are increasingly looking for bare metal options to quickly spin containers and microservices up and down. This helps in two ways: they reduce licensing fees for hypervisors, and they can utilize the hardware faster. This buys them economies of scale and a good ROI, with the software-defined data center (SDDC) and software-defined networking (SDN) being two large drivers of this trend.
  • Automation – I'm a huge fan of automation, and when it comes to digital platforms that need little to no human interaction, banking and finance are no strangers. People are prone to error, whereas automation is only as fallible as its programming. Traditionally there have been a lot of analysts tied to banking and finance queries, parsing through large amounts of data. One example of automation is that you no longer need to visit a branch and interface with a teller. Personal connections and customer interaction are quickly being replaced by the ability to open your mobile phone and transfer money anywhere you want, pay a bill, or send money to your friends, all with the click of a button. I can tell you what I spent, where I spent it, and what category it falls in within seconds, all without ever talking to a teller or needing a dedicated analyst. Automation is the answer, and it's no different with containers.
  • New Regulations – Governments always want to know where, who, and how money moves. Compliance and fraud concerns are at an all-time high. Look no further than the Bangladesh central bank, where hackers stole over 80 million dollars, to realize this is a serious concern that could have been worse. https://www.reuters.com/article/us-usa-fed-bangladesh/bangladesh-bank-exposed-to-hackers-by-cheap-switches-no-firewall-police-idUSKCN0XI1UO. A misspelling of "Shalika Foundation" as "Shalika Fandation" stopped several hundred million more from leaving and saved the bank from potentially losing over 1 billion dollars. In this case the hackers' lack of automation helped, but the cybersecurity risks for the bank are far worse. Banks can't afford to miss any transactions happening anywhere they operate.
  • Cybersecurity – Financial security, as noted above, is a big real-time, data-driven operation that requires tactics and tools that are responsive and can scale. This is again where container environments thrive. They can help identify and prevent money laundering and intrusions like the one above, without relying on hackers misspelling something to take the human element out of the picture. Cybersecurity threats are on the rise, and it takes nothing more than falling behind on the latest security patches to suffer a big impact once attackers get into your environment. Target, Visa, Sony, Equifax – and their customers – have all learned what can happen with a breach.
  • Scale of transactions – As with the FINRA example above, as we get increased access to our money, with more ability to move it quickly, financial institutions need to keep up. With data growing 10x, and unstructured data growing 100x, the need to parse through transactions quickly is becoming ever more challenging. Containers and scale-out microservices architectures are the keys to solving this puzzle.

 

I can remember as a kid I had a paper register with how much money I had in it, and once a month or so I could take my allowance to Fifth-Third Bank and they would write my new total, and deposit my money. My mom would also keep her checkbook up to date, and it would have every transaction she ever did, from ATM to checks, religiously kept in it. I can’t tell you the last time I was in a bank, let alone kept a register log. They still send me one, but I think it’s still in the box somewhere with those checks I don’t use often unless forced to. Financial institutions now need to have all my transactions and have them accessible quickly. They need to watch for fraudulent transactions, where I am, how much I’m taking out a day, and what my normal spending pattern looks like to stop identity theft. Tough to do without heavy analytics in real time, even tougher without containers.

 

So what are the limitations of current systems?  Why not just keep doing what we’ve been doing?

 

There's the old adage about doing things the way you always have and expecting a different result. VMs are like my old register: well suited to those old monolithic applications. Not that there is anything wrong with the way I used to go to the teller to make transactions; it's just clunky, slow and expensive. VMs are the equivalent of the teller. They aren't responsive and they can't meet the scale of modern distributed systems. Scale-up was the answer in the past (more CPU, more memory, more overhead, expensive licenses, and maintenance), but go-big-or-go-home doesn't work in today's world. These dedicated clusters might work hard sometimes, but more often than not you're scaling a large system up for those "key" times when you need it. With a highly scalable architecture, you're able to scale up and down quickly based on your needs, without overbuying hardware that sits idle. I won't even touch on the benefits of cloud bursting and being able to quickly scale into a cloud environment.

 

Secondly, integration for traditional architectures is difficult: you have to worry about multiple applications, integration environments, drivers, hypervisors, and golden images just to get up and running. How and where the data moved was secondary to just getting all the parts and pieces put together. Scale-out, composable container architectures that were designed to address specific problems like data ingestion, processing, reactive workloads, and networking (e.g. Kafka, Spark, Cassandra, Flink) solve the issues of complex integration. These architectures are centered around scaling, tackling large data problems, and integrating with each other.

 

So, to answer the question of whether financial services are ready for containers: the answer is undoubtedly yes. I would almost say they can't survive without them. Today's dated VM systems aren't ready to tackle the current problems, and they certainly don't scale as well. In my next blog, I'll go through some stacks and architectures that show how you can get significant results specifically for financial services.

 

Casey O'Mara

European banks will need to have Open Banking APIs in place by January 2018.

This whiteboard video explains how to enable your API platform and keep existing systems safe.

 


Open banking APIs have become a financial services industry hot topic, thanks to two regulatory decisions.

The first is in the UK, where the Competition and Markets Authority (CMA) started investigating competition in retail banking. They produced a report last year which proposed several recommendations and requirements.

A principal recommendation was to improve competition in retail banking. To achieve this, the CMA decided traditional banks should expose their customer data to third parties that would deliver additional retail banking services.

In parallel with the CMA, the European Commission started its second review of the Payment Services Directive. That review also proposed that banks, with customer consent, should expose customer data to third parties, who could then potentially deliver superior services.

 

Four challenges of implementation

 

From talking to our existing banking customers, we have identified four challenges of introducing an open banking API.

The first is being compliant in time. These are requirements from the CMA and a directive from the European Commission. The APIs need to be in place at the start of 2018, which leaves banks little time at this point.

Second is improving customer experience. Retail banks across Europe are increasingly focused on delivering new and improved customer experiences.

Third is competition. The principal aim of introducing open banking APIs is to allow other service providers to utilise the data, then offer new and improved services to retail banking customers.

Finally, one that doesn't come up very often but we think is important: the operational risk that building and exposing APIs places on traditional systems.

 

Typical existing core systems

 

No bank started life as it is today. The majority have built up core systems over many years through mergers and acquisitions, and they've delivered lots of different services over those years too.

Those systems as a result have become interlinked, inter-joined, and incredibly complex. They are traditional architectures and they scale up.

What I mean by scale up is that if they run out of bandwidth to deliver new services, that is fixed by installing a bigger system or a bigger storage device. Scale-up systems are capital intensive and take time to become productive.

We should consider how existing systems are managed and changed. Due to the complexity, banks must make sure that those systems are reliable and secure. To achieve this, they wrap rigorous change control and management processes around the systems.  As a result, any major change, which exposing these APIs certainly is, equates to a substantial risk.

There is one other aspect worth considering too. Banks know how many transactions their existing core systems need to process. By opening this API, that becomes unpredictable: the volume and shape of the transactions that those APIs will generate is difficult to predict.

 

Database extension alternative

 

Instead of using existing core systems, our view is that most banks will build a database extension or caching layer. In this alternative, when a customer consents and the bank exposes their data to third parties, the bank extracts that data out of its existing core systems, transforms it for the new-style database, and then populates the database extension with the data.
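As a hedged sketch of that extract-transform-load flow, the example below populates a cache only for consenting customers; the record shape, field names and function are illustrative assumptions, not any bank's real schema or API.

```python
# Simplified sketch of the database-extension pattern: extract records from a
# core system, transform them for the new store, and load them only for
# customers who have consented to share their data.
from typing import Dict, Iterable, List, Set

def build_api_cache(core_records: Iterable[Dict], consented_ids: Set[str]) -> Dict[str, List[Dict]]:
    """Extract, transform and load consenting customers' data into a cache."""
    cache: Dict[str, List[Dict]] = {}
    for rec in core_records:
        if rec["customer_id"] not in consented_ids:
            continue  # never expose data without consent
        # Transform: flatten the core-system record into an API-friendly shape.
        doc = {
            "account": rec["account_no"],
            "balance": rec["balance_pence"] / 100,
            "currency": rec.get("currency", "GBP"),
        }
        cache.setdefault(rec["customer_id"], []).append(doc)
    return cache

# The open banking API layer then serves reads from this cache (in practice a
# scale-out document or key-value store), keeping unpredictable third-party
# traffic away from the core systems.
```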

This alternative provides several benefits. First, banks can quickly become compliant and provide open banking APIs. This solution will scale out, so as banks add more customers to this process, they can scale easily.

More importantly, expect forward-thinking banks to use the API to add new services. Potentially they will start to incorporate lots of different data sources: not only traditional data, but geospatial data, weather data and social media data too.

This would enable banks to deliver a rich set of services to their existing customers through the open banking API and potentially monetise them.

 

Moving data from existing systems to the new database

 

Most banks will have several tools which extract data out of systems and populate business information systems and data warehouses.

Extracting data from traditional systems, transforming it and blending it so that it can be used in these new, agile, scale-out systems, however, requires something different. A lot of older tools, which have been very good at extracting data, aren't effective at new-style transformation processes.

One tool that is effective at this is Pentaho, which specialises in transforming data from traditional sources and then blending different data sources so that banks can offer a richer set of services.

 

Monetizing the API layer

 

Regardless of the approach a bank takes, it will need to support open banking APIs from the start of next year. This leaves little time to become compliant, and because compliance is just a cost right now, we believe the more forward-thinking banks will quickly want to extend the capability of those open banking APIs to develop new revenue streams and monetise them.

We at Hitachi think this is an exciting time, not only for fintech start-ups but traditional banks too, who through these directives, have been given an opportunity to deliver something new to their customers.

 

If you would like to learn more about how Hitachi can help you introduce Open Banking APIs, get in touch via LinkedIn or learn more about our financial services offering here.

As a brand and as a company, Hitachi is known for building high-quality solutions – from proton beam therapy and high-speed, mass transit bullet trains to water-treatment offerings and artificial intelligence (AI) technology – offerings that make the world a better place to live. For this reason, we hold ourselves to the highest of standards and we sweat the details. We know that, in many of these cases of social innovation, failure on our part could have dire or disastrous consequences.

 

Of course, we can’t make the world a better place alone. We need partners who will sweat the details. Partners like Intel, who, with their introduction of the latest Intel Xeon family of processors or their work on computational performance of a deep learning framework, demonstrate their intense focus on innovation, quality and performance.

 

As we continue to examine the Intel Xeon family of processors, we see unlimited potential to accelerate and drive our own innovations. New capabilities can help us achieve greater application performance and improved efficiency with our converged and hyperconverged Hitachi Unified Compute Platform (UCP). And, as Intel pushes the envelope even further with next-generation field-programmable gate array (FPGA) technologies as well, we estimate that users could see upwards of a 10x performance boost and a significant reduction in IT footprint.

 

In important vertical markets like financial services, we have seen tremendous success around ultra-dense processing with previous generation Intel processing technologies. There, we are able to capture packets at microsecond granularity, filter in only financial services data, and report on both packet performance plus the financial instruments embedded in the packets. We can’t wait to see how Intel’s latest processing advancements help us exceed expectations and today’s state of the art.

 

We look forward to the road ahead. In the meantime, we’ll keep sweating the details and working with partners like Intel who do the same.

 

The regulatory burden placed on financial services organisations has reached unprecedented levels. From data security and access with GDPR to investor protection and the various themes in MiFID II/MiFIR, businesses are besieged by new regulations on an almost monthly basis.

 

According to Business Insider, from the 2008 financial crisis through 2015, the annual volume of regulatory publications, changes and announcements has increased by a staggering 492%. It is an issue I have addressed not only at the numerous events I have spoken at and attended since joining Hitachi, but throughout my career.

 

Understandably, organisations are looking for ways to ease this regulatory burden by automating onerous processes, and for ways to make the Risk, Compliance, Operations and Audit (ROCA) line of business more cost effective and efficient while taking away the resource burdens that these organisations currently face.

 

After all, these organisations are not in the business of ROCA; they are in the business of generating revenue, which these functions clearly do not generate. The need to ease this burden has seen the rapid rise of RegTech, or Regulatory Technology.

 

The idea behind RegTech is that it harnesses the power of technology to ease regulatory pressures. As FinTech innovates, RegTech will be needed to ensure that the right checks and balances are quickly put in place so that organisations do not fall short on their regulatory obligations.

 

RegTech is not just about financial services technology or regulations; it is broader than that and can be utilized in numerous industries such as HR, oil and gas, and pharmaceuticals. With RegTech, the approach is to understand the "problem" (be it operational, risk, compliance or audit related), see which regulations bear on that problem, and solve it using technology.

 

RegTech is a valuable partner to FinTech. Although some refer to it as a subset of FinTech, in my view RegTech goes hand-in-hand with FinTech: it should work in conjunction with financial technology innovation.

 

RegTech focuses on technologies that facilitate the delivery of regulatory requirements more efficiently and effectively than existing capabilities. RegTech helps to provide process automation, reduce ROCA costs, decrease resource burdens and create efficiencies.

 

FinTech, by its nature, is disruptive. It aims to give organisations a competitive edge in the market. When FinTech first took off, one of its main disruptions was the creation of algorithmic and high-frequency trading systems operating at lightning speeds.

 

As these FinTech innovations have become faster, more in depth and more intricate, regulators across the globe have sought to establish some boundaries to prevent fraud, protect consumers and standardise the capabilities of this technology. 

 

The accelerated pace at which FinTech has been adopted, and is constantly innovating, means the regulators have struggled to keep up. Now, however, far-reaching and broader regulations are being established regularly - hence the requirement for RegTech to help manage this plethora of rules and procedures. RegTech is particularly relevant within the ROCA arena, where oversight of the regulations sits at the heart of the remit.

The financial services industry is heavily regulated through a myriad of interlinking global regulations. These regulations are implemented through reports - whether trade, transaction, position or periodic reporting, or some sort of disclosure. Reports are the lifeblood of regulation and are based on data; therefore data is a crucial part of compliance.

 

At the core of most regulations is the need for financial services organisations to locate, protect and report on the data and information held within their systems.  The regulations require not just audit trails, but each report must demonstrate exactly how data is handled both internally and externally. 

 

Reporting and regulation are unavoidable for all financial services organisations. FinTech, which is still developing and not yet regulated, will be caught up with very quickly as the regulators quicken their pace in keeping up to date with innovation and possible disruptions.

 

The challenge is collating and curating this level of information from the existing systems within the banks, within the deadlines specified by the regulations. This is why RegTech exists and plays such a key role.

 

At a very fundamental level, RegTech helps financial services organisations to automate many of the manual processes, especially those within legacy systems, whether that be reporting, locating customer data, transactional information or systems intelligence. 

 

The crucial element here is not only the legacy and aging systems still held within many financial institutions - where data is stored in everything from warehouses to virtual arrays, making locating and retrieving information a huge challenge - but also the legacy thinking of leadership in these organisations.

 

Many of these organisations are led by individuals whose only thought is the next six months. As Warren Buffett, however, stated: "someone is sitting in the shade today because someone planted a tree a long time ago." Leadership needs to think strategically.

 

The recent WannaCry ransomware attack is a perfect example of the dark side of legacy thinking and systems. Had leadership in the affected organisations made strategic infrastructure investments, replacing vulnerable existing systems with modern systems implemented with the correct governance and controls, this attack would not have caused as much harm as it did.

 

Using RegTech to automate these tactical and manual processes streamlines the approach to compliance and reduces risk by closely monitoring regulatory obligations. Vitally, it can lower costs by decreasing the level of resource required to manage the compliance burden. And RegTech can do so much more than just automate processes.

 

Organisations are using it to conduct data mining and analysis, and provide useful, actionable data to other areas of the business, as well as running more sophisticated aggregated risk-based scenarios for stress-testing, for example.

 

Deloitte estimates that in 2014 banks in Europe spent €55bn on IT; however, only €9bn was spent on new systems. The balance was used to bolt more systems onto the antiquated existing technologies and simply keep the old technology going.

 

This is a risky and costly strategy. The colossal resource required to keep existing systems going, patched and secure, coupled with managing the elevated levels of compliance requirements will drain budgets over time. Beyond that, the substantial risk associated with manually sourcing data, or using piecemeal solutions presents the very real risk of noncompliance.

 

RegTech is not a silver bullet and it is not going to solve all the compliance headaches businesses are suffering from. However, as the ESMA (European Securities Markets Authority) recently stated firms must “embrace RegTech, or drown in regulation”.

 

RegTech will play a leading role, especially when used to maximum effect. Take, as an example, reporting. We know through our research that this is an industry-wide challenge; on average a firm has 160 reporting requirements under different regulations globally, each with different drivers and usually with different teams producing those reports.

 

By using RegTech, not only could those team resources be reduced, but the agility and speed with which reports can be produced will ensure compliance deadlines are adhered to. Additionally, resources can then be focused elsewhere, such as on driving innovation and helping to move the company forward. 

 

Rather than focusing on what a burden the regulations are, by using RegTech organisations will see them as an opportunity to get systems, processes and data in order, and to use the resulting intelligence and resources to drive the company to greater successes. To take it one step further, I believe regulation does not hinder or stifle innovation; in fact it breeds creativity and innovation.

 

If you would like to learn more about RegTech and my work with Hitachi follow me on Twitter and LinkedIn.

Last week I visited Hannover Messe, the world's largest industrial automation and IoT exhibition, for the first time, and I have to say I was overwhelmed by the sheer size and scale of the event. With 225,000 visitors and 6,500 exhibitors showcasing their industrial automation and IoT capabilities, the race is definitely on to take a share of the massive future IoT market opportunity in these sectors. Estimates vary, but according to IDC the IoT market is expected to reach $1.9 trillion by 2021.

 

What I learnt in Hannover was that the industrial and energy sectors are on the cusp of a huge digital data explosion. Why? Because, like all industries, they are under pressure to innovate and embrace new technologies that will significantly accelerate the intelligence, automation and capabilities of factory production lines, reduce manufacturing defects and fault tolerance levels, and improve the reliability, performance and TCO of all kinds of industrial machinery as they become digitised. The sources driving this data explosion will include machine-generated data, sensor data, predictive analytics data, and data from enhanced precision robotics and artificial intelligence. All of this data will give a competitive edge and valuable insight to companies that deploy these technologies wisely and use the data generated in the right way to drive their business forward intelligently and autonomously.

 

These innovations arguably make Hannover Messe one of the most relevant exhibitions in the IoT space today, and last week Hitachi was able to showcase the full power of its industrial automation and IoT capability. This included Lumada IoT solutions, real industrial assets, advanced research and its humanoid robotics vision through EMIEW, a customer service robot being developed by Hitachi to assist people with information in public and commercial places, whatever their language, which generated huge interest from attendees.

 

Hitachi had a large team of IoT experts present who spoke in depth about the technologies and use cases its customers need to advance and digitise their businesses. To say Hitachi inserted itself into the IoT conversation last week is an understatement; Hitachi is serious about this business, and this was further reflected in the extensive global brand advertising campaign in and around the show, which included prominent adverts in Hannover's main railway station, on the Hannover Messe sky walkways and in a number of global media publications, all driving the 225,000 visitors to its 400-square-metre booth to experience its IoT solutions.

 

As I left Hannover, I came away with two key takeaways. Firstly, IoT is with us here and now, and given the broad level of investment being made by companies, the focus, and the potential returns for businesses, you can start to understand how it will drive the next huge wave of industrial change. My second takeaway is the potential Hitachi has to be a dominant force in IoT and the ambition it has to be the market leader. Last week the company made a giant stride towards achieving that goal. You can follow the conversation on Twitter by searching #HM17.

 


Help! Thinking Differently

Posted by Scott Ross Employee Mar 20, 2017

 

Help! The Beatles said it best. We'll come back to that later.

 

Beep! Beep! Beep! My alarm wakes me up and the sudden realisation sets in. Like many other money-savvy 20-somethings, a calendar reminder on my smartphone alerts me that, on this occasion, my car insurance expires soon. The inexorable battle through the web of online comparison supermarkets is about to commence. Before I set about my task I go into the kitchen to pour a cup of tea and discover that my milk is off. I nip across to the shop to pick up some more and pay for it with my smartphone. I pour my tea, take a deep breath and off I go.

 

Naturally I start with my current provider and run a renewal quote on their website. Having spent the past 12 months as their customer I had high hopes for this to be the standard for others to compete with. After some not-so-easy site navigation and a lot of persistence I managed to get a renewal quote. Shockingly, this was significantly more than my current agreement despite nothing changing other than my age. Having struggled through their website for the best part of an hour re-entering my personal information that they could (and should) have very easily auto-completed for me, needless to say I was far from pleased with the outcome.

 

 

Next, to the plethora of comparison sites. I use a quick Google search to review which is best. I select my suitor and off I go. I discover that this website is significantly easier to navigate, which somewhat alleviates the painstaking process of having to enter the exact same details that I've already spent the best part of my morning entering into my current provider's site. That pleasing process was enhanced further by the next page: the results! They were staggering. A large number of providers were offering a greater level of service at a considerably lower price. "How can that be?" I asked myself. For now I had to focus on which offer was best and ponder the fact later.

 

I review the top three policies independently, through both a Google search and the site's own review tool, and finally settle on my desired option. Two clicks later and I'm on the new provider's website, details already filled in, quote as per the comparison site and a blank space waiting to complete the transaction. All I needed to do now was fill in my payment details and it was complete. Easy. I would have a new provider once my old contract ends…or so I thought!

 

 

Having settled on a new provider I go about cancelling my current service before the auto-renewal kicks in and I lose my hard-earned new policy. I call the contact centre, give my details and ask to cancel. The operator asks a few questions about “why” and then begins to offer discounts and price matching against what I’ve just signed up to. Why couldn’t they offer this level of service upfront? Why does it take me leaving for them to offer something better? In today’s economy where not just the savvy, but everybody is looking to get more for their money, why would a business continually act like this? This, in my opinion, shows a poor level of customer knowledge and more importantly a poor customer experience.

 

Quickly I begin to realise that many organisations across all consumer industries are acting in a similar way. In fact only the 'new-age' organisations can offer something different, and even then, are they maximising their potential? This got me thinking (back to the title): "Help! I need somebody, not just anybody!" I need my current provider to look after me. To help me. Even better, do it for me. I need them to navigate the renewal journey for me. To offer me a bespoke service, price, whatever…designed to meet my needs and my characteristics. To act in my best interest. Maybe this is a utopia that we may never get to; however, I can't help but imagine a world where 'the man' is looking out for me, providing targeted messaging about me, my spend, and how and where to spend better, wiser, cheaper. Unlike The Beatles, most organisations aren't drowning in their own success and, instead, are screaming out for a different kind of help! But what if they weren't? Imagine a world where your bank offers you a discount to spend at your regular coffee spot, knows you're paying above the average on your street for home insurance and provides an alternative, automatically moves your savings to the best available rate, and suggests alternative insurance products based on your driving style, health and lifestyle; the list is endless.

 

 

The point of this story is the power of insight, experience and the Internet of Things (IoT). If our providers harnessed the data they already have (or could have) and turned it into valuable information, they would be more relevant to us and, in return, we would be better off. As consumers, we are looking for greater value, and what better way than our existing providers changing the game. One example could be taking the comparison game to us: offering their services bespoke to our needs, since after all they already know us. Another could be improving the journey through their website, making it easier to transact. What if my bank knew my milk was already off, alerted me to buy more and attached a special offer to the message?! By empowering their staff, systems and processes, even the oldest traditional organisations can realise the advantage. Increasing their customer insight and ultimately improving customer experience will bring about new markets, greater revenues and thriving customer loyalty.

 

Don't miss my next blog to see if we can work it out!

 

 

If you would like to learn more about Hitachi and how we can help Financial Service organisations click here.

 

To learn more about me visit my LinkedIn profile here.

Why Digital Transformation must be a strategic priority in 2017

 

It’s with good reason that Digital Transformation has become the latest watchword in our industry; organisations across the world are finally seeing the profound business advantages of leveraging digital technology.

According to a 2016 Forbes Insights and Hitachi survey of 573 top global executives, Digital Transformation sits at the top of the strategic agenda. Legendary former General Electric CEO, Jack Welch sums up perfectly why Digital Transformation has become a board-level priority: “If the rate of change on the outside exceeds the rate of change on the inside, the end is near.”

 

Organisations are seeing such an unprecedented rate of change all around them that Digital Transformation is no longer a 'nice to have'; it is a 'must have' for corporate survival. To discover the state of enterprises' Digital Transformation projects in 2017, Hitachi partnered with UK technology publication Computer Business Review (CBR) to survey IT decision makers in its readership on their efforts. While not scientifically representative of enterprises across the UK or Europe, the research provides some enlightening anecdotal evidence.

 

In this blog, I’ll explore some of those findings and discuss why I think 2017 will be the year of Digital Transformation.

In the UK, just under two-thirds of CBR readers revealed they are through the emergent stages of Digital Transformation, and consider their organisation to be at an intermediate stage in their journey. Only one in ten described themselves as beginners, with one in four stating they are leading the pack when it comes to transforming their businesses.

 


We've found similar scenarios within many of our customers. Some are teetering on the edge, while others, such as K&H Bank, the largest commercial bank in Hungary, are already reaping the rewards. By upgrading its storage solutions, K&H Bank has halved the time it takes for new business information to arrive in its data warehouse, ready for analysis, and cut its data recovery time by half. This enables K&H Bank to get quicker insights into its business and react faster than its competitors.

 

It is exactly this type of optimisation that is fuelling Digital Transformation: by cultivating improved internal processes and competencies, it drives tangible business benefits. In fact, just under two-thirds of CBR readers identified improving internal operations as the top driver for Digital Transformation, while a quarter highlighted customer experience.

 

Of course, while Digital Transformation can provide both optimised operations and improved customer experience, by initially focusing on internal programmes, any issues can be overcome and learnings understood. Take, for example, Rabobank in the Netherlands. The finance and services company has transformed its compliance operations by optimising them through a new platform. This strategy enables simplified access to the structured and unstructured data needed for investigations, easing the regulatory burden on the bank.

 


 

This kind of Big Data analysis, combined with other technologies such as cloud computing and the Internet of Things (IoT), is at the core of many successful Digital Transformation stories. Cloud computing, for example, was cited by 67% of readers surveyed as helping them to progress along their digital journey.

 

Indeed, our customers have demonstrated a keen interest in cloud technology as an integrated element of a Digital Transformation strategy. Deluxe, a US-based finance organisation, is benefitting from improved flexibility, security and control through Hitachi’s enterprise cloud offerings. By moving to a private cloud within a managed services environment, it now has the technology to integrate acquisitions, deploy next-generation applications and accelerate its time-to-market.

 

Other technologies, such as data analytics, cited by 20% of readers, and IoT, cited by 10%, are likely to grow in popularity as more powerful technology is developed. Although awareness of Artificial Intelligence (AI) is increasing, with innovative rollouts from organisations such as Enfield Council, it is not currently a strategic focus for UK businesses on their Digital Transformation journey, being cited by only 3% of readers. This is likely to change, however, as more and more applications for the technology are discovered.

 

What our survey highlighted was not whether organisations are starting and progressing their Digital Transformation journey, but when and how far along the path they are. That's not to say it's easy. But there is help along the way - my colleague Bob Plumridge recently shared three excellent pieces of advice, regardless of where you are in your journey. And, most importantly, the rewards are worth it. Improving internal operations and processes will help drive increased innovation and therefore improve customer experience. Embarking on Digital Transformation will also help keep your pace of change ahead of the competition, just as Jack Welch advised.

Last week my team hosted an exciting event at the Four Seasons in Houston, TX, progressing our efforts in this vertical. It was an event that mixed users, partners and customers, plus the many faces of Hitachi. Our aim was two-pronged:

  1. Be inspired through the continued exploration of new challenges from the industry, and
  2. Validate areas we're already progressing, adjusting based upon user feedback.

Doug Gibson and Matt Hall (Agile Geoscience) kicked us off by discussing the state of the industry and various challenges with managing and processing seismic data. It was quite inspiring, and certainly revealing, to hear where the industry is investing across Upstream, Midstream and Downstream -- the meat: Upstream used to be king, but investments are moving to both Midstream and Downstream. Matt expressed his passion for literally seeing the geological progression of the Earth through seismic data. What an infectious and grand meme!

More generally, I believe that our event can be seen as a "coming out party" for work we began several years ago -- you'll continue to hear more from us as we work our execution path. Further, inspired by one Matt Hall, we ran a series of un-sessions that resulted in valuable interactions.

 


The Edge or Cloud?

In one of the un-sessions, Doug and Ravi (Hitachi Research in Santa Clara) facilitated a discussion about shifting some analytics to the edge for faster and more complete decision making. There are many reasons for this, and I think the three most significant are narrow transmission rates, large data (as in velocity, volume and variety), and tight decision-making schedules. Even though some processes (especially geologic ones) may take weeks, months or years to conclude, when urgency matters a round trip to a centralized cloud fails! Specifically, HSE (Health, Safety and Environment) related matters, plus matters related to the production of both oil and gas, mandate rapid analysis and decision making. Maybe a better way to say this is through numerical "orders of magnitude" -- specific details are anonymized to "protect the innocent."

  • Last mile wireless networks are being modernized in places like the Permian Basin with links moving from satellite (think Kbps) to 10Mbps using 4G/LTE or unlicensed spectrum.  Even these modernized networks may buckle when faced with terabytes and petabytes of data on the edge.
  • Sensing systems from companies like FOTECH are capable of producing multiple terabytes per day, which join a variety of other emerging and very mature sensing platforms. Further, digital cameras are also present to protect safety and guard against theft. This means that the full set of Big Data categories (volume, velocity and variety) exists on the edge.
  • In the case of seismic exploration systems used to acquire data, designs include "converged-like" systems placed in ISO containers to capture and format seismic data, potentially up to the scale of tens of petabytes. Because of the remote locations these exploration systems operate in, there is a serious lack of bandwidth to move data from edge to core over networks. Therefore, services companies literally ship the data from edge to core on tape, optical or ruggedized magnetic storage devices.
  • Operators of brown-field factories with thousands of events and tens of "red alarms" per day desire to operate more optimally.  However, low bit rate networks and little to no storage in the factory, to capture the data for analysis, suggest something more fundamental is needed on the edge before basic analysis of current operations can start.

This certainly gets me to think that while the public cloud providers are trying to get all of these data into their platforms, there are some hard realities to cope with.  Maybe a better way to classify this problem is as trying to squeeze an elephant through a straw!  The back-of-the-envelope arithmetic below makes the point.  However, many of the virtues of cloud are desirable, so what can we do?
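To make the "elephant through a straw" point concrete, here is a minimal back-of-the-envelope sketch in Python.  The figures are illustrative assumptions drawn from the bullets above (a 10 Mbps last-mile link and terabyte- to petabyte-scale edge data); real links and volumes will differ, but the orders of magnitude are the point.

def transfer_days(data_bytes, link_mbps=10.0):
    # Days needed to push data_bytes through a link_mbps link at 100% utilization.
    bits = data_bytes * 8
    seconds = bits / (link_mbps * 1_000_000)
    return seconds / 86_400

TB = 10**12
PB = 10**15

print("1 TB over 10 Mbps : %10.1f days" % transfer_days(1 * TB))    # roughly 9 days
print("1 PB over 10 Mbps : %10.0f days" % transfer_days(1 * PB))    # roughly 9,300 days (about 25 years)
print("10 PB over 10 Mbps: %10.0f days" % transfer_days(10 * PB))   # roughly 93,000 days (about 250 years)

At these rates, physically shipping tape or ruggedized drives from the field, as the services companies do today, is the rational choice, and any analysis that cannot wait has to happen at the edge.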

 

Progressing Cloud to the Edge

Certainly the faces of Hitachi already have industry-optimized solutions in the market that enrich data on the edge, analyze and process it to skinny down edge data, and provide business advisory systems capable of improving edge-related processes.  However, my conclusion from last week is that resolutions to these complex problems are less about what kind of widget you bring to the table and more about how you approach solving a problem.  This is indeed the spirit of the Hitachi Insight Group's Lumada Platform, because it includes methods to engage users and ecosystems, and brings tools to the table as appropriate.  I was inspired to revisit problem solving (not product selling) because Matt Hall said, "I was pleased to see that the Hitachi folks were beginning to honestly understand the scope of the problem" as we closed our summit.

 

Is O&G the poster child for Edge Cloud?  Given the challenges uncovered during our summit, plus other industry interactions, the likely answer is yes.  Perhaps the "why" is self-evident: processing on the edge, purpose building for the industry and mixing in cloud design patterns are obvious moves as stacks are modernized.  It is the "how" part that I believe deserves attention.  Matt's quote in the last paragraph guides us on how to push cloud principles to the edge.  Essentially, for this industry we must pursue "old fashioned" and sometimes face-to-face interactions with the people who engage in various parts of the O&G ecosystem: geologists, drilling engineers, geophysicists, and so on.  Given these interactions, which problems to solve, and their scope and depth, become more obvious and even compelling.  It is then, when we draft execution plans and make them real, that we will resolve to build the cloud at the edge.  However, if we sit in a central location and merely read about and imagine these problems, we won't develop sufficient understanding and empathy to really do our best.  So, again, yes, Oil and Gas will engender edge clouds, but it is the adventure of understanding user journeys that guides us on which problems matter.

 

Attributions

  1. Top Banner Picture - Author: Stig Nygaard, URL: Oil rig | Somewhere in the North Sea... | Stig Nygaard | Flickr, License: Creative Commons
  2. Seismic Image - Relinked from Burning the surface onto the subsurface — Agile, and from the USGS data repository.

Data is the New Oil

Posted by Harry Zimmer Employee Jan 31, 2017

There are at least 20 terms today that describe something that was once called Decision Support (more than 30 years ago). Using computers to make fact-based, intelligent decisions has continually been a noble goal for all organizations to achieve. Here we are in 2017 and the subject of Decision Support has been by and large forgotten. The following word cloud shows many of the terms that are used today. They all point to doing super advanced Decision Support. Some of the terms, like Data Warehouse, have been replaced (or superseded) by the concept of the Data Lake. Other old terms, like Artificial Intelligence (AI), have been re-energized as core to most organizations' IT plans for this year.

 

Picture1.png

 

In discussing this whole set of topics with many customers around the world over the past 12 months, it has become clear to me that in general most companies are still struggling with deployment and, in some cases, with the ROI. Education levels are also all over the map and the sophistication of systems is inconsistent.

 

In my upcoming WebTech (webinar) I will be sharing a new model that takes into account the old and the new. It will provide some foundational education in under an hour that should provide incredible clarity – especially for ‘C’ level executives.

 

I have developed what I think is a very useful model or architecture that can be adapted and adopted by most, if not all, organizations. This model provides an ability to self-assess exactly where the organization is and what the road forward will look like. All of this has been done with the goal of achieving state-of-the-art capabilities in 2017.

 

The model is a direct plug-in to the industry-wide digital transformation initiative. In fact, without the inclusion of the model – or something similar to it – a digital transformation project will most likely fail.

 

The other direct linkage is to another hot topic: the Internet of Things (IoT). Here too the model applies directly. In fact, as IoT becomes mainstream across all organizations, it will be a valuable new source of data drawing on the evolving world of sensor technologies.

 

I hope you are able to join me for this WebTech. I am sure you will find that it will be extremely valuable to you and your organization and spur a ton of follow-on discussion.

 

To register for my upcoming WebTech, click here.

 

For additional readings:

  • Storytelling With Data, by Cole Nussbaumer Knaflic
  • 80 Fundamental Models for Business Analysts, by Alberto Scappini
  • Tomorrow Today, by Donal Daly
  • Predictive Analytics for Dummies, by Bari, Chaouchi, & Jung

In Superman #30, the Man of Steel theorises that the latest incarnation of his enemy Doomsday emits so much energy that, when he emerged, he boiled the ocean. Not an easy task, even for a supervillain, and certainly out of reach for mere mortals.

 

So why are some enterprises taking this approach with Digital Transformation projects? Moreover, if overreaching doesn’t work what steps should be taken?

 

Hitachi recently partnered with Forbes Insights, interviewing nearly 600 C-level executives from North America, Latin America, Europe and Asia-Pacific. The global research revealed that a transition toward digital maturity involves five major steps, some of which are proving easier to take than others.

 

1. Top Strategic Priority

Half of the executives polled said their organisation will be vastly transformed within two years. I expect the figure is actually higher in Europe, where companies are already on the move. One bank we work with even has a permanent board member dedicated to, and responsible for, its Digital Transformation.

 

The realisation has dawned in boardrooms that growth and survival are now tied up with digital capabilities.

 

2. Enterprise-wide approach

The research revealed that cross-functional teams are not adequately involved in developing or implementing strategy, with the bulk of this work done by IT. In our experience this is no longer the case across Europe or the Middle East. According to Gartner, for example, shadow IT investments (purchases outside of CIO control) often exceed 30 percent of total IT spend.

 

I recently attended an IDC event in Dubai dedicated to Digital Transformation in the banking and finance sector. The congress was dominated by line-of-business executives from sales and marketing rather than IT leaders. Every session and attendee I spoke with shared an active interest in making Digital Transformation the cornerstone of their company strategy.

 

3. Focused on business outcomes

The ability to innovate was the top measure of success for 46% of companies polled, and it’s something I hear a lot from customers.

 

The ability to innovate cannot be achieved through technology alone. Enterprises should instead seek partners they can trust to solve the underlying technical and architectural challenges and deliver a solution that addresses and enables business outcomes.

 

161107.highres.digital.jpg

 

One thing the report does not consider, however, is what will happen to those who fail to invest in digital capabilities. Failure to modernise cyber security systems, for example, is an issue regularly covered by media outlets. Prof Richard Benham, chairman of the National Cyber Management Centre, has even predicted that in 2017 "a major bank will fail as a result of a cyber-attack, leading to a loss of confidence and a run on that bank."

 

Digital Transformation isn’t essential just to growth, but to survival too.

 

4. Untapped potential of data and analytics

Only 44% of companies surveyed see themselves as advanced leaders in data and analytics. In my opinion, this is a conservative estimate. Some businesses may be guarding their achievements – as poker players do. The term poker face refers to the blank expression players use to make it difficult to guess their hand.  The fewer the cues, the greater the paranoia among other players.

 

You could speculate that some businesses may be keeping their best weapons secret too.

Besides, nobody wants to be the company that brags one day and is overtaken the next.

 

But we should take comfort from those companies in Europe making solid progress. K&H Bank, the largest commercial bank in Hungary, has halved the time it takes for new business figures to arrive in its data warehouse ready for analysis, and has cut its data recovery time by 50%. Or consider Rabobank in The Netherlands, which has gained “control of the uncontrollable” and mastered the handling of its data for compliance purposes.

 

5. Marrying technology with people power

The survey found that when dealing with the challenges associated with Digital Transformation, people are organisations’ biggest obstacle. Investing in technology is ranked lowest – indicating companies may already have the technology but not the skills. Support from strategic technology partners can help bridge the gap here.

 

These obstacles also bring me back to my original warning – don’t try to boil the ocean. An organisation might race ahead and invest heavily in technology, but without the right culture and know-how it could waste an awful lot of money at best, and lose senior support at worst.

 

So, what can your business learn from this research? Here are three things that you could do, within a safe temperature range:

 

  1. Hire new talent to challenge the status quo. Your current team may not have the fresh vision needed to shift your enterprise to mode 2 infrastructure. You need an intergenerational mix of young, enthusiastic staff and seasoned experts.
  2. Nominate a senior executive as the Digital Transformation ambassador. Organisations need a senior sponsor to push the agenda. To overcome ingrained ways of doing things, you need people with strong messages that can cascade down.
  3. Be bold and take calculated risks. One bank in Europe has even banned its CIO from buying mode 1 IT infrastructure – meaning the bank has no choice but to embrace a more agile digital environment (rather than fall back on the devil it knows). Another bank in The Netherlands took the bold step of replacing its hierarchical structure with small squads of people, each with end-to-end responsibility for making an impact on a focused area of the business.

 

To achieve Digital Transformation, enterprises need to push the internal idea of ‘safe’ as far as possible. As Mark Zuckerberg declared, “in a world that’s changing really quickly, the only strategy that is guaranteed to fail is not taking risks”. If a business takes those risks iteratively and learns from its mistakes, it won’t run the risk of trying to boil the ocean and failing from the sheer magnitude of the task.

 

If you would like to learn more about the research, I recommend you download the full report here.

At the end of the post that Shmuel and I published last month (The Many Core Phenomena), we hinted at some upcoming news.

Hitachi has demonstrated a functional prototype running with HNAS and VSP to capture finance data and report on things like currency market movements, etc. (more on this in the near future).

Well, there is obviously more to the story than just this somewhat vague statement, and Maxeler Technologies has announced our mutual collaboration around a high-fidelity packet capture and analytics solution.  To provide a bit more detail I'm embedding a video, narrated by Maxeler's Itay Greenspan, within this post.

 

Joint Maxeler HDS CPU-less Packet Capture Plus Analytics

Commentary
As HDS and Maxeler set out on our collaborative R&D journey, we initially were inspired by market intelligence related to an emerging EU Financial Services directive called MiFID II.  This EU directive, and its associated regulation, was designed to help the regulators better handle High Frequency Trading (HFT) and so-called Dark Pools.  In other words, to increase transparency in the markets.  Shmuel Shottan, Scott Nacey, Karl Kholmoos and I were all aware of HFT efforts because we ran into folks from a "captive startup" that "spilled the beans."  Essentially, some Financial Services firms were employing these "captive startups" to build FPGA-based HFT solutions, enabling money making in a timespan faster than the blink of an eye.  So as Maxeler and HDS approached our R&D we assumed a hypothetical use case which would enable the capture and decode of packets at speeds equivalent to HFT.  We then took the prototype on the road to validate or invalidate the hypothesis and see where our R&D actions would fit in the market.  Our findings were surprising, and while the prototype did its job of getting us in the door we ultimately ended up moving in a different direction.

 

As the reader/viewer can see in the video we leveraged many off-the-shelf components and technologies -- we actually used previous-generation tech, but heck, who's counting.  As stated in the video, we accomplished our operational prototype through the use of Maxeler's DFE (Data Flow Engine) network cards, Dataflow-based capture/decode capability executing on Dataflow hardware, a hardware-accelerated NFS client, Hitachi's CB500, Pentaho, and Hitachi Unified Storage (HUS).  While related in the video, a point worthy of restating is: all of the implemented hardware-accelerated software functions fit on about 20% - 30% of the available Dataflow hardware resources, and since we're computing in space, well over a supermajority of the space remains for future novel functions.  Furthermore, the overall system, from packet capture to NFS write, does not use a single server-side CPU cycle! (Technically, the NFS server, file system and object/file aware sector caches are all also running on FPGAs.  So, even on the HUS, general CPUs are augmented by FPGAs.)

 

As serial innovators we picked previous-generation off-the-shelf technologies for two primary reasons.  The first and most important was to make the resulting system fit an accelerated market availability model -- we wanted the results to be visible and reachable without deep and lengthy R&D cycles.  Second was an overt choice to make the prototype mirror our UCP (Universal Compute Platform) system so that, when revealed, it would be congruent with our current portfolio and field skill sets.  Beyond these key points, a secondary and emergent benefit is that the architecture can readily be extended to support almost any packet analysis problem.  (While unknown to us at the time, the architecture also resembles both the Azure FPGA-accelerated networking stack and a private version of Amazon EC2 F1, lending further credibility to it being leading edge and general purpose.)   Something that was readily visible during our rapid R&D cycle is Maxeler's key innovation: lowering the bar for programming an FPGA from needing to be an Electrical Engineer/Computer Engineer to being a mere mortal developer with knowledge of C and Java.  For reference, what we've historically observed is an FPGA development cycle that takes no less than six months for a component-level functional prototype; in the case of Maxeler's DFEs and development toolchain we witnessed 3-4 weeks of development time for a fully functional prototype system.  This is, well, dramatic!  For a view on Maxeler's COTS-derived FPGA computing elements (DFEs) and our mutual collaboration, let me quote Oskar Mencer (Maxeler's CEO).

Multi-scale Dataflow Computing looks at computing in a vertical way and multiple scales of abstraction: the math level, the algorithm level, the architecture level all the way down to the bit level. Efficiency gained from dataflow lies in maximizing the number of arithmetic unit workers inside a chip, as well as a distributed buffer architecture to replace the traditional register file bottleneck of a microprocessor. Just as in the industrial revolution where highly skilled artisans get replaced by low-wage workers in a factory, the super high end arithmetic units of a high end microprocessor get replaced by tiny ultra-low energy arithmetic units that are trained to maximize throughput rather than latency. As such we are building latency tolerant architecture and achieve maximum performance per Watt and per unit of space.

 

The key to success in such an environment is data, and therefore partnership between Maxeler and Hitachi Data Systems is a natural opportunity to maximize value of storage and data lakes, as well as bring dataflow computing closer to the home of data. (Oskar Mencer)

Projecting into the now and a bit ahead, firstly we're "open for business."  Maxeler is in the HDS TAP program, so we can meet in the market, engage through HDS and, when it makes sense, we (HDS) are keen to help users directly realize extreme benefits.  As for targets, we need a tough network programming or computing problem where the user is willing to reimagine what they are doing.  In the case of the already constructed packet capture solution we could extend, with some effort, from financial packet analysis to, say, cyber packet forensics, Telco customer assurance, low attack surface network defenses and so on.  For other potential problems (especially those in the computing space) please reach out.

With respect to projecting a bit into the future, I want to pull forward some of Oskar's words from his quote to make a point: "[computing] per unit of space."  This is something that I really had to wrap my head around to understand, and I think it is both worthy of calling out and explaining a bit -- the futurism aspect will come into focus shortly.  Unlike CPUs, which work off of complex queuing methodologies and compute in time, Maxeler's DFEs, and more generally FPGAs, compute in space.  What that means is that as data flows through the computing element, it can be processed at ultra-low latency and little cost.  This is nothing short of profound, because it means that in the case of networking the valueless action of moving data from system A to system B can now provide value.  This is in fact what Microsoft's Azure FPGA acceleration efforts for Neural Networks, Compression, Encryption, Search acceleration, etc. are all about.

To drive the point home further, what if you could put a networking card in a production database system and, through the live database log, ETL the data via a read operation, immediately putting it into your data warehouse?  (A rough software sketch of the idea follows.)  This would completely remove the need for an entire Hadoop infrastructure or for performing ELT, and that means computing in space frees data center space.  Putting ETL on a programmable card is my projection ahead to tease the reader with now-possible use cases, and ETL logic executing on a Dataflow card is computing in space, not time!
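To make the in-flight ETL idea more tangible, here is a minimal software analogy in Python.  It is not the Dataflow card implementation, and the change-log record format and field names are hypothetical; the point is simply that records are filtered and transformed while they stream past, so they arrive warehouse-ready with no landing zone or batch ELT step.

def etl_in_flight(change_log):
    # Yield warehouse-ready rows while the change stream is still moving.
    for record in change_log:
        if record.get("op") != "INSERT":      # filter: only new rows in this toy example
            continue
        yield (                               # transform: project and normalize fields
            record["trade_id"],
            record["symbol"].upper(),
            float(record["price"]),
            record["ts"],
        )

sample_log = [
    {"op": "INSERT", "trade_id": 1, "symbol": "hit", "price": "612.5", "ts": "2017-01-31T09:00:00Z"},
    {"op": "UPDATE", "trade_id": 1, "symbol": "hit", "price": "613.0", "ts": "2017-01-31T09:00:01Z"},
]
for row in etl_in_flight(sample_log):
    print(row)                                # rows are warehouse-ready the moment they are read

On a Dataflow card the equivalent filter and transform steps would be laid out as hardware stages, so the transformation happens as part of the data movement itself -- computing in space rather than time.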

Michael Hay

The Many Core Phenomena

Posted by Michael Hay Employee Dec 13, 2016

Introduction

Have you ever started an effort to push something, figured out it was like pushing a rope, paused, and then realized someone else had picked it up and was pulling you instead?  Well, that is what F1 (not the car racing) did for the market’s use of many core computing.  For Hitachi, our usage of FPGAs, custom chips and standard CPUs, also known as many core computing, is something we do because it elegantly and efficiently solves customer problems.  Even if it isn't fashionable it provides benefits, and we believe in benefiting our customers.  So in a very real sense the self-initiated mission that Shmuel and I embarked on more than 10 years ago has just come full circle.

 

About a decade ago we (Shmuel Shottan and Michael Hay) began an open dialogue and started to emphasize our efforts utilizing FPGA technologies in combination with “traditional” multi-core processors.  The product was of course HNAS, and the result was seriously high levels of performance unachievable using only general purpose CPUs.  Another benefit is that the cost performance of such an offering is naturally superior (from both a CAPEX and OPEX perspective) to an architecture utilizing only one type of computing.

 

Our dialogue depicted an architecture in which the following attributes defined the implementation:

  • High degree of parallelism - Parallelism is key to performance, and many systems attempt to achieve it. While processor-based implementations can provide some parallelism (provided the data has parallelism), as demonstrated by traditional MIMD architectures (cache coherent or message passing), such implementations require synchronization that limits scalability. We chose fine-grain parallelism by implementing state machines in FPGAs.
  • Off-loading - Off-loading allows the core file system to independently process metadata and move data while the multi-core processor module is dedicated to data management. This is similar to traditional coprocessors (DSPs, systolic arrays, graphics engines). This architecture provides yet another degree of parallelism.
  • Pipelining - Pipelining is achieved when multiple instructions are simultaneously overlapped in execution. For a NAS system it means multiple file requests overlapping in execution (see the conceptual sketch after Shmuel's quote below).
…”So, why offload a network  file system functions?  The key reason for it was the need to achieve massive fine grain parallelism. Some applications indeed lend themselves well to achieving parallelism with multiplicity of cores, most do not. Since a NAS system will “park” on network and storage resources, any implementation that requires multiplicity of processors will create synchronization chatter larger than the advantage of adding processing elements beyond a very small number. Thus, the idea of offloading to a “co-processor” required the design from the ground up of an inherently parallelized and pipelined processing element by design. Choosing a state machine approach and leveraging this design methodology by implementing in FPGAs provided the massive parallelism for the file system, as the synchronization was 'free'…” (Shmuel Shottan)
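As a conceptual software sketch of the pipelining idea above (not the HNAS FPGA design), the Python below runs each stage independently and hands work to the next stage through a queue, so several file requests are in flight at once.  The stage names (parse, metadata, data-move) are illustrative assumptions.

import queue
import threading

def stage(name, inbox, outbox):
    # Pull requests from inbox, do this stage's share of the work, pass them downstream.
    while True:
        req = inbox.get()
        if req is None:                # sentinel: propagate shutdown downstream
            if outbox is not None:
                outbox.put(None)
            break
        req = req + [name]             # stand-in for real work at this stage
        if outbox is not None:
            outbox.put(req)
        else:
            print("completed request:", req)

q_parse, q_meta, q_data = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=("parse", q_parse, q_meta)),
    threading.Thread(target=stage, args=("metadata", q_meta, q_data)),
    threading.Thread(target=stage, args=("data-move", q_data, None)),
]
for t in threads:
    t.start()

for i in range(5):                     # five file requests enter the pipeline and overlap
    q_parse.put(["file-request-%d" % i])
q_parse.put(None)                      # begin shutdown once all requests are queued

for t in threads:
    t.join()

In the FPGA state-machine version the stages are hardware, so the overlap comes without the thread and queue synchronization overhead that a software pipeline pays -- which is what the quote means by synchronization being 'free'.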

shmuel award 2.png

The implementation examples given in the above quote were about network file system functions, dedupe, etc.  At the time the reference was published it seemed like a depiction of an esoteric technology, and it did not resonate well within the technical community. Basically, it was rather like pushing on a rope, meaning it was a kind of interesting exercise in futility for us. People were on the general purpose CPU train and weren’t willing to think differently.  Maybe a better way to say it: the perceived return from the investments in writing for FPGAs, GPUs and other seemingly exotic computing devices was low.  If you did the math it wasn't so, but at the time that was the perception.  Another attribute of the time: Moore’s law still had plenty of gas.  So again, our message was like pushing on a rope: lots of effort, very little progress.  Our arguments weren't lost on everyone, though.  The work was tracked by HiPC, and Shmuel was invited to deliver a talk at the HiPC conference. Additionally, within the Reconfigurable Computing track of IEEE people paid attention.

 

Mr. Moore, strike one for the general purpose CPU: Intel acquired Altera, an FPGA company.

In a world where the trend of commoditization in storage, server and networking has been addressed multiple times and “accepted” by many, the addition of the FPGA as an accepted “canvas” for developers to paint on is natural and welcome. Intel, the world's largest chip company, is copying a page from its playbook: identify a mature and growing market and then embed it into its chipsets.  Does anyone remember that motherboards used to include only compute and core logic elements? Intel identified graphics, networking, SAS connectivity (now NVMe) and core RAID engines as ubiquitous in the previous decades, and indeed most NIC, HBA, RAID and graphics chip providers vanished.

 

Companies that leverage the functions provided by Intel’s chipsets for supply chain efficiency, while continuing to innovate and focus on the needs of users not satisfied with mediocrity, will excel (look at NVIDIA, with its focus on GPUs and CUDA). Others, which elected to compete with Intel on cost, perished. The lesson is that when a segment matures and becomes a substantial market, expect commoditization, and innovate on top of and in addition to the commodity building blocks, since one size does not fit all.

 

Some carry the commoditization arguments further. Not only are all networking, storage and compute systems predicted to run on the same Intel platform from one’s favorite OEM, but even software will become just “general purpose software”; thus, efforts put into software development are a waste since Linux will eventually do just as well.  This hypothesis is fundamentally flawed. It fails to distinguish between leveraging a supply chain for hardware and OSS for software when applicable, and it fails to acknowledge that innovation is not and will never be dead.

 

IoT, AI and other modern applications will only create more demands on networking, compute and storage systems. Innovation should, and now will, include enabling relevant applications to be implemented with FPGAs and, when relevant, custom chips. Hitachi, being the leading provider of enterprise scalable systems, is best positioned to lead in this new world.  Honestly, we know of no other vendor better positioned to benefit from such a future trend. There are several reasons for this, but the most important is that we recognize that innovation isn’t about OR but AND: for example, general purpose CPUs and FPGAs, scale-up and scale-out, and so on.  Therefore, we don’t extinguish skills due to fashion; we invest over the long term, returning benefits to our customers.

 

Mr. Moore, second swing and strike two: Microsoft catapults Azure into the future

Microsoft announced that it has deployed hundreds of thousands of FPGAs (field-programmable gate arrays) across servers in 15 countries and five different continents. The chips have been put to use in a variety of first-party Microsoft services, and they're now starting to accelerate networking on the company's Azure cloud platform.

 

In addition to improving networking speeds, the FPGAs which sit on custom, Microsoft-designed boards connected to Azure servers can also be used to improve the speed of machine-learning tasks and other key cloud functionality. Microsoft hasn't said exactly what the contents of the boards include, other than revealing that they hold an FPGA, static RAM chips and hardened digital signal processors.  Microsoft's deployment of the programmable hardware is important as the previously reliable increase in CPU speeds continues to slow down. FPGAs can provide an additional speed boost in processing power for the particular tasks that they've been configured to work on, cutting down on the time it takes to do things like manage the flow of network traffic or translate text. 

 

Azure CTO Mark Russinovich said using the FPGAs was key to helping Azure take advantage of the networking hardware that it put into its data centers. While the hardware could support 40Gbps speeds, actually moving all that network traffic with the different software-defined networking rules that are attached to it took a massive amount of CPU power.

"That's just not economically viable," he said in an interview. "Why take those CPUs away from what we can sell to customers in virtual machines, when we could potentially have that off-loaded into FPGA? They could serve that purpose as well as future purposes, and get us familiar with FPGAs in our data center. It became a pretty clear win for us."

"If we want to allocate 1,000 FPGAs to a single [deep neural net] we can," said Mike Burger, a distinguished engineer in Microsoft Research. "We get that kind of scale." (Microsoft Azure Networking..., PcWorld)

That scale can provide massive amounts of computing power. If Microsoft used Azure's entire FPGA deployment to translate the English-language Wikipedia, it would take only a tenth of a second, Burger said on stage at Ignite. (Programmable chips turning Azure into a supercomputing powerhouse | Ars Technica )

Third strike, you’re out - Amazon announces FPGA-enabled EC2 instances, the new F1
The paragon of the commodity movement, Amazon Web Services (not Intel), has just made bets on Moore’s Law, but not where you might think.  You see, if we change the game and say that Moore was right about the growth of processing power as a whole, then Mr. Moore’s hypothesis still holds true for FPGAs, GPUs and other special-purpose elements.  Both of us believe that if software is kind of like a digital organism, then hardware is the digital environment those organisms live on.  And just like in the natural world, both the environment and the organisms evolve independently and together, dynamically.  Let’s tune into Amazon’s blog post announcing F1 to get a snapshot of their perspective.

"Have you ever had to decide between a general purpose tool and one built for a very specific purpose? The general purpose tools can be used to solve many different problems, but may not be the best choice for any particular one. Purpose-built tools excel at one task, but you may need to do that particular task infrequently.

 

"…Computer engineers face this problem when designing architectures and instruction sets, almost always pursuing an approach that delivers good performance across a very wide range of workloads. From time to time, new types of workloads and working conditions emerge that are best addressed by custom hardware. This requires another balancing act: trading off the potential for incredible performance vs. a development life cycle often measured in quarters or years….

 

"...One of the more interesting routes to a custom, hardware-based solution is known as a Field Programmable Gate Array, or FPGA. In contrast to a purpose-built chip which is designed with a single function in mind and then hard-wired to implement it, an FPGA is more flexible. It can be programmed in the field, after it has been plugged in to a socket on a PC board….

 

"…This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications…. (Developer Preview..., Amazon Blogs)

Hitachi and its ecosystem of partners have led the way, and continue to lead, in providing FPGA-based and, where relevant, custom chip based innovations in many areas such as genomics, seismic sensing, seismic analysis, financial risk analysis, big data search, combinatorics, etc.

 

The real game: Innovation is about multiple degrees of freedom

To end our discussion, let’s review Hitachi’s credo.

[Our aim, as members of Hitachi,] is to further elevate [our] founding concepts of harmony, sincerity and pioneering spirit, to instill a resolute pride in being a member of Hitachi, and thereby to contribute to society through the development of superior, original technology and products.

 

Deeply aware that a business enterprise is itself a member of society, Hitachi [members are] also resolved to strive as good citizens of the community towards the realization of a truly prosperous society and, to this end, to conduct [our] corporate activities in a fair and open manner, promote harmony with the natural environment, and engage vigorously in activities that contribute to social progress.

This credo has guided Hitachi employees for more than 100 years and in our opinion is a key to our success.  What it inspires is innovation across many degrees of freedom to improve society.  This freedom could be in the adoption of a clever commercial model, novel technologies, commodity technologies, new experiences, co-creation and so on.  In other words, we are Hitachi Unlimited! 

 

VSP FPGA Blade - sketch.jpeg

During the time of pushing on the rope – with respect to FPGA and custom chip usage – if we had listened to the pundits and only chased general purpose CPUs, we’d not be in a leadership position now that the market has picked up and pulled.  Given this innovation rope, here are four examples:

 

Beyond these examples, there are many active and innovative FPGA and custom chip development tracks in our Research pipeline.  So we are continuing the intention of our credo by picking up other ropes and pushing abundantly to better society!