Genady Chybranov

7 steps to successfully implement AI in financial services

Blog post created by Genady Chybranov on Aug 6, 2018

A couple of weeks ago, I ran a webinar on implementing AI in financial services. I have received multiple comments and questions since then, so I decided to summarise the main points in this post.


Step 1. Educate all the relevant stakeholders involved in a project on AI capabilities

One of the most common reasons AI/ML projects fail is unrealistic expectations or insufficient business support. The root cause of both issues is a lack of understanding of AI/ML capabilities. As one of the most hyped topics in recent years, AI has plenty of predictions, opinions and overviews circulating in the media and on the internet. The views range from one extreme, that AI is going to completely replace humans and all of us can happily retire, to the other, that the technology is just an empty trend. Hence the first step in a successful AI project is to educate business stakeholders on the potential AI has, and to be upfront about its limitations too. I have found that the less technical the explanation, the easier it is to understand, especially if the explanation is complemented with industry-specific examples and use cases. Be practical!


Step 2. Start with a well-defined business use case

To significantly increase the chances of success for any innovation project, we need to start with a well-defined business case. The business case should include a clear definition of what problem we are solving, who we are solving it for, what the success criteria are and how long it will take. The clearer the better: qualify and quantify. It should be no surprise that projects with the highest impact and shortest implementation time find it easier to get support and funding. After all, everyone wants to be part of a success story.


Step 3. Understand the risks related to the AI solution and have a contingency plan

It is very important to understand the technology's limitations and the risks associated with it. During the webinar I covered the three main ones:

  1. Accuracy. One point often missed when building a business case for an ML implementation is how accurate the results need to be. For example, how confident are we in our estimate that a customer will repay the loan: 90%, 70% or 30%? What level of risk is acceptable for the particular use case? AI models often give a probability of an outcome, and we need to be comfortable with some uncertainty.
  2. Biases. It is important to remember that it is humans who decide what training data to use and which data points to take into consideration. As humans we are susceptible to biases and can introduce them into our ML models.
  3. Regulations. The financial services industry is heavily regulated, and regulators are catching up with technology trends. Therefore, organisations need to be prepared to justify the decisions taken by their AI systems to the relevant authorities.
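To make the accuracy point concrete, here is a minimal sketch of how a model's probability output turns into a business decision. The function name, the loan scenario and the 90% threshold are illustrative assumptions, not part of any real lending system:

```python
def approve_loan(repayment_probability, risk_threshold=0.9):
    """Approve only when the model's estimated repayment probability
    meets the risk threshold the business has agreed to accept.
    The 0.9 default is purely illustrative."""
    return repayment_probability >= risk_threshold

# The same model output leads to different decisions depending on
# the threshold, which is exactly the business conversation to have
# before the project starts.
print(approve_loan(0.92))        # passes a 90% threshold
print(approve_loan(0.75))        # fails a 90% threshold
print(approve_loan(0.75, 0.7))   # passes a more tolerant 70% threshold
```

The model does not change between the last two calls; only the acceptable level of risk does.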


Step 4. Think about data needed in terms of variety and volume

The quality of an AI decision is directly correlated with the volume, quality and variety of the data. "Garbage in, garbage out" is very applicable to AI. While deciding on an AI business case, it is crucial to evaluate the availability and quality of the required data, and also to consider how difficult it is to integrate this data into the analytics pipeline.

The quality of models can be significantly improved by providing additional data points coming from external data sources such as IoT devices, mobile sensors, video analytics, social media and others. Since this data resides outside the organisation, it is important to consider how it will be integrated and governed.
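A simple way to make data availability and quality evaluable is a pre-flight completeness check before any modelling begins. The sketch below is a hypothetical example; the field names and the 10% missing-rate ceiling are assumptions you would replace with your own:

```python
def data_quality_report(records, required_fields, max_missing_rate=0.1):
    """For each required field, measure the share of records where the
    value is missing, and flag whether it stays under the agreed ceiling.
    The 10% default ceiling is illustrative."""
    report = {}
    total = len(records)
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        rate = missing / total if total else 1.0
        report[field] = {"missing_rate": rate, "ok": rate <= max_missing_rate}
    return report

# Toy records standing in for customer data pulled from source systems.
records = [
    {"income": 50000, "age": 34},
    {"income": None,  "age": 41},
    {"income": 62000, "age": 29},
    {"income": 48000, "age": None},
]
print(data_quality_report(records, ["income", "age"], max_missing_rate=0.3))
```

Running a report like this for each candidate use case gives the business a concrete answer to "do we actually have the data?" before funding is committed.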


Step 5. Leverage the available resource eco-system

Access to talent is one of the main obstacles to AI adoption. A pragmatic organisation can leverage a wide eco-system to improve its ROI and delivery speed. I covered five different areas for sourcing relevant resources:

  1. Internal resources are a very common first choice if the organisation has a strong data science team. The obvious pros are full control over a solution developed in-house and easier integration. The cons are that the cost of internal resources may still be high, and it takes time to develop a new project from scratch.
  2. Start-up collaboration is another popular option. In this case the organisation gets some parts of the solution off the shelf using innovative technology, which is a good way to speed up the project and learn new tech. Another pro is that start-ups are usually cheaper than traditional vendors. However, there are some significant risks that need to be evaluated when dealing with a start-up: its ability to scale and provide support, its funding and level of maturity, its current investors (what if one is your major competitor?), and procurement and integration cycles that need to be adjusted for start-up culture.
  3. Traditional vendors are becoming more and more involved in AI projects. Their tools and solutions are more mature and come with enterprise-level support. All this comes at a price, though, and it can often be quite challenging to integrate a multi-vendor solution.
  4. Academia is an often underutilised source of highly skilled data scientists. Research institutes are usually looking to apply their IP in industry projects, so this is a good way to tap into cutting-edge research and technology. Commercial organisations need to be aware of cultural differences when engaging with academia, especially when it comes to speed.
  5. A bonus option is co-creation with a large technology company. Hitachi, as an example, has very large R&D departments that build new generations of products. We often co-create solutions involving emerging technologies with our customers. It is a great way for us to test technology in real-life applications, and our clients get access to top R&D resources to build bespoke solutions.

I would advise evaluating all five options when choosing resources, mixing and matching them according to your needs.


Step 6. Optimise data preparation and simplify feature engineering

Data preparation and feature engineering are the most time-consuming tasks, and they are mostly performed manually. Organisations should consider using tools to automate them and increase efficiency. In this blog you can find how Hitachi data scientists reduced the time required from weeks to just hours.
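To illustrate what "feature engineering" means in practice, here is a minimal, hypothetical sketch that turns raw transaction records into per-customer features. The field names and the choice of features are illustrative; this is exactly the kind of repetitive manual step that the tooling mentioned above aims to automate:

```python
from collections import defaultdict

def customer_features(transactions):
    """Aggregate raw transactions into simple per-customer features:
    total spend, transaction count and average amount."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for t in transactions:
        totals[t["customer_id"]] += t["amount"]
        counts[t["customer_id"]] += 1
    return {
        cid: {
            "total_spend": totals[cid],
            "txn_count": counts[cid],
            "avg_amount": totals[cid] / counts[cid],
        }
        for cid in totals
    }

# Toy transaction log standing in for a real source system.
txns = [
    {"customer_id": "c1", "amount": 100.0},
    {"customer_id": "c1", "amount": 50.0},
    {"customer_id": "c2", "amount": 20.0},
]
print(customer_features(txns))
```

Each new feature idea normally means another pass like this over the raw data, which is why automating the step saves so much time.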



Step 7. Scale by automating model deployment and re-training

Model accuracy diminishes as the world around it changes and is no longer represented by the old training data. That is why models need to be re-trained often to keep the required accuracy levels; many fraud detection models are updated daily. We have created tools that automate and operationalise this process with very little manual intervention. I would recommend reading this blog to understand more.
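The monitoring side of that process can be sketched in a few lines: compare recent predictions with what actually happened, and flag the model for re-training once live accuracy drifts below an agreed floor. The function and the 85% floor below are illustrative assumptions, not the actual Hitachi tooling:

```python
def needs_retraining(predictions, actuals, accuracy_floor=0.85):
    """Return True when live accuracy on recent outcomes has drifted
    below the agreed floor. The 0.85 default is illustrative."""
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    accuracy = correct / len(actuals)
    return accuracy < accuracy_floor

# A model still matching reality is left alone; one that has drifted
# triggers the automated re-training pipeline.
print(needs_retraining([1, 0, 1, 1], [1, 0, 1, 1]))  # still accurate
print(needs_retraining([1, 0, 0, 1], [1, 1, 1, 1]))  # drifted, retrain
```

In an automated pipeline, a check like this runs on every batch of fresh labelled outcomes and kicks off re-training without a human in the loop.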


If you are interested in listening to the webinar recording, it is available here: