In a nutshell
Edelweiss is a leading financial conglomerate providing a broad range of financial products across domestic and global geographies, with a turnover of USD 582M in FY 2015. The Edelweiss Group worked on a number of acquisitions and mergers in the period 2014-2016, so the IT department ended up with multiple business applications, which in turn inflated its requirement for servers and licences. One of the things they decided to work on was a re-look at their servers and proprietary software licences.
Challenges, not problems
Edelweiss had a rapidly growing customer base generating large transaction volumes across its myriad business units, and needed efficient systems to transact, analyse and report on the data generated in order to target its products and services effectively. For this, it had deployed several multi-core servers with MSSQL as the database engine. But the organisation was facing bottlenecks in transaction processing and reporting, and was looking at significant further investment in licences and hardware to meet current and future business needs.
Solutions, not ideas
The client had invested 100+ person-years in developing 5000+ stored procedures containing complex business logic. Most of these stored procedures made heavy use of temporary tables, complex subqueries, CTE-based updates and other powerful constructs. Over the years, the client had also built a large in-house team developing .Net and MS-SQL based applications and managing the infrastructure.
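To give a flavour of what had to be carried over, here is a minimal, hypothetical sketch of one such pattern: a temporary table feeding a CTE-based update. The procedure, table and column names (RecalcExposure, Trades, Exposure) are illustrative, not the client's actual code.

```sql
-- T-SQL original (illustrative): temp table feeding a CTE-based update
CREATE PROCEDURE dbo.RecalcExposure AS
BEGIN
    SELECT ClientId, SUM(Amount) AS Total
    INTO #tmp_exposure
    FROM dbo.Trades
    GROUP BY ClientId;

    WITH ranked AS (
        SELECT ClientId, Total,
               ROW_NUMBER() OVER (ORDER BY Total DESC) AS rnk
        FROM #tmp_exposure
    )
    UPDATE e
    SET    ExposureRank = r.rnk
    FROM   dbo.Exposure e
    JOIN   ranked r ON r.ClientId = e.ClientId;
END;

-- Hand-translated PL/pgSQL equivalent (illustrative)
CREATE OR REPLACE FUNCTION recalc_exposure() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE tmp_exposure ON COMMIT DROP AS
    SELECT client_id, SUM(amount) AS total
    FROM   trades
    GROUP  BY client_id;

    WITH ranked AS (
        SELECT client_id, total,
               ROW_NUMBER() OVER (ORDER BY total DESC) AS rnk
        FROM   tmp_exposure
    )
    UPDATE exposure e
    SET    exposure_rank = r.rnk
    FROM   ranked r
    WHERE  r.client_id = e.client_id;
END;
$$ LANGUAGE plpgsql;
```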
Our team attacked the project on multiple fronts:
- They first translated the schema using standard tools, loaded a year’s test data into the system, and verified the loaded data for correctness. This also gave the team confidence in the accuracy of the schema mapping.
- They sat down and translated the stored procedures by hand, one by one. In the process, some data types needed to be converted, and some types which were native to the original database but not present in Postgres needed to be “synthesized” (the type-mapping sketch after this list gives an idea of what this involved).
- They set up the infrastructure stack, including Postgres, PGPool and other tools, to deliver reliable and scalable access, and performed load testing to determine how many concurrent connections could be sustained. They also set up mirror databases for near-real-time replication, so that queries could be directed to the read-only replica servers (an illustrative configuration follows this list).
- They sat with the client’s application development team and guided them in modifying their .Net application code to access Postgres. This part of the puzzle was relatively easy.
- They then ran both systems in parallel for a few weeks, importing the batch-mode data into both databases and running aggregate queries on each to check that the results matched (a sample consistency check follows this list). Every mismatch was weeded out, till there was 100% confidence in data consistency.
- Finally, on one fateful night, the cutover to the new system was done.
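The data type conversion and “synthesis” mentioned above followed a mapping broadly along these lines. The mapping shown, and the table and column names, are illustrative; the actual mapping was of course driven by the client’s schema.

```sql
-- Representative MSSQL -> PostgreSQL type mapping (illustrative)
--   DATETIME          -> timestamp(3)
--   MONEY             -> numeric(19,4)
--   BIT               -> boolean
--   NVARCHAR(n)       -> varchar(n)
--   UNIQUEIDENTIFIER  -> uuid

-- Where no exact native equivalent exists, a DOMAIN can "synthesize" the
-- original type so that column definitions stay readable:
CREATE DOMAIN mssql_money AS numeric(19,4);

CREATE TABLE trades (
    trade_id    uuid          NOT NULL PRIMARY KEY,
    client_id   integer       NOT NULL,
    trade_date  timestamp(3)  NOT NULL,
    amount      mssql_money   NOT NULL,
    is_settled  boolean       NOT NULL DEFAULT false
);
```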
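The read-scaling setup described above would typically pair PostgreSQL streaming replication with PGPool-II load balancing. The host names and sizing numbers below are placeholders, not the client’s actual configuration.

```
# postgresql.conf on the primary (illustrative)
wal_level = hot_standby          # 'replica' on PostgreSQL 9.6 and later
max_wal_senders = 5
hot_standby = on                 # takes effect on the standby servers

# pgpool.conf (illustrative): writes go to the primary,
# read-only queries are load-balanced across the standbys
backend_hostname0 = 'db-primary'
backend_port0     = 5432
backend_weight0   = 1
backend_hostname1 = 'db-standby1'
backend_port1     = 5432
backend_weight1   = 1

load_balance_mode     = on
master_slave_mode     = on
master_slave_sub_mode = 'stream'
num_init_children     = 64       # concurrent client connections to sustain
max_pool              = 4
```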
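The parallel-run checks were essentially the same aggregates run on both engines and then diffed; something along the following lines, again with hypothetical table and column names.

```sql
-- Run the same aggregates on MSSQL and PostgreSQL and compare the output.
-- Any difference in counts, totals or date ranges points to a load or
-- translation error to be chased down.
SELECT 'trades'        AS table_name,
       COUNT(*)        AS row_count,
       SUM(amount)     AS total_amount,
       MIN(trade_date) AS first_trade,
       MAX(trade_date) AS last_trade
FROM trades;
```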
After our client went live with the new database, our team stayed on-site and later on-call for a few months to ensure that the systems were stable. There were one or two reports of hung connections in that time, which were successfully addressed.
The system is still in daily use.
Now that’s value
Report processing times dropped significantly, by 30-40%. No AMC or licence costs were incurred for the community-maintained PostgreSQL engine, and no new hardware was needed to meet estimated data volumes for the next 5 years.