From investment decisions to insurance policies, there are many situations in financial services that require running complex calculations across a very large number of scenarios. Cloud computing promises to make this faster and cheaper, but how does it really work?

Do You Want Results Faster, Cheaper, or Both?

When preparing to run extensive scenario analyses in the cloud, there are two primary considerations: compute capacity and database access. You may have a complex algorithm or model that needs only a small amount of data but lots of processing power to run many calculations in parallel. You may have simpler models that need little computing power but must ingest a massive amount of data. Or you may have functions that need to scale both compute and database capabilities.

Then you need to consider how to manage the cost. In some cases the processes run overnight on in-house servers and produce results without a directly attributable cost, but not fast enough for today’s market demands. In other cases the existing infrastructure simply cannot deliver the desired output at all. That leaves the essential question: how quickly do you need results for the application to be successful?

Scaling Compute for Faster App Response

In one example, a Beacon client wanted to build a web app that dynamically generates graphs for an insurance policy, with sliders controlling key inputs such as desired coverage or monthly payment. Using the app, users enter basic information, such as age, height, and weight, plus lifestyle details such as smoking and drinking habits. The calculation does not involve a huge amount of data, but takes about half a second per chart. Sequentially running the hundreds of desired graphs took more than 5 minutes, far more than the 10–15 seconds the client was targeting.
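The gap is easy to quantify. The source gives "hundreds" of charts at roughly half a second each; taking 600 charts as an illustrative figure (the exact count is not stated), a back-of-envelope calculation shows how much parallelism the 10–15 second target implies:

```python
# Back-of-envelope sizing for the chart workload (illustrative numbers).
charts = 600             # "hundreds" of graphs; 600 is an assumed figure
seconds_per_chart = 0.5  # ~half a second per chart, per the example

sequential = charts * seconds_per_chart  # total sequential runtime in seconds
target = 10                              # lower end of the 10-15 s goal
workers_needed = sequential / target     # parallel workers implied by target

print(f"sequential: {sequential:.0f} s")
print(f"workers for {target} s target: {workers_needed:.0f}")
```

With 600 charts this gives 300 seconds sequentially (the 5 minutes above) and roughly 30-way parallelism to hit 10 seconds, which is consistent with 15 multi-core virtual machines each handling a couple of charts at a time.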

Since the client had already written and tested their algorithm in C++, they did not want to rewrite it for this app. That’s where the Beacon team stepped in, showing the client how to use a simplified wrapper and interface generator to wrap the code in Python and call it from the web application. The application itself was built in just a few days using Beacon’s Glint development framework and visualization tools. Using the compute dashboard, the team could easily specify how many virtual machines to spin up, and tested several different setups. With 15 virtual machines the client achieved the results they wanted, dropping the processing time from 5 minutes to under 10 seconds.
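The fan-out pattern behind this speedup is straightforward, since each chart is independent of the others. The sketch below uses Python’s standard `concurrent.futures` to spread chart calculations across worker processes. The wrapped C++ routine is replaced by a stand-in `compute_chart` function, and the names here are illustrative, not Beacon’s or the client’s actual code:

```python
from concurrent.futures import ProcessPoolExecutor

def compute_chart(params):
    # Stand-in for the wrapped C++ routine, which would take user
    # inputs (age, coverage, monthly payment, ...) and return chart data.
    coverage, payment = params
    return coverage / payment  # placeholder calculation

def compute_all(param_grid, workers=15):
    # Fan the independent chart calculations out across worker processes;
    # each scenario runs in parallel with the others.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_chart, param_grid))

if __name__ == "__main__":
    grid = [(100_000 + 10_000 * i, 200 + i) for i in range(30)]
    results = compute_all(grid)  # one data point per scenario
```

In the client’s setup the workers were virtual machines managed from the compute dashboard rather than local processes, but the structure, one independent calculation per worker slot, is the same.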

Scaling Database Access for Large Scenarios

Another Beacon client was running extensive investment scenario analyses that were both compute and data intensive. Scaling database access was the critical challenge for this client. Getting timely results involved thousands of nearly simultaneous database queries. They managed to achieve this scale with their existing data cloud platform using more than 80 parallel nodes, but the cost was very high and they wanted a less expensive option.

Working with Beacon, the client set up a series of 20 parallel open-source databases behind a memory cache and load balancer. Using out-of-the-box capabilities of Beacon Core, they built a scalable, production-quality setup that supported massive parallelization of the database far more cheaply. With this approach they produced their scenario analyses in a similar amount of time, but at roughly one-hundredth of the cost.
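Beacon Core’s internals aren’t public, but the shape of this setup, a memory cache in front of a pool of replicas with round-robin load balancing, can be sketched in a few lines. The `replica_*` functions below are stand-ins; in practice they would be connections to the 20 parallel databases:

```python
import itertools

class CachedPool:
    """Route read queries round-robin across database replicas,
    answering repeated queries from an in-memory cache first."""

    def __init__(self, backends):
        self._backends = itertools.cycle(backends)  # round-robin iterator
        self._cache = {}

    def query(self, sql):
        # Serve from the memory cache when possible, so hot queries
        # never reach a database node at all.
        if sql not in self._cache:
            backend = next(self._backends)
            self._cache[sql] = backend(sql)
        return self._cache[sql]

# Stand-ins for connections to two of the parallel replicas.
def replica_a(sql): return f"a:{sql}"
def replica_b(sql): return f"b:{sql}"

pool = CachedPool([replica_a, replica_b])
pool.query("SELECT 1")  # hits replica_a; result is cached
pool.query("SELECT 1")  # served from the cache
pool.query("SELECT 2")  # hits replica_b
```

Spreading thousands of near-simultaneous queries across many cheap replicas, with the cache absorbing the repeats, is what lets commodity open-source databases match the throughput of a far more expensive managed platform.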

Delivering Flexibility and Scale

Beacon’s cloud- and data-agnostic services provide the essential tools to take advantage of the scale and flexibility of cloud computing. A wide range of connectors and interfaces makes it easy to ingest your existing data sources, import code libraries, and link to existing systems. Whether you want to build new applications or extend the capabilities of existing ones, Beacon’s enterprise-scale cloud infrastructure and comprehensive management dashboards let you harness the scale of cloud computing to get the analysis you need when you need it.