April 3, 2025

Data science x financial services. Why QED invested in Hydrolix
At QED Investors, we have always believed in the transformative power of data to drive financial services forward. But data alone isn’t enough—its value depends entirely on how it is collected, processed and applied to drive decisions and actions. The sheer volume of data being generated today is orders of magnitude greater than ever before, and advances in data platforms have given companies the ability to turn that data into superior financial products. Yet many existing solutions struggle to keep pace, forcing companies to rethink how they store, process and analyze data at scale.
This is precisely why we invested in Hydrolix. Hydrolix allows companies to ingest, store and query log data at massive scale, keeping the data hot and accessible for extended periods while maintaining cost efficiency. The result is a streaming data lake for log and event data that is easily searchable and cheap to query, making Hydrolix the ideal data platform for storing and analyzing log data in real time. This infrastructure is a game-changer for businesses that rely on large-scale data, enabling them to extract value from their information faster, reduce costs and stay ahead in an increasingly data-centric world.

To contextualize the significance of this technology, it is important to examine the evolution of data in financial services over time.
Evolution of Data Science for Financial Services
When QED co-founder Nigel Morris began building what would become Capital One in the late 1980s, the bank's data was stored on physical tapes and wiped clean on a regular basis because keeping it accessible was prohibitively expensive. This was a massive missed opportunity for unique data insights. But what if that same data, rather than being a cost burden, could drive competitive advantage and power smarter financial products? That insight led to the birth of Capital One.
Capital One's ability to leverage data was made possible by the rise of relational database management systems (RDBMS) and data warehouses pioneered by IBM, Oracle and Microsoft in the 1980s. This technology enabled Capital One to store data longer and analyze it in ways never before possible, resulting in the largest Oracle database of its time. Oracle's infrastructure powered Capital One's Information-Based Strategy: scientific data testing to deliver "the right product to the right customer, at the right time and at the right price." This transformed banking by expanding financial inclusion, proving that disciplined data usage could drive competitive advantage. The rest of the industry followed suit, and the use cases exploded across other industries. However, these legacy data warehouses remained expensive, slow and inflexible, creating an opportunity for disruption.
Then came the birth of the cloud in the 2000s, which revolutionized data access, allowing smaller-scale companies to leverage data warehouses and big data analytics. With the launch of AWS and other cloud providers, businesses no longer needed costly data centers or on-prem infrastructure, leveling the playing field for younger, nimbler fintechs. This shift enabled companies like Credit Karma, one of QED’s earliest investments, to process the vast amounts of data needed to deliver personalized financial recommendations, paving the way for a new era of data-driven financial services.
The 2010s brought a wave of innovation that redefined data warehousing. Companies like Snowflake and Databricks transformed how financial services firms stored, processed and analyzed data. Capital One, an early investor and power user of Snowflake, leveraged its architecture to decouple storage from compute—reducing costs while dynamically scaling resources. Snowflake enabled real-time analysis, faster querying and seamless data sharing across organizations, moving financial services from static reporting to dynamic, data-driven decision-making in areas like marketing, fraud detection and risk assessment.
Databricks offered a more flexible framework for processing both structured and unstructured data, making it particularly valuable for AI and machine learning. Unlike traditional warehouses, Databricks allowed firms to run complex computations across big datasets, integrating real-time analytics with advanced modeling. This capability was crucial for risk modeling and predictive analytics, enabling firms to move beyond retrospective analysis to proactive decision-making.
Fast forward to today: technological innovation has driven step-function improvements in how data is used, transforming financial services and virtually every other industry. Yet significant challenges remain, creating opportunities for disruption:
- Enterprises struggle with short data retention due to high storage costs, limiting their ability to analyze long-term trends
- Even when data is retained, it is often aggregated and stripped of detail, making it difficult to extract meaningful insights as business needs evolve
- Inflexible data architectures slow down queries and hinder adaptability, while real-time decision-making remains constrained by complex, latency-heavy pipelines
- Existing data infrastructures are either too costly—over-provisioning resources for peak loads—or too rigid, leading to slow queries and data loss when demand spikes
Hydrolix is purpose-built to solve these challenges.

Why Hydrolix?
Unlike traditional data architectures that force companies to choose between cost, speed and flexibility, Hydrolix enables real-time analytics on massive datasets without requiring aggregation or expensive storage solutions. Its advanced compression and columnar storage allow enterprises to retain raw, high-fidelity log data indefinitely while maintaining lightning-fast query speeds. By dynamically adapting to changing workloads, Hydrolix ensures cost-efficient scaling without unnecessary infrastructure costs. Unlike batch-based systems that introduce latency, Hydrolix powers a real-time data pipeline, enabling instant decision-making.
Hydrolix is expanding its library of integrations to allow companies to get more value out of their log data. That includes connectors with Splunk, Kibana (the visualization tool traditionally used with the ELK stack) and Databricks. We've already noted the tremendous value of Databricks; by combining Hydrolix with Databricks' AI and machine learning capabilities, enterprises can extract deeper insights from their data across a wide range of use cases, including fraud and anomaly detection.
Hydrolix's technology has transformed the economics and analytics of log data retention, powering security, observability, ecommerce, adtech and other log-intensive use cases for 400+ customers across numerous industries. For the financial services industry specifically, while we're still in the early innings, we believe technological breakthroughs like this will naturally drive the next generation of smarter, more advanced financial products. Hydrolix enables companies to analyze long-term trends, react to market events in real time and extract granular insights—all critical for financial services platforms.
At QED Investors, we back companies that are reshaping financial services—and Hydrolix has the potential to do just that. As data volumes explode, financial institutions that efficiently extract value from that data will have a clear advantage. Hydrolix is building the essential infrastructure to make that possible, and we’re proud to lead their $80MM Series C and support them on their journey.
Alex Taub: Alex.taub@qedinvestors.com; Connect on LinkedIn