Circadify
Insurance Technology · 12 min read

What Is an Insurtech Stack? Where Contactless Vitals Fit In

A breakdown of the modern insurtech stack for life insurance carriers, from core systems to API layers, and where contactless vitals screening fits in.

ayhealthbenefits.com Research Team

The insurtech stack is one of those terms that gets used constantly but rarely defined with any precision. Ask five people at a conference what it means and you'll get five different answers, most of them involving hand-waving about "the cloud" and "AI." But for life insurance carriers trying to modernize underwriting, the stack is a specific, layered architecture, and understanding where each piece fits determines whether a technology investment actually produces faster policy issuance or just generates another integration headache. Contactless vitals screening, the newest entrant in the data layer, is a good example of a technology that only works when the surrounding stack is built to receive it.

"75% of insurtech funding in Q3 2025 targeted AI-centered firms, with the market shifting from future-gazing to pragmatic deployment of tools that operationalize data and strip out administrative friction." — Gallagher Re, Q3 2025 Global InsurTech Report

What an insurtech stack actually looks like

The term "insurtech stack" refers to the full set of software systems, APIs, data services, and infrastructure that a life insurance carrier uses to quote, underwrite, issue, and manage policies. Older carriers built these as monolithic platforms. Everything lived in one system, tightly coupled, hard to change. The modern approach, and the one that most carriers are migrating toward, is a composable, API-first architecture where each function operates as an independent service.

According to a 2025 analysis by Golden Door Asset Management, the transition from monolithic core systems to composable, API-first architecture "is no longer a strategic option for InsurTechs and carriers; it is the central determinant of future market leadership." That's a strong claim, but the data backs it up. Carriers with modular stacks can integrate new data sources in weeks. Those running legacy systems often need 12 to 18 months for the same integration.

The stack breaks down into distinct layers, each handling a different function:

| Layer | Function | Example technologies | How it connects |
|---|---|---|---|
| Core administration | Policy lifecycle, billing, claims | Guidewire, Duck Creek, Majesco, EIS | Central system of record |
| Underwriting engine | Risk assessment, decision rules, pricing | CAPE Analytics, Verisk, proprietary rule engines | Consumes data, produces decisions |
| Data services | External data feeds for risk scoring | LexisNexis, MIB, Rx databases, EHR platforms, rPPG screening | API calls from underwriting engine |
| Distribution/POS | Agent portals, consumer applications, embedded insurance | Custom portals, Bolt, Socotra | Front end that triggers underwriting |
| Analytics/BI | Mortality analysis, book performance, fraud detection | Tableau, Snowflake, custom ML models | Reads from all layers |
| Integration/orchestration | API gateway, event bus, workflow automation | MuleSoft, AWS API Gateway, Kafka | Connects everything else |

Each layer can be swapped, upgraded, or extended independently. That's the whole point. When a carrier wants to add a new data source, say contactless vitals screening, they shouldn't need to rewrite their core administration system. They add it at the data services layer and pipe the results into the existing underwriting engine through a standardized API.

The data services layer is where the action is

For life insurance underwriting specifically, the data services layer is where the most meaningful innovation is happening. This is the layer that feeds information into underwriting decisions, and it has expanded considerably in the past three years.

Traditionally, the data inputs for life insurance underwriting were limited: an application questionnaire, an attending physician statement, maybe a paramedical exam. The digital underwriting movement introduced electronic data sources that could replace or supplement those manual inputs.

Today's data services layer typically includes:

  • Prescription drug histories (Rx databases via Milliman IntelliScript or ExamOne)
  • Motor vehicle records
  • MIB check codes
  • Credit-based insurance scores
  • Electronic health records (via LexisNexis Health Intelligence, formerly Human API)
  • Medical claims data
  • Criminal and public records
  • And increasingly, biometric screening data from rPPG-based contactless vitals

RGA's 2025 research on digital underwriting evidence demonstrated that combining multiple data sources produces better mortality outcomes than any single source alone. EHRs showed the largest individual impact on reducing mortality slippage in accelerated underwriting programs, but the best results came from layering EHRs with claims data and lab databases together.

The implication for stack architecture is clear: the data services layer needs to be designed for easy addition of new sources. A carrier that hardcoded their Rx data integration five years ago now faces a rebuild to add EHRs. Carriers that built a proper API abstraction layer can plug in new data sources without touching the underwriting engine itself.
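One way to picture that abstraction layer is a common interface that every data service implements. The sketch below is illustrative, not any carrier's actual design; the class names, `fetch` signature, and stubbed payloads are all assumptions:

```python
from abc import ABC, abstractmethod
from typing import Any


class DataSource(ABC):
    """Common interface every data service implements."""

    name: str

    @abstractmethod
    def fetch(self, applicant_id: str) -> dict[str, Any]:
        """Return evidence for one applicant in a structured form."""


class RxHistorySource(DataSource):
    name = "rx_history"

    def fetch(self, applicant_id: str) -> dict[str, Any]:
        # In production this would call the Rx vendor's API.
        return {"source": self.name, "fills": []}


class RppgVitalsSource(DataSource):
    name = "rppg_vitals"

    def fetch(self, applicant_id: str) -> dict[str, Any]:
        # In production this would read the scan result posted by the mobile SDK.
        return {"source": self.name, "heart_rate_bpm": 68}


def gather_evidence(sources: list[DataSource], applicant_id: str) -> dict[str, dict]:
    """Adding a new source means adding one class; the engine is untouched."""
    return {s.name: s.fetch(applicant_id) for s in sources}
```

The point of the pattern is the last function: the underwriting engine iterates over whatever sources are registered, so onboarding a new feed is additive rather than a rebuild.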

Where contactless vitals screening fits

Contactless vitals screening through remote photoplethysmography (rPPG) occupies a specific and somewhat unusual position in the insurtech stack. It sits in the data services layer, but unlike most other data sources, it generates data at the point of application rather than pulling historical records.

Here's how that works in practice: an insurance applicant opens the carrier's digital application on their phone. At some point during the application flow, they're prompted to complete a 30 to 60 second facial scan. The phone's front camera captures subtle color changes in their skin caused by blood flow, and from that signal, the system extracts heart rate, heart rate variability, respiratory rate, and blood pressure indicators.
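The core signal-processing step can be sketched in a few lines. This is a minimal illustration assuming a pre-extracted mean green-channel trace; real rPPG pipelines also handle face detection, motion artifacts, lighting changes, and multi-channel fusion:

```python
import numpy as np


def estimate_heart_rate(green_signal, fps=30.0):
    """Estimate heart rate (BPM) from a mean green-channel trace via an FFT peak."""
    sig = np.asarray(green_signal, dtype=float)
    sig = sig - sig.mean()                        # remove the DC component
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    # Restrict to plausible human heart rates: 40-180 BPM.
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0


# Synthetic 45-second trace at 30 fps: a 72 BPM pulse plus camera noise.
rng = np.random.default_rng(0)
t = np.arange(0, 45, 1 / 30)
trace = 0.5 * np.sin(2 * np.pi * (72 / 60) * t) + 0.05 * rng.standard_normal(t.size)
```

On the synthetic trace, `estimate_heart_rate(trace)` recovers approximately 72 BPM, which is the kind of extracted value, not raw video, that flows downstream.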

That data then flows through the integration layer as a structured API response, same as any other data source, into the underwriting engine where it's weighed alongside Rx data, EHR records, and everything else.
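A hypothetical structured result from an rPPG service might look like the dictionary below. The field names are illustrative, not any specific vendor's schema:

```python
# Hypothetical structured result posted by an rPPG service after a scan.
# Field names are illustrative; real vendor schemas differ.
rppg_result = {
    "applicant_id": "A-1",
    "scan_duration_s": 45,
    "signal_quality": 0.93,       # confidence that the reading is usable
    "heart_rate_bpm": 68,
    "hrv_rmssd_ms": 42,
    "respiratory_rate_bpm": 14,
    "bp_indicator": "normal",     # categorical BP estimate, not a cuff reading
}
```

Because the payload is just structured fields, the underwriting engine treats it like any other evidence record, typically after checking a quality score like `signal_quality` before trusting the readings.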

| Data source | Data timing | Data type | Integration effort | What it tells underwriters |
|---|---|---|---|---|
| Rx database | Historical (years) | Medication fill records | Low (established APIs) | Chronic conditions, treatment adherence |
| Electronic health records | Historical (years) | Clinical notes, labs, diagnoses | Medium (varies by aggregator) | Comprehensive medical history |
| MIB check codes | Historical (years) | Prior application flags | Low (industry standard) | Previous insurance disclosures |
| Paramedical exam | Point-in-time | Blood, urine, vitals | High (logistics, scheduling) | Current physiological state |
| Contactless rPPG scan | Point-in-time | HR, HRV, RR, BP indicators | Low (SDK/API) | Current cardiovascular indicators |

The interesting architectural question is where the rPPG processing happens. Some implementations run the signal processing on-device, in the applicant's phone, and send only the extracted vital signs to the carrier's systems. Others send the raw video signal to a cloud processing service. The choice affects latency, privacy architecture, and bandwidth requirements, all of which matter at the integration layer.

For carriers already running an API-first stack, adding contactless vitals is a relatively lightweight integration. RGA noted in their 2025 health technologies analysis that rPPG "is a non-contact method for measuring physiological signals by analyzing subtle color changes in the skin captured through standard video cameras," and that the technology has potential applications across the insurance value chain, from underwriting to ongoing policyholder engagement.

Why stack architecture determines which carriers can innovate

This is the part that often gets lost in discussions about specific technologies. The technology itself, whether it's EHRs, rPPG, or AI-driven decisioning, is only useful if the carrier's stack can actually consume it. And many carriers still can't.

Gen Re's 2024 U.S. Individual Life Accelerated Underwriting Survey, covering 38 carriers, found that 82% had implemented accelerated underwriting in some form. But the sophistication of those programs varied enormously. Some carriers were running fully automated straight-through processing with multiple data sources. Others had "accelerated" programs that were really just traditional underwriting with one or two electronic data checks bolted on.

The difference almost always came down to architecture. Carriers with modern, composable stacks could experiment with new data sources, run A/B tests on underwriting rules, and iterate quickly. Carriers running 20-year-old administration systems needed major projects just to add a single new data feed.

The insurtech market has recognized this gap. According to Fintech Global, insurtech funding topped $1 billion in February 2026 alone, with investment "increasingly directed toward underwriting technology, automation, and systems designed to support carriers, brokers, and managing general agents." Much of that money is going toward middleware and integration platforms, the connective tissue that lets different stack components talk to each other.

Building a stack that can absorb new data sources

For carriers evaluating their technology architecture, there are a few practical principles that determine how easily they can integrate new underwriting data sources like contactless vitals:

API standardization matters more than vendor choice. Whether you're using Guidewire or Duck Creek for core administration is less important than whether your data services communicate through well-documented, versioned APIs. The carriers that struggle with integration usually have proprietary, undocumented interfaces between systems.

The underwriting engine should be rule-configurable, not code-dependent. When a new data source comes in, underwriters need to create rules for how to weight that data in decisions. If every rule change requires a code deployment, the stack is too rigid. Modern underwriting engines let actuarial and underwriting teams adjust rules through configuration interfaces.
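A minimal sketch of what configuration-driven rules look like, assuming rules live in data (for example a YAML file that underwriting teams edit) rather than in code. The rule schema and thresholds here are invented for illustration:

```python
# Rules live in data, so adding a threshold for a new source
# needs a configuration change, not a code deployment.
RULES = [
    {"field": "heart_rate_bpm",       "op": "between", "lo": 45, "hi": 100, "fail": "refer"},
    {"field": "respiratory_rate_bpm", "op": "between", "lo": 8,  "hi": 22,  "fail": "refer"},
]


def evaluate(evidence: dict, rules: list[dict]) -> str:
    """Return 'accept' if every applicable rule passes, else the failing rule's action."""
    for rule in rules:
        value = evidence.get(rule["field"])
        if value is None:
            continue  # missing evidence is handled elsewhere (e.g., order another source)
        if rule["op"] == "between" and not (rule["lo"] <= value <= rule["hi"]):
            return rule["fail"]
    return "accept"
```

When rPPG vitals come online, underwriters append entries to `RULES` (or its YAML equivalent) and the engine picks them up on the next evaluation.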

Data normalization is the unglamorous bottleneck. An Rx database returns medication names. An EHR returns diagnosis codes. An rPPG scan returns numeric vital signs. The underwriting engine needs all of this in a common format to make decisions. The data normalization layer, often overlooked in architecture discussions, is what makes or breaks the integration experience.
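A sketch of that normalization step, assuming a made-up common evidence schema of `{"kind", "code", "value"}` records; the input payload shapes are invented stand-ins for what each vendor class returns:

```python
def normalize(source: str, payload: dict) -> list[dict]:
    """Map heterogeneous vendor payloads onto one evidence schema
    the underwriting engine can consume uniformly."""
    if source == "rx":
        return [{"kind": "medication", "code": f["drug"], "value": f["fills"]}
                for f in payload["history"]]
    if source == "ehr":
        return [{"kind": "diagnosis", "code": code, "value": None}
                for code in payload["icd10_codes"]]
    if source == "rppg":
        return [{"kind": "vital", "code": name, "value": reading}
                for name, reading in payload["vitals"].items()]
    raise ValueError(f"unknown source: {source}")
```

Every downstream rule then operates on the same three-field record, regardless of whether the evidence started life as a medication fill, a diagnosis code, or a camera-derived vital sign.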

Latency budgets need planning. A consumer applying on their phone expects a near-instant decision. If the stack calls five external data services sequentially, each taking two to three seconds, the applicant is waiting 10 to 15 seconds. Modern stacks make parallel API calls and use asynchronous processing to keep the consumer experience fast.
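The fan-out pattern is straightforward with async I/O. The sketch below simulates each external call with a sleep (shortened to 0.2 s so it runs quickly; the article's scenario is 2 to 3 s per call) and shows that concurrent calls cost roughly one call's latency, not the sum:

```python
import asyncio
import time


async def call_source(name: str, latency_s: float) -> tuple[str, dict]:
    """Stand-in for one external data-service call."""
    await asyncio.sleep(latency_s)
    return name, {"source": name, "status": "ok"}


async def gather_all(sources: dict[str, float]) -> dict[str, dict]:
    # Fan all calls out concurrently: wall time is roughly the slowest
    # single call, not the sum of all of them as in a sequential loop.
    results = await asyncio.gather(
        *(call_source(name, lat) for name, lat in sources.items())
    )
    return dict(results)


# Five sources at 0.2 s each: ~0.2 s total concurrently, vs ~1.0 s sequentially.
sources = {"rx": 0.2, "mib": 0.2, "mvr": 0.2, "ehr": 0.2, "rppg": 0.2}
start = time.perf_counter()
evidence = asyncio.run(gather_all(sources))
elapsed = time.perf_counter() - start
```

The same shape applies whether the orchestration layer is application code, an API gateway, or an event-driven workflow engine.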

Current research and evidence

The academic and industry research on insurtech stack architecture has matured considerably. A 2025 paper in Sensors (MDPI) on integrating remote photoplethysmography with machine learning noted that rPPG research has progressed from laboratory proof-of-concept to practical deployment scenarios, with researchers beginning to explore applications in "previously inaccessible areas of healthcare monitoring."

Munich Re's late 2024 research on the accelerated underwriting landscape observed that while program structures have stabilized, digital health data tool usage continues to expand. The implication: carriers aren't redesigning their AUW programs from scratch anymore. They're plugging better data into frameworks that already work.

The Business Research Company projects the insurtech market will reach $739.69 billion by 2035, with AI-driven underwriting as one of the fastest-growing segments. That growth depends heavily on carriers having stack architectures that can actually deploy AI models in production, not just in proofs of concept that never leave the innovation lab.

The future of the insurtech stack

The next phase of stack evolution is likely to be about orchestration rather than individual components. Most of the pieces exist: cloud-native core systems, rich data APIs, configurable underwriting engines, real-time biometric screening. The challenge is making them work together smoothly at scale.

Openkoda's 2026 insurance technology trends analysis pointed to "composable insurance platforms" as the direction carriers are moving: platforms where underwriting workflows can be assembled from modular components rather than built as custom monoliths. In this model, a carrier could swap in a different EHR provider, add contactless vitals, or change their decisioning logic without affecting the rest of the system.

For carriers still running legacy architectures, the gap between what's possible and what they can actually deploy keeps widening. The underwriting technology landscape in 2026 rewards carriers that invested in flexible architecture years ago and creates mounting pressure on those that didn't.

Frequently asked questions

What is an insurtech stack?

An insurtech stack is the complete set of software systems, APIs, data services, and infrastructure that an insurance carrier uses to manage the policy lifecycle. For life insurance, this typically includes core administration, an underwriting engine, data services (Rx, EHR, MIB, biometric screening), distribution platforms, analytics, and an integration layer that connects everything.

How does contactless vitals screening integrate into an existing insurtech stack?

Contactless vitals screening through rPPG integrates at the data services layer, the same way other electronic data sources like prescription databases or EHRs connect. The applicant completes a brief facial scan during the digital application, and the resulting vital signs (heart rate, respiratory rate, HRV, blood pressure indicators) flow into the underwriting engine through a standardized API.

Why can't some carriers adopt new underwriting data sources quickly?

The main barrier is stack architecture. Carriers running monolithic, tightly coupled legacy systems often need 12 to 18 months to integrate a new data source. Carriers with modern, API-first composable architectures can add new data services in weeks because the integration patterns are standardized and the underwriting engine is rule-configurable rather than code-dependent.

What data sources are most commonly used in accelerated underwriting?

According to Gen Re's 2024 survey and RGA's 2025 research, the most common data sources include prescription drug histories, MIB check codes, motor vehicle records, and credit-based insurance scores. Electronic health records and medical claims data are growing rapidly. Contactless biometric screening through rPPG is in early-stage adoption but gaining carrier interest for its ability to capture real-time physiological data at the point of application.

Solutions like Circadify are working in this space, providing rPPG-based screening that plugs into existing insurtech stacks through standard API integrations, giving carriers a new source of underwriting data without requiring applicants to visit a lab or wear a device.

insurtech stack · contactless vitals · life insurance technology · insurance API