Intuizi Cut Storage Costs 50%+ with Akave on Avalanche

50%+ lower storage costs. Zero egress fees. 60% faster analytics. Intuizi migrated to Akave Cloud with Snowflake querying Iceberg tables—existing Parquet structures and pipelines unchanged.
Stefaan Vervaet
March 27, 2026

60% Faster Analytics Turnaround Time, Without Rebuilding Pipelines

Intuizi helps brands and agencies target and measure audiences using privacy-safe data. It processes consented consumer signals with personal identifiers removed before analysis. At 680 billion signals per month, data volume was growing faster than the storage economics could sustain. Egress fees turned routine data operations into budget decisions.

Intuizi integrated Akave Cloud, an enterprise-grade, S3-compatible object storage platform built on its own dedicated Avalanche Layer 1 (L1) blockchain. With Snowflake querying Apache Iceberg tables stored on Akave, Intuizi modernized its data lake without reworking ingestion pipelines.

Outcomes (reported by Intuizi):

  • 50%+ lower storage costs
  • Zero egress fees for data transfers and customer exports
  • 60% faster analytics turnaround time

When Petabytes Turn Exports Into Egress Bills

As Intuizi's data lake grew to multi-petabyte scale, the storage layer started showing stress.

Rising query costs. As Intuizi's Snowflake workloads grew larger and more complex, querying data against standard cloud storage became progressively more expensive. Metadata management through AWS Glue added operational overhead and introduced inconsistencies between catalogs.

Vendor lock-in. Storage and compute were tightly coupled to a single cloud provider. Moving data to test a new analytics engine (or simply exporting customer datasets) meant absorbing egress fees every time. Experimentation was expensive. Architectural flexibility was constrained.

A stalled modernization path. Intuizi wanted to adopt Apache Iceberg, an open table format that adds a structured metadata layer on top of Parquet data files. Iceberg would enable faster partition pruning and more efficient queries at petabyte scale. But converting to Iceberg without disrupting the pipelines, catalogs, and downstream AI models already in production posed real migration risk.
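The partition-pruning benefit described above can be sketched in a few lines. This is an illustrative toy, not the Iceberg API: a manifest records each data file's partition values, so a planner can decide which files to read without opening any Parquet at all. The paths and partition values are hypothetical.

```python
from dataclasses import dataclass

# Toy manifest entry: in Iceberg, metadata files record per-data-file
# partition values and column statistics. Names here are illustrative.
@dataclass
class DataFile:
    path: str
    country: str  # partition column
    day: str      # partition column

MANIFEST = [
    DataFile("s3://lake/us/2026-03-01/part-0.parquet", "us", "2026-03-01"),
    DataFile("s3://lake/us/2026-03-02/part-0.parquet", "us", "2026-03-02"),
    DataFile("s3://lake/uk/2026-03-01/part-0.parquet", "uk", "2026-03-01"),
]

def prune(manifest, country, day):
    """Return only files whose partition values match the predicate."""
    return [f.path for f in manifest if f.country == country and f.day == day]

# A query filtered on country and day touches one file out of three;
# the other two are skipped without being opened.
print(prune(MANIFEST, "us", "2026-03-02"))
```

At petabyte scale the same idea means a filtered query plans against metadata first and scans only the matching partitions, which is where the reduced scan overhead comes from.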

The storage setup that had served Intuizi well at smaller scale was now limiting what the company wanted to build next.

Same Snowflake Queries, New Storage Layer on Akave

Akave Cloud is an enterprise-grade, distributed object storage platform built on an Avalanche L1. Its API is S3-compatible and Snowflake-qualified, meaning enterprises can integrate it without changing existing pipelines, partitioning logic, or downstream tools.

For Intuizi, migration centered on a copy operation. Using standard Rclone or AWS CLI commands, their data moved from existing cloud storage into Akave Cloud. Bucket paths, object names, and Parquet partition structures were preserved as-is. To handle the volume efficiently, Akave and Intuizi established a dedicated connection for large data transfers, reducing network overhead and increasing throughput during the dataset migration.
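A copy of this kind is an ordinary S3-to-S3 sync pointed at a different endpoint. The sketch below composes the AWS CLI invocation an operator might run; the bucket names and endpoint URL are hypothetical placeholders, not Akave's actual values.

```python
# Sketch only: compose an `aws s3 sync` argument list targeting an
# S3-compatible endpoint. `--endpoint-url` redirects the CLI away from AWS;
# bucket paths and object names are preserved, so the existing Parquet
# partition structure survives the copy unchanged.

def build_copy_command(src_bucket: str, dst_bucket: str, endpoint: str) -> list:
    return [
        "aws", "s3", "sync",
        f"s3://{src_bucket}",
        f"s3://{dst_bucket}",
        "--endpoint-url", endpoint,  # hypothetical S3-compatible endpoint
    ]

cmd = build_copy_command(
    "legacy-lake",                       # hypothetical source bucket
    "akave-lake",                        # hypothetical destination bucket
    "https://s3.example-endpoint.test",  # placeholder, not a real endpoint
)
print(" ".join(cmd))
```

Rclone works the same way with a remote configured against the destination endpoint; either tool leaves the source layout intact.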

Once in Akave Cloud, native Apache Iceberg support applied a structured metadata layer on top of Intuizi's existing Parquet collections, without modifying the underlying files. Snowflake external tables pointed at those Iceberg datasets gained access to modern metadata. That meant faster partition pruning, faster query planning, and less scan overhead on multi-terabyte tables.

The resulting architecture preserves Intuizi's existing data flow. Their ingestion pipeline writes partitioned Parquet files structured by country, day, and provider. Akave Cloud distributes that data across its Akave-operated Oasis node network using erasure coding. It also replicates data across an additional durability layer, managed entirely by Akave Cloud.

Access control and policy events are written to Avalanche L1, creating cryptographically verifiable audit logs. Snowflake queries run against Iceberg tables stored on Akave Cloud, reading only the relevant partitions rather than scanning full datasets.

Existing compute workflows continued without modification, with Snowflake as the core analytics engine.

Why Avalanche: Verifiable Audit and Policy Controls at Scale

Akave Cloud is built on Avalanche because Avalanche's L1 architecture provides three capabilities that vendor-controlled infrastructure does not.

Tamper-evident audit trails. Policy and audit events processed through Akave Cloud are recorded on Avalanche L1 with cryptographic integrity. These records are tamper-evident. No single party can alter them, including Akave. This provides an alternative to vendor-controlled dashboards, because any party can independently verify the on-chain record using public blockchain explorers.
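The tamper-evidence property comes from hash-linking: each record commits to the hash of the record before it, so altering any past event breaks every later link. The sketch below shows that general principle in miniature; it is not Akave's actual event schema, and a blockchain adds consensus and replication on top of this chaining.

```python
import hashlib

# Toy hash-chained audit log: each entry's digest covers the previous
# digest plus the event text, so any edit to history is detectable.

GENESIS = "0" * 64

def append(log, event: str):
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append((event, digest))

def verify(log) -> bool:
    prev = GENESIS
    for event, digest in log:
        if hashlib.sha256((prev + event).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "grant:analyst:read:bucket/us")   # hypothetical policy events
append(log, "revoke:analyst:read:bucket/us")
assert verify(log)

# Rewriting history without recomputing every later digest fails verification.
log[0] = ("grant:intruder:read:bucket/us", log[0][1])
assert not verify(log)
```

Because verification needs only the public records and a hash function, any party can perform it independently, which is the substance of the claim that no single operator controls the audit trail.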

On-chain access control. Akave Cloud uses Avalanche's smart contract layer to enforce access policies in code rather than service tickets. Unlike ticket-based access changes, these policies execute exactly as written every time and are verifiable on-chain.

Fast finality and low fees. Avalanche is designed for near-instant on-chain transactions with low fees, making high-frequency audit and policy events practical where traditional audit workflows rely on periodic exports.

What goes on-chain: access, policy, and audit metadata only. Object contents stay in the storage layer.

Akave Cloud uses eCID, an encrypted content identifier computed after encryption and erasure coding, to verify data integrity. It also uses Proof of Data Possession (PDP), where storage nodes prove they hold a dataset without retransmitting it. Durability comes from erasure coding and cross-network replication.
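The two primitives named above can be illustrated together. This is a simplified sketch, not the real eCID or PDP protocols: it shows only that (1) an identifier computed over post-encryption, post-encoding bytes verifies integrity without exposing plaintext, and (2) possession can be proven by hashing stored bytes with a fresh random challenge. A real PDP scheme lets the verifier check proofs without holding the full data; here the verifier keeps a copy purely to keep the sketch short.

```python
import hashlib
import secrets

def toy_ecid(encrypted_shards) -> str:
    """Illustrative content identifier over encrypted, erasure-coded shards."""
    h = hashlib.sha256()
    for shard in encrypted_shards:
        h.update(shard)  # computed after encryption: plaintext never hashed
    return h.hexdigest()

def pdp_prove(stored: bytes, nonce: bytes) -> str:
    """Storage node's response: hash of the held bytes plus challenge nonce."""
    return hashlib.sha256(stored + nonce).hexdigest()

def pdp_check(reference: bytes, nonce: bytes, proof: str) -> bool:
    # Simplification: a real PDP verifier does NOT need the full data.
    return pdp_prove(reference, nonce) == proof

shard = b"ciphertext-shard"          # hypothetical encrypted shard
nonce = secrets.token_bytes(16)      # fresh challenge per audit round
assert pdp_check(shard, nonce, pdp_prove(shard, nonce))
```

The fresh nonce is what prevents a node from caching an old proof: each challenge forces a computation over the bytes it actually holds.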

Akave Cloud builds on Avalanche's L1 foundation to deliver storage that is verifiable by design, serving enterprise customers like Intuizi who need that verifiability alongside S3 compatibility and Snowflake qualification.

Results: 50% Lower Costs, 60% Faster Analytics

After integrating Akave Cloud, Intuizi's costs dropped because of Akave's storage pricing and zero egress fees. Performance improved because Iceberg metadata made Snowflake queries faster and more efficient.

Cost:
  • Storage costs decreased by more than 50% compared to previous cloud infrastructure
  • Zero egress fees for data transfers and customer exports
Performance:
  • Analytics turnaround time improved by 60%
  • Iceberg-backed queries ran approximately 50% faster than non-Iceberg equivalents
  • Performance aligned with Intuizi's internal benchmarks
  • Faster metadata access and partition operations across all compute engines
Flexibility:
  • Compute runs against Akave Cloud storage from any S3-compatible engine without reformatting data
  • Existing Parquet partitioning by country, day, and provider preserved without modification
  • No ingestion pipeline rearchitecture required for migration
"The partnership with Akave/Snowflake is a game-changer for our customers. By leveraging this modern, high-performance architecture, we've removed friction and accelerated our Core Storage for AI training. This means Intuizi can now deliver the actionable insights our clients need to power their segmentation, measurement, and AI initiatives with greater value." Ron Donaire, CEO, Intuizi
"Intelligence platforms, such as Intuizi benefit from Snowflake's unique architecture that combines analytics and AI capabilities in the cloud and support for externally hosted Iceberg tables on Akave. That gave Snowflake customers the Iceberg metadata benefits without changing how their Parquet data is organized." Stefaan Vervaet, CEO & Co-founder, Akave

What This Means for Enterprise Data Infrastructure

Most enterprise storage decisions still come down to cost and compatibility. This deployment adds a third variable: verifiability. Can the storage layer produce audit evidence that no single party controls?

That question is becoming harder to defer. In the EU, DORA and NIS2 require financial institutions and critical infrastructure operators to demonstrate auditable data handling. The EU AI Act introduces traceability requirements for training data. These regulations reward infrastructure that produces independently verifiable records by default, not as an aftermarket add-on. Akave Cloud on Avalanche aligns with that direction because the audit layer is built into the storage protocol, not bolted on after the fact.

For enterprise teams evaluating storage infrastructure, cost and performance still matter. But procurement teams are increasingly asking: can we prove what happened to our data, and can we verify that proof independently? Avalanche-based infrastructure like Akave Cloud is built to answer both, at petabyte scale, without replacing the analytics tools teams already rely on.

What Enterprise Teams Can Copy

  1. Migration risk stayed low. Akave Cloud's S3 compatibility and Snowflake qualification meant Intuizi kept existing pipelines, partitioning, and external table workflows.
  2. Iceberg delivered performance gains without rewriting Parquet. Iceberg metadata enabled partition pruning and reduced scan overhead for Snowflake external tables.
  3. Avalanche contributed verifiability. Audit trails and access policies are verifiable on-chain, reducing reliance on vendor-controlled logs.
  4. Outcomes were measurable. 50%+ storage cost reduction, 60% faster analytics turnaround, and performance aligned with Intuizi's internal benchmarks.

FAQs

How did Intuizi reduce Snowflake storage costs by 50%?

Intuizi migrated their multi-petabyte data lake to Akave Cloud, an S3-compatible object storage platform with flat-rate pricing at $14.99/TB/month and zero egress fees. Snowflake queries Apache Iceberg tables stored on Akave via external tables. Existing Parquet partitioning, ingestion pipelines, and downstream workflows remained unchanged.
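The flat-rate pricing lends itself to a back-of-the-envelope comparison. In the sketch below, the Akave rate ($14.99/TB/month) comes from this article; the hyperscaler storage and egress rates, and the data-lake size, are hypothetical round numbers chosen only to show the shape of the calculation, not Intuizi's actual figures.

```python
# Toy cost model. Only AKAVE_PER_TB is sourced from this article; the
# hyperscaler rates and workload sizes below are assumed for illustration.

AKAVE_PER_TB = 14.99        # $/TB/month, flat rate (from the article)
HYPERSCALER_PER_TB = 23.0   # $/TB/month, assumed standard-tier rate
EGRESS_PER_TB = 90.0        # $/TB exported, assumed; Akave charges zero

def monthly_cost(tb_stored, tb_exported, per_tb, egress_per_tb):
    return tb_stored * per_tb + tb_exported * egress_per_tb

stored, exported = 2000.0, 100.0  # hypothetical: 2 PB lake, 100 TB/mo exports
legacy = monthly_cost(stored, exported, HYPERSCALER_PER_TB, EGRESS_PER_TB)
akave = monthly_cost(stored, exported, AKAVE_PER_TB, 0.0)
print(f"legacy ${legacy:,.0f}/mo vs akave ${akave:,.0f}/mo")
```

Under these assumed rates, the egress line item disappears entirely and the storage line drops with the flat rate; the heavier a workload's export traffic, the larger the gap grows.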

What is Apache Iceberg and why does it improve Snowflake query performance?

Apache Iceberg is an open table format that adds a structured metadata layer on top of Parquet data files. It enables faster partition pruning, more efficient query planning, and reduced scan overhead at petabyte scale. Intuizi saw approximately 50% faster queries on Iceberg-backed datasets compared to non-Iceberg equivalents, and 60% faster analytics turnaround overall.

How do you migrate data to Akave Cloud without disrupting Snowflake pipelines?

Migration is a standard S3-to-S3 copy using Rclone or AWS CLI. Bucket paths, object names, and Parquet partition structures are preserved as-is. Akave Cloud's S3-compatible API is Snowflake-qualified, meaning external tables work without changing existing pipelines, partitioning logic, or downstream tools. Intuizi completed migration without reworking ingestion pipelines.

What does zero egress fees mean for enterprise data operations?

Zero egress fees means no per-GB charges for moving data out of storage. For Intuizi, this eliminated the cost of routine data operations—exporting customer datasets, testing new analytics engines, and running cross-platform queries. Experimentation became cheaper and architectural flexibility increased without egress penalties.

How does Akave Cloud provide verifiable audit trails for compliance?

Akave Cloud is built on an Avalanche Layer 1 blockchain. Policy and audit events are recorded on-chain with cryptographic integrity—tamper-evident records that no single party can alter, including Akave. Any party can independently verify these records using public blockchain explorers, providing audit evidence that doesn't rely on vendor-controlled logs.

What is eCID and how does it verify data integrity?

eCID (encrypted content identifier) is computed after encryption and erasure coding. It provides a cryptographic fingerprint that verifies data integrity without exposing contents. Combined with Proof of Data Possession (PDP), where storage nodes prove they hold a dataset without retransmitting it, Akave Cloud delivers verifiable storage at the protocol level.

Does Akave Cloud work with Snowflake and other analytics engines?

Yes. Akave Cloud's S3-compatible API is Snowflake-qualified for seamless integration via external tables. The same Parquet and Iceberg data is accessible from any S3-compatible engine—Snowflake, Databricks, Presto, Flink, Trino—without reformatting or duplication. Intuizi runs Snowflake as their core analytics engine against data stored on Akave Cloud.

What compliance frameworks does verifiable storage support?

Verifiable audit trails align with DORA (Digital Operational Resilience Act), NIS2 (Network and Information Security Directive), and EU AI Act traceability requirements for training data. These regulations require auditable data handling with independently verifiable records. Akave Cloud's on-chain audit layer is built into the storage protocol by default, not added as an aftermarket feature.

Why did Akave choose Avalanche for its storage infrastructure?

Avalanche's L1 architecture provides tamper-evident audit trails with cryptographic integrity, on-chain access control via smart contracts (policies execute exactly as written and are verifiable), and fast finality with low fees for high-frequency audit and policy events. Object contents stay in the storage layer—only access, policy, and audit metadata goes on-chain.

What results did Intuizi achieve after migrating to Akave Cloud?

Intuizi reported 50%+ lower storage costs, zero egress fees on all data transfers and customer exports, 60% faster analytics turnaround time, and approximately 50% faster Iceberg-backed queries versus non-Iceberg equivalents. Migration required no ingestion pipeline rearchitecture, and existing Parquet partitioning by country, day, and provider was preserved.

Try Akave Cloud Risk Free

Akave Cloud is enterprise-grade, distributed, and scalable object storage designed for large-scale datasets in AI, analytics, and enterprise pipelines. It offers S3 object compatibility, cryptographic verifiability, immutable audit trails, and SDKs for agentic workflows; all with zero egress fees and no vendor lock-in, saving up to 80% on storage costs vs. hyperscalers.

Akave Cloud works with a wide ecosystem of partners operating hundreds of petabytes of capacity, enabling deployments across multiple countries and powering sovereign data infrastructure. The stack is also pre-qualified with key enterprise applications such as Snowflake.

About Intuizi

Intuizi is a U.S.-based audience intelligence platform for human-derived signals data. The platform ingests consented consumer signals with personal identifiers removed. It transforms them into privacy-safe insights for financial teams, marketers, brands, and AI-driven analytics teams. Intuizi does not store raw PII and maintains strict compliance standards across all data activation and measurement use cases.

About Avalanche

Avalanche is an ultra-fast, low-latency blockchain platform designed for builders who need high performance at scale. The network’s architecture allows for the creation of sovereign, efficient and fully interoperable public and private layer 1 (L1) blockchains which leverage the Avalanche Consensus Mechanism to achieve high throughput and near-instant transaction finality. The ease and speed of launching an L1, and the breadth of architectural customization choices, make Avalanche the perfect environment for a composable multi-chain future. 

Supported by a global community of developers and validators, Avalanche offers a fast, low-cost environment for building decentralized applications (dApps). With its combination of speed, flexibility, and scalability, Avalanche is the platform of choice for innovators pushing the boundaries of blockchain technology.

Modern Infra. Verifiable by Design

Whether you're scaling your AI infrastructure, handling sensitive records, or modernizing your cloud stack, Akave Cloud is ready to plug in. It feels familiar, but works fundamentally better.