Introducing ZaKi

Published on: May 8, 2024

Revolutionizing ZK proving with optimized hardware solutions

TL;DR: We are launching a new, vertically integrated ZK hosting service.

ZaKi’s software is based on ICICLE, running on hardware optimally configured for accelerated ZK workloads. It delivers unparalleled cost-performance advantages in ZK computation.

Join the waitlist: https://www.ingonyama.com/zaki

ZaKi’s Technological Edge

ZaKi leverages the capabilities of ICICLE, a state-of-the-art ZKP acceleration library. A new variant of ICICLE, named ICICLE-NG (ICICLE No-GPU), enables a seamless transition from standard computational setups, such as local dev environments, to ones optimized for ZK-specific workloads: high core-count CPUs, cutting-edge Nvidia GPUs, and substantial RAM allocations.
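To make this concrete, below is a minimal, hypothetical sketch of the idea in Rust; the trait and both backends are illustrative assumptions, not the actual ICICLE or ICICLE-NG API. The prover is written once against a backend abstraction, so moving from a local CPU dev environment to a GPU-backed instance is a configuration change rather than a rewrite.

```rust
// Illustrative only: a hypothetical backend abstraction, not the real ICICLE/ICICLE-NG API.

/// Multi-scalar multiplication over toy u64 arithmetic, standing in for the
/// dominant kernel in many provers (real MSMs operate on elliptic-curve points).
trait MsmBackend {
    fn msm(&self, scalars: &[u64], points: &[u64]) -> u64;
}

/// CPU path: what a developer would run in a local dev environment.
struct CpuBackend;
impl MsmBackend for CpuBackend {
    fn msm(&self, scalars: &[u64], points: &[u64]) -> u64 {
        scalars
            .iter()
            .zip(points)
            .map(|(s, p)| s.wrapping_mul(*p))
            .fold(0, u64::wrapping_add)
    }
}

/// GPU path selected on an accelerated instance. In practice this would dispatch
/// to a CUDA kernel; here it simply reuses the CPU math to keep the sketch runnable.
struct GpuBackend;
impl MsmBackend for GpuBackend {
    fn msm(&self, scalars: &[u64], points: &[u64]) -> u64 {
        CpuBackend.msm(scalars, points)
    }
}

/// Prover code is written once against the trait and never changes.
fn commit(backend: &dyn MsmBackend, scalars: &[u64], points: &[u64]) -> u64 {
    backend.msm(scalars, points)
}

fn main() {
    let scalars = [3u64, 5, 7];
    let points = [11u64, 13, 17];
    // Swap in GpuBackend when deploying to a GPU-backed instance.
    println!("commitment = {}", commit(&CpuBackend, &scalars, &points));
}
```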

ZK development is commonly done on traditional dev environments where ZK circuits are typically designed and tested on local CPUs or standard cloud-based CPU instances. As projects scale, the need to enhance the performance of ZK provers becomes critical, prompting a shift towards hardware acceleration.

Here’s where ZaKi makes a difference. By providing a managed environment that is already optimized for ZK computations, we remove the hurdles of hardware setup and configuration, allowing teams to focus solely on their ZK applications. The journey typically unfolds in three stages:

  1. ICICLE-NG: Initially, developers can utilize ICICLE without requiring direct GPU access. This phase is designed to help developers gauge the potential performance benefits of moving to GPU-accelerated instances without needing hardware knowledge up front (a rough sketch of such an estimate follows this list).
  2. Hardware-Accelerated Deployment: ZaKi operates on a pay-per-proof model, reminiscent of AWS Lambda, allowing developers to run their provers or specific sub-protocols on our optimized ZK instances. We ensure cost efficiency by achieving performance parity with the ICICLE-NG performance estimator through our tailored hardware solutions.
  3. Continuous Improvement and Support: As developers become more familiar with the platform, they benefit from continuous updates to both ICICLE software and hardware configurations, which are handled entirely by our backend — free from the overhead typically associated with such upgrades.
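To illustrate the kind of estimate stage 1 is meant to produce, the sketch below applies Amdahl’s law: if a measured fraction of the prover’s runtime is spent in GPU-friendly kernels such as MSM and NTT, and those kernels speed up by some factor on a GPU, the attainable end-to-end speedup is bounded accordingly. The function and numbers are illustrative assumptions, not output of the actual ICICLE-NG estimator.

```rust
/// Amdahl's-law estimate of end-to-end prover speedup.
/// `accel_fraction`: share of runtime spent in kernels that can be offloaded (MSM, NTT, ...).
/// `kernel_speedup`: how much faster those kernels run on the GPU.
fn estimated_speedup(accel_fraction: f64, kernel_speedup: f64) -> f64 {
    1.0 / ((1.0 - accel_fraction) + accel_fraction / kernel_speedup)
}

fn main() {
    // Illustrative numbers only: a prover spending 80% of its time in MSM/NTT,
    // with those kernels running 20x faster on a GPU, gains roughly 4.2x end to end.
    println!("estimated speedup: {:.1}x", estimated_speedup(0.8, 20.0));
}
```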

How ZaKi is Different

  • Effective Compute Cost: We define our proof cost metric as the nominal instance cost divided by instance utilization, and further divided by prover efficiency (a worked example follows this list). By optimizing both utilization and prover efficiency, ZaKi drastically reduces operational costs and offers a significant economic advantage over traditional cloud providers.
  • Developer-Focused: Designed with developers in mind, ZaKi simplifies the transition to using powerful computational resources, ensuring that even teams without deep hardware expertise can leverage the benefits of advanced ZK proving.
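A minimal worked example of the metric defined above, with made-up numbers (none of these figures are ZaKi or cloud prices): dividing the nominal cost by utilization and prover efficiency shows how a nominally cheap instance can end up expensive per useful unit of proving work.

```rust
/// Effective compute cost as defined above: nominal instance cost divided by
/// instance utilization, divided again by prover efficiency.
fn effective_cost(nominal_cost_per_hour: f64, utilization: f64, prover_efficiency: f64) -> f64 {
    nominal_cost_per_hour / utilization / prover_efficiency
}

fn main() {
    // Illustrative numbers only. A cheap instance that sits mostly idle and runs
    // an untuned prover costs ~$6.67 per effective hour...
    let generic = effective_cost(1.00, 0.30, 0.50);
    // ...while a pricier instance kept busy by a tuned prover costs ~$2.47.
    let tuned = effective_cost(2.00, 0.90, 0.90);
    println!("generic: ${generic:.2}/eff-hour, tuned: ${tuned:.2}/eff-hour");
}
```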

Understanding the Need for ZaKi

Generating Zero-Knowledge (ZK) proofs is notoriously data- and compute-intensive, requiring substantial computational resources to achieve optimal performance. The deployment of ZK applications necessitates the use of specialized hardware such as GPUs, which excel at handling parallel computations. Maximizing these performance gains demands a sophisticated approach to software/hardware co-design, a process fraught with complexity and requiring specialized knowledge.

As ZK technology increasingly integrates into products transforming our daily lives, the demand for more accessible and powerful computational solutions is becoming apparent. Our aim with ZaKi is to empower developers to utilize hardware accelerators like GPUs, without the intricacies of mastering hardware-specific configurations or the nuances of GPU programming.

Get Involved

As we prepare for a wider rollout, we invite developers to join our waitlist for early access. By participating in the early stages, you can influence the future development of ZaKi and ensure it meets your specific needs.

Frequently Asked Questions

Q1: What is the status of ICICLE’s capabilities?

A1: Since its launch in March 2023, ICICLE, an MIT-licensed GPU library, has made significant advancements. It supports languages like Golang, Rust, and C++ and is being integrated with frameworks such as Gnark and Lambdaworks. The library has been adopted by leaders in the field, including EZKL, Brevis, Espresso Systems, Orbiter Finance, Lurk Labs, EigenDA, and ZKWasm. The recent Version 2 update has expanded ICICLE’s functionality to include polynomial arithmetic and small fields, allowing for full end-to-end GPU-based coding of provers.

Q2: How is a silicon company managing infrastructure challenges?

A2: Our approach to managing complex infrastructure challenges leverages the agility of our team and strategic guidance from experts with experience at leading firms like Netflix and AWS. Ingonyama does not own physical data centers; instead, we have established strong, long-standing partnerships with leading web3 infrastructure providers across Europe and the US over the past few years. These relationships ensure that our infrastructure is scalable and capable of growing with our partners’ needs, delivering high performance without the complexities of traditional data center operations.

Q3: Has ZaKi been tested in real-world environments?

A3: Yes, ZaKi has undergone rigorous testing: first by Ingonyama engineers on our lab machines, then through extended trials involving external contributors via our grant program, and finally together with our data center partners. This has allowed us to refine and optimize ZaKi across various hardware configurations, ensuring robust performance under diverse operational conditions.

Q4: How does ZaKi contribute to the decentralization of ZK provers?

A4: ZaKi serves as a bridge towards decentralization. While initially centralized, the architecture allows for the evolution into more decentralized configurations, such as running on commodity hardware or dedicated ASICs, depending on the maturity and needs of the prover ecosystem.

Q5: What sets ZaKi apart from other ZK clouds?

A5: ZaKi offers a platform specifically tailored for developers who need to fine-tune hardware performance for specialized provers. ZaKi supports deep customization, enabling users to experiment with various computational strategies and optimize their applications for their unique needs. Other pay-per-proof solutions offer standard APIs for circuit setup and execution; those services can themselves benefit from running on ZaKi.

Q6: What are ZaKi’s plans for ensuring data privacy?

A6: The initial version of ZaKi does not include witness privacy features; however, future versions are set to leverage technologies like Nvidia Confidential Computing. We are also exploring innovations in ZK combined with Multi-Party Computation (MPC) to enhance privacy protections.

Q7: Can developers get bare-metal access to ZaKi’s infrastructure?

A7: We plan to offer various levels of access to our infrastructure, including options for bare-metal utilization. Developers who require specific configurations are encouraged to contact us to discuss potential custom setups.

Q8: What does the introduction of ZaKi mean for ZKContainers?

A8: The launch of ZaKi signifies the phasing out of ZKContainers. By building on the foundational technology of ZKContainers, ZaKi represents its natural evolution, offering more advanced and scalable ZK proving capabilities.

Q9: Can you share more details on the specs and benchmarks of your machines?

A9: We are launching with a single configuration. Our GPU of choice is the Nvidia L4, with four L4s per machine, so users can leverage the powerful ICICLE Multi-GPU feature. In a recent experiment with our GPU Gnark Groth16 implementation, adding a second GPU cut the prover running time in half.

For each L4 we designate at least 64 GB of RAM and 24 CPU threads running at a high clock frequency. The table below compares ZaKi to a popular CPU-only instance and to the AWS and GCP GPU instances best suited to ZK workloads.

Here we measured the effective cost of Espresso Systems’ verifiable information dispersal protocol. The primary bottleneck for this protocol is a KZG commitment, which we ported to the GPU using ICICLE. Under worst-case assumptions, we achieve a 5X improvement in effective cost-performance compared to the second-best instance and 12.7X compared to the CPU-only instance.
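For context on why this kernel benefits so much from a GPU: a KZG commitment to a polynomial is a multi-scalar multiplication of its coefficients against the structured reference string, so every term can be computed independently before a single final aggregation. The sketch below shows that shape over toy integer arithmetic as an assumption-laden illustration; the real computation operates on elliptic-curve points, and it is this data-parallel loop that the GPU MSM replaces.

```rust
/// Toy illustration of a KZG commitment's structure: commit(p) = sum_i c_i * srs_i,
/// an MSM of the polynomial's coefficients against the SRS. Plain integers stand in
/// for elliptic-curve points so the data-parallel shape is visible.
fn kzg_commit_toy(coefficients: &[u64], srs: &[u64]) -> u64 {
    assert_eq!(coefficients.len(), srs.len());
    coefficients
        .iter()
        .zip(srs)
        // Each term is independent -- exactly the work a GPU parallelizes.
        .map(|(c, s)| c.wrapping_mul(*s))
        // One reduction at the end combines the partial results.
        .fold(0, u64::wrapping_add)
}

fn main() {
    let coefficients = [4u64, 0, 7, 2]; // p(x) = 4 + 7x^2 + 2x^3
    let srs = [1u64, 9, 81, 729];       // toy stand-in for the structured reference string
    println!("toy commitment = {}", kzg_commit_toy(&coefficients, &srs));
}
```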

Follow Ingonyama

Twitter: https://twitter.com/Ingo_zk

YouTube: https://www.youtube.com/@ingo_zk

GitHub: https://github.com/ingonyama-zk

LinkedIn: https://www.linkedin.com/company/ingonyama

Join us: https://www.ingonyama.com/career
