Originally published on HackMD
In our previous blog post we described Blaze — a Rust library for ZK acceleration on Xilinx FPGAs.
Since the release of Blaze, we have been actively working on its architecture and shaping an API for our NTT primitives implementation. Today we are ready to introduce a new module for working with NTT.
What is the NTT Module in a Nutshell?
Blaze architecture makes it easy to add new modules. In our introductory Blaze blog post we described the Poseidon hash function, and here we will describe the NTT module.
NTT, or Number Theoretic Transform, is the term used for a Discrete Fourier Transform (DFT) over a finite field. Our module provides an API for computing NTTs of size 2²⁷. To use it, an input byte vector of elements must be specified, with each element represented in little-endian byte order. The result is a byte vector of the same shape, in which each element is again represented as little-endian bytes.
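To make the idea concrete, here is a minimal, purely illustrative NTT: a naive DFT over a tiny finite field. Blaze's hardware NTT works over a large ZK-friendly field at size 2²⁷; this sketch uses p = 17 and n = 4, with 4 as a primitive 4th root of unity (4² = 16 ≡ −1 and 4⁴ ≡ 1 mod 17). None of this is Blaze code.

```rust
// Minimal number-theoretic transform: a DFT over the finite field Z_p.
// Illustrative only; real ZK NTTs use a large prime field and FFT-style
// butterflies rather than this O(n^2) loop.
const P: u64 = 17;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

// X_k = sum_j a_j * omega^(j*k) mod p
fn ntt(input: &[u64], omega: u64) -> Vec<u64> {
    let n = input.len() as u64;
    (0..n)
        .map(|k| {
            (0..n)
                .map(|j| input[j as usize] * pow_mod(omega, j * k) % P)
                .sum::<u64>()
                % P
        })
        .collect()
}

fn main() {
    // The transform of a constant vector concentrates into coefficient 0.
    println!("{:?}", ntt(&[1, 1, 1, 1], 4)); // [4, 0, 0, 0]
}
```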
How is NTT Structured from a Developer’s Point of View?
In this brief blog post we will not dive deeply into how and why the calculations are structured. You can read about this in a series of previously published posts:
- NTT 201 - Foundations of NTT Hardware Design
- Foundations of NTT Hardware Design, Chapter 2: NTT in Practice
More information on this subject will be released as part of our upcoming NTT Webinar.
The important thing for us is that on-device memory (in our case we’re working with HBM) is divided into two buffers:
- Into one of the buffers, the host writes data (our input vector)
- In the other buffer, a computation takes place; the two buffers then swap roles.
The main advantage of this design (specifically, having two buffers) is that it supports back-to-back NTT computation.
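The two-buffer design above can be modeled as a simple ping-pong scheme. The names `buf_host` and `buf_kernel` follow the post; everything else here is a simplified sketch, not Blaze's actual driver code.

```rust
// Schematic model of the two-buffer (ping-pong) design: one buffer belongs
// to the host, the other to the FPGA kernel, and they swap roles each round.
struct PingPong {
    buf_host: usize,   // buffer the host currently writes input to / reads results from
    buf_kernel: usize, // buffer the FPGA kernel currently computes on
}

impl PingPong {
    fn new() -> Self {
        Self { buf_host: 0, buf_kernel: 1 }
    }

    // After each round the buffers swap roles, so the host can stream the
    // next input while the kernel processes the previous one (back-to-back NTT).
    fn swap(&mut self) {
        std::mem::swap(&mut self.buf_host, &mut self.buf_kernel);
    }
}

fn main() {
    let mut b = PingPong::new();
    b.swap();
    println!("host={}, kernel={}", b.buf_host, b.buf_kernel); // host=1, kernel=0
}
```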
So our calculation involves the following steps:
- Host writes input vector to card/device memory (can be HBM or DDR)
- Previously written data is read to FPGA
- Data is processed in FPGA
- Processed data is written back to card/device memory
- Host reads the result from card/device memory
Our design supports writing the next input vector and reading back the previous result in parallel.
Additionally, in the current version of the driver, the input byte vector must be divided into 16 segments, which we will call banks. The partitioning into banks is not sequential; it is determined by how the subsequent calculations will be performed. At this stage of the implementation, Blaze takes care of all required conversions, so no additional integration work or data manipulation is required from the end user.
A detailed description of partitioning can be found in Section 2.4.1 Data Organization in our White Paper.
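To give a feel for the shape of this transformation, here is an illustrative round-robin split of a flat element vector into 16 banks. The real bank ordering in Blaze is non-sequential and chosen to match the hardware's access pattern (see the white paper); the function name and layout below are hypothetical.

```rust
// Illustrative split of an element vector into 16 banks. Blaze's actual
// partitioning is more involved; this only shows the general shape.
const NUM_BANKS: usize = 16;

fn split_into_banks(elements: &[u64]) -> Vec<Vec<u64>> {
    let mut banks: Vec<Vec<u64>> = vec![Vec::new(); NUM_BANKS];
    for (i, &e) in elements.iter().enumerate() {
        // Round-robin assignment: element i goes to bank i mod 16.
        banks[i % NUM_BANKS].push(e);
    }
    banks
}

fn main() {
    let input: Vec<u64> = (0..32).collect();
    let banks = split_into_banks(&input);
    println!("{:?}", banks[0]); // [0, 16]
    println!("{:?}", banks[1]); // [1, 17]
}
```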
Using Blaze
A full description of the tests, which covers the binary loading process and the calculations, is available in the latest release. The FPGA binary file for NTT is located there as well.
Adding Blaze to an Existing Rust Project
First and foremost, let's connect Blaze to your project by adding it as a cargo dependency.
After this, you will see Blaze in your dependencies:
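As a sketch, the entry in `Cargo.toml` would look something like the following. The package name `ingo-blaze` and the git source are assumptions; check the Blaze repository for the exact crate name and version.

```toml
[dependencies]
ingo-blaze = { git = "https://github.com/ingonyama-zk/blaze.git" }
```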
Create Connection to FPGA using DriverClient
The Blaze architecture is designed so that we can load different drivers on the same FPGA. For this purpose, we separate connecting to the hardware from communicating with it (i.e., from the module API itself).
To create a connection, it is necessary to specify the slot and the type of card we will work with. So far we support only the Xilinx C1100/U250 installed locally, but in the future we will add support for other cards as well as AWS F1 instances.
Load program for NTT on FPGA
After opening the connection, let’s load our driver (a program that describes how to perform specific calculations on the FPGA). To do this we need to specify the path to our file and load it into memory:
Next we need to check if our FPGA is ready to load the driver, and then directly load it on the FPGA:
An important note: we can replace the loaded FPGA binary/image at run-time. That means you can reuse one connection for different versions of one driver, or for other drivers (MSM, for example). Keep in mind that only a single driver can be loaded at a time.
Create the client for NTT module
After we have successfully connected to our FPGA and set up the driver, we need to use this connection somewhere. As we mentioned before, each module must implement the trait DriverPrimitive according to the needs of a particular computation. So let's discuss what is hidden behind each trait function for NTT.
The first step is always the creation of the client module itself. To do this, we need to specify its type and pass an already open connection:
There is only one type for NTT so far, NTT::Ntt, but we can extend this module in the future.
If we look inside NTTClient, we see that, like other modules, it is described by the following structures:
where driver_client contains the general FPGA addresses, and NTTConfig represents the memory address space specific to NTT:
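As a rough sketch of that layout, the client might be pictured as below. The field and type contents here are illustrative stand-ins (placeholder addresses, simplified fields), not Blaze's exact definitions; only the split between a general driver client and an NTT-specific configuration follows the post.

```rust
// Hypothetical sketch of the client layout described above.
struct DriverClient {
    // General FPGA address map shared by all modules (control registers, DMA, ...).
    ctrl_base_addr: u64,
}

struct NTTConfig {
    // Memory address space specific to NTT: the two HBM buffer regions.
    buf0_addr: u64,
    buf1_addr: u64,
}

struct NTTClient {
    driver_client: DriverClient,
    cfg: NTTConfig,
}

fn main() {
    // Placeholder addresses, purely for illustration.
    let client = NTTClient {
        driver_client: DriverClient { ctrl_base_addr: 0x0 },
        cfg: NTTConfig { buf0_addr: 0x0000_0000, buf1_addr: 0x8000_0000 },
    };
    println!("{:#x}", client.cfg.buf1_addr); // 0x80000000
}
```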
Initialize the FPGA
For the NTT module, initialization currently allows us to configure execution either in full NTT computation mode or in a partial-execution mode that we use for debugging NTT.
However, only full calculations are available to users. You can have a look inside the NTT initialize method.
Reading/Writing to the FPGA
NTT, like other modules, implements functions to write and read data from the FPGA.
Let’s dive a bit into what happens to our original byte vector after we pass it to write.
The NTTClient, after receiving the input, starts a preprocessing computation. In this function, the initial vector is distributed across the 16 banks in a particular order.
Next, each bank is written to the corresponding memory address.
You can see that the memory address depends on which memory buffer the host (buf_host) is working with:
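The address selection can be sketched as follows. The buffer base addresses and the per-bank stride below are made up for illustration; the real offsets live in Blaze's NTT configuration, and only the idea of indexing by `buf_host` follows the post.

```rust
// Sketch of how a bank's write address could depend on which buffer the
// host currently owns. All constants are hypothetical.
const BUF_ADDRS: [u64; 2] = [0x0000_0000, 0x8000_0000]; // hypothetical buffer bases
const BANK_SIZE: u64 = 0x0800_0000;                     // hypothetical per-bank stride

fn bank_addr(buf_host: usize, bank: u64) -> u64 {
    // Base of the host-owned buffer, plus the bank's offset within it.
    BUF_ADDRS[buf_host] + bank * BANK_SIZE
}

fn main() {
    // The same bank index lands in a different region depending on `buf_host`.
    println!("{:#x}", bank_addr(0, 3)); // 0x18000000
    println!("{:#x}", bank_addr(1, 3)); // 0x98000000
}
```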
The same applies to the result: we do not get back a single whole vector, but 16 banks that still need to be processed:
So, just as with writing, we need to compute the address depending on the active buffer, and then pass our banks to postprocess. You can see how the function is organized here.
Run computation
While our read and write functions depend on the host buffer, the start of the computation is tied directly to the FPGA buffer. So, by swapping the buf_host and buf_kernel values, we choose which section the calculation starts on.
The starting itself looks like this:
Conclusion
We are excited to see what the community builds with Blaze! And we welcome your contributions to the project on GitHub.
Follow Ingonyama
Twitter: https://twitter.com/Ingo_zk
YouTube: https://www.youtube.com/@ingo_zk
LinkedIn: https://www.linkedin.com/company/ingonyama
Join us: https://www.ingonyama.com/careers