C++#

This section presents examples that use the SmartRedis C++ API to interact with the RedisAI tensor, model, and script data types. It also demonstrates use of the SmartRedis DataSet API.

Note

The C++ API examples rely on the SSDB environment variable being set to the address and port of the Redis database.
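
For example, for a database listening at the hypothetical address 127.0.0.1 and port 6379, SSDB could be set in the launching shell as shown below; for a clustered database, SSDB is a comma-separated list of address:port pairs.

export SSDB="127.0.0.1:6379"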

Note

The C++ API examples are written to connect to a clustered database or clustered SmartSim Orchestrator. To connect to a single-shard (single compute host) database, set the Client constructor cluster flag to false.
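
As a minimal sketch, assuming the Client constructor overload that accepts a boolean cluster flag, a single-shard connection might look like:

// Sketch only: connect to a single-shard (non-clustered) database, assuming
// the Client constructor overload that takes a boolean cluster flag.
SmartRedis::Client client(false);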

Tensors#

The following example shows how to send and receive a tensor using the SmartRedis C++ client API.

/*
 * BSD 2-Clause License
 *
 * Copyright (c) 2021-2024, Hewlett Packard Enterprise
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "client.h"
#include <vector>
#include <string>
#include <iostream>
#include <cstdlib>

int main(int argc, char* argv[]) {

    // Initialize tensor dimensions
    size_t dim1 = 3;
    size_t dim2 = 2;
    size_t dim3 = 5;
    std::vector<size_t> dims = {3, 2, 5};

    // Initialize a tensor to random values.  Note that a dynamically
    // allocated tensor via malloc is also usable with the client
    // API.  The std::vector is used here for brevity.
    size_t n_values = dim1 * dim2 * dim3;
    std::vector<double> input_tensor(n_values, 0);
    for(size_t i=0; i<n_values; i++)
        input_tensor[i] = 2.0*rand()/(double)RAND_MAX - 1.0;

    // Initialize a SmartRedis client
    SmartRedis::Client client(__FILE__);

    // Put the tensor in the database
    std::string key = "3d_tensor";
    client.put_tensor(key, input_tensor.data(), dims,
                      SRTensorTypeDouble, SRMemLayoutContiguous);

    // Retrieve the tensor from the database using the unpack feature.
    std::vector<double> unpack_tensor(n_values, 0);
    client.unpack_tensor(key, unpack_tensor.data(), {n_values},
                        SRTensorTypeDouble, SRMemLayoutContiguous);

    // Print the values retrieved with the unpack feature
    std::cout<<"Comparison of the sent and "\
                "retrieved (via unpack) values: "<<std::endl;
    for(size_t i=0; i<n_values; i++)
        std::cout<<"Sent: "<<input_tensor[i]<<" "
                 <<"Received: "<<unpack_tensor[i]<<std::endl;

    // Retrieve the tensor from the database using the get feature.
    SRTensorType get_type;
    std::vector<size_t> get_dims;
    void* get_tensor;
    client.get_tensor(key, get_tensor, get_dims, get_type, SRMemLayoutNested);

    // Print the values retrieved with the get feature
    std::cout<<"Comparison of the sent and "\
                "retrieved (via get) values: "<<std::endl;
    for(size_t i=0, c=0; i<dims[0]; i++)
        for(size_t j=0; j<dims[1]; j++)
            for(size_t k=0; k<dims[2]; k++, c++) {
                std::cout<<"Sent: "<<input_tensor[c]<<" "
                         <<"Received: "
                         <<((double***)get_tensor)[i][j][k]<<std::endl;
            }

    return 0;
}

DataSets#

The C++ client can store and retrieve tensors and metadata in datasets. For further information about datasets, please refer to the Dataset section of the Data Structures documentation page.

The code below shows how to store and retrieve tensors and metadata that belong to a DataSet.

/*
 * BSD 2-Clause License
 *
 * Copyright (c) 2021-2024, Hewlett Packard Enterprise
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "client.h"
#include <vector>
#include <string>
#include <iostream>
#include <cstdint>
#include <cstdlib>

int main(int argc, char* argv[]) {

    // Initialize tensor dimensions
    size_t dim1 = 3;
    size_t dim2 = 2;
    size_t dim3 = 5;
    size_t n_values = dim1 * dim2 * dim3;
    std::vector<size_t> dims = {3, 2, 5};

    // Initialize two tensors to random values
    std::vector<double> tensor_1(n_values, 0);
    std::vector<int64_t> tensor_2(n_values, 0);

    for(size_t i=0; i<n_values; i++) {
        tensor_1[i] = 2.0*rand()/(double)RAND_MAX - 1.0;
        tensor_2[i] = rand();
    }

    // Initialize three metadata values we will add
    // to the DataSet
    uint32_t meta_scalar_1 = 1;
    uint32_t meta_scalar_2 = 2;
    int64_t meta_scalar_3 = 3;

    // Initialize a SmartRedis client
    SmartRedis::Client client(__FILE__);

    // Create a DataSet
    SmartRedis::DataSet dataset("example_dataset");

    // Add tensors to the DataSet
    dataset.add_tensor("tensor_1", tensor_1.data(), dims,
                       SRTensorTypeDouble, SRMemLayoutContiguous);

    dataset.add_tensor("tensor_2", tensor_2.data(), dims,
                       SRTensorTypeInt64, SRMemLayoutContiguous);

    // Add metadata scalar values to the DataSet
    dataset.add_meta_scalar("meta_field_1", &meta_scalar_1, SRMetadataTypeUint32);
    dataset.add_meta_scalar("meta_field_1", &meta_scalar_2, SRMetadataTypeUint32);
    dataset.add_meta_scalar("meta_field_2", &meta_scalar_3, SRMetadataTypeInt64);

    // Put the DataSet in the database
    client.put_dataset(dataset);

    // Retrieve the DataSet from the database
    SmartRedis::DataSet retrieved_dataset =
        client.get_dataset("example_dataset");

    // Retrieve one of the tensors
    std::vector<int64_t> unpack_dataset_tensor(n_values, 0);
    retrieved_dataset.unpack_tensor("tensor_2",
                                    unpack_dataset_tensor.data(),
                                    {n_values},
                                    SRTensorTypeInt64,
                                    SRMemLayoutContiguous);

    // Print out the retrieved values
    std::cout<<"Comparing sent and received "\
               "values for tensor_2: "<<std::endl;

    for(size_t i=0; i<n_values; i++)
        std::cout<<"Sent: "<<tensor_2[i]<<" "
                 <<"Received: "
                 <<unpack_dataset_tensor[i]<<std::endl;

    // Retrieve a metadata field
    size_t get_n_meta_values;
    void* get_meta_values;
    SRMetaDataType get_type;
    dataset.get_meta_scalars("meta_field_1",
                             get_meta_values,
                             get_n_meta_values,
                             get_type);

    // Print out the metadata field values
    for(size_t i=0; i<get_n_meta_values; i++)
        std::cout<<"meta_field_1 value "<<i<<" = "
                 <<((uint32_t*)get_meta_values)[i]<<std::endl;

    return 0;
}

Models#

The following example shows how to store and use a DL model in the database with the C++ Client. The model is stored as a file in the ../../../common/mnist_data/ path relative to the compiled executable. Note that this example also sets and executes a preprocessing script.

/*
 * BSD 2-Clause License
 *
 * Copyright (c) 2021-2024, Hewlett Packard Enterprise
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "client.h"
#include <vector>
#include <string>
#include <fstream>
#include <sstream>
#include <iostream>
#include <cstring>

int main(int argc, char* argv[]) {

    // Initialize a vector that will hold the input image tensor
    size_t n_values = 1*1*28*28;
    std::vector<float> img(n_values, 0);

    // Load the mnist image from a file
    std::string image_file = "mnist_data/one.raw";
    std::ifstream fin(image_file, std::ios::binary);
    std::ostringstream ostream;
    ostream << fin.rdbuf();
    fin.close();

    const std::string tmp = ostream.str();
    std::memcpy(img.data(), tmp.data(), img.size()*sizeof(float));

    // Initialize a SmartRedis client to connect to the Redis database
    SmartRedis::Client client(__FILE__);

    // Use the client to set a model in the database from a file
    std::string model_key = "mnist_model";
    std::string model_file = "mnist_data/mnist_cnn.pt";
    client.set_model_from_file(model_key, model_file, "TORCH", "CPU", 20);

    // Use the client to set a script in the database from a file
    std::string script_key = "mnist_script";
    std::string script_file = "mnist_data/data_processing_script.txt";
    client.set_script_from_file(script_key, "CPU", script_file);

    // Declare keys that we will use in forthcoming client commands
    std::string in_key = "mnist_input";
    std::string script_out_key = "mnist_processed_input";
    std::string out_key = "mnist_output";

    // Put the tensor that was loaded from file into the database
    client.put_tensor(in_key, img.data(), {1,1,28,28},
                      SRTensorTypeFloat, SRMemLayoutContiguous);

    // Run the preprocessing script on the input tensor
    client.run_script("mnist_script", "pre_process", {in_key}, {script_out_key});

    // Run the model using the output of the preprocessing script
    client.run_model("mnist_model", {script_out_key}, {out_key});

    // Retrieve the output of the model
    std::vector<float> result(10, 0);
    client.unpack_tensor(out_key, result.data(), {10},
                         SRTensorTypeFloat, SRMemLayoutContiguous);

    // Print out the results of the model evaluation
    for(size_t i=0; i<result.size(); i++) {
        std::cout<<"Result["<<i<<"] = "<<result[i]<<std::endl;
    }

    return 0;
}

Scripts#

The example in Models shows how to store and use a PyTorch script in the database with the C++ Client. The script is stored as a file in the ../../../common/mnist_data/ path relative to the compiled executable. Note that this example also sets and executes a PyTorch model.
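
For quick reference, the script-related calls from that example can be isolated into the minimal sketch below; the key and file names are the same illustrative ones used in the Models example, and the "mnist_input" tensor is assumed to already exist in the database.

#include "client.h"
#include <string>

int main(int argc, char* argv[]) {

    // Connect to the database (SSDB must be set as described above)
    SmartRedis::Client client(__FILE__);

    // Set a TorchScript file in the database for CPU execution
    client.set_script_from_file("mnist_script", "CPU",
                                "mnist_data/data_processing_script.txt");

    // Run the pre_process function defined in the script, reading the
    // tensor stored at "mnist_input" and writing the result to
    // "mnist_processed_input"
    client.run_script("mnist_script", "pre_process",
                      {"mnist_input"}, {"mnist_processed_input"});

    return 0;
}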

Parallel (MPI) execution#

Here, the example shown in Models and Scripts is adapted to run in parallel using MPI. The functionality is the same, but keys are prefixed with the MPI rank to prevent key collisions across ranks. Note that only one model and one script are set, and they are shared across all ranks.

For completeness, the pre-processing script source code is also shown.

C++ program

/*
 * BSD 2-Clause License
 *
 * Copyright (c) 2021-2024, Hewlett Packard Enterprise
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "client.h"
#include <mpi.h>
#include <vector>
#include <string>
#include <string_view>
#include <fstream>
#include <sstream>
#include <iostream>
#include <cstring>

void run_mnist(const std::string& model_name,
               const std::string& script_name,
               SmartRedis::Client& client)
{
    // Get the MPI rank
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Initialize a vector that will hold the input image tensor
    size_t n_values = 1*1*28*28;
    std::vector<float> img(n_values, 0);

    // Load the mnist image from a file using MPI rank 0
    if (rank == 0) {
        std::string image_file = "mnist_data/one.raw";
        std::ifstream fin(image_file, std::ios::binary);
        std::ostringstream ostream;
        ostream << fin.rdbuf();
        fin.close();

        const std::string tmp = ostream.str();
        std::memcpy(img.data(), tmp.data(), img.size()*sizeof(float));
    }

    // Broadcast the image to all MPI ranks.  This is more efficient
    // than all ranks loading the same file.  This is specific
    // to this example.
    MPI_Bcast(img.data(), 28*28, MPI_FLOAT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    if(rank==0)
        std::cout<<"All ranks have MNIST image"<<std::endl;

    // Declare keys that we will use in forthcoming client commands
    std::string in_key = "mnist_input_rank_" + std::to_string(rank);
    std::string script_out_key = "mnist_processed_input_rank_" +
                                 std::to_string(rank);
    std::string out_key = "mnist_output_rank_" + std::to_string(rank);

    // Put the image tensor into the database
    client.put_tensor(in_key, img.data(), {1,1,28,28},
                      SRTensorTypeFloat, SRMemLayoutContiguous);

    // Run the preprocessing script
    client.run_script(script_name, "pre_process",
                      {in_key}, {script_out_key});

    // Run the model
    client.run_model(model_name, {script_out_key}, {out_key});

    // Get the result of the model
    std::vector<float> result(1*10);
    client.unpack_tensor(out_key, result.data(), {10},
                         SRTensorTypeFloat, SRMemLayoutContiguous);

    // Print out the results of the model for Rank 0
    if (rank == 0)
        for(size_t i=0; i<result.size(); i++)
            std::cout<<"Rank 0: Result["<<i<<"] = "<<result[i]<<std::endl;

    return;
}

int main(int argc, char* argv[]) {

    // Initialize the MPI comm world
    MPI_Init(&argc, &argv);

    // Retrieve the MPI rank
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::string logger_name("Client ");
    logger_name += std::to_string(rank);

    // Initialize a Client object
    SmartRedis::Client client(logger_name);

    // Set the model and script that will be used by all ranks
    // from MPI rank 0.
    if (rank == 0) {
        // Build model key, file name, and then set model
        // from file using client API
        std::string model_key = "mnist_model";
        std::string model_file = "mnist_data/mnist_cnn.pt";
        client.set_model_from_file(model_key, model_file,
                                   "TORCH", "CPU", 20);

        // Build script key, file name, and then set script
        // from file using client API
        std::string script_key = "mnist_script";
        std::string script_file = "mnist_data/data_processing_script.txt";
        client.set_script_from_file(script_key, "CPU", script_file);

        // Get model and script to illustrate client API
        // functionality, but this is not necessary for this example.
        std::string_view model = client.get_model(model_key);
        std::string_view script = client.get_script(script_key);
    }

    // Run the MNIST model
    MPI_Barrier(MPI_COMM_WORLD);
    run_mnist("mnist_model", "mnist_script", client);

    if (rank == 0)
        std::cout<<"Finished SmartRedis MNIST example."<<std::endl;

    // Finalize MPI Comm World
    MPI_Finalize();

    return 0;
}

Python Pre-Processing

def pre_process(inp):
    mean = torch.zeros(1).float().to(inp.device)
    mean[0] = 2.0
    temp = inp.float() * mean
    return temp
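
In the C++ programs above, pre_process is the function name passed to client.run_script: the tensor stored at the input key is passed in as inp, and the tensor returned by the function is stored at the requested output key.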