Txeo v0.1
A Developer-Friendly TensorFlow C++ Wrapper
txeo::Predictor< T > Class Template Reference

Class that deals with the main tasks of prediction (inference).

#include <Predictor.h>


Public Types

using TensorInfo = std::vector< std::pair< std::string, txeo::TensorShape > >
 
using TensorIdent = std::vector< std::pair< std::string, txeo::Tensor< T > > >
 

Public Member Functions

 Predictor ()=delete
 
 Predictor (const Predictor &)=delete
 
 Predictor (Predictor &&)=delete
 
Predictor & operator= (const Predictor &)=delete
 
Predictor & operator= (Predictor &&)=delete
 
 ~Predictor ()
 
 Predictor (std::filesystem::path model_path)
 Constructs a Predictor from a TensorFlow SavedModel directory.
 
const TensorInfo & get_input_metadata () const noexcept
 Returns the input tensor metadata for the loaded model.
 
const TensorInfo & get_output_metadata () const noexcept
 Returns the output tensor metadata for the loaded model.
 
std::optional< txeo::TensorShape > get_input_metadata_shape (const std::string &name) const
 Returns the shape for a specific input tensor by name.
 
std::optional< txeo::TensorShape > get_output_metadata_shape (const std::string &name) const
 Returns the shape for a specific output tensor by name.
 
std::vector< DeviceInfo > get_devices () const
 Returns the available compute devices.
 
txeo::Tensor< T > predict (const txeo::Tensor< T > &input) const
 Perform single input/single output inference.
 
std::vector< txeo::Tensor< T > > predict_batch (const TensorIdent &inputs) const
 Perform batch inference with multiple named inputs.
 
void enable_xla (bool enable)
 Enable/disable XLA (Accelerated Linear Algebra) compilation.
 

Detailed Description

template<typename T = float>
class txeo::Predictor< T >

Class that deals with the main tasks of prediction (inference).

Template Parameters
T  Specifies the data type of the model involved

Definition at line 45 of file Predictor.h.

Member Typedef Documentation

◆ TensorIdent

template<typename T = float>
using txeo::Predictor< T >::TensorIdent = std::vector<std::pair<std::string, txeo::Tensor<T> >>

Definition at line 48 of file Predictor.h.

◆ TensorInfo

template<typename T = float>
using txeo::Predictor< T >::TensorInfo = std::vector<std::pair<std::string, txeo::TensorShape> >

Definition at line 47 of file Predictor.h.

Constructor & Destructor Documentation

◆ Predictor() [1/4]

template<typename T = float>
txeo::Predictor< T >::Predictor ( )
explicit delete

◆ Predictor() [2/4]

template<typename T = float>
txeo::Predictor< T >::Predictor ( const Predictor< T > &  )
delete

◆ Predictor() [3/4]

template<typename T = float>
txeo::Predictor< T >::Predictor ( Predictor< T > &&  )
delete

◆ ~Predictor()

template<typename T = float>
txeo::Predictor< T >::~Predictor ( )

◆ Predictor() [4/4]

template<typename T = float>
txeo::Predictor< T >::Predictor ( std::filesystem::path  model_path)
explicit

Constructs a Predictor from a TensorFlow SavedModel directory.

The directory must contain a valid SavedModel (typically with a .pb file). For best performance, use models with frozen weights.

Parameters
model_path  Path to the directory containing the SavedModel (.pb file)
Exceptions
PredictorError
Note
Freezing Recommendation (Python example):
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load SavedModel
model = tf.saved_model.load("path/to/trained_model")

# Freeze the variables into constants and save the graph
concrete_func = model.signatures["serving_default"]
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(
    frozen_func.graph.as_graph_def(),
    "path/to/frozen_model",
    "frozen.pb",
    as_text=False
)

Member Function Documentation

◆ enable_xla()

template<typename T = float>
void txeo::Predictor< T >::enable_xla ( bool  enable)

Enable/disable XLA (Accelerated Linear Algebra) compilation.

Parameters
enable  Whether to enable XLA optimizations
Note
Requires reloading the model; prefer calling this before the first inference.
Example:
predictor.enable_xla(true); // Enable hardware acceleration
auto result = predictor.predict(input); // Uses XLA-optimized graph

◆ get_devices()

template<typename T = float>
std::vector< DeviceInfo > txeo::Predictor< T >::get_devices ( ) const

Returns the available compute devices.

Returns
Vector of DeviceInfo structures
Example:
for (const auto& device : predictor.get_devices()) {
  std::cout << device.device_type << " device: " << device.name << "\n";
}

◆ get_input_metadata()

template<typename T = float>
const TensorInfo & txeo::Predictor< T >::get_input_metadata ( ) const
noexcept

Returns the input tensor metadata for the loaded model.

Returns
const reference to vector of (name, shape) pairs
Example:
txeo::Predictor<float> predictor("model_dir");
for (const auto& [name, shape] : predictor.get_input_metadata()) {
  std::cout << "Input: " << name << " Shape: " << shape << "\n";
}

◆ get_input_metadata_shape()

template<typename T = float>
std::optional< txeo::TensorShape > txeo::Predictor< T >::get_input_metadata_shape ( const std::string &  name) const

Returns the shape for a specific input tensor by name.

Parameters
name  Tensor name from the model signature
Returns
std::optional containing shape if found
Example:
if (auto shape = predictor.get_input_metadata_shape("image_input")) {
  std::cout << "Input requires shape: " << *shape << "\n";
}

◆ get_output_metadata()

template<typename T = float>
const TensorInfo & txeo::Predictor< T >::get_output_metadata ( ) const
noexcept

Returns the output tensor metadata for the loaded model.

Returns
const reference to vector of (name, shape) pairs
Example:
auto outputs = predictor.get_output_metadata();
std::cout << "Model produces " << outputs.size() << " outputs\n";

◆ get_output_metadata_shape()

template<typename T = float>
std::optional< txeo::TensorShape > txeo::Predictor< T >::get_output_metadata_shape ( const std::string &  name) const

Returns the shape for a specific output tensor by name.

Parameters
name  Tensor name from the model signature
Returns
std::optional containing shape if found
Example:
auto output_shape = predictor.get_output_metadata_shape("embeddings")
                        .value_or(txeo::TensorShape{0});

◆ operator=() [1/2]

template<typename T = float>
Predictor & txeo::Predictor< T >::operator= ( const Predictor< T > &  )
delete

◆ operator=() [2/2]

template<typename T = float>
Predictor & txeo::Predictor< T >::operator= ( Predictor< T > &&  )
delete

◆ predict()

template<typename T = float>
txeo::Tensor< T > txeo::Predictor< T >::predict ( const txeo::Tensor< T > &  input) const

Perform single input/single output inference.

Parameters
input  Input tensor matching the model's expected shape
Returns
Output tensor with inference results
Exceptions
PredictorError
Example:
Tensor<float> input({2, 2}, {1.0f, 2.0f, 3.0f, 4.0f});
auto output = predictor.predict(input);
std::cout << "Prediction: " << output(0) << "\n";

◆ predict_batch()

template<typename T = float>
std::vector< txeo::Tensor< T > > txeo::Predictor< T >::predict_batch ( const TensorIdent &  inputs) const

Perform batch inference with multiple named inputs.

Parameters
inputs  Vector of (name, tensor) pairs
Returns
Vector of output tensors
Exceptions
PredictorError
Example:
std::vector<std::pair<std::string, txeo::Tensor<float>>> inputs {
  {"image", image_tensor},
  {"metadata", meta_tensor}
};
auto results = predictor.predict_batch(inputs);

The documentation for this class was generated from the following files: