Model Extras¶
In addition to the interface and onnx-related objects, there are a few other objects that may be of use when interacting with models.
HuggingFaceORTModel¶
The HuggingFaceORTModel is an enum that allows you to specify an ORT type for a HuggingFaceModel. Refer to the source code below for the available options.
from opsml import HuggingFaceORTModel, HuggingFaceOnnxArgs

HuggingFaceOnnxArgs(
    ort_type=HuggingFaceORTModel.ORT_SEQUENCE_CLASSIFICATION.value,
    provider="CPUExecutionProvider",
    quantize=False,
)
opsml.HuggingFaceORTModel¶
Bases: str, Enum
Source code in opsml/types/huggingface.py
HuggingFaceTask¶
HuggingFaceTask is an enum that allows you to specify a task type for a HuggingFaceModel. Refer to the source code below for the available options.
from opsml import HuggingFaceTask, HuggingFaceModel

HuggingFaceModel(
    model=model,
    task_type=HuggingFaceTask.SEQUENCE_CLASSIFICATION.value,
)
opsml.HuggingFaceTask¶
Bases: str, Enum
Source code in opsml/types/huggingface.py
TorchSaveArgs¶
TorchSaveArgs are optional arguments for saving a TorchModel object. Only as_state_dict is currently supported. If True, the TorchModel object's state dict will be saved instead of the full model, and the model architecture will need to be provided at load time.
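A minimal sketch of how this argument can be passed via save_args when building a TorchModel interface. The tiny model and sample tensor here are placeholders for illustration only; the dict form mirrors the save_args usage shown in the OnnxModel example further below.

from opsml import TorchModel
import torch
from torch import nn

# Placeholder model and sample data, just for illustration.
model = nn.Linear(10, 1)
sample = torch.randn(1, 10)

interface = TorchModel(
    model=model,
    sample_data=sample,
    save_args={"as_state_dict": True},  # save only the state dict; supply the architecture at load time
)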
opsml.types.TorchSaveArgs¶
Bases: BaseModel

Torch save arguments.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
as_state_dict | bool | Indicates to save the torch model in state_dict format. If True, the model architecture will need to be provided at load time. | required |

Source code in opsml/types/model.py
OnnxModel¶
OnnxModel is a pydantic class that is used to store converted onnx models. If you bring your own (BYO) onnx model, you will need to supply an OnnxModel object to the ModelInterface class.
from opsml import OnnxModel, TorchModel

import tempfile

import onnx
import onnxruntime as ort

# Super Resolution model definition in PyTorch
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.onnx
import torch.utils.model_zoo as model_zoo


class SuperResolutionNet(nn.Module):
    def __init__(self, upscale_factor, inplace=False):
        super(SuperResolutionNet, self).__init__()

        self.relu = nn.ReLU(inplace=inplace)
        self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
        self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
        self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
        self.conv4 = nn.Conv2d(32, upscale_factor**2, (3, 3), (1, 1), (1, 1))
        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)

        self._initialize_weights()

    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        x = self.relu(self.conv3(x))
        x = self.pixel_shuffle(self.conv4(x))
        return x

    def _initialize_weights(self):
        init.orthogonal_(self.conv1.weight, init.calculate_gain("relu"))
        init.orthogonal_(self.conv2.weight, init.calculate_gain("relu"))
        init.orthogonal_(self.conv3.weight, init.calculate_gain("relu"))
        init.orthogonal_(self.conv4.weight)


# Create the super-resolution model by using the above model definition.
torch_model = SuperResolutionNet(upscale_factor=3)

# Load pretrained model weights
model_url = "https://s3.amazonaws.com/pytorch/test_data/export/superres_epoch100-44c6958e.pth"
batch_size = 1  # just a random number


# Initialize model with the pretrained weights
def map_location(storage, loc):
    return storage


if torch.cuda.is_available():
    map_location = None

torch_model.load_state_dict(model_zoo.load_url(model_url, map_location=map_location))

# set the model to inference mode
torch_model.eval()

# Input to the model
x = torch.randn(batch_size, 1, 224, 224, requires_grad=True)
torch_model(x)

with tempfile.TemporaryDirectory() as tmpdir:
    onnx_path = f"{tmpdir}/super_resolution.onnx"

    # Export the model
    torch.onnx.export(
        torch_model,  # model being run
        x,  # model input (or a tuple for multiple inputs)
        onnx_path,  # where to save the model (can be a file or file-like object)
        export_params=True,  # store the trained parameter weights inside the model file
        opset_version=10,  # the ONNX version to export the model to
        do_constant_folding=True,  # whether to execute constant folding for optimization
        input_names=["input"],  # the model's input names
        output_names=["output"],  # the model's output names
        dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},  # variable length axes
    )

    onnx_model = onnx.load(onnx_path)
    ort_sess = ort.InferenceSession(onnx_model.SerializeToString())

onnx_model = OnnxModel(onnx_version="1.14.0", sess=ort_sess)

interface = TorchModel(
    model=torch_model,
    sample_data=x,
    onnx_model=onnx_model,
    save_args={"as_state_dict": True},
)
opsml.types.OnnxModel¶
Bases: BaseModel
Source code in opsml/types/model.py
sess_to_path(path)¶
Helper method for taking an existing onnx model session and saving it to a path.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
path | Path | Path to save onnx model | required |

Source code in opsml/types/model.py
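A minimal usage sketch, assuming the onnx_model instance created in the OnnxModel example above; the destination path is purely illustrative.

from pathlib import Path

# Persist the wrapped onnx session to disk (the path shown here is an example only)
onnx_model.sess_to_path(Path("local_artifacts"))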