Export¶
Export Molfun models to ONNX and TorchScript formats for deployment in production environments.
Quick Start¶
from molfun import MolfunStructureModel
model = MolfunStructureModel.from_pretrained("openfold_v2")
# Export to ONNX
model.export_onnx("model.onnx", opset_version=17)
# Export to TorchScript
model.export_torchscript("model.pt")
# Or use the standalone functions
from molfun.export import export_onnx, export_torchscript
export_onnx(model, "model.onnx")
export_torchscript(model, "model.pt")
export_onnx¶
export_onnx(model, path: str, seq_len: int = 256, opset_version: int = 17, dynamic_axes: dict | None = None, simplify: bool = False, check: bool = True, device: str = 'cpu') -> Path
Export a MolfunStructureModel to ONNX format.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | MolfunStructureModel \| nn.Module | MolfunStructureModel or nn.Module with adapter. | required |
| path | str | Output .onnx file path. | required |
| seq_len | int | Sequence length for the dummy input. | 256 |
| opset_version | int | ONNX opset version. | 17 |
| dynamic_axes | dict \| None | Custom dynamic axes mapping. If None, the seq_len dimension is dynamic by default. | None |
| simplify | bool | Run onnx-simplifier after export (requires onnxsim). | False |
| check | bool | Validate the exported model with onnx.checker. | True |
| device | str | Device for tracing ("cpu" recommended). | 'cpu' |
Returns:

| Type | Description |
|---|---|
| Path | Path to the exported ONNX file. |
Raises:

| Type | Description |
|---|---|
| ImportError | If the onnx package is not installed. |
Export a model to ONNX format:

from molfun.export import export_onnx
export_onnx(
    model,
    "model.onnx",
    opset_version=17,
    dynamic_axes={"sequence": {0: "batch", 1: "length"}},
)
Loading an ONNX Model¶
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
# Dummy int64 inputs of shape (batch, seq_len); dtypes/shapes assumed
aatype_np = np.zeros((1, 256), dtype=np.int64)
residue_index_np = np.arange(256, dtype=np.int64)[None, :]
inputs = {"aatype": aatype_np, "residue_index": residue_index_np}
outputs = session.run(None, inputs)
export_torchscript¶
export_torchscript(model, path: str, seq_len: int = 256, mode: Literal['trace', 'script'] = 'trace', optimize: bool = True, device: str = 'cpu', check: bool = True) -> Path
Export a MolfunStructureModel to TorchScript.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | MolfunStructureModel \| nn.Module | MolfunStructureModel or nn.Module with adapter. | required |
| path | str | Output .pt file path. | required |
| seq_len | int | Sequence length for the tracing dummy input. | 256 |
| mode | Literal['trace', 'script'] | "trace" (default, more compatible) or "script". | 'trace' |
| optimize | bool | Apply torch.jit.optimize_for_inference. | True |
| device | str | Device for tracing ("cpu" recommended). | 'cpu' |
| check | bool | Run a validation forward pass after export. | True |
Returns:

| Type | Description |
|---|---|
| Path | Path to the exported TorchScript file. |
Export a model to TorchScript format via tracing or scripting:

from molfun.export import export_torchscript
export_torchscript(
    model,
    "model.pt",
    mode="trace",
)
Loading a TorchScript Model¶
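An exported module is loaded with plain torch.jit.load; molfun itself is not needed at inference time. The snippet below is a self-contained sketch: the tiny Double module is a stand-in for a real export (not a molfun model), so the save/load round trip runs as-is.

```python
import torch
import torch.nn as nn

# Stand-in for a real export: trace a trivial module and save it,
# so the loading pattern below is runnable end to end.
class Double(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

traced = torch.jit.trace(Double(), torch.zeros(1, 4))
traced.save("model.pt")

# Loading works the same way for any exported TorchScript file
loaded = torch.jit.load("model.pt", map_location="cpu")
loaded.eval()
with torch.no_grad():
    out = loaded(torch.ones(1, 4))
# out is a (1, 4) tensor of 2.0s
```

For a real molfun export, replace the stand-in with the file produced by export_torchscript and feed inputs of the shapes used at tracing time.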
Comparison¶
| Feature | ONNX | TorchScript |
|---|---|---|
| Runtime | ONNX Runtime, TensorRT | PyTorch, LibTorch |
| Language support | Python, C++, C#, Java, JS | Python, C++ |
| Optimization | Graph optimizations, quantization | JIT optimizations |
| Dynamic shapes | Via dynamic_axes | Native support |
| Best for | Cross-platform deployment | PyTorch ecosystem |
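On the dynamic-shapes row: a TorchScript trace records concrete tensor ops, so it accepts other sequence lengths whenever every recorded op is shape-agnostic (elementwise math, matmul over the feature dimension, and so on). A minimal sketch with a hypothetical module, not molfun-specific:

```python
import torch
import torch.nn as nn

class AddOne(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2 + 1

# Trace at length 4, then run at length 7: elementwise ops carry over.
traced = torch.jit.trace(AddOne(), torch.zeros(1, 4))
out = traced(torch.ones(1, 7))
# out.shape == torch.Size([1, 7]), every entry 3.0
```

ONNX gets the same flexibility only if the varying dimensions were declared via dynamic_axes at export time.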