
PEFT -- Parameter-Efficient Fine-Tuning

The molfun.training.peft module provides utilities for applying parameter-efficient fine-tuning methods (LoRA, IA³) to any Molfun model.

Quick Start

from molfun import MolfunStructureModel
from molfun.training.peft import MolfunPEFT

model = MolfunStructureModel.from_pretrained("openfold_v2")

# Apply LoRA adapters
peft = MolfunPEFT.lora(model, rank=8, alpha=16, target_modules=["q_proj", "v_proj"])

# Check trainable parameters
peft.summary()
# Total params: 93.2M | Trainable: 0.3M (0.32%)

# After training, merge adapters into base weights
peft.merge()

# Or save/load adapters separately
peft.save("./lora_adapters")
peft = MolfunPEFT.load("./lora_adapters", model)
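
With adapters applied, training is an ordinary PyTorch loop over only the adapter parameters. The sketch below continues the Quick Start above; the dataloader, forward call, and loss_fn are placeholders for your own task code, not part of the Molfun API.

import torch

optimizer = torch.optim.Adam(peft.trainable_parameters(), lr=1e-4)

for batch in dataloader:                    # your own DataLoader (placeholder)
    optimizer.zero_grad()
    loss = loss_fn(model(batch), batch)     # placeholder forward call and loss
    loss.backward()
    optimizer.step()

peft.merge()   # or peft.save("./lora_adapters") to keep the adapters separate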

MolfunPEFT

Unified PEFT interface. Uses HuggingFace PEFT when available, falls back to built-in LoRALinear otherwise.

Supports: LoRA, IA³ (via HF PEFT), built-in LoRA (fallback).
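
Which backend you get therefore depends on whether the HuggingFace peft package is importable in your environment. The check below is a plain import probe, not part of the MolfunPEFT API:

import importlib.util

# True: HF PEFT backend available (LoRA and IA³); False: built-in LoRA fallback only
print(importlib.util.find_spec("peft") is not None)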

Usage

adapter = OpenFoldAdapter(model=model)

LoRA

peft = MolfunPEFT.lora(rank=8, target_modules=["linear_q", "linear_v"])
peft.apply(adapter.model.evoformer)

IA³ (requires HF PEFT)

peft = MolfunPEFT.ia3(target_modules=["linear_v"], feedforward_modules=["ff_linear1"])
peft.apply(adapter.model.evoformer)

Training: only adapted params

optimizer = torch.optim.Adam(peft.trainable_parameters(), lr=1e-4)

Export: merge into base weights

peft.merge()

lora classmethod

lora(rank: int = 8, alpha: float = 16.0, dropout: float = 0.0, target_modules: list[str] | None = None, use_hf: bool = True) -> MolfunPEFT

Create a LoRA adapter.

ia3 classmethod

ia3(target_modules: list[str] | None = None, feedforward_modules: list[str] | None = None) -> MolfunPEFT

Create an IA³ adapter (requires HuggingFace PEFT).

apply

apply(model: Module) -> nn.Module

Apply PEFT method to the model. Freezes base params automatically.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | nn.Module to adapt (e.g. adapter.model or adapter.model.evoformer). | required |

Returns:

| Type | Description |
| --- | --- |
| Module | The adapted model (may be wrapped by PeftModel if using HF backend). |
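
Because apply() freezes the base parameters, only the adapter weights should report requires_grad=True afterwards. A quick check, reusing the usage pattern from the docstring above (adapter and the evoformer submodule come from that example):

peft = MolfunPEFT.lora(rank=8, target_modules=["linear_q", "linear_v"])
adapted = peft.apply(adapter.model.evoformer)

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")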

trainable_parameters

trainable_parameters() -> list[nn.Parameter]

Return only the trainable (PEFT) parameters.

merge

merge() -> None

Merge adapted weights into base model (for inference/export).

unmerge

unmerge() -> None

Restore base weights (undo merge).

save

save(path: str) -> None

Save PEFT adapter weights.

load

load(path: str) -> None

Load PEFT adapter weights.

lora (class method)

Apply LoRA (Low-Rank Adaptation) to the model.

peft = MolfunPEFT.lora(
    model,
    rank=8,
    alpha=16,
    dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | MolfunStructureModel | required | Model to adapt |
| rank | int | 8 | LoRA rank (lower = fewer parameters) |
| alpha | float | 16.0 | LoRA scaling factor |
| dropout | float | 0.0 | Dropout applied to LoRA layers |
| target_modules | list[str] \| None | None | Module name patterns to target. None targets all linear layers. |

Returns: MolfunPEFT
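
Choosing target_modules is easier after listing the model's linear layers. This is plain PyTorch introspection; the names printed below are illustrative rather than the actual Molfun module names.

import torch.nn as nn

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        print(name, tuple(module.weight.shape))
# e.g. "trunk.layers.0.attention.q_proj", "trunk.layers.0.attention.v_proj", ...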


ia3 (class method)

Apply IA³ (Infused Adapter by Inhibiting and Amplifying Inner Activations).

peft = MolfunPEFT.ia3(
    model,
    target_modules=["k_proj", "v_proj", "down_proj"],
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | MolfunStructureModel | required | Model to adapt |
| target_modules | list[str] \| None | None | Module name patterns to target |

Returns: MolfunPEFT
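
Unlike LoRA, IA³ learns a single scaling vector per targeted activation rather than low-rank weight updates. A minimal illustration of the core operation (not the Molfun or HF PEFT implementation):

import torch
import torch.nn as nn

class IA3Scale(nn.Module):
    """Element-wise learned rescaling of an activation, the core IA³ operation."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))  # initialised to the identity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale  # one learned scalar per feature channel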


apply

Apply the PEFT configuration to the model. Called automatically by lora() and ia3().

peft.apply()

trainable_parameters

Return only the trainable (adapter) parameters, ready to hand to an optimizer.

for param in peft.trainable_parameters():
    print(param.shape)

Returns: list[nn.Parameter]


merge

Merge adapter weights into the base model weights.

peft.merge()

After merging, the model behaves as a standard model with no adapter overhead at inference time.
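
Once merged, the model is an ordinary PyTorch module and can be exported with the usual tools. A minimal sketch using torch.save on the state dict; any Molfun-specific export helpers are not shown here, and the filename is hypothetical.

peft.merge()                      # fold adapter weights into the base layers
model.eval()
torch.save(model.state_dict(), "finetuned_structure_model.pt")

# To keep training with separate adapters afterwards, undo the merge:
peft.unmerge()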


unmerge

Reverse a previous merge(), restoring the separate adapter weights.

peft.unmerge()

save

Save adapter weights to disk (without the base model).

peft.save("./lora_adapters")
| Parameter | Type | Description |
| --- | --- | --- |
| path | str \| Path | Directory to save adapter weights and config |

load (class method)

Load adapter weights from disk and apply them to a model.

peft = MolfunPEFT.load("./lora_adapters", model)
| Parameter | Type | Description |
| --- | --- | --- |
| path | str \| Path | Directory containing adapter weights |
| model | MolfunStructureModel | Base model to attach adapters to |

Returns: MolfunPEFT


summary

Print a summary of total vs trainable parameters.

peft.summary()
# Total parameters:     93,215,488
# Trainable parameters:    294,912 (0.32%)
# PEFT method: LoRA (rank=8, alpha=16.0)
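
The trainable count can be sanity-checked by hand: each adapted nn.Linear gains two small matrices, A of shape (rank, in_features) and B of shape (out_features, rank). The numbers below use a made-up layer size purely to show the arithmetic.

rank = 8
in_features, out_features = 384, 384                # made-up layer size for illustration

extra = rank * in_features + out_features * rank    # parameters in A plus parameters in B
print(extra)   # 6144 extra trainable parameters for this single layer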

LoRALinear

Bases: Module

Drop-in replacement for nn.Linear with low-rank adaptation. W_effective = W_frozen + (alpha/rank) * B @ A

Low-rank adapter layer that wraps a standard nn.Linear.

from molfun.training.peft import LoRALinear

# Wrap an existing linear layer
lora_layer = LoRALinear(
    original=model.trunk.layers[0].attention.q_proj,
    rank=8,
    alpha=16.0,
    dropout=0.05,
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| original | nn.Linear | required | The linear layer to wrap |
| rank | int | 8 | Rank of the low-rank decomposition |
| alpha | float | 16.0 | Scaling factor (effective scale = alpha / rank) |
| dropout | float | 0.0 | Dropout probability on the LoRA path |

The forward pass computes: output = original(x) + (dropout(x) @ A^T @ B^T) * (alpha / rank)

where A has shape (rank, in_features) and B has shape (out_features, rank).
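
Putting the formula and the shapes together, a from-scratch version of this wrapper looks roughly as follows. This is a sketch to illustrate the math, not the actual LoRALinear source; the initialisation (A random, B zero) is the common LoRA convention and is assumed here.

import math
import torch
import torch.nn as nn

class TinyLoRALinear(nn.Module):
    """Minimal illustration of the LoRALinear forward rule."""
    def __init__(self, original: nn.Linear, rank: int = 8, alpha: float = 16.0, dropout: float = 0.0):
        super().__init__()
        self.original = original
        for p in self.original.parameters():
            p.requires_grad = False                       # freeze the wrapped layer
        self.A = nn.Parameter(torch.empty(rank, original.in_features))
        self.B = nn.Parameter(torch.zeros(original.out_features, rank))
        nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))  # assumed init; B = 0 makes the start identical to the base layer
        self.dropout = nn.Dropout(dropout)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # output = original(x) + (dropout(x) @ A^T @ B^T) * (alpha / rank)
        return self.original(x) + (self.dropout(x) @ self.A.T @ self.B.T) * self.scaling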