Cleanup examples #10097

Open
wants to merge 50 commits into base: master
Commits (50)
5e73c3c
cleanup 9 gnn models on cora dataset
xnuohz Mar 1, 2025
81ee836
dna, super_gat
xnuohz Mar 1, 2025
7540ad8
gat, gcn
xnuohz Mar 1, 2025
4ac9df4
node2vec
xnuohz Mar 1, 2025
29b6766
align with ogbn train
xnuohz Mar 1, 2025
9742f25
update readme
xnuohz Mar 2, 2025
1a8909e
update
xnuohz Mar 2, 2025
cb944ba
Merge branch 'master' into examples/cleanup
xnuohz Mar 5, 2025
b47cf2d
update
xnuohz Mar 5, 2025
a122eb5
update
xnuohz Mar 5, 2025
765d296
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 5, 2025
ad43f2c
Merge branch 'master' into examples/cleanup
xnuohz Mar 6, 2025
f3962c4
add readme
xnuohz Mar 7, 2025
6c10b3b
add type hints
xnuohz Mar 8, 2025
1497633
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 8, 2025
768b257
update
xnuohz Mar 8, 2025
47cd6f4
Merge branch 'examples/cleanup' of github.com:xnuohz/pytorch_geometri…
xnuohz Mar 8, 2025
01ebfc0
update
xnuohz Mar 8, 2025
a96ec03
update
xnuohz Mar 8, 2025
abe9260
update
xnuohz Mar 9, 2025
f8113a3
update
xnuohz Mar 9, 2025
b7fd052
update
xnuohz Mar 9, 2025
d4dcfdc
update
xnuohz Mar 9, 2025
2151f4f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 9, 2025
65e67bf
update
xnuohz Mar 10, 2025
c51ba59
Merge branch 'master' into examples/cleanup
puririshi98 Mar 13, 2025
eb66552
Update CHANGELOG.md
xnuohz Mar 14, 2025
9bdfc6e
delete useless files and update changelog
xnuohz Mar 15, 2025
a941d82
fix broken tj-action/changed-files
xnuohz Mar 16, 2025
c29ffbd
update readme
xnuohz Mar 16, 2025
6b287d7
restore
xnuohz Mar 18, 2025
3074d70
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 18, 2025
a5945f7
Restore individual examples (#2)
xnuohz Mar 18, 2025
73a6380
update
xnuohz Mar 18, 2025
7d9ba54
update
xnuohz Mar 18, 2025
c445ffb
Merge branch 'master' into examples/cleanup
puririshi98 Mar 19, 2025
44293f9
Merge branch 'master' into examples/cleanup
xnuohz Mar 20, 2025
d829ec6
update
xnuohz Mar 22, 2025
242fa69
Merge branch 'master' into examples/cleanup
puririshi98 Mar 26, 2025
6f4d0ab
Update README.md
puririshi98 Mar 26, 2025
84b1d26
update
xnuohz Mar 27, 2025
e2d2ff6
Merge branch 'master' into examples/cleanup
xnuohz Apr 7, 2025
09d4f62
Merge branch 'master' into examples/cleanup
xnuohz Apr 7, 2025
48d29cb
Merge branch 'master' into examples/cleanup
xnuohz Apr 8, 2025
219eece
Merge branch 'master' into examples/cleanup
xnuohz Apr 11, 2025
cc2d718
Merge branch 'master' into examples/cleanup
puririshi98 Apr 22, 2025
a60007e
Merge branch 'master' into examples/cleanup
puririshi98 May 1, 2025
976cd71
update
xnuohz May 1, 2025
0e7430a
update
xnuohz May 1, 2025
aba6642
Merge branch 'master' into examples/cleanup
puririshi98 May 6, 2025
1 change: 1 addition & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -50,6 +50,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Changed

- Combined 15 examples related to the Planetoid dataset into `planetoid_train.py` and applied softmax to the PMLP model's output ([#10097](https://github.com/pyg-team/pytorch_geometric/pull/10097))
- Chained exceptions explicitly instead of implicitly ([#10242](https://github.com/pyg-team/pytorch_geometric/pull/10242))
- Updated cuGraph examples to use buffered sampling, which keeps data in memory and is significantly faster than the deprecated unbuffered sampling ([#10079](https://github.com/pyg-team/pytorch_geometric/pull/10079))
- Updated Dockerfile to use latest from NVIDIA ([#9794](https://github.com/pyg-team/pytorch_geometric/pull/9794))
4 changes: 2 additions & 2 deletions README.md
@@ -102,7 +102,7 @@ for epoch in range(200):

</details>

More information about evaluating final model performance can be found in the corresponding [example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py).
More information about evaluating final model performance can be found in the corresponding [example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py). The unified [`planetoid_train.py` example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/planetoid_train.py) offers a single codebase covering GCN, GAT, and Cheb convolutions, as well as many more models.

### Create your own GNN layer

@@ -169,7 +169,7 @@ These GNN layers can be stacked together to create Graph Neural Network models.
<summary><b>Expand to see all implemented GNN layers...</b></summary>

- **[GCN2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCN2Conv.html)** from Chen *et al.*: [Simple and Deep Graph Convolutional Networks](https://arxiv.org/abs/2007.02133) (ICML 2020) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_ppi.py)\]
- **[SplineConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SplineConv.html)** from Fey *et al.*: [SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels](https://arxiv.org/abs/1711.08920) (CVPR 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/faust.py)\]
- **[SplineConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SplineConv.html)** from Fey *et al.*: [SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels](https://arxiv.org/abs/1711.08920) (CVPR 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/spline_gnn.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/faust.py)\]
- **[NNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.NNConv.html)** from Gilmer *et al.*: [Neural Message Passing for Quantum Chemistry](https://arxiv.org/abs/1704.01212) (ICML 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_nn_conv.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_nn_conv.py)\]
- **[CGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.CGConv.html)** from Xie and Grossman: [Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.145301) (Physical Review Letters 120, 2018)
- **[ECConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ECConv.html)** from Simonovsky and Komodakis: [Edge-Conditioned Convolution on Graphs](https://arxiv.org/abs/1704.02901) (CVPR 2017)
2 changes: 1 addition & 1 deletion docs/source/tutorial/dataset_splitting.rst
@@ -13,7 +13,7 @@ Node Prediction
.. note::

In this section, we'll learn how to use :class:`~torch_geometric.transforms.RandomNodeSplit` of :pyg:`PyG` to randomly divide nodes into training, validation, and test sets.
A fully working example on dataset :class:`~torch_geometric.datasets.Planetoid` is available in `examples/cora.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cora.py>`_.
A fully working example on the :class:`~torch_geometric.datasets.Planetoid` dataset is available in `examples/spline_gnn.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/spline_gnn.py>`_ and `examples/planetoid_train.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/planetoid_train.py>`_.

The :class:`~torch_geometric.transforms.RandomNodeSplit` is initialized to split nodes for both a :pyg:`PyG` :class:`~torch_geometric.data.Data` and :class:`~torch_geometric.data.HeteroData` object.

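The mask-based split that :class:`~torch_geometric.transforms.RandomNodeSplit` produces can be illustrated without PyG. A minimal sketch, assuming each node lands in exactly one of the three sets (the helper name `random_node_split` is hypothetical, not a library API):

```python
import random


def random_node_split(num_nodes, num_val, num_test, seed=0):
    """Assign each node to exactly one of train/val/test via boolean masks."""
    rng = random.Random(seed)
    perm = list(range(num_nodes))
    rng.shuffle(perm)
    val = set(perm[:num_val])
    test = set(perm[num_val:num_val + num_test])
    # Every node not drawn for validation or test belongs to the training set.
    train_mask = [i not in val and i not in test for i in range(num_nodes)]
    val_mask = [i in val for i in range(num_nodes)]
    test_mask = [i in test for i in range(num_nodes)]
    return train_mask, val_mask, test_mask
```

The real transform works the same way conceptually, but stores the masks as tensor attributes (`train_mask`, `val_mask`, `test_mask`) on the `Data` object.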
2 changes: 1 addition & 1 deletion docs/source/tutorial/neighbor_loader.rst
@@ -3,7 +3,7 @@ Scaling GNNs via Neighbor Sampling

One of the challenges of Graph Neural Networks is to scale them to large graphs, *e.g.*, in industrial and social applications.
Traditional deep neural networks are known to scale well to large amounts of data by decomposing the training loss into individual samples (called a *mini-batch*) and approximating exact gradients stochastically.
In contrast, applying stochastic mini-batch training in GNNs is challenging since the embedding of a given node depends recursively on all its neighbors embeddings, leading to high inter-dependency between nodes that grows exponentially with respect to the number of layers.
In contrast, applying stochastic mini-batch training in GNNs is challenging since the embedding of a given node depends recursively on all its neighbors' embeddings, leading to high inter-dependency between nodes that grows exponentially with respect to the number of layers.
This phenomenon is often referred to as *neighbor explosion*.
As a simple workaround, GNNs are typically executed in a full-batch fashion (see `here <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py>`_ for an example), where the GNN has access to all hidden node representations in all its layers.
However, this is not feasible in large-scale graphs due to memory limitations and slow convergence.
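The workaround this tutorial builds toward — sampling a bounded number of neighbors per hop instead of expanding the full neighborhood — can be sketched in plain Python. This is an illustrative helper, not the `NeighborLoader` API:

```python
import random


def sample_neighbors(adj, seeds, num_neighbors, seed=0):
    """Multi-hop neighbor sampling: at each hop, keep at most k neighbors
    per frontier node, bounding the otherwise exponential expansion."""
    rng = random.Random(seed)
    frontier = set(seeds)
    layers = [frontier]
    for k in num_neighbors:  # one entry per hop, e.g. [10, 10]
        nxt = set()
        for node in sorted(frontier):
            nbrs = adj.get(node, [])
            picked = nbrs if len(nbrs) <= k else rng.sample(nbrs, k)
            nxt.update(picked)
        frontier = nxt
        layers.append(frontier)
    return layers
```

With `num_neighbors = [10, 10]`, a seed node pulls in at most 10 neighbors, each of which pulls in at most 10 more — a fixed budget per mini-batch regardless of graph size.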
2 changes: 1 addition & 1 deletion docs/source/tutorial/shallow_node_embeddings.rst
@@ -39,7 +39,7 @@ Node2Vec
.. note::

In this section of the tutorial, we will learn node embeddings for **homogenous graphs** using the :class:`~torch_geometric.nn.models.Node2Vec` module of :pyg:`PyG`.
The code is available in `examples/node2vec.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/node2vec.py>`_ and as a `Google Colab tutorial notebook <https://colab.research.google.com/github/AntonioLonga/PytorchGeometricTutorial/blob/main/Tutorial11/Tutorial11.ipynb>`_.
The code is available in `examples/node2vec.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/node2vec.py>`_ and `examples/planetoid_train.py <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/planetoid_train.py>`_, as well as in a `Google Colab tutorial notebook <https://colab.research.google.com/github/AntonioLonga/PytorchGeometricTutorial/blob/main/Tutorial11/Tutorial11.ipynb>`_.

:class:`~torch_geometric.nn.models.Node2Vec` is a method for learning shallow node embeddings, which allows for flexible
control of random walk procedures based on breadth-first or depth-first samplers.
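The random-walk procedure that :class:`~torch_geometric.nn.models.Node2Vec` builds on can be sketched for the uniform special case (`p = q = 1`, i.e. DeepWalk-style walks). A simplified illustration, not the PyG module:

```python
import random


def generate_walks(adj, walk_length, walks_per_node, seed=0):
    """Uniform random walks over an adjacency dict {node: [neighbors]}."""
    rng = random.Random(seed)
    walks = []
    for start in sorted(adj):
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:  # dead end: stop the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

The walks then serve as "sentences" for a skip-gram objective; node2vec's `p` and `q` parameters would additionally bias the choice of next node toward breadth-first or depth-first behavior.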
2 changes: 2 additions & 0 deletions examples/README.md
@@ -5,6 +5,8 @@ This readme highlights some key examples.

A great and simple example to start with is [`gcn.py`](./gcn.py), showing a user how to train a [`GCN`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GCN.html) model for node-level prediction on small-scale homogeneous data.

For a unified and extensible example of node classification on small Planetoid-style datasets, see [`planetoid_train.py`](./planetoid_train.py).

For a simple link prediction example, see [`link_pred.py`](./link_pred.py).

For an improved link prediction approach using Attract-Repel embeddings that can significantly boost accuracy (up to 23% improvement in AUC), see [`ar_link_pred.py`](./ar_link_pred.py). This approach is based on [Pseudo-Euclidean Attract-Repel Embeddings for Undirected Graphs](https://arxiv.org/abs/2106.09671).
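For intuition on the two link-prediction variants mentioned above, here is a plain dot-product score alongside one reading of the attract-repel idea (each embedding split into an "attract" and a "repel" part); the function names are hypothetical sketches, not the examples' actual code:

```python
def dot_score(z, u, v):
    # Plain link score: inner product of the two node embeddings.
    return sum(a * b for a, b in zip(z[u], z[v]))


def ar_score(attract, repel, u, v):
    # Attract-repel reading: the attract parts raise the score,
    # the repel parts lower it (a pseudo-Euclidean inner product).
    return dot_score(attract, u, v) - dot_score(repel, u, v)
```

The repel component lets two nodes share many neighbors yet still score low against each other, which plain dot products cannot express.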
25 changes: 25 additions & 0 deletions examples/agnn.py
@@ -1,12 +1,17 @@
import os.path as osp
import time

import torch
import torch.nn.functional as F

import torch_geometric.transforms as T
from torch_geometric import seed_everything
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import AGNNConv

wall_clock_start = time.perf_counter()
seed_everything(123)

dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
@@ -21,6 +26,12 @@ def __init__(self):
self.prop2 = AGNNConv(requires_grad=True)
self.lin2 = torch.nn.Linear(16, dataset.num_classes)

def reset_parameters(self):
self.lin1.reset_parameters()
self.prop1.reset_parameters()
self.prop2.reset_parameters()
self.lin2.reset_parameters()

def forward(self):
x = F.dropout(data.x, training=self.training)
x = F.relu(self.lin1(x))
@@ -33,6 +44,7 @@ def forward(self):

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model, data = Net().to(device), data.to(device)
model.reset_parameters()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)


@@ -54,12 +66,25 @@ def test():
return accs


print(f'Total time before training begins: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
print('Training...')
times = []
best_val_acc = test_acc = 0
for epoch in range(1, 201):
start = time.perf_counter()
train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, '
f'Val: {best_val_acc:.4f}, Test: {test_acc:.4f}')
times.append(time.perf_counter() - start)

print(f'Average Epoch Time: {torch.tensor(times).mean():.4f}s')
print(f'Median Epoch Time: {torch.tensor(times).median():.4f}s')
print(f'Best Validation Accuracy: {100.0 * best_val_acc:.2f}%')
print(f'Test Accuracy: {100.0 * test_acc:.2f}%')
print(f'Total Program Runtime: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
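Every example touched by these commits gains the same model-selection loop: report the test accuracy from the epoch with the best validation accuracy, rather than the final epoch. The pattern in isolation (helper name hypothetical):

```python
def select_by_validation(history):
    """history: iterable of (val_acc, test_acc) pairs, one per epoch.
    Returns (best validation accuracy, test accuracy at that epoch)."""
    best_val = test_at_best = 0.0
    for val_acc, test_acc in history:
        if val_acc > best_val:
            best_val = val_acc
            test_at_best = test_acc  # freeze the test score at this epoch
    return best_val, test_at_best
```

This avoids the subtle leakage of picking the epoch by test accuracy, and explains why the printed `Test:` value can stay flat while `Val:` fluctuates.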
23 changes: 23 additions & 0 deletions examples/arma.py
@@ -1,12 +1,17 @@
import os.path as osp
import time

import torch
import torch.nn.functional as F

import torch_geometric.transforms as T
from torch_geometric import seed_everything
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import ARMAConv

wall_clock_start = time.perf_counter()
seed_everything(123)

dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
@@ -24,6 +29,10 @@ def __init__(self, in_channels, hidden_channels, out_channels):
num_layers=2, shared_weights=True, dropout=0.25,
act=lambda x: x)

def reset_parameters(self):
self.conv1.reset_parameters()
self.conv2.reset_parameters()

def forward(self, x, edge_index):
x = F.dropout(x, training=self.training)
x = F.relu(self.conv1(x, edge_index))
@@ -41,6 +50,7 @@ def forward(self, x, edge_index):

model, data = Net(dataset.num_features, 16,
dataset.num_classes).to(device), data.to(device)
model.reset_parameters()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)


@@ -63,12 +73,25 @@ def test():
return accs


print(f'Total time before training begins: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
print('Training...')
times = []
best_val_acc = test_acc = 0
for epoch in range(1, 401):
start = time.perf_counter()
train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, '
f'Val: {best_val_acc:.4f}, Test: {test_acc:.4f}')
times.append(time.perf_counter() - start)

print(f'Average Epoch Time: {torch.tensor(times).mean():.4f}s')
print(f'Median Epoch Time: {torch.tensor(times).median():.4f}s')
print(f'Best Validation Accuracy: {100.0 * best_val_acc:.2f}%')
print(f'Test Accuracy: {100.0 * test_acc:.2f}%')
print(f'Total Program Runtime: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
19 changes: 19 additions & 0 deletions examples/dna.py
@@ -1,12 +1,17 @@
import os.path as osp
import time

import torch
import torch.nn.functional as F
from sklearn.model_selection import StratifiedKFold

from torch_geometric import seed_everything
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import DNAConv

wall_clock_start = time.perf_counter()
seed_everything(123)

dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset)
@@ -68,6 +73,7 @@ def forward(self, x, edge_index):
model = Net(in_channels=dataset.num_features, hidden_channels=128,
out_channels=dataset.num_classes, num_layers=5, heads=8, groups=16)
model, data = model.to(device), data.to(device)
model.reset_parameters()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.0005)


@@ -91,12 +97,25 @@ def test():
return accs


print(f'Total time before training begins: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
print('Training...')
times = []
best_val_acc = test_acc = 0
for epoch in range(1, 201):
start = time.perf_counter()
train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, '
f'Val: {best_val_acc:.4f}, Test: {test_acc:.4f}')
times.append(time.perf_counter() - start)

print(f'Average Epoch Time: {torch.tensor(times).mean():.4f}s')
print(f'Median Epoch Time: {torch.tensor(times).median():.4f}s')
print(f'Best Validation Accuracy: {100.0 * best_val_acc:.2f}%')
print(f'Test Accuracy: {100.0 * test_acc:.2f}%')
print(f'Total Program Runtime: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
24 changes: 21 additions & 3 deletions examples/gat.py
@@ -7,6 +7,7 @@

import torch_geometric
import torch_geometric.transforms as T
from torch_geometric import seed_everything
from torch_geometric.datasets import Planetoid
from torch_geometric.logging import init_wandb, log
from torch_geometric.nn import GATConv
@@ -30,6 +31,9 @@
init_wandb(name=f'GAT-{args.dataset}', heads=args.heads, epochs=args.epochs,
hidden_channels=args.hidden_channels, lr=args.lr, device=device)

wall_clock_start = time.perf_counter()
seed_everything(123)

path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'Planetoid')
dataset = Planetoid(path, args.dataset, transform=T.NormalizeFeatures())
data = dataset[0].to(device)
@@ -43,6 +47,10 @@ def __init__(self, in_channels, hidden_channels, out_channels, heads):
self.conv2 = GATConv(hidden_channels * heads, out_channels, heads=1,
concat=False, dropout=0.6)

def reset_parameters(self):
self.conv1.reset_parameters()
self.conv2.reset_parameters()

def forward(self, x, edge_index):
x = F.dropout(x, p=0.6, training=self.training)
x = F.elu(self.conv1(x, edge_index))
@@ -53,6 +61,7 @@ def forward(self, x, edge_index):

model = GAT(dataset.num_features, args.hidden_channels, dataset.num_classes,
args.heads).to(device)
model.reset_parameters()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)


@@ -77,15 +86,24 @@ def test():
return accs


print(f'Total time before training begins: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
print('Training...')
times = []
best_val_acc = test_acc = 0
for epoch in range(1, args.epochs + 1):
start = time.time()
start = time.perf_counter()
loss = train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
log(Epoch=epoch, Loss=loss, Train=train_acc, Val=val_acc, Test=test_acc)
times.append(time.time() - start)
print(f"Median time per epoch: {torch.tensor(times).median():.4f}s")
times.append(time.perf_counter() - start)

print(f'Average Epoch Time: {torch.tensor(times).mean():.4f}s')
print(f'Median Epoch Time: {torch.tensor(times).median():.4f}s')
print(f'Best Validation Accuracy: {100.0 * best_val_acc:.2f}%')
print(f'Test Accuracy: {100.0 * test_acc:.2f}%')
print(f'Total Program Runtime: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
24 changes: 21 additions & 3 deletions examples/gcn.py
@@ -7,6 +7,7 @@

import torch_geometric
import torch_geometric.transforms as T
from torch_geometric import seed_everything
from torch_geometric.datasets import Planetoid
from torch_geometric.logging import init_wandb, log
from torch_geometric.nn import GCNConv
@@ -30,6 +31,9 @@
device=device,
)

wall_clock_start = time.perf_counter()
seed_everything(123)

path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'Planetoid')
dataset = Planetoid(path, args.dataset, transform=T.NormalizeFeatures())
data = dataset[0].to(device)
@@ -54,6 +58,10 @@ def __init__(self, in_channels, hidden_channels, out_channels):
self.conv2 = GCNConv(hidden_channels, out_channels,
normalize=not args.use_gdc)

def reset_parameters(self):
self.conv1.reset_parameters()
self.conv2.reset_parameters()

def forward(self, x, edge_index, edge_weight=None):
x = F.dropout(x, p=0.5, training=self.training)
x = self.conv1(x, edge_index, edge_weight).relu()
Expand All @@ -67,6 +75,7 @@ def forward(self, x, edge_index, edge_weight=None):
hidden_channels=args.hidden_channels,
out_channels=dataset.num_classes,
).to(device)
model.reset_parameters()

optimizer = torch.optim.Adam([
dict(params=model.conv1.parameters(), weight_decay=5e-4),
@@ -96,14 +105,23 @@


best_val_acc = test_acc = 0
print(f'Total time before training begins: '
f'{time.perf_counter() - wall_clock_start:.4f}s')
print('Training...')
times = []
for epoch in range(1, args.epochs + 1):
start = time.time()
start = time.perf_counter()
loss = train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
log(Epoch=epoch, Loss=loss, Train=train_acc, Val=val_acc, Test=test_acc)
times.append(time.time() - start)
print(f'Median time per epoch: {torch.tensor(times).median():.4f}s')
times.append(time.perf_counter() - start)

print(f'Average Epoch Time: {torch.tensor(times).mean():.4f}s')
print(f'Median Epoch Time: {torch.tensor(times).median():.4f}s')
print(f'Best Validation Accuracy: {100.0 * best_val_acc:.2f}%')
print(f'Test Accuracy: {100.0 * test_acc:.2f}%')
print(f'Total Program Runtime: '
f'{time.perf_counter() - wall_clock_start:.4f}s')