Quickstart to logging
Start by installing Neptune and configuring your Neptune API token and project, as described in Get started.
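In practice, the setup usually amounts to installing the neptune-scale package and exporting your credentials as environment variables. The following is a minimal sketch for a Unix-like shell, with placeholder values; see Get started for where to find your API token and the exact configuration options:

# Install the Neptune client used in the script below
pip install neptune-scale

# Export credentials so the script doesn't need the api_token/project arguments
export NEPTUNE_API_TOKEN="your-api-token"
export NEPTUNE_PROJECT="team-alpha/project-x"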
Then, you can use the following script to log some mocked metadata:
from random import random

from neptune_scale import Run

custom_id = random()
offset = custom_id / 5


def hello_neptune():
    run = Run(
        api_token="eyJhcGlfYWRkcmVz...In0=",  # not needed if using environment variable
        project="team-alpha/project-x",  # not needed if using environment variable
        experiment_name="seabird-flying-skills",
        run_id=f"seagull-{custom_id}",
    )

    run.log_configs(
        {
            "parameters/use_preprocessing": True,
            "parameters/learning_rate": 0.002,
            "parameters/batch_size": 64,
            "parameters/optimizer": "Adam",
        }
    )

    for step in range(20):
        acc = 1 - 2**-step - random() / (step + 1) - offset
        loss = 2**-step + random() / (step + 1) + offset

        run.log_metrics(
            data={
                "accuracy": acc,
                "loss": loss,
            },
            step=step,
        )

    run.add_tags(["purple", "blue"])

    print(run.get_experiment_url())

    run.close()


if __name__ == "__main__":
    hello_neptune()
The line

if __name__ == "__main__":

ensures safe importing of the main module: hello_neptune() runs only when the script is executed directly, not when it's imported from another module. For details, see the Python documentation.
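As a quick illustration (the file name hello_neptune.py is hypothetical), the guard is what separates the two cases:

# In another script: importing only defines hello_neptune(), no run is created
import hello_neptune

# From a terminal: executing the file directly triggers the guard,
# which calls hello_neptune() and starts logging:
#   python hello_neptune.py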
To inspect the logged metadata in the web app:
- In the Neptune project, click the run to explore all of its metadata.
- If you log multiple runs, enable compare mode by toggling the eye icons next to the runs you want to compare (see the sketch after this list).
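For example, to have several runs to compare, you could log a few short runs with similar metrics. This is only a minimal sketch that reuses the API calls from the script above; the run IDs and metric values are made up, and the API token and project are assumed to come from environment variables:

from random import random

from neptune_scale import Run

# Log three short runs so that compare mode has something to show.
# The run_id values are hypothetical; any unique strings work.
for i in range(3):
    run = Run(
        experiment_name="seabird-flying-skills",
        run_id=f"seagull-compare-{i}-{random()}",
    )
    for step in range(10):
        # Mock metric that improves with each step, plus some noise
        run.log_metrics(
            data={"accuracy": 1 - 2**-step - random() / (step + 1)},
            step=step,
        )
    run.close()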