Report API: Sample scripts

Two Python scripts are available to help you work with the Report API: one that fetches real settlement data, and one that generates synthetic data for testing.

note

These scripts are starting-point examples, not production-ready code. They demonstrate the basic patterns for authentication and data retrieval, but do not include pagination, robust error handling, or retry logic.

Prerequisites

  • Python 3.x

  • For the fetch script: install the requests library:

    pip install requests
  • For the fetch script: API credentials (see API keys)

Retrieve data from the Report API

This script authenticates, retrieves all your ledgers, then pulls both funds and fees entries for today from the Report API and saves them to a CSV file named by date (for example, 2026-03-03.csv).

Configure the script

Open the script and set the following variables before running it:

| Variable | Description |
| --- | --- |
| `USE_MIAMI_API` | Set to `True` for accounting partners using the MIAMI API; `False` for merchants using their own API keys. |
| `CLIENT_ID` | Your client ID. |
| `CLIENT_SECRET` | Your client secret. |
| `SUBSCRIPTION_KEY` | Your `Ocp-Apim-Subscription-Key`. Required for merchants (`USE_MIAMI_API = False`). |
| `MERCHANT_SERIAL_NUMBER` | Your merchant serial number. Required for merchants (`USE_MIAMI_API = False`). |
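The configuration above maps onto the HTTP headers the script sends with each request. As a rough sketch of that pattern (the `Ocp-Apim-Subscription-Key` name comes from the table; the helper function and the `Merchant-Serial-Number` header name are assumptions about the script's internals):

```python
# Sketch: build request headers from the configured variables.
# The merchant-only headers are attached only when USE_MIAMI_API is
# False, mirroring the "Required for merchants" notes in the table.

def build_headers(access_token: str,
                  use_miami_api: bool,
                  subscription_key: str = "",
                  merchant_serial_number: str = "") -> dict:
    headers = {"Authorization": f"Bearer {access_token}"}
    if not use_miami_api:
        # Merchants use their own API keys and must also send
        # these two headers on every Report API request.
        headers["Ocp-Apim-Subscription-Key"] = subscription_key
        headers["Merchant-Serial-Number"] = merchant_serial_number
    return headers

# Accounting partner (MIAMI API): only the bearer token is sent.
partner_headers = build_headers("token123", use_miami_api=True)

# Merchant: subscription key and serial number are required.
merchant_headers = build_headers("token123", use_miami_api=False,
                                 subscription_key="abc",
                                 merchant_serial_number="123456")
```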

Run the script

    python3 fetch-report-api-data.py

The settlement data is saved to a CSV file named after today's date (for example, 2026-03-03.csv).

CSV columns

The output file contains one row per ledger entry with these columns:

| Column | API field |
| --- | --- |
| Reference (Vipps PSP) | `pspReference` |
| Time | `time` |
| Ledger Date | `ledgerDate` |
| Entry Type | `entryType` |
| Order ID/Reference (merchant) | `reference` |
| Currency | `currency` |
| Amount | `amount` |
| Balance Before | `balanceBefore` |
| Balance After | `balanceAfter` |
| Recipient Handle | `recipientHandle` |
| Message | `message` |
| Name | `name` |
| Masked Phone | `maskedPhoneNo` |
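The column mapping above can be reproduced with the standard library's `csv.DictWriter`. A minimal sketch, where the column names and API field names come from the table and the helper function itself is illustrative:

```python
import csv
import io

# CSV column header -> API field name, as listed in the table above.
COLUMNS = {
    "Reference (Vipps PSP)": "pspReference",
    "Time": "time",
    "Ledger Date": "ledgerDate",
    "Entry Type": "entryType",
    "Order ID/Reference (merchant)": "reference",
    "Currency": "currency",
    "Amount": "amount",
    "Balance Before": "balanceBefore",
    "Balance After": "balanceAfter",
    "Recipient Handle": "recipientHandle",
    "Message": "message",
    "Name": "name",
    "Masked Phone": "maskedPhoneNo",
}

def entries_to_csv(entries: list[dict]) -> str:
    """Render API ledger entries as CSV text, one row per entry."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS.keys())
    writer.writeheader()
    for entry in entries:
        # Optional fields (e.g. the GDPR fields) become empty cells.
        writer.writerow({col: entry.get(field, "")
                         for col, field in COLUMNS.items()})
    return buf.getvalue()
```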

Generate fake data

This script creates synthetic data that matches the shape of GET /report/v2/ledgers/{ledgerId}/{topic}/dates/{ledgerDate} responses. This is useful for testing data pipelines without connecting to the live API. It requires no external libraries and streams output to disk to avoid loading everything into memory at once.

Output files

The script writes one gzipped JSON file per day for each topic:

  • out/funds/YYYY-MM-DD.json.gz
  • out/fees/YYYY-MM-DD.json.gz
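Writing each day's file as a gzip text stream is what keeps memory use flat: entries are serialized one at a time rather than built into one large in-memory document. A sketch of the pattern, where the output paths follow the layout above but the JSON envelope shape is an assumption:

```python
import gzip
import json
import os

def write_day_file(out_dir: str, topic: str, ledger_date: str, entries) -> str:
    """Stream one day's entries to <out_dir>/<topic>/<YYYY-MM-DD>.json.gz.

    `entries` may be any iterable (e.g. a generator), so nothing is
    held in memory beyond the entry currently being written.
    """
    path = os.path.join(out_dir, topic, f"{ledger_date}.json.gz")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with gzip.open(path, "wt", encoding="utf-8") as f:
        f.write('{"items": [')  # assumed envelope around the entries
        for i, entry in enumerate(entries):
            if i:
                f.write(",")
            json.dump(entry, f)
        f.write("]}")
    return path
```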

Arguments

| Argument | Default | Description |
| --- | --- | --- |
| `--start` | (required) | Start date in `YYYY-MM-DD` format. |
| `--end` | (required) | End date in `YYYY-MM-DD` format. |
| `--payments` | `1000000` | Total number of payments to distribute across the date range. |
| `--out` | `./out` | Output directory. |
| `--ledger-id` | `302321` | Ledger ID to use in the generated data. |
| `--recipient-handle` | `NO:123455` | Recipient handle to use in the generated data. |
| `--currency` | `NOK` | Currency code. |
| `--net-settlement` | (flag) | Simulate net settlement: adds daily fees-retained entries (negative in funds, positive in fees) and a payout-scheduled entry when the balance is positive. If omitted, gross settlement is simulated with a monthly fees-invoiced entry on the last day of each month. |
| `--include-gdpr` | `0.10` | Probability (0–1) that a transaction includes GDPR fields (`name`, `message`, `maskedPhoneNo`). |
| `--seed` | `42` | Random seed for reproducible output. |
| `--tz` | `Z` | Timestamp suffix. |
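The table translates directly into an `argparse` parser. As a sketch (the option names and defaults come from the table; the parser structure itself is illustrative, not the script's actual code):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser matching the arguments table: two required dates,
    everything else with the documented default."""
    p = argparse.ArgumentParser(description="Generate fake Report API data.")
    p.add_argument("--start", required=True, help="Start date, YYYY-MM-DD.")
    p.add_argument("--end", required=True, help="End date, YYYY-MM-DD.")
    p.add_argument("--payments", type=int, default=1_000_000)
    p.add_argument("--out", default="./out")
    p.add_argument("--ledger-id", default="302321")
    p.add_argument("--recipient-handle", default="NO:123455")
    p.add_argument("--currency", default="NOK")
    p.add_argument("--net-settlement", action="store_true")
    p.add_argument("--include-gdpr", type=float, default=0.10)
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--tz", default="Z")
    return p
```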

Example

    python generate-fake-report-api-data.py \
        --payments 1000000 \
        --start 2025-10-01 \
        --end 2025-10-31 \
        --out ./out \
        --ledger-id 302321 \
        --recipient-handle NO:123455 \
        --currency NOK \
        --net-settlement \
        --include-gdpr 0.12 \
        --seed 42

This creates sample data for 1,000,000 payments across October 2025, distributed by weekday (Fridays and Saturdays are slightly busier). It produces realistic mixes of entry types including captures, refunds, disputes, and corrections for both funds and fees topics.
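The weekday distribution described above amounts to a weighted split: each date receives a share of the total proportional to its weekday weight, with Fridays and Saturdays weighted slightly higher. A sketch of the idea, where the exact weight values are an assumption:

```python
import datetime

# Assumed relative weights per weekday (Mon=0 .. Sun=6);
# Fridays and Saturdays are slightly busier, as described above.
WEEKDAY_WEIGHTS = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.2, 5: 1.2, 6: 0.9}

def distribute_payments(start: datetime.date,
                        end: datetime.date,
                        total: int) -> dict:
    """Split `total` payments across [start, end], weighted by weekday."""
    days = [start + datetime.timedelta(days=i)
            for i in range((end - start).days + 1)]
    weights = [WEEKDAY_WEIGHTS[d.weekday()] for d in days]
    scale = total / sum(weights)
    counts = {d: int(w * scale) for d, w in zip(days, weights)}
    # Hand out the rounding remainder one payment at a time so the
    # per-day counts sum exactly to `total`.
    remainder = total - sum(counts.values())
    for d in days[:remainder]:
        counts[d] += 1
    return counts
```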