Using the Reporting API

The MX Reporting API enables you to track changes for all data held on the MX platform for your clients without having to read this data individually for each user. This is done by providing daily change files which indicate how objects have changed throughout the day.

This guide provides best practices on consuming and staging the data in your system.

Consuming Daily Reporting Files

Objects on the MX platform are organized in a hierarchy. This means that your systems must consume daily reporting files in a particular order so that the objects are created, updated, and deleted in the proper order in your data store/warehouse.

First, you must consume all files with create actions in this order:

  1. Users
  2. Members
  3. Accounts
  4. Transactions
  5. Holdings
  6. Categories
  7. Tags
  8. Taggings
  9. Goals
  10. Budgets
  11. Notification Profiles
  12. Beat
  13. Beat Feedback
  14. Devices
  15. Analytics Events
  16. Analytics Page Views
  17. Analytics Screen Views
  18. Analytics Timed Events
  19. Insights

Second, you must consume all update files in the same order as above.

Third, you must consume all delete files in the reverse order. This ensures transactions are deleted on your systems before the account they belong to is deleted, and so forth.
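The three passes above can be sketched as follows. This is a minimal sketch, not part of the API itself: `OBJECT_TYPES` is abbreviated to the first ten object types in the hierarchy, and `process_file` is a hypothetical hook into your own staging logic.

```ruby
# Object types in hierarchy order (abbreviated; continue with the
# remaining types from the list above).
OBJECT_TYPES = [
  "users", "members", "accounts", "transactions", "holdings",
  "categories", "tags", "taggings", "goals", "budgets"
].freeze

# Hypothetical hook: download and apply the daily file for this
# object type and action in your own data store.
def process_file(object_type, action)
  "#{object_type}/#{action}"
end

processing_plan = []

# Creates and updates run top-down through the hierarchy...
["created", "updated"].each do |action|
  OBJECT_TYPES.each { |type| processing_plan << process_file(type, action) }
end

# ...while deletes run bottom-up, so child objects (e.g. transactions)
# are removed before their parents (e.g. accounts).
OBJECT_TYPES.reverse_each { |type| processing_plan << process_file(type, "deleted") }
```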

Generating Sample Data for the Integrations Environment

Files in the Reporting API are generated from system events which represent user activity and aggregated account and transactional data. This presents a challenge in the integrations environment because there are no users creating activity on the system.

If you need a more robust set of test data, add your own by creating internal test users that add accounts and use the system for a few days to generate log events.

The following steps describe this process:

  1. Create test users in your integration client using whichever MX API you use for this purpose, e.g., the Platform API or MDX v5 Real Time.
  2. Generate master_widget URLs for those users with the appropriate API, e.g., the Platform API or the SSO API.
  3. Copy the URL from the master_widget response and paste it in a browser window.
  4. Have your test user(s) use the system. Some recommended actions are:
    • Add savings, checking, loans, and investment accounts;
    • Categorize transactions;
    • Add tags to transactions;
    • Create custom categories;
    • Create and update goals and budgets;
  5. The next day, new files containing the event logs of the actions performed will become available via the download daily files endpoint.

Byte Serving for Large Files

Avro files can become very large (multiple gigabytes) which can result in partial downloads. This can be resolved by using byte serving, which allows you to request data in a set of ranged chunks that can later be assembled into the full raw Avro file response.
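The chunking arithmetic can be sketched as follows, assuming 1 GiB (1,073,741,824-byte) chunks; `byte_ranges` is a hypothetical helper that produces the values to pass to curl's `--range` option:

```ruby
CHUNK = 1_073_741_824 # 1 GiB

# Given a file size in bytes, return the list of inclusive byte ranges
# ("start-end") needed to download the file in chunk_size pieces.
def byte_ranges(file_size, chunk_size = CHUNK)
  ranges = []
  offset = 0
  while offset < file_size
    last = [offset + chunk_size - 1, file_size - 1].min
    ranges << "#{offset}-#{last}"
    offset += chunk_size
  end
  ranges
end

byte_ranges(2_500_000_000)
# => ["0-1073741823", "1073741824-2147483647", "2147483648-2499999999"]
```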


Step 1: Check the Size of Available Files

Use the list daily files endpoint to get a list of all the files that are available from the last 7 days. This list will include the size of each file. All files larger than 1GB should be downloaded using byte serving.

```shell
curl -X GET https://int-logs.moneydesktop.com/download/{client_id} \
  -H 'Accept: application/vnd.mx.logs.v1+json' \
  -H 'MD-API-KEY: {api_key}'
```

Step 2: Download File Segments Using the curl Command

The command line tool curl can be used to download HTTP ranges by specifying the -r or --range option. This example shows a scenario where the Avro file is larger than 1GB. The first curl command specifies the range for the first gigabyte (0-1073741823) and the second command specifies the range for the rest of the data (1073741824-).

```shell
# Download the first part of the file
curl -X GET -r 0-1073741823 https://int-logs.moneydesktop.com/download/{client_id}/2019-10-07/transactions/created \
  -o 20191007-transactions-created.avro.part1 \
  -H 'Accept: application/vnd.mx.logs.v1+avro' \
  -H 'MD-API-KEY: {api_key}'

# Download the second part of the file
curl -X GET -r 1073741824- https://int-logs.moneydesktop.com/download/{client_id}/2019-10-07/transactions/created \
  -o 20191007-transactions-created.avro.part2 \
  -H 'Accept: application/vnd.mx.logs.v1+avro' \
  -H 'MD-API-KEY: {api_key}'
```

Step 3: Assemble the Segments Back into a Single File

The next step is to assemble the two partial files into a single file. The command cat input1 input2 > output can be used for this purpose, where input1 and input2 are the file segments downloaded in the previous step. The output will be a complete Avro file.

```shell
cat 20191007-transactions-created.avro.part1 20191007-transactions-created.avro.part2 > 20191007-transactions-created.avro
```
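As a sanity check, not part of the documented workflow, you can compare the assembled file's size against the size reported by the list daily files endpoint. A minimal sketch, where `complete_download?` is a hypothetical helper:

```ruby
# Returns true when the assembled file exists and its size matches the
# size reported by the list daily files endpoint, indicating no segment
# was lost or truncated during download.
def complete_download?(path, expected_size)
  File.exist?(path) && File.size(path) == expected_size
end

# Example usage with an assumed expected size:
# complete_download?("20191007-transactions-created.avro", 2_500_000_000)
```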

Parsing Avro Files

Avro files are built in a way that they can be parsed and serialized easily into other formats. Avro’s documentation provides guidance on parsing these files using different methods.

note

Decimal numbers may be represented using exponential notation. Your implementation should accept decimals in exponential notation to avoid conversion errors.
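For example, Ruby's standard-library BigDecimal accepts exponential notation directly, which avoids conversion errors when staging such values:

```ruby
require 'bigdecimal'

# A decimal field as it may appear in a reporting file, in exponential
# notation. BigDecimal parses it without loss of precision.
amount = BigDecimal("1.2345E2")
amount.to_s("F") # => "123.45"
```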

Below we show how to read an Avro file in a Ruby script and parse the output to JSON and CSV. The examples use the avro gem, available from rubygems.org.

Avro to JSON


```ruby
require 'avro'
require 'json'

json_array = []
avro_file_path = "some_avro_file.avro"

Avro::DataFile.open(avro_file_path, "r") do |reader|
  reader.each do |row|
    json_array << row.to_json
  end
end
```

Avro to CSV


```ruby
require 'avro'
require 'csv'

result_file_path = "some_csv_file.csv"
avro_file_path = "some_avro_file.avro"

Avro::DataFile.open(avro_file_path, "r") do |reader|
  CSV.open(result_file_path, "a+") do |csv|
    reader.each_with_index do |row, index|
      csv << row.keys if index == 0
      csv << row.values
    end
  end
end
```