How to ingest billing data from specified S3 buckets

Note: The functionality described on this page is an alternative. The preferred approach is to set up CUR (Cost and Usage Report) exports to an S3 bucket and let Hyperglance detect the configuration automatically.

Ordinarily, Hyperglance looks for CUR exports configured in AWS. If there are active exports, Hyperglance will automatically ingest their billing CSVs.

When it is not possible to set up CUR exports, or to grant Hyperglance access to the CUR service, you can instead point Hyperglance directly at one or more S3 buckets.

For this to work, the data in the S3 bucket must use the same layout that AWS CUR produces for the "Overwrite Report" export mode:

<example-report-name>/yyyymmdd-yyyymmdd/<example-report-name>-<file-number>.csv.<zip|gz>
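
For illustration, with a hypothetical report named my-cost-report and billing periods covering January and February 2024, the object keys in the bucket would look like this:

my-cost-report/20240101-20240201/my-cost-report-1.csv.gz
my-cost-report/20240101-20240201/my-cost-report-2.csv.gz
my-cost-report/20240201-20240301/my-cost-report-1.csv.gz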

Follow these steps to specify the S3 bucket(s) that Hyperglance should poll for billing CSVs:

1) SSH into the Hyperglance instance
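
For example (the key file, user name, and address below are placeholders that depend on how your instance was deployed):

ssh -i ~/.ssh/hyperglance-key.pem ec2-user@<instance-ip-or-dns>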

2) Edit the file: /var/lib/data/hyperglance/config.env
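
Any text editor on the instance will do; for example, using vi (elevated permissions may be required depending on the file's ownership):

sudo vi /var/lib/data/hyperglance/config.env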

3) Set the S3 bucket name(s) to use. You may specify multiple buckets by separating their names with a comma. You may also include a folder path to scope the polling to specific sub-directories.

AWS_COST_BUCKET=name_of_bucketA, name_of_bucketB/folderName
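
Optionally, if the AWS CLI is installed on the instance, you can sanity-check that the configured buckets (and any folder prefix) are reachable and contain the expected report folders. The bucket names below mirror the placeholders above:

aws s3 ls s3://name_of_bucketA/
aws s3 ls s3://name_of_bucketB/folderName/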

4) Restart services for the config to take effect:

sudo docker-compose -f /etc/docker-compose.yml up -d
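
To confirm the containers came back up after the restart, you can optionally list the running containers (the exact container names depend on your Hyperglance version):

sudo docker ps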