Your AWS account needs to be configured to generate Cost and Usage reports and save those reports in S3, and the application needs to authenticate with AWS and run queries on that data using Amazon Athena.
Ensure your AWS account has the correct permissions
- You will need an IAM user that can create access-keys and modify your billing settings.
- You can use the CloudFormation template file ccf-app.yaml to automate the creation of a role that allows the Cloud Carbon Footprint application to read Cost and Usage Reports via AWS Athena. Note: the section that asks you to specify the "AssumeRolePolicyDocument" is where you define the user or role that will have permissions to assume the "ccf-app" role.
- This role name will be used as the value of the corresponding environment variable when you configure the API .env file.
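The trust policy ("AssumeRolePolicyDocument") section might look like the following sketch — the account ID and principal are placeholders, and the structure may differ slightly from the ccf-app.yaml template in your version:

```yaml
# Sketch of the trust policy for the ccf-app role.
# Replace the account ID and user name with the principal that should
# be allowed to assume the role.
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        AWS: arn:aws:iam::123456789012:user/your-iam-user
      Action: sts:AssumeRole
```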
Enable the Cost and Usage Billing AWS feature.
- This feature needs to be enabled so your account can start generating cost and usage reports. To enable, navigate to your account's billing section, and click on the "Cost and Usage Reports" tab. Make sure to select “Amazon Athena” for report data integration. Reference Cost and Usage Reports documentation here.
Set up Athena DB to save the Cost and Usage Reports.
- In addition to generating reports, we use Athena to save the details of those reports in a database, so we can run queries on them. This is a standard AWS integration, outlined here. There is a lot of helpful detail in the guides that goes beyond the specific needs of our app, but once you are able to successfully query the Cost and Usage Reports from Athena, you should be sufficiently set up.
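Once the integration is working, a simple query against the report table confirms the setup. The database and table names below are placeholders for whatever your Cost and Usage Report integration created; the column names are standard CUR columns:

```sql
-- Placeholder database and table names; substitute the ones your integration created.
SELECT line_item_product_code,
       SUM(line_item_usage_amount) AS total_usage
FROM athena_cost_and_usage.my_cur_table
WHERE year = '2021' AND month = '3'
GROUP BY line_item_product_code
LIMIT 10;
```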
Configure AWS credentials locally using awscli.
After installing awscli, run `aws configure` and provide your access key and secret access key. Also make sure you select the same region as the one in which you created your Cost and Usage Reports.
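For reference, `aws configure` saves these settings in two files under ~/.aws; the key values below are placeholders:

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config — the region should match where your Cost and Usage Reports were created
[default]
region = us-east-1
```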
We optionally support alternative methods of authenticating with AWS, which you can read about here.
Configure environment variables for the API and client.
After configuring your credentials, we need to set a number of environment variables in the app so it can authenticate with AWS. We use .env files to manage this. Reference packages/api/.env.template for a template .env file. Rename this file to .env, optionally remove the comments, and then set the environment variables for the “Billing Data” approach. If you are only using one of these cloud providers, you can remove the environment variables associated with the other cloud providers in your .env file.
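For example, a “Billing Data” configuration in packages/api/.env might look like the sketch below. The variable names follow a typical .env.template and the values are placeholders — verify both against the template shipped with your version:

```
# packages/api/.env — Billing Data approach (placeholder values)
AWS_USE_BILLING_DATA=true
AWS_TARGET_ACCOUNT_ROLE_NAME=ccf-app
AWS_ATHENA_DB_NAME=my_athena_db
AWS_ATHENA_DB_TABLE=my_cur_table
AWS_ATHENA_REGION=us-east-1
AWS_ATHENA_QUERY_RESULT_LOCATION=s3://my-athena-query-results-bucket
AWS_BILLING_ACCOUNT_ID=123456789012
AWS_BILLING_ACCOUNT_NAME=my-account-name
```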
There is also a packages/client/.env file that allows you to configure the date range for which the application requests data. See client/.env.template for a template. Rename this file to .env, optionally remove the comments, and then set the environment variables.
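A minimal client configuration might look like this sketch — the variable names follow a typical client .env.template and should be verified against the template in your version:

```
# packages/client/.env (placeholder values)
# Request usage data starting from the previous year:
REACT_APP_PREVIOUS_YEAR_OF_USAGE=true
# Or request a rolling window, e.g. the last 1 year:
REACT_APP_DATE_RANGE_VALUE=1
REACT_APP_DATE_RANGE_TYPE=year
```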
By default, the client uses AWS, GCP and Azure. If you are only using one of these cloud providers, please update the appConfig object in the client Config file to only include your provider in the CURRENT_PROVIDERS array.
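Restricting the client to a single provider might look like the following sketch. The exact shape of the appConfig object is an assumption here — check the client Config file in your version:

```typescript
// Sketch: limiting CURRENT_PROVIDERS to AWS only. The real appConfig in the
// client Config file may have a different shape — treat this as illustrative.
interface ProviderOption {
  key: string
  name: string
}

const appConfig: { CURRENT_PROVIDERS: ProviderOption[] } = {
  CURRENT_PROVIDERS: [
    { key: 'aws', name: 'AWS' },
    // Removed because this deployment only uses AWS:
    // { key: 'gcp', name: 'GCP' },
    // { key: 'azure', name: 'Azure' },
  ],
}
```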
Finally, after performing a `yarn install`, start up the application.
⚠️ This will incur some cost. Use this sparingly if you wish to test with live data.
DISCLAIMER: If your editor of choice is VS Code, we recommend using your native terminal or a custom terminal of choice (e.g. iTerm) instead. Unexpected authentication issues have occurred when starting up the server in VS Code terminals.
Unsupported Usage Types
The application has a file containing supported usage types, located here. The current lists consist of types the application has encountered so far, so there are likely to be some types not yet handled. When querying your data, you may come across unsupported types, with a warning like this:
2021-03-31T09:48:38.815Z [CostAndUsageReports] warn: Unexpected usage type for storage service: EU-WarmStorage-ByteHrs-EFS
If you come across a similar warning message, congratulations! You have found a usage type that the app currently isn’t aware of - this is a great opportunity for you to improve Cloud Carbon Footprint!
The steps to resolve are:
- Determine the type in question based on the warning message
- Add the type to the respective list in the estimates.cache.json file and restart the application server
- Submit an issue or pull request with the update
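As a hypothetical illustration of the second step, a supported-types list with the new type added might look like this — the real file name and structure may differ, so check the supported usage types file linked above:

```typescript
// Hypothetical supported-types list; the real structure may differ.
const STORAGE_USAGE_TYPES: string[] = [
  'TimedStorage-ByteHrs', // an example S3 storage usage type
  'EU-WarmStorage-ByteHrs-EFS', // added after the warning shown above
]
```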
Options for AWS Authentication
We currently support several modes of authentication with AWS, which you can see in packages/aws/src/application/AWSCredentialsProvider.ts:
"default" - this uses the AWS credentials that exist in the environment the application is running in, for example if you configure your local environment.
"AWS" - this is used to authenticate via an AWS role that has the necessary permissions to query the CloudWatch and Cost Explorer APIs.
"GCP" - this is used by GCP Service Accounts that authenticate via a temporary AWS STS token. This method is used by the application when deployed to Google App Engine.
"EC2-METADATA" - this uses the AWS credentials that are automatically provided via an Instance Profile when you run the application on an EC2 instance. In order for this to work, you need to make sure that the appropriate IAM role is already created (as specified in step 1 of this document), and associated with the EC2 instance. See more information here.
"ECS-METADATA" - this uses the AWS credentials that are provided to containers in ECS. You need to make sure that the appropriate IAM role is created, and supplied as the task role to the container in the task definition. See more information here.
The authentication mode is set inside packages/common/src/Config.ts.
api/.env is where you configure the options for the "GCP" mode, and set the AWS Accounts you want to run the application against. You can read more about this mode of authentication in .adr/adr_5_aws_authentication.txt, as well as this article: https://cevo.com.au/post/2019-07-29-using-gcp-service-accounts-to-access-aws/
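Setting the mode might look like the following sketch — the field names here are assumptions, so check packages/common/src/Config.ts for the real structure:

```typescript
// Sketch of selecting an authentication mode; field names are assumptions.
type AWSAuthMode = 'default' | 'AWS' | 'GCP' | 'EC2-METADATA' | 'ECS-METADATA'

const awsAuthentication: { mode: AWSAuthMode } = {
  // The "GCP" mode additionally reads its options from api/.env,
  // as described above.
  mode: 'GCP',
}
```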