This page is part of archived documentation for openHAB 3.0.
# Amazon DynamoDB Persistence
This service allows you to persist state updates using the Amazon DynamoDB database. Query functionality is also fully supported.
Features:
- Writing/reading item state information to/from the Amazon DynamoDB database
- Configurable database table names
- Automatic table creation
# Disclaimer
This service is provided "AS IS", and the user takes full responsibility for any charges or damage to Amazon data.
# Prerequisites
You must first set up an Amazon account as described below.
Users are recommended to familiarize themselves with AWS pricing before using this service. Note that Amazon may charge for using this service to query/store data in DynamoDB; see the Amazon DynamoDB pricing pages for more details, and note the possible Free Tier benefits.
# Setting Up an Amazon Account
- Sign up for Amazon AWS.
- Select the AWS region in the AWS console using these instructions. Note the region identifier in the URL (e.g. `https://eu-west-1.console.aws.amazon.com/console/home?region=eu-west-1` means that the region id is `eu-west-1`).
- Create a user for openHAB with IAM:
  - Open Services -> IAM -> Users -> Create new Users. Enter `openhab` as the user name, keep "Generate an access key for each user" checked, and finally click Create.
  - Show User Security Credentials and record the keys displayed.
- Configure the user policy to have access to DynamoDB:
  - Open Services -> IAM -> Policies.
  - Check AmazonDynamoDBFullAccess and click Policy actions -> Attach.
  - Check the user created above and click Attach policy.
# Configuration
This service can be configured in the file `services/dynamodb.cfg`.
# Basic configuration
| Property | Default | Required | Description |
|---|---|---|---|
| accessKey | | Yes | access key as shown in Setting Up an Amazon Account |
| secretKey | | Yes | secret key as shown in Setting Up an Amazon Account |
| region | | Yes | AWS region ID as described in Setting Up an Amazon Account. The region needs to match the region that was used to create the user. |
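With these properties, a minimal `services/dynamodb.cfg` might look like the following (the key values are placeholders, not real credentials):

```
accessKey=ACCESS-KEY
secretKey=SECRET-KEY
region=eu-west-1
```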
# Configuration Using Credentials File
Alternatively, instead of specifying `accessKey` and `secretKey`, one can use a configuration profile (credentials) file.

| Property | Default | Required | Description |
|---|---|---|---|
| profilesConfigFile | | Yes | path to the credentials file, e.g. `/etc/openhab2/aws_creds`. Note that the user that runs openHAB must have appropriate read rights to the credentials file. For more details on the Amazon credentials file format, see the Amazon documentation. |
| profile | | Yes | name of the profile to use |
| region | | Yes | AWS region ID as described in Setting Up an Amazon Account. The region needs to match the region that was used to create the user. |
Example of service configuration file (`services/dynamodb.cfg`):

```
profilesConfigFile=/etc/openhab2/aws_creds
profile=fooprofile
region=eu-west-1
```
Example of credentials file (`/etc/openhab2/aws_creds`):

```
[fooprofile]
aws_access_key_id=testAccessKey
aws_secret_access_key=testSecretKey
```
# Advanced Configuration
In addition to the configuration properties above, the following are also available:
| Property | Default | Required | Description |
|---|---|---|---|
| readCapacityUnits | 1 | No | read capacity for the created tables |
| writeCapacityUnits | 1 | No | write capacity for the created tables |
| tablePrefix | `openhab-` | No | table prefix used in the name of created tables |
| bufferCommitIntervalMillis | 1000 | No | interval, in milliseconds, at which buffered data is committed (written) |
| bufferSize | 1000 | No | internal buffer size, in datapoints, used to batch writes to DynamoDB every `bufferCommitIntervalMillis` |
Typically you should not need to modify parameters related to buffering.
Refer to Amazon documentation on provisioned throughput for details on read/write capacity.
All item- and event-related configuration is done in the file `persistence/dynamodb.persist`.
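As a sketch, a `persistence/dynamodb.persist` file could look like the following; the item name and the chosen strategies are illustrative assumptions, not part of this service's documentation:

```
Strategies {
    everyHour : "0 0 * * * ?"
    default = everyChange
}

Items {
    Temperature_LivingRoom : strategy = everyChange, everyHour
}
```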
# Details
# Tables Creation
When an item is persisted via this service, a table is created (if necessary).
Currently, the service will create at most two tables for different item types.
The tables will be named `<tablePrefix><item-type>`, where `<item-type>` is either `bigdecimal` (numeric items) or `string` (string and complex items).
Each table will have three columns: `itemname` (item name), `timeutc` (timestamp in ISO 8601 format with millisecond accuracy), and `itemstate` (either a number or a string representing the item state).
# Buffering
By default, the service is asynchronous, which means that data is not written immediately to DynamoDB but is instead buffered in memory.
The size of the buffer, in datapoints, can be configured with `bufferSize`.
Every `bufferCommitIntervalMillis`, the whole buffer of data is flushed to DynamoDB.
It is recommended to keep buffering enabled, since the synchronous behaviour (writing data immediately) may adversely impact the whole system when many items are persisted at the same time.
Buffering can be disabled by setting `bufferSize` to zero.
The defaults should be suitable for many use cases.
# Caveats
When the tables are created, the read/write capacity is configured according to the configuration. However, the service does not modify the capacity of existing tables. As a workaround, you can modify the read/write capacity of existing tables using the Amazon console.
# Developer Notes
# Updating Amazon SDK
- Clean `lib/*`
- Update the SDK version in `scripts/fetch_sdk_pom.xml`. You can use the Maven online repository browser to find the latest version available online.
- Run `scripts/fetch_sdk.sh`
- Copy `scripts/target/site/dependencies.html` and `scripts/target/dependency/*.jar` to `lib/`
- Generate `build.properties` entries:
  `ls lib/*.jar | python -c "import sys; print(' ' + ',\\\\\\n '.join(map(str.strip, sys.stdin.readlines())))"`
- Generate `META-INF/MANIFEST.MF` `Bundle-ClassPath` entries:
  `ls lib/*.jar | python -c "import sys; print(' ' + ',\\n '.join(map(str.strip, sys.stdin.readlines())))"`
- Generate `.classpath` entries:
  `ls lib/*.jar | python -c "import sys;pre='<classpathentry exported=\"true\" kind=\"lib\" path=\"';post='\"/>'; print('\\t' + pre + (post + '\\n\\t' + pre).join(map(str.strip, sys.stdin.readlines())) + post)"`
After these changes, it's good practice to run the integration tests (against live AWS DynamoDB) in the `org.openhab.persistence.dynamodb.test` bundle.
See the README.md in the test bundle for more information on how to execute the tests.
# Running integration tests
To run the integration tests, one needs to provide AWS credentials.

Eclipse instructions:

- Run all tests (in the package `org.openhab.persistence.dynamodb.internal`) as JUnit Tests
- Configure the run configuration, and open the Arguments sheet
- In VM arguments, provide the credentials for AWS:

```
-DDYNAMODBTEST_REGION=REGION-ID
-DDYNAMODBTEST_ACCESS=ACCESS-KEY
-DDYNAMODBTEST_SECRET=SECRET
```

The tests will create tables with the prefix `dynamodb-integration-tests-`.
Note that when the tests are run, all data is removed from those tables!