This page is part of archived documentation for openHAB 3.0.
# Amazon DynamoDB Persistence
This service allows you to persist state updates using the Amazon DynamoDB database. Query functionality is also fully supported.

Features:

- Writing/reading item state information to the DynamoDB NoSQL database
- Configurable database table names
- Automatic table creation

This service is provided "AS IS", and the user takes full responsibility for any charges or damage to Amazon data.
# Table of Contents

- Setting Up an Amazon Account
- Basic configuration
- Configuration Using Credentials File
- Advanced Configuration
- Tables Creation
- Developer Notes
You must first set up an Amazon account as described below.
Users are recommended to familiarize themselves with AWS pricing before using this service. Please note that there might be charges from Amazon when using this service to query/store data to DynamoDB. See the Amazon DynamoDB pricing pages for more details. Please also note possible Free Tier benefits.
# Setting Up an Amazon Account
- Sign up for Amazon AWS.
- Select the AWS region in the AWS console. Note the region identifier in the URL (e.g. `https://eu-west-1.console.aws.amazon.com/console/home?region=eu-west-1` means that the region id is `eu-west-1`).
- Create a user for openHAB with IAM:
  - Open Services -> IAM -> Users -> Create new Users. Enter `openhab` to User names, keep Generate an access key for each user checked, and finally click Create.
  - Show User Security Credentials and record the keys displayed.
- Configure the user policy to have access to DynamoDB:
  - Open Services -> IAM -> Policies.
  - Check AmazonDynamoDBFullAccess and click Policy actions -> Attach.
  - Check the user created above and click Attach policy.
This service can be configured in the file `services/dynamodb.cfg`.
# Basic configuration
| Property | Default | Required | Description |
|----------|---------|----------|-------------|
| accessKey | | Yes | Access key as shown in Setting Up an Amazon Account. |
| secretKey | | Yes | Secret key as shown in Setting Up an Amazon Account. |
| region | | Yes | AWS region ID as described in Setting Up an Amazon Account. The region needs to match the region that was used to create the user. |
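Putting these properties together, a minimal service configuration might look like the following. The values below are illustrative placeholders (the keys shown are AWS's documented example credentials), not working credentials:

```
accessKey=AKIAIOSFODNN7EXAMPLE
secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=eu-west-1
```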
# Configuration Using Credentials File
Alternatively, instead of specifying `accessKey` and `secretKey`, one can configure a configuration profile file.
| Property | Default | Required | Description |
|----------|---------|----------|-------------|
| profilesConfigFile | | Yes | Path to the credentials file. For example, `/etc/openhab2/aws_creds`. |
| profile | | Yes | Name of the profile to use. |
| region | | Yes | AWS region ID as described in Step 2 in Setting Up an Amazon Account. The region needs to match the region that was used to create the user. |
Example of service configuration file:

```
profilesConfigFile=/etc/openhab2/aws_creds
profile=fooprofile
region=eu-west-1
```
Example of credentials file (`/etc/openhab2/aws_creds`):

```
[fooprofile]
aws_access_key_id=testAccessKey
aws_secret_access_key=testSecretKey
```
# Advanced Configuration
In addition to the configuration properties above, the following are also available:
| Property | Default | Required | Description |
|----------|---------|----------|-------------|
| readCapacityUnits | 1 | No | Read capacity for the created tables. |
| writeCapacityUnits | 1 | No | Write capacity for the created tables. |
| tablePrefix | | No | Table prefix used in the name of created tables. |
| bufferCommitIntervalMillis | 1000 | No | Interval to commit (write) buffered data, in milliseconds. |
| bufferSize | 1000 | No | Internal buffer size in datapoints, used to batch writes to DynamoDB every `bufferCommitIntervalMillis`. |
Typically you should not need to modify parameters related to buffering.
Refer to the Amazon documentation on provisioned throughput for details on read/write capacity.
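For instance, to prefix all created tables with a custom name and disable write buffering, one could add the following to the service configuration (illustrative values):

```
tablePrefix=myhome-
bufferSize=0
```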
All item- and event-related configuration is done in the file `persistence/dynamodb.persist`.
# Tables Creation
When an item is persisted via this service, a table is created (if necessary).
Currently, the service will create at most two tables for different item types.
The tables will be named `<tablePrefix><item-type>`, where `<item-type>` is either `bigdecimal` (numeric items) or `string` (string and complex items).
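The naming rule can be sketched as follows. This is an illustrative helper, not part of the service; the prefix `openhab-` is just an example value for `tablePrefix`:

```python
def table_name(table_prefix: str, numeric: bool) -> str:
    """Derive the backing table name: <tablePrefix> followed by the
    item-type suffix described above."""
    return table_prefix + ("bigdecimal" if numeric else "string")

# A numeric item with prefix "openhab-" is stored in "openhab-bigdecimal",
# while string and complex items land in "openhab-string".
print(table_name("openhab-", numeric=True))
print(table_name("openhab-", numeric=False))
```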
Each table will have three columns: `itemname` (item name), `timeutc` (in ISO 8601 format with millisecond accuracy), and `itemstate` (either a number or string representing the item state).
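A state update for a Number item named `Temperature` would therefore be stored roughly as follows (illustrative values):

```
itemname:  Temperature
timeutc:   2021-01-31T12:00:00.001Z
itemstate: 21.5
```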
By default, the service is asynchronous, which means that data is not written immediately to DynamoDB but is instead buffered in-memory. The size of the buffer, in terms of datapoints, can be configured with `bufferSize`. Every `bufferCommitIntervalMillis` milliseconds, the whole buffer of data is flushed to DynamoDB. It is recommended to keep buffering enabled, since the synchronous behaviour (writing data immediately) might have an adverse impact on the whole system when many items are persisted at the same time. Buffering can be disabled by setting `bufferSize` to zero.
The defaults should be suitable in many use cases.
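The buffering behaviour described above can be sketched as follows. This is an illustrative model, not the service's actual implementation; the time-based flush driven by `bufferCommitIntervalMillis` is reduced to an explicit `flush()` call for brevity:

```python
class WriteBuffer:
    """Illustrative model of the write buffering described above."""

    def __init__(self, buffer_size=1000, flush_fn=print):
        self.buffer_size = buffer_size  # bufferSize; 0 disables buffering
        self.flush_fn = flush_fn        # stands in for the DynamoDB batch write
        self.points = []

    def add(self, point):
        if self.buffer_size == 0:
            # Buffering disabled: write synchronously, one datapoint at a time.
            self.flush_fn([point])
            return
        self.points.append(point)
        if len(self.points) >= self.buffer_size:
            # Buffer full: flush early instead of waiting for the interval.
            self.flush()

    def flush(self):
        # In the real service this runs every bufferCommitIntervalMillis.
        if self.points:
            self.flush_fn(self.points)
            self.points = []
```

With `buffer_size=0` every `add()` results in an immediate write, which is the synchronous mode the paragraph above recommends against under heavy load.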
When the tables are created, the read/write capacity is configured according to configuration. However, the service does not modify the capacity of existing tables. As a workaround, you can modify the read/write capacity of existing tables using the Amazon console.
# Developer Notes
# Updating Amazon SDK
- Update the SDK version in `scripts/fetch_sdk_pom.xml`. You can use the Maven online repository browser to find the latest version available online.
The following helper one-liners generate listings of `lib/*.jar` in the formats needed by the build files:

```shell
ls lib/*.jar | python -c "import sys; print(' ' + ',\\\\\\n '.join(map(str.strip, sys.stdin.readlines())))"
ls lib/*.jar | python -c "import sys; print(' ' + ',\\n '.join(map(str.strip, sys.stdin.readlines())))"
ls lib/*.jar | python -c "import sys;pre='<classpathentry exported=\"true\" kind=\"lib\" path=\"';post='\"/>'; print('\\t' + pre + (post + '\\n\\t' + pre).join(map(str.strip, sys.stdin.readlines())) + post)"
```
After these changes, it's good practice to run the integration tests (against live AWS DynamoDB). See the README.md in the test bundle for more information on how to execute the tests.
# Running integration tests
To run the integration tests, one needs to provide AWS credentials.

- Run all tests (in package `org.openhab.persistence.dynamodb.internal`) as JUnit tests
- Configure the run configuration, and open the Arguments sheet
- In VM arguments, provide the credentials for AWS:

```
-DDYNAMODBTEST_REGION=REGION-ID -DDYNAMODBTEST_ACCESS=ACCESS-KEY -DDYNAMODBTEST_SECRET=SECRET
```
The tests will create tables with a dedicated test prefix. Note that when the tests begin, all data is removed from those tables!