# Multimedia
## Volume
The framework supports some base functions to control the audio sinks' volume.
### Actions
`setMasterVolume(float volume)`
: Sets the volume of the host machine (volume in range 0-1)

`setMasterVolume(PercentType percent)`
: Sets the volume of the host machine

`increaseMasterVolume(float percent)`
: Increases the volume by the given percent

`decreaseMasterVolume(float percent)`
: Decreases the volume by the given percent

`float getMasterVolume()`
: Returns the current volume as a float between 0 and 1
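
For instance, these actions can be combined in a DSL rule. The following is a minimal sketch, assuming a hypothetical Switch item named `PartyMode` (the item name is not part of the framework):

```java
rule "Raise volume for party mode"
when
    Item PartyMode changed to ON   // PartyMode is a hypothetical Switch item
then
    // Remember the current level (a float between 0 and 1)
    val float current = getMasterVolume()
    logInfo("volume", "Master volume was " + current)
    // Raise the host machine's volume to 80%
    setMasterVolume(new PercentType(80))
end
```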
## Audio
openHAB is able to play sound either from the file system (files need to be put in the folder `$OPENHAB_CONF/sounds`), from URLs (e.g. Internet radio streams) or generated by text-to-speech engines (which are available as optional Voice add-ons).
There are different options for output devices (so-called audio sinks):
The distribution comes with these options built-in:
| Output Device | Audio Sink | Description |
|---|---|---|
| enhancedjavasound | System Speaker (with mp3 support) | This uses the JRE sound drivers plus an additional 3rd party library, which adds support for mp3 files. |
| webaudio | Web Audio | Convenient if sounds should not be played on the server but on the client: this sink sends the audio stream through HTTP to web clients, which then play it back in the browser. Obviously, the browser needs to be open and have a compatible openHAB UI running. Currently, this feature is only supported by HABPanel. |
Additionally, certain bindings register their supported devices as audio sinks, e.g. Sonos speakers.
### Console commands
To check which audio sinks are available, you can use the console:

    openhab> openhab:audio sinks
    enhancedjavasound
    webaudio

You can define the default audio sink either by textual configuration in `$OPENHAB_CONF/services/runtime.cfg` or in the UI in Settings -> Audio.
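
For the textual variant, the corresponding entry in `runtime.cfg` would look like the following sketch; the sink ID is just an example, any ID listed by the console command above can be used:

```
org.openhab.audio:defaultSink=enhancedjavasound
```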
In order to play a sound or stream a URL, you can use the following commands on the console:

    openhab> openhab:audio play doorbell.mp3
    openhab> openhab:audio stream example.com

### Actions
Alternatively, the `playSound()` or `playStream()` functions can be used in DSL rules:
`playSound(String filename)`
: plays a sound from the sounds folder to the default sink

`playSound(String filename, PercentType volume)`
: plays a sound with the given volume from the sounds folder to the default sink

`playSound(String sink, String filename)`
: plays a sound from the sounds folder to the given sink(s)

`playSound(String sink, String filename, PercentType volume)`
: plays a sound with the given volume from the sounds folder to the given sink(s)

`playStream(String url)`
: plays an audio stream from a URL to the default sink (set the URL to `null` to stop streaming)

`playStream(String sink, String url)`
: plays an audio stream from a URL to the given sink(s) (set the URL to `null` to stop streaming)
### Examples
playSound("doorbell.mp3")
playSound("doorbell.mp3", new PercentType(25))
playSound("sonos:PLAY5:kitchen", "doorbell.mp3")
playSound("sonos:PLAY5:kitchen", "doorbell.mp3", new PercentType(25))
playStream("example.com")
playStream("sonos:PLAY5:kitchen", "example.com")
## Voice
### Text-to-Speech
In order to use text-to-speech, you need to install at least one TTS service.
#### Console Commands
Once you have done so, you will find voices available in your system:

    openhab> openhab:voice voices
    mactts:Jorge Jorge (es_ES)
    mactts:Moira Moira (en_IE)
    mactts:Alice Alice (it_IT)
    mactts:Ioana Ioana (ro_RO)
    mactts:Kanya Kanya (th_TH)

You can define a default TTS service and a default voice to use either by textual configuration in `$OPENHAB_CONF/services/runtime.cfg` or in the UI in Settings -> Voice.
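
For the textual variant, the `runtime.cfg` entries would look like the following sketch; the service and voice IDs are examples taken from the listing above:

```
org.openhab.voice:defaultTTS=mactts
org.openhab.voice:defaultVoice=mactts:Alice
```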
In order to say a text, you can enter such a command on the console (the default voice and default audio sink will be used):

    openhab> openhab:voice say Hello world!

#### Actions
Alternatively, you can execute such commands within DSL rules by using the `say()` function:
say("Hello world!")
say("Hello world!", new PercentType(25))
say("Hello world!", "voicerss:enGB")
say("Hello world!", "voicerss:enGB", new PercentType(25))
say("Hello world!", "voicerss:enUS", "sonos:PLAY5:kitchen")
say("Hello world!", "voicerss:enUS", "sonos:PLAY5:kitchen", new PercentType(25))
You can select a particular voice (second parameter) and a particular audio sink (third parameter). If no voice or no audio sink is provided, the default voice and default audio sink will be used.
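
As an illustration, a minimal rule sketch that announces an event using the default voice and sink; `FrontDoor` is a hypothetical Contact item:

```java
rule "Announce front door"
when
    Item FrontDoor changed to OPEN   // FrontDoor is a hypothetical Contact item
then
    // Spoken with the default voice on the default audio sink
    say("The front door has been opened!")
end
```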
### Speech-to-Text
Although openHAB already defines interfaces for speech-to-text, no add-on providing this functionality is available yet. The only option right now is therefore the Android voice recognition feature built into the openHAB app for Android.
### Human Language Interpreter
Human language interpreters are meant to process text, e.g. the result of voice recognition or input from other sources.
There are three implementations available by default:
| Interpreter | Type | Description |
|---|---|---|
| rulehli | Rule-based Interpreter | This mimics the behavior of the Android app - it sends the string as a command to a (configurable, default is "VoiceCommand") item and expects a rule to pick it up and further process it. |
| system | Built-in Interpreter | This is a simple implementation that understands basic home automation commands like "turn on the light" or "stop the music". It currently supports only English, German and French, and the vocabulary is still very limited. The exact syntax still needs to be documented; for the moment you need to refer to the source code. |
| opennlp | HABot OpenNLP Interpreter | A machine-learning natural language processor based on Apache OpenNLP for intent classification and entity extraction. |
To test the interpreter, you can enter such a command on the console (assuming you have an item with label 'light'):

    openhab> openhab:voice interpret turn on the light

The default human language interpreter will be used. In case of an interpretation error, the error message will be said using the default voice and default audio sink.
Again, such a command can also be entered within DSL rules, using the `interpret()` function:
interpret("turn on the light")
var String result = interpret("turn on the light", "system")
result = interpret("turn on the light", "system", null)
result = interpret(VoiceCommand.state, "system", "sonos:PLAY5:kitchen")
You can select a particular human language interpreter (second parameter) and a particular audio sink (third parameter). The audio sink parameter is used when the interpretation fails; in this case, the error message is said using the default voice and the provided audio sink. If the provided audio sink is set to null, the error message will not be said. If no human language interpreter or no audio sink is provided, the default human language interpreter and default audio sink will be used. The interpretation result is returned as a string. Note that this result is always a null string with the rule-based Interpreter (rulehli).
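
To show how this fits together with the rule-based interpreter's workflow, here is a minimal sketch of a rule that picks up commands sent to the `VoiceCommand` item (the default item name used by rulehli) and hands them to the built-in interpreter:

```java
rule "Handle voice command"
when
    Item VoiceCommand received command   // VoiceCommand is the rulehli default item
then
    // Interpret with the built-in interpreter; passing null as the sink
    // suppresses the spoken error message on failure
    var String result = interpret(VoiceCommand.state, "system", null)
    logInfo("voice", "Interpretation result: " + result)
end
```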