Create automations using the Home APIs on Android

1. Before you begin

This is the second codelab in a series on building an Android app using the Google Home APIs. In this codelab, we walk through how to create home automations and offer some tips on best practices for using the APIs. If you haven't yet completed the first codelab, Build a mobile app using the Home APIs on Android, we recommend completing it before starting this one.

The Google Home APIs provide a set of libraries for Android developers to control smart home devices within the Google Home ecosystem. With these new APIs, developers can set up automations for a smart home that control device capabilities based on predefined conditions. Google also provides a Discovery API that lets you query devices to find out which attributes and commands they support.

Prerequisites

What you'll learn

  • How to create automations for smart home devices using the Home APIs.
  • How to use the Discovery APIs to explore the supported device capabilities.
  • How to employ the best practices when building your apps with the Home APIs.

2. Setting up your project

The following diagram illustrates the architecture of a Home APIs app:

Architecture of the Home APIs for an Android app

  • App Code: The core code that developers work on to build the app's user interface and the logic for interacting with the Home APIs SDK.
  • Home APIs SDK: The Home APIs SDK provided by Google works with the Home APIs Service in GMSCore to control smart home devices. Developers build apps that work with the Home APIs by bundling them with the Home APIs SDK.
  • GMSCore on Android: GMSCore, also known as Google Play services, is a Google platform that provides core system services, enabling key functionality on all certified Android devices. The home module of Google Play services contains the services that interact with the Home APIs.

In this codelab we will build on what we covered in Build a mobile app using the Home APIs on Android.

Make sure you have a structure with at least two supported devices set up and working on your account. Because we are going to set up automations in this codelab (where a change in one device's state triggers an action on another), you need two devices to see the results.

Get the Sample App

The source code for the Sample App is available on GitHub in the google-home/google-home-api-sample-app-android repository.

This codelab uses the examples from the codelab-branch-2 branch of the Sample App.

Navigate to where you want to save the project and clone the codelab-branch-2 branch:

$ git clone -b codelab-branch-2 https://github.com/google-home/google-home-api-sample-app-android.git

Note that this is a different branch than the one used in Build a mobile app using the Home APIs on Android. This branch of the codebase builds on where the first codelab left off. This time, the examples walk you through how to create automations. If you completed the prior codelab and got all the functionality working, you may choose to use the same Android Studio project to complete this codelab instead of using codelab-branch-2.

Once you have the source code compiled and ready to run on your mobile device, continue with the next section.

3. Learn about Automations

Automations are a set of "if this, then that" statements that can control device states based on selected factors, in an automated manner. Developers can use automations to build advanced interactive features into their apps.

Automations are made up of three different types of components known as nodes: starters, actions, and conditions. These nodes work together to automate behaviors using smart home devices. Typically, they are evaluated in the following order:

  1. Starter — Defines the initial conditions that activate the automation, such as a change to a trait value. An automation must have a Starter.
  2. Condition — Any additional constraints to evaluate after an automation has been triggered. The expression in a Condition must evaluate to true in order for the actions of an automation to execute.
  3. Action — Commands or state updates that are performed when all conditions have been met.

For example, you can have an automation that dims the lights in a room when a switch is toggled, while the TV in that room is turned on. In this example:

  • Starter — The Switch in the room is toggled.
  • Condition — The TV OnOff state is evaluated to be On.
  • Action — The lights in the same room as the Switch are dimmed.

These nodes are evaluated by the Automation Engine in either a serial or parallel fashion.

[Image: a sequential automation flow]

A Sequential Flow contains nodes that execute in sequential order. Typically, these would be starter, condition, and action.

[Image: a parallel automation flow]

A Parallel Flow may have multiple action nodes executing simultaneously, such as turning on multiple lights at the same time. Nodes following a parallel flow won't execute until all branches of the parallel flow finish.

There are other types of nodes in the automation schema. You can learn more about them in the Nodes section of the Home APIs Developer's Guide. Additionally, developers can combine different types of nodes to create complex automations, such as the following:

[Image: a complex automation combining sequential and parallel nodes]

Developers provide these nodes to the Automation Engine using a domain-specific language (DSL) created specifically for Google Home automations.

Explore the Automation DSL

A domain-specific language (DSL) is a language used to capture system behavior in code. The compiler generates data classes that are serialized to protocol buffer JSON and used to make calls to Google's Automation Services.

The DSL uses the following schema:

automation {
  name = "AutomationName"
  description = "An example automation description."
  isActive = true
  sequential {
    val onOffTrait = starter<_>(device1, OnOffLightDevice, OnOff)
    condition() { expression = onOffTrait.onOff equals true }
    action(device2, OnOffLightDevice) { command(OnOff.on()) }
  }
}

The automation in the preceding example synchronizes two lightbulbs. When device1's OnOff state changes to On (onOffTrait.onOff equals true), device2's OnOff state is changed to On (command(OnOff.on())).

When you are working with automations, know that there are resource limits.

Automations are a very useful tool to create automated capabilities in a smart home. In the most basic use case, you can explicitly code an automation to use specific devices and traits. But a more practical use case is one where the app lets the user configure the devices, commands, and parameters of an automation. The next section explains how to create an automation editor that lets the user do exactly that.

4. Build an automation editor

Within the Sample App, we will create an automation editor with which users can select devices, the capabilities (actions) they want to use, and how the automations are triggered using starters.

[Screenshots: the automation editor views in the Sample App]

Set up starters

The automation starter is the entry point for automation. A starter triggers an automation when a given event takes place. In the Sample App, we capture the automation starters using the StarterViewModel class, found in the StarterViewModel.kt source file, and display the editor view using the StarterView (StarterView.kt).

A starter node needs the following elements:

  • Device
  • Trait
  • Operation
  • Value

The device and trait can be selected from the objects returned by the Devices API. The commands and parameters for each supported device are a more complex matter and need to be handled separately.

The app defines a pre-set list of operations:

// List of operations available when creating automation starters:
enum class Operation {
  EQUALS,
  NOT_EQUALS,
  GREATER_THAN,
  GREATER_THAN_OR_EQUALS,
  LESS_THAN,
  LESS_THAN_OR_EQUALS
}

The app then keeps track of the supported operations for each supported trait:

// List of operations available when comparing booleans:
object BooleanOperations : Operations(listOf(
    Operation.EQUALS,
    Operation.NOT_EQUALS
))
// List of operations available when comparing values:
object LevelOperations : Operations(listOf(
    Operation.GREATER_THAN,
    Operation.GREATER_THAN_OR_EQUALS,
    Operation.LESS_THAN,
    Operation.LESS_THAN_OR_EQUALS
))
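The app also needs to look up which operations apply to a selected trait. Step 4.1.3 later refers to this lookup as StarterViewModel.starterOperations; a minimal sketch of such a map, modeled on the actionActions map shown later in the "Set up actions" section and assuming the same trait factories:

// Sketch: map each supported trait to the operations it supports
// (kept as starterOperations in StarterViewModel.kt).
val starterOperations: Map<TraitFactory<out Trait>, Operations> = mapOf(
    OnOff to BooleanOperations,
    LevelControl to LevelOperations,
    // Entries for other supported traits (BooleanState, OccupancySensing, Thermostat) omitted here.
)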

In a similar manner, the Sample App keeps track of values assignable to traits:

enum class OnOffValue {
  On,
  Off,
}
enum class ThermostatValue {
  Heat,
  Cool,
  Off,
}

And keeps track of a mapping between the values defined by the app and those defined by the APIs:

val valuesOnOff: Map<OnOffValue, Boolean> = mapOf(
  OnOffValue.On to true,
  OnOffValue.Off to false,
)
val valuesThermostat: Map<ThermostatValue, ThermostatTrait.SystemModeEnum> = mapOf(
  ThermostatValue.Heat to ThermostatTrait.SystemModeEnum.Heat,
  ThermostatValue.Cool to ThermostatTrait.SystemModeEnum.Cool,
  ThermostatValue.Off to ThermostatTrait.SystemModeEnum.Off,
)

The app then displays a set of view elements that users can use to select the required fields.

Uncomment Step 4.1.1 in the StarterView.kt file to render all starter devices and implement the click callback in a DropdownMenu:

val deviceVMs: List<DeviceViewModel> = structureVM.deviceVMs.collectAsState().value
...
DropdownMenu(expanded = expandedDeviceSelection, onDismissRequest = { expandedDeviceSelection = false }) {
// TODO: 4.1.1 - Starter device selection dropdown
// for (deviceVM in deviceVMs) {
//     DropdownMenuItem(
//         text = { Text(deviceVM.name) },
//         onClick = {
//             scope.launch {
//                 starterDeviceVM.value = deviceVM
//                 starterType.value = deviceVM.type.value
//                 starterTrait.value = null
//                 starterOperation.value = null
//             }
//             expandedDeviceSelection = false
//         }
//     )
// }
}

Uncomment Step 4.1.2 in the StarterView.kt file to render all traits of the starter device and implement the click callback in a DropdownMenu:

// Selected starter attributes for StarterView on screen:
val starterDeviceVM: MutableState<DeviceViewModel?> = remember {
    mutableStateOf(starterVM.deviceVM.value) }
...
DropdownMenu(expanded = expandedTraitSelection, onDismissRequest = { expandedTraitSelection = false }) {
// TODO: 4.1.2 - Starter device traits selection dropdown
// val deviceTraits = starterDeviceVM.value?.traits?.collectAsState()?.value!!
// for (trait in deviceTraits) {
//     DropdownMenuItem(
//         text = { Text(trait.factory.toString()) },
//         onClick = {
//             scope.launch {
//                 starterTrait.value = trait.factory
//                 starterOperation.value = null
//             }
//             expandedTraitSelection = false
//         }
//     )
// }
}

Uncomment Step 4.1.3 in the StarterView.kt file to render all operations of the selected trait and implement the click callback in a DropdownMenu:

val starterOperation: MutableState<StarterViewModel.Operation?> = remember {
    mutableStateOf(starterVM.operation.value) }
...
DropdownMenu(expanded = expandedOperationSelection, onDismissRequest = { expandedOperationSelection = false }) {
    // ...
    if (!StarterViewModel.starterOperations.containsKey(starterTrait.value))
        return@DropdownMenu
    // TODO: 4.1.3 - Starter device trait operations selection dropdown
    // val operations: List<StarterViewModel.Operation> = StarterViewModel.starterOperations.get(starterTrait.value ?: OnOff)?.operations!!
    // for (operation in operations) {
    //     DropdownMenuItem(
    //         text = { Text(operation.toString()) },
    //         onClick = {
    //             scope.launch {
    //                 starterOperation.value = operation
    //             }
    //             expandedOperationSelection = false
    //         }
    //     )
    // }
}

Uncomment Step 4.1.4 in the StarterView.kt file to render all values of the selected trait and implement the click callback in a DropdownMenu:

when (starterTrait.value) {
  OnOff -> {
    ...
    DropdownMenu(expanded = expandedBooleanSelection, onDismissRequest = { expandedBooleanSelection = false }) {
      // TODO: 4.1.4 - Starter device trait values selection dropdown
      // for (value in StarterViewModel.valuesOnOff.keys) {
      //     DropdownMenuItem(
      //         text = { Text(value.toString()) },
      //         onClick = {
      //             scope.launch {
      //                 starterValueOnOff.value = StarterViewModel.valuesOnOff.get(value)
      //             }
      //             expandedBooleanSelection = false
      //         }
      //     )
      // }
    }
    ...
  }
  LevelControl -> {
    ...
  }
}

Uncomment Step 4.1.5 in the StarterView.kt file to store all starter ViewModel variables into the draft automation's starter ViewModel (draftVM.starterVMs).

val draftVM: DraftViewModel = homeAppVM.selectedDraftVM.collectAsState().value!!
// Save starter button:
Button(
    enabled = isOptionsSelected && isValueProvided,
    onClick = {
        scope.launch {
            // TODO: 4.1.5 - store all starter ViewModel variables into draft ViewModel
            // starterVM.deviceVM.emit(starterDeviceVM.value)
            // starterVM.trait.emit(starterTrait.value)
            // starterVM.operation.emit(starterOperation.value)
            // starterVM.valueOnOff.emit(starterValueOnOff.value!!)
            // starterVM.valueLevel.emit(starterValueLevel.value!!)
            // starterVM.valueBooleanState.emit(starterValueBooleanState.value!!)
            // starterVM.valueOccupancy.emit(starterValueOccupancy.value!!)
            // starterVM.valueThermostat.emit(starterValueThermostat.value!!)
            //
            // draftVM.starterVMs.value.add(starterVM)
            // draftVM.selectedStarterVM.emit(null)
        }
    })
{ Text(stringResource(R.string.starter_button_create)) }

Running the app and selecting a new automation and starter should show a view like the following:

[Screenshot: the starter editor view]

The Sample App only supports starters based on device traits.

Set up actions

The automation action reflects the central purpose of an automation: how it effects a change in the physical world. In the Sample App, we capture the automation actions using the ActionViewModel class, and display the editor view using the ActionView class.

The Sample App uses the following Home APIs entities to define the automation action nodes:

  • Device
  • Trait
  • Command
  • Value (Optional)

Each device command action uses a command, but some also require an associated parameter value, such as MoveToLevel() with a target percentage.
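For example, in the automation DSL an OnOff action only needs its command, while a LevelControl action also carries a level value. A rough sketch (the device variables and the dimmable device type are illustrative placeholders; the moveToLevelWithOnOff call mirrors the one used later in Step 4.4.2):

// Sketch: a parameterless command vs. a command that takes a value.
action(device2, OnOffLightDevice) {
  command(OnOff.on())  // no parameter value needed
}
action(device3, DimmableLightDevice) {
  command(
    LevelControl.moveToLevelWithOnOff(
      128u.toUByte(),  // target level value
      0u,
      LevelControlTrait.OptionsBitmap(),
      LevelControlTrait.OptionsBitmap()
    )
  )
}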

The device and trait can be selected from the objects returned by the Devices API.

The app defines a preset list of commands:

// List of commands available when creating automation actions:
enum class Action {
  ON,
  OFF,
  MOVE_TO_LEVEL,
  MODE_HEAT,
  MODE_COOL,
  MODE_OFF,
}

The app keeps track of supported operations for each supported trait:

// List of actions available for the OnOff trait:
object OnOffActions : Actions(listOf(
    Action.ON,
    Action.OFF,
))
// List of actions available for the LevelControl trait:
object LevelActions : Actions(listOf(
    Action.MOVE_TO_LEVEL
))
// List of actions available for the Thermostat trait:
object ThermostatActions : Actions(listOf(
    Action.MODE_HEAT,
    Action.MODE_COOL,
    Action.MODE_OFF,
))
// Map traits to the actions they support:
val actionActions: Map<TraitFactory<out Trait>, Actions> = mapOf(
    OnOff to OnOffActions,
    LevelControl to LevelActions,
    // BooleanState - No Actions
    // OccupancySensing - No Actions
    Thermostat to ThermostatActions,
)

For commands that take one or more parameters, there is also a variable:

val valueLevel: MutableStateFlow<UByte?>

The app displays a set of view elements that users can use to select the required fields.

Uncomment Step 4.2.1 in the ActionView.kt file to render all action devices and implement the click callback in a DropdownMenu to set actionDeviceVM:

val deviceVMs = structureVM.deviceVMs.collectAsState().value
...
DropdownMenu(expanded = expandedDeviceSelection, onDismissRequest = { expandedDeviceSelection = false }) {
// TODO: 4.2.1 - Action device selection dropdown
// for (deviceVM in deviceVMs) {
//     DropdownMenuItem(
//         text = { Text(deviceVM.name) },
//         onClick = {
//             scope.launch {
//                 actionDeviceVM.value = deviceVM
//                 actionTrait.value = null
//                 actionAction.value = null
//             }
//             expandedDeviceSelection = false
//         }
//     )
// }
}

Uncomment Step 4.2.2 in the ActionView.kt file to render all traits of actionDeviceVM and implement the click callback in a DropdownMenu to set actionTrait, the trait to which the command belongs:

val actionDeviceVM: MutableState<DeviceViewModel?> = remember {
    mutableStateOf(actionVM.deviceVM.value) }
...
DropdownMenu(expanded = expandedTraitSelection, onDismissRequest = { expandedTraitSelection = false }) {
// TODO: 4.2.2 - Action device traits selection dropdown
// val deviceTraits: List<Trait> = actionDeviceVM.value?.traits?.collectAsState()?.value!!
// for (trait in deviceTraits) {
//     DropdownMenuItem(
//         text = { Text(trait.factory.toString()) },
//         onClick = {
//             scope.launch {
//                 actionTrait.value = trait
//                 actionAction.value = null
//             }
//             expandedTraitSelection = false
//         }
//     )
// }
}

Uncomment Step 4.2.3 in the ActionView.kt file to render all available actions of actionTrait and implement the click callback in a DropdownMenu to set actionAction, the selected automation action:

DropdownMenu(expanded = expandedActionSelection, onDismissRequest = { expandedActionSelection = false }) {
// ...
if (!ActionViewModel.actionActions.containsKey(actionTrait.value?.factory))
return@DropdownMenu
// TODO: 4.2.3 - Action device trait actions (commands) selection dropdown
// val actions: List<ActionViewModel.Action> = ActionViewModel.actionActions.get(actionTrait.value?.factory)?.actions!!
// for (action in actions) {
//     DropdownMenuItem(
//         text = { Text(action.toString()) },
//         onClick = {
//             scope.launch {
//                 actionAction.value = action
//             }
//             expandedActionSelection = false
//         }
//     )
// }
}

Uncomment Step 4.2.4 in the ActionView.kt file to render the available values for the selected trait action (command) and store the value in actionValueLevel in the value change callback:

when (actionTrait.value?.factory) {
  LevelControl -> {
    // TODO: 4.2.4 - Action device trait action(command) values selection widget
    // Column (Modifier.padding(horizontal = 16.dp, vertical = 8.dp).fillMaxWidth()) {
    //     Text(stringResource(R.string.action_title_value), fontSize = 16.sp, fontWeight = FontWeight.SemiBold)
    // }
    //
    // Box (Modifier.padding(horizontal = 24.dp, vertical = 8.dp)) {
    //     LevelSlider(value = actionValueLevel.value?.toFloat()!!, low = 0f, high = 254f, steps = 0,
    //         modifier = Modifier.padding(top = 16.dp),
    //         onValueChange = { value : Float -> actionValueLevel.value = value.toUInt().toUByte() },
    //         isEnabled = true
    //     )
    // }
    ...
}

Uncomment Step 4.2.5 in the ActionView.kt file to store all action ViewModel variables in the draft automation's action ViewModel (draftVM.actionVMs):

val draftVM: DraftViewModel = homeAppVM.selectedDraftVM.collectAsState().value!!
// Save action button:
Button(
    enabled = isOptionsSelected,
    onClick = {
        scope.launch {
            // TODO: 4.2.5 - store all action ViewModel variables into draft ViewModel
            // actionVM.deviceVM.emit(actionDeviceVM.value)
            // actionVM.trait.emit(actionTrait.value)
            // actionVM.action.emit(actionAction.value)
            // actionVM.valueLevel.emit(actionValueLevel.value)
            //
            // draftVM.actionVMs.value.add(actionVM)
            // draftVM.selectedActionVM.emit(null)
        }
    })
{ Text(stringResource(R.string.action_button_create)) }

Running the app and selecting a new automation and action should result in a view like the following:

[Screenshot: the action editor view]

We only support actions based on device traits in the Sample App.

Render a draft automation

When the DraftViewModel is complete, it can be rendered by HomeAppView.kt:

fun HomeAppView (homeAppVM: HomeAppViewModel) {
  ...
  // If a draft automation is selected, show the draft editor:
  if (selectedDraftVM != null) {
    DraftView(homeAppVM)
  }
  ...
}

In DraftView.kt:

fun DraftView (homeAppVM: HomeAppViewModel) {
  val draftVM: DraftViewModel = homeAppVM.selectedDraftVM.collectAsState().value!!
  ...
  // Draft Starters:
  DraftStarterList(draftVM)
  // Draft Actions:
  DraftActionList(draftVM)
}

Create an automation

Now that you have learned how to create starters and actions, you are ready to create an automation draft and send it to the Automation API. The API has a createAutomation() function that takes an automation draft as an argument and returns a new automation instance.

The draft automation preparation takes place in the DraftViewModel class in the Sample App. Look at the getDraftAutomation() function to learn more about how we structure the automation draft using the starter and action variables from the previous sections.

Uncomment Step 4.4.1 in the DraftViewModel.kt file to create the "select" expressions required to create the automation graph when the starter trait is OnOff:

val starterVMs: List<StarterViewModel> = starterVMs.value
val actionVMs: List<ActionViewModel> = actionVMs.value
...
fun getDraftAutomation() : DraftAutomation {
  ...
  val starterVMs: List<StarterViewModel> = starterVMs.value
  ...
  return automation {
    this.name = name
    this.description = description
    this.isActive = true
    // The sequential block wrapping all nodes:
    sequential {
      // The select block wrapping all starters:
      select {
        // Iterate through the selected starters:
        for (starterVM in starterVMs) {
          // The sequential block for each starter (should wrap the Starter Expression!)
          sequential {
            ...
            val starterTrait: TraitFactory<out Trait> = starterVM.trait.value!!
            ...
            when (starterTrait) {
              OnOff -> {
                // TODO: 4.4.1 - Set starter expressions according to trait type
                //   val onOffValue: Boolean = starterVM.valueOnOff.value
                //   val onOffExpression: TypedExpression<out OnOff> =
                //       starterExpression as TypedExpression<out OnOff>
                //   when (starterOperation) {
                //       StarterViewModel.Operation.EQUALS ->
                //           condition { expression = onOffExpression.onOff equals onOffValue }
                //       StarterViewModel.Operation.NOT_EQUALS ->
                //           condition { expression = onOffExpression.onOff notEquals onOffValue }
                //       else -> { MainActivity.showError(this, "Unexpected operation for OnOff") }
                //   }
              }
              LevelControl -> {
                ...
        // Function to allow manual execution of the automation:
        manualStarter()
        ...
}

Uncomment Step 4.4.2 in the DraftViewModel.kt file to create the parallel expressions required to create the automation graph when the selected action trait is LevelControl and the selected action is MOVE_TO_LEVEL:

val starterVMs: List<StarterViewModel> = starterVMs.value
val actionVMs: List<ActionViewModel> = actionVMs.value
...
fun getDraftAutomation() : DraftAutomation {
  ...
  return automation {
    this.name = name
    this.description = description
    this.isActive = true
    // The sequential block wrapping all nodes:
    sequential {
      ...
      // Parallel block wrapping all actions:
      parallel {
        // Iterate through the selected actions:
        for (actionVM in actionVMs) {
          val actionDeviceVM: DeviceViewModel = actionVM.deviceVM.value!!
          // Action Expression that the DSL will check for:
          action(actionDeviceVM.device, actionDeviceVM.type.value.factory) {
            val actionCommand: Command = when (actionVM.action.value) {
              ActionViewModel.Action.ON -> { OnOff.on() }
              ActionViewModel.Action.OFF -> { OnOff.off() }
              // TODO: 4.4.2 - Set starter expressions according to trait type
              // ActionViewModel.Action.MOVE_TO_LEVEL -> {
              //     LevelControl.moveToLevelWithOnOff(
              //         actionVM.valueLevel.value!!,
              //         0u,
              //         LevelControlTrait.OptionsBitmap(),
              //         LevelControlTrait.OptionsBitmap()
              //     )
              // }
              ActionViewModel.Action.MODE_HEAT -> {
                SimplifiedThermostat.setSystemMode(SimplifiedThermostatTrait.SystemModeEnum.Heat)
              }
              ...
}

The last step in completing an automation is to create it by calling the Automation API with the DraftAutomation returned by the getDraftAutomation() function.

Uncomment Step 4.4.3 in the HomeAppViewModel.kt file to create the automation by calling the Home APIs and handling exceptions:

fun createAutomation(isPending: MutableState<Boolean>) {
  viewModelScope.launch {
    val structure : Structure = selectedStructureVM.value?.structure!!
    val draft : DraftAutomation = selectedDraftVM.value?.getDraftAutomation()!!
    isPending.value = true
    // TODO: 4.4.3 - Call the Home API to create automation and handle exceptions
    // // Call Automation API to create an automation from a draft:
    // try {
    //     structure.createAutomation(draft)
    // }
    // catch (e: Exception) {
    //     MainActivity.showError(this, e.toString())
    //     isPending.value = false
    //     return@launch
    // }
    // Scrap the draft and automation candidates used in the process:
    selectedCandidateVMs.emit(null)
    selectedDraftVM.emit(null)
    isPending.value = false
  }
}

Now run the app and see the changes on your device!

Once you have selected a starter and an action, you are ready to create the automation:

[Screenshot: creating the automation]

Make sure that you name your automation something unique, then tap the Create Automation button, which should call the APIs and bring you back to the automations list view with your automation:

[Screenshot: the automations list view showing the new automation]

Tap the automation you just created, and see how it's returned by the APIs.

[Screenshot: the automation details returned by the APIs]

Be aware that the API returns a value indicating whether or not an automation is valid and currently active. It is possible to create automations that don't pass validation when they're parsed on the server side. If an automation fails validation during parsing, isValid is set to false, indicating that the automation is invalid and inactive. If your automation is invalid, check the automation.validationIssues field for details.
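For example, after calling createAutomation() you might guard on these fields before relying on the new automation. A minimal sketch, assuming the isValid and validationIssues properties mentioned above are exposed on the returned automation instance:

// Sketch: verify that the new automation passed server-side validation.
val automation = structure.createAutomation(draft)
if (!automation.isValid) {
    // Surface the server-side validation issues to the user:
    MainActivity.showError(this, "Automation invalid: ${automation.validationIssues}")
}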

Make sure your automation is set as valid and active, and then you can try out the automation.

Try your automation

Automations can be executed in two ways:

  1. With a starter event. If the conditions match, this triggers the action you set in the automation.
  2. With a manual execution API call.

If a draft automation has a manualStarter() defined in its automation draft DSL block, the Automation Engine supports manual execution for that automation. This is already present in the code examples in the Sample App, as sketched below.
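A minimal sketch of where manualStarter() fits in the draft DSL, mirroring the select block from Step 4.4.1:

sequential {
  select {
    // Device-trait starters selected by the user go here...
    // Adding manualStarter() lets the automation also be run on demand
    // with automation.execute():
    manualStarter()
  }
  // conditions and actions follow
}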

While you are still on the automation view screen on your mobile device, tap the Manual Execute button. This should call automation.execute(), which runs your action command on the device you selected when setting up the automation.

Once you have validated the action command through manual execution, it's time to see whether the automation also executes using the starter you defined.

Go to the Devices tab, select the action device and the trait, and set it to a different value (for example, set light2's LevelControl (brightness) to 50%), as illustrated in the following screenshot:

[Screenshot: setting light2's brightness to 50%]

We will now try to trigger the automation using the starter device. Choose the starter device you selected when creating the automation. Toggle the trait you chose (for example, set starter outlet1's OnOff to On):

[Screenshot: toggling outlet1's OnOff trait to On]

You will see that this also executes the automation and sets the action device light2's LevelControl trait to the original value, 100%:

[Screenshot: light2's brightness returns to 100%]

Congratulations, you successfully used the Home APIs to create automations!

To learn more about the Automation API, see Android Automation API.

5. Discover Capabilities

The Home APIs include a dedicated Discovery API that developers can use to query which automation-capable traits a given device supports. The Sample App provides an example where you can use this API to discover which commands are available.

Discover Commands

In this section, we discuss how to discover supported CommandCandidates and how to create an automation based on discovered candidate nodes.

In the Sample App, we call device.candidates() to get a list of candidates, which may include instances of CommandCandidate, EventCandidate, or TraitAttributesCandidate.

Go to the HomeAppViewModel.kt file and uncomment Step 5.1.1 to retrieve the candidate list and filter by candidate type:

fun showCandidates() {
  ...
// TODO: 5.1.1 - Retrieve automation candidates, filtering to include CommandCandidate types only
// // Retrieve a set of initial automation candidates from the device:
// val candidates: Set<NodeCandidate> = deviceVM.device.candidates().first()
//
// for (candidate in candidates) {
//     // Check whether the candidate trait is supported:
//     if(candidate.trait !in HomeApp.supportedTraits)
//         continue
//     // Check whether the candidate type is supported:
//     when (candidate) {
//         // Command candidate type:
//         is CommandCandidate -> {
//             // Check whether the command candidate has a supported command:
//             if (candidate.commandDescriptor !in ActionViewModel.commandMap)
//                 continue
//         }
//         // Other candidate types are currently unsupported:
//         else -> { continue }
//     }
//
//     candidateVMList.add(CandidateViewModel(candidate, deviceVM))
// }
  ...
  // Store the ViewModels:
  selectedCandidateVMs.emit(candidateVMList)
}

See how it filters for CommandCandidate: the candidates returned by the API belong to different types, and the Sample App supports only CommandCandidate. Uncomment Step 5.1.2 in the commandMap defined in ActionViewModel.kt to set the supported commands:

// Map of supported commands from Discovery API:
val commandMap: Map<CommandDescriptor, Action> = mapOf(
    // TODO: 5.1.2 - Set current supported commands
    // OnOffTrait.OnCommand to Action.ON,
    // OnOffTrait.OffCommand to Action.OFF,
    // LevelControlTrait.MoveToLevelWithOnOffCommand to Action.MOVE_TO_LEVEL
)

Now that we can call the Discovery API and filter for the results that the Sample App supports, we'll discuss how to integrate this into the editor.

[Screenshot: discovered command candidates]

To learn more about the Discovery API, visit Leverage device discovery on Android.

Integrate the editor

The most common way to use discovered actions is to present them to the end user to select from. Just before the user selects the draft automation fields, we can show them the list of discovered actions, and depending on the value they select, we can pre-populate the action node in the automation draft.

The CandidatesView.kt file contains the view class that displays the discovered candidates. Uncomment Step 5.2.1 to enable the CandidateListItem's .clickable{} modifier, which sets homeAppVM.selectedDraftVM to a DraftViewModel created from the candidateVM:

fun CandidateListItem (candidateVM: CandidateViewModel, homeAppVM: HomeAppViewModel) {
    val scope: CoroutineScope = rememberCoroutineScope()
    Box (Modifier.padding(horizontal = 24.dp, vertical = 8.dp)) {
        Column (Modifier.fillMaxWidth().clickable {
        // TODO: 5.2.1 - Set the selectedDraftVM to the selected candidate
        // scope.launch { homeAppVM.selectedDraftVM.emit(DraftViewModel(candidateVM)) }
        }) {
            ...
        }
    }
}

Similar to Step 4.3, when selectedDraftVM is set, HomeAppView.kt renders DraftView(...) from DraftView.kt:

fun HomeAppView (homeAppVM: HomeAppViewModel) {
  ...
  val selectedDraftVM: DraftViewModel? by homeAppVM.selectedDraftVM.collectAsState()
  ...
  // If a draft automation is selected, show the draft editor:
  if (selectedDraftVM != null) {
    DraftView(homeAppVM)
  }
  ...
}

Try it again by tapping light2 - MOVE_TO_LEVEL, shown in the previous section, which prompts you to create a new automation based on the candidate's command:

[Screenshot: creating an automation from the selected candidate]

Now that you are familiar with automation creation in the Sample App, you can integrate automations in your apps.

6. Advanced Automation Examples

Before we wrap up, we'll discuss some additional automation DSL examples. These illustrate some of the advanced capabilities you can achieve with the APIs.

Time of Day as Starter

In addition to device traits, the Google Home APIs offer structure-based traits, such as Time. You can create an automation with a time-based starter, like the following:

automation {
  name = "AutomationName"
  description = "Do ... actions when time is up."
  isActive = true
  sequential {
    // starter
    val starter = starter<_>(structure, Time.ScheduledTimeEvent) {
      parameter(
        Time.ScheduledTimeEvent.clockTime(
          LocalTime.of(hour, min, sec, 0)
        )
      )
    }
    // action
    ...
  }
}

Assistant Broadcast as Action

The AssistantBroadcast trait is available either as a device-level trait in a SpeakerDevice (if the speaker supports it) or as a structure-level trait (because Google speakers and Android mobile devices can play Assistant broadcasts). For example:

automation {
  name = "AutomationName"
  description = "Broadcast in Speaker when ..."
  isActive = true
  sequential {
    // starter
    ...
    // action
    action(structure) {
      command(
        AssistantBroadcast.broadcast("Time is up!!")
      )
    }
  }
}

Use DelayFor and suppressFor

The Automation API also provides advanced operators such as delayFor, which delays commands, and suppressFor, which prevents an automation from being triggered again by the same events within a given span of time. Here are some examples using these operators:

sequential {
  val starterNode = starter<_>(device, OccupancySensorDevice, MotionDetection)
  // only proceed if there is currently motion taking place
  condition { starterNode.motionDetectionEventInProgress equals true }
  // ignore the starter for one minute after it was last triggered
  suppressFor(Duration.ofMinutes(1))

  // make announcements three seconds apart
  action(device, SpeakerDevice) {
    command(AssistantBroadcast.broadcast("Intruder detected!"))
  }
  delayFor(Duration.ofSeconds(3))
  action(device, SpeakerDevice) {
    command(AssistantBroadcast.broadcast("Intruder detected!"))
  }
  ...
}

Use AreaPresenceState in a starter

AreaPresenceState is a structure-level trait that detects whether anyone is at home.

For example, the following automation locks the doors when someone is home after 10pm:

automation {
  name = "Lock the doors when someone is home after 10pm"
  description = "1 starter, 2 actions"
  sequential {
    val unused =
      starter(structure, event = Time.ScheduledTimeEvent) {
        parameter(Time.ScheduledTimeEvent.clockTime(LocalTime.of(22, 0, 0, 0)))
      }
    val stateReaderNode = stateReader<_>(structure, AreaPresenceState)
    condition {
      expression =
        stateReaderNode.presenceState equals
          AreaPresenceStateTrait.PresenceState.PresenceStateOccupied
    }
    action(structure) { command(AssistantBroadcast.broadcast("Locks are being applied")) }
    for (lockDevice in lockDevices) {
      action(lockDevice, DoorLockDevice) {
        command(Command(DoorLock, DoorLockTrait.LockDoorCommand.requestId.toString(), mapOf()))
      }
    }
  }
}

Now that you are familiar with these advanced automation capabilities, go out and create awesome apps!

7. Congratulations!

Congratulations! You successfully completed the second part of developing an Android App using the Google Home APIs. Throughout this codelab, you explored the Automation and Discovery APIs.

We hope you enjoy building apps that creatively control devices within the Google Home ecosystem and build exciting automation scenarios using the Home APIs!

Next steps

  • Read Troubleshooting to learn how to effectively debug apps and troubleshoot issues involving the Home APIs.
  • You can reach out to us with any recommendations, or report any issues through the Issue Tracker, Smart Home support topic.