
Label Autodiscovery

By default, the agent automatically discovers which platform(s) it is running on and creates labels for the host and the containers it discovers, based on their respective metadata.

Currently, all major cloud providers, Docker hosts and containers, and Kubernetes nodes and pods support metadata autodiscovery, and more platforms will be added in the future. You can still run the agent on other platforms, but the agent may not correctly identify all the metadata about your platform until support is added. Please contact us if you use a platform we don't currently support, as it is usually very easy to add support if needed.

To make instance and metric discovery consistent across platforms, labels have been normalized. For example, instead of account_id for AWS instances and project_id for GCE instances, all cloud instances have cloud.account_id. The same label key works across all cloud providers, allowing you to easily filter and group your instances and metrics in Outlyer regardless of the cloud provider they run on.
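
For example, the same filter key applies whichever provider an instance runs on (the values below are made up for illustration):

    # Hypothetical example values; the same label key works on every provider
    cloud.account_id:123456789012     # e.g. an AWS account ID
    cloud.account_id:my-gcp-project   # e.g. a GCE project ID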

All host labels are inherited by any containers running on that host, so you can easily filter and group both hosts and containers by their metadata. This means that when deploying checks, be careful to also use the selector instance.type:host if you want a check to run only on hosts with a set of labels and not on the containers running on those hosts as well.
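
For example, a check that should run only on hosts carrying a custom label might use a selector along these lines (the env label and the way selectors are combined here are assumptions):

    # Hypothetical selector: run the check on matching hosts only,
    # not on the containers running on them
    instance.type:host env:production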

All the labels described below can be used for the following:

  • Check Selectors: You can use any of the labels below to select instances to run a check against.
  • Host Map: You can easily filter and group your instances using any of the labels discovered below.
  • Dashboard & Alert Metrics: Your metrics inherit most of the labels of the instance they originate from (a few, such as instance.mac_address, are left out), so you can filter and scope your metrics using the labels below.

The sections below describe the labels automatically collected for each instance type.

All Instances

At the core, the agent discovers instances. An instance can be a server or VM, a container running on a host, or a remote device. Every instance will have the following labels applied:

Label Description
instance.type The type of instance: host, container or device.
instance.hostname The hostname of the instance, discovered by various means depending on the instance type and platform.
instance.ip The IP address of the instance relative to the agent itself. This is used by checks so they make requests to the correct relative endpoints. On a host this will usually be 127.0.0.1 (localhost).
instance.alias OPTIONAL. Some instances may override the hostname of the instance shown in the UI with an alias instead.
instance.platform OPTIONAL. If the agent discovers it is running on a platform like Kubernetes, it will set this label so you can easily see all hosts running on a particular platform.
instance.parent OPTIONAL. All container instances will have the parent hostname set with this label so you can easily filter containers by the host they’re running on.
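
As an illustration, a container instance discovered by the agent might carry labels along these lines (all values below are hypothetical):

    # Hypothetical label values for a single container instance
    instance.type: container
    instance.hostname: web-1a2b3c
    instance.ip: 172.17.0.2
    instance.platform: kubernetes
    instance.parent: node-01.example.com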

Host Metadata

A host is the VM, server, or node machine the agent is running on. Depending on where the host is running (e.g. on a cloud provider or as part of a Kubernetes cluster), the agent will add additional labels as needed.

All hosts will have any custom labels set in the agent.yaml configuration file applied to them (see the sketch after the table below). They will also all have the following labels by default:

Label Description
instance.agent_version The version of the Outlyer agent running on the host
instance.os_name The operating system running on the host, e.g. Linux or Windows
instance.os_version The specific version of the operating system running on the host, e.g. ubuntu-16.04
instance.mac_address The MAC address of the host
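
A minimal sketch of custom labels in agent.yaml, assuming the configuration accepts a simple key/value labels map (check the agent configuration reference for the exact schema):

    # agent.yaml (sketch): custom labels applied to this host and inherited
    # by the containers running on it; the labels map shown here is an assumption
    labels:
      env: production
      team: payments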

Cloud Hosts

If a host is running on one of the major cloud providers (AWS, Google Cloud, Azure or IBM Cloud), the following labels are automatically discovered for the instance the agent is running on, in addition to the default host metadata:

Label Description
cloud.instance.id The unique instance ID for the host
cloud.provider The cloud provider the instance is running on: aws, gce, azure
cloud.service The cloud service the instance is running on, e.g. aws.ec2. If remote discovery via the cloud provider's APIs is used, this can also include instances for other services such as RDS
cloud.account_id The account ID of the cloud provider the instance is running in. This allows you to monitor several cloud accounts in a single Outlyer account and differentiate between them
cloud.instance.region The cloud region the instance is running in, e.g. us-east-1
cloud.instance.az The cloud availability zone the instance is running in, e.g. us-east-1b
cloud.instance.type The type of instance running, e.g. t2.small
cloud.instance.image_id The VM image ID being run, e.g. ami-6d2f3887
cloud.instance.name OPTIONAL. If provided, the name of the instance
cloud.azure_resource_group OPTIONAL. On Azure only, the resource group the instance is running in.
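
Putting these together, an EC2 host might end up with a label set roughly like the following (the values reuse the examples above; the account ID and instance name are made up):

    # Hypothetical label set for an AWS EC2 host
    cloud.provider: aws
    cloud.service: aws.ec2
    cloud.account_id: "123456789012"
    cloud.instance.region: us-east-1
    cloud.instance.az: us-east-1b
    cloud.instance.type: t2.small
    cloud.instance.image_id: ami-6d2f3887
    cloud.instance.name: web-prod-01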

In addition, any custom tags applied to your cloud instance are automatically added if discovered. However, there are some caveats for certain cloud providers:

  • AWS: In order to get your instance tags, the agent will look up the instance's IAM role. If the instance doesn't have an IAM role, or the role doesn't have permission to read the EC2 API, no EC2 instance tags will be added to your host instance in Outlyer (an example policy follows this list).
  • Azure: Only instances deployed using the newer Resource Manager deployment model expose instance metadata. Any instances deployed with the Classic deployment model will not show the Cloud labels above.
  • IBM Cloud: IBM Cloud only exposes limited metadata to its instances, so labels such as cloud.instance.type and cloud.instance.image_id will not be available automatically.
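
For the AWS caveat above, a minimal IAM policy along the following lines (shown here in YAML form) attached to the instance's role should allow the agent to read instance tags. The exact API actions the agent calls are not documented here, so ec2:DescribeTags and ec2:DescribeInstances are an assumption:

    # Sketch of a minimal read-only IAM policy (YAML form) for EC2 tag discovery;
    # the listed actions are an assumption about what the agent needs
    Version: "2012-10-17"
    Statement:
      - Effect: Allow
        Action:
          - ec2:DescribeTags
          - ec2:DescribeInstances
        Resource: "*"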

Docker Hosts

If the agent discovers the Docker daemon running on the host, it will automatically add the following labels to the instance:

Label Description
docker.version The current version of Docker installed on the host
docker.os The operating system the Docker daemon is running on

Kubernetes Nodes

If the agent is deployed on Kubernetes as a DaemonSet (recommended), it will automatically pull in Kubernetes metadata for the node and cluster it is running on, using the pod's environment variables and the Kubernetes APIs, and add the following labels to the host instance (a deployment sketch follows the table below):

Label Description
k8s.node.name The name of the node as defined in Kubernetes.
k8s.cluster The unique name of the Kubernetes cluster the node belongs to, so you can add multiple Kubernetes clusters to your Outlyer account and filter between them.
k8s.node.role The role of the node, generally either master or node.
k8s.node.failure_region The configured failure region of the node on Kubernetes.
k8s.node.failure_zone The configured failure availability zone of the node on Kubernetes.
k8s.node.labels.{key} All other custom labels applied to your Kubernetes nodes are added under this label namespace.
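
As a rough sketch of the recommended DaemonSet deployment (the image name and the environment variable the agent reads are assumptions; use the official install manifest for a real deployment), the node name can be passed to the agent pod via the Kubernetes downward API:

    # Sketch of an agent DaemonSet; image and env var names are illustrative only
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: outlyer-agent
    spec:
      selector:
        matchLabels:
          app: outlyer-agent
      template:
        metadata:
          labels:
            app: outlyer-agent
        spec:
          containers:
            - name: agent
              image: outlyer/agent:latest        # assumed image name
              env:
                - name: NODE_NAME                # assumed variable name
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName   # downward API: the node's name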

Container Metadata

The agent will automatically discover all running containers on a host when it is installed, and will display these container instances with instance.type: container in the Outlyer UI. In addition, the following labels are provided:

Label Description
container.name The name of the container set by the Docker daemon or Kubernetes pod.
container.image_name The name of the container image pulled from the container registry.
container.image_version The version id of the container image pulled from the container registry.

Kubernetes Pod Containers

If a container is running as part of a Kubernetes pod, the agent will automatically discover the pod metadata and apply it to every container in that pod. This means that if your pod has multiple containers, they will all have the same k8s.pod.name, so when deploying checks to pods on Kubernetes, make sure you also use the container.name label to select the specific container in the pod you wish to run a check against (see the example selector after the table below).

Label Description
k8s.pod.name The unique pod name of the pod in Kubernetes.
k8s.pod.namespace The namespace the pod is running in on Kubernetes.
k8s.pod.qos_class The quality of service class of the pod: guaranteed, burstable, besteffort.
k8s.pod.service_account The name of the service account the pod is using to talk to the Kubernetes APIs.
k8s.pod.created_by_kind The type of the controller used to deploy the pod: replicaset, statefulset, daemonset.
k8s.pod.created_by The name of the controller used to deploy the pod.
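
For example, a check aimed at a single container within matching pods might use a selector along these lines (the label values and the way selectors are combined here are assumptions):

    # Hypothetical selector: target only the nginx container of pods in the
    # production namespace, rather than every container in those pods
    k8s.pod.namespace:production container.name:nginx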